Oracle and the Global Disinformation Index combat a rash of disinformation

Disinformation sites can collect ad revenue and damage brand reputations

By Alex Chan | August 2021



Oracle and the Global Disinformation Index are helping brands safeguard their ad spend from sites that spread disinformation about topics such as COVID-19 and elections.

Over the past 12 months, companies spent at least $25 million buying ads on nearly 500 English-language websites touting COVID-19 disinformation and conspiracy content, research by the Global Disinformation Index (GDI) finds.

And that’s just a slice of the estimated $235 million paid to disinformation sites via adtech firms on behalf of well-known brands, according to GDI.

“This kind of tells you that a guy in his basement with a WordPress site and some Facebook savvy can make a pretty good living in this,” warns Danny Rogers, cofounder and CTO of GDI.

The chaos caused by disinformation around topics such as COVID-19 is reason enough to work to stamp it out. But for companies that unknowingly advertise alongside such content, it can also cost them their brand reputation and turn off would-be customers.

In today’s intricate adtech ecosystem, brands are using advanced technology to track their ad placements to avoid inadvertently advertising on and funding sites that promote disinformation.

“There’s a business risk to allowing your brand messaging to appear and to be associated with the type of content produced by these sites,” says Jay Pinho, senior manager of product management for the Oracle Moat measurement suite. “Every ad dollar that goes toward one of these sites, even if it’s inadvertent, is effectively siphoning away monetization from higher-quality publications.”

But how can you identify a disinformation site? In an effort to help brands do that, Oracle Advertising is collaborating with GDI, an independent nonprofit that evaluates the disinformation risk across the open web. Together, they’re providing ways for brands to keep disinformation sites out of the inventory of ad slots that a brand might bid on, in order to avoid unintentionally supporting them.

GDI’s risk-rating analysis powers a new Oracle brand safety segment that marketers can use to block domains categorized as high risk for disinformation and keep those sites out of future campaigns. The segment also helps marketers understand, measure, and improve campaigns in Moat Measurement, with metrics surfaced in an analytics dashboard.
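In practice, a segment like this acts as a pre-bid filter: before bidding on an ad slot, the buying platform checks the slot’s domain against the high-risk list and skips it on a match. The sketch below is a minimal illustration of that idea only; the segment contents, field names, and should_bid helper are hypothetical and do not reflect any documented Oracle or GDI interface.

```python
# Minimal sketch of a pre-bid domain filter (all names hypothetical).
# A real buying platform would source the high-risk list from a data
# provider's segment rather than a hard-coded set.

HIGH_RISK_SEGMENT = {
    "example-disinfo-site.com",
    "another-risky-domain.net",
}

def normalize_domain(domain: str) -> str:
    """Lowercase and strip a leading 'www.' so lookups are consistent."""
    domain = domain.strip().lower()
    return domain[4:] if domain.startswith("www.") else domain

def should_bid(bid_request: dict) -> bool:
    """Return False when the requested placement is on a high-risk domain."""
    domain = normalize_domain(bid_request.get("site_domain", ""))
    return domain not in HIGH_RISK_SEGMENT

# The second request is dropped before any bid is placed.
requests = [
    {"site_domain": "quality-news.example", "slot_id": "a1"},
    {"site_domain": "www.example-disinfo-site.com", "slot_id": "b2"},
]
biddable = [r for r in requests if should_bid(r)]
print([r["slot_id"] for r in biddable])  # ['a1']
```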

Disinformation creates risk for brands

Beyond the wasted ad spend and the ethical reasons to avoid disinformation, a brand’s image can suffer when its messaging runs alongside content produced by these questionable sites.

According to a 2020 Adweek survey, 51% of millennials and Gen Xers are four times less likely to purchase from and three times less likely to recommend a brand when they see its ads in unsafe environments, even though the placement may not be the brand’s fault.

Pinho emphasizes that the average consumer is likely unaware of how adtech uses automation to place ads across the media ecosystem, making it even more crucial for brands to ensure that their ad activity reflects their values.

“This ad appeared on this site, so therefore, that brand must have meant to put it there,” Pinho says of the consumer’s typical thought process. “It might sound ridiculous among people who understand adtech, but it is probably the most likely explanation if you don’t understand the technology.”

How to define disinformation

One of the most difficult things about combating disinformation is defining it. Disinformation could simply be interpreted as lies on the internet, but facts that are selectively emphasized or omitted from a narrative can also be used to promote falsehoods.

Examining the ways audiences can be misled by both lies and facts allowed GDI to establish a solid set of criteria to define disinformation more usefully.

 

“There are a lot of opportunities to save money and also promote your brand in the best environment for your message.”

Jay Pinho, Senior Manager of Product Management, Oracle Moat

“We here at the GDI view disinformation through the lens of what we call ‘adversarial narrative conflict,’” Rogers says. “Basically, anytime somebody is peddling a narrative, either implicitly or explicitly, that is adversarial in nature against an at-risk group or institution and creates a risk of harm, they are creating disinformation. And they could be demonstrating it through pieces of content that cherry-pick elements of the truth, or they could be mixed in with wholesale fabrication. That confusion between the two is part of the goal, to whittle away our ability to tell the difference between truth and fiction.”

GDI’s definition is meant to identify the problems caused by disinformation, their implications for brand safety, and why certain content intuitively comes across as brand unsafe.

Oracle tech against disinformation

Oracle Advertising’s all-hands-on-deck approach to combating disinformation rests on three pillars: strategy, technology, and partnerships.

Strategy involves understanding what kind of content a brand deems acceptable to run its ads next to, auditing disinformation sites so they can be avoided in the future, aligning the brand with positive environments, promoting responsible media, and verifying that brand messaging runs adjacent to safe and suitable content.

Technology is responsible for keeping up with the spread of content as soon as it goes online and either blocking ads from running alongside disinformation or allowing them to run with safe content.

For example, Oracle Advertising technology looks at the full web page of content to assess what the content is about and whether it’s safe for a brand’s advertising.

“Something we really emphasize at Oracle is that full-page categorization is absolutely vital,” says Pinho. “Looking at things like the text of the URL or only the domain and not actually categorizing individual pages…that’s just not sufficient.”
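To make the contrast concrete, here is a toy sketch of page-level categorization: it fetches a page, extracts the visible text, and labels the page from that text rather than from the URL string. The fetching code and keyword heuristic are invented stand-ins for a real content classifier, not a description of Oracle’s technology.

```python
# Toy page-level categorization: fetch a page, extract its visible text,
# and label it from that text rather than from the URL string alone.
# The keyword heuristic is a stand-in for a real content classifier.

import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML document, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def page_text(url: str) -> str:
    """Fetch a page and return its visible text."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

RISK_TERMS = {"plandemic", "stolen election", "miracle cure"}  # illustrative only

def categorize(url: str) -> str:
    """Label a page from its full text, not just its URL or domain."""
    text = page_text(url).lower()
    return "high risk" if any(term in text for term in RISK_TERMS) else "no risk signals found"
```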

Partnerships encourage brands to work with the right tools and people to help keep their messaging from appearing in undesirable places. GDI’s approach combines human assessment and machine learning: human reviewers determine what a brand considers disinformation, and machine learning models apply those judgments to capture disinformation at the scale of the open web. A rough sketch of that pattern follows.
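The example below trains a simple text classifier on a handful of human-labeled snippets and then scores unseen pages with it. The snippets, labels, and model choice are invented for illustration and say nothing about GDI’s actual methodology.

```python
# Sketch of the human-plus-model pattern: a few human-labeled examples train
# a text classifier, which then scores unseen pages at scale.
# Data, labels, and model choice are invented for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human assessment: reviewer-labeled snippets (1 = disinformation, 0 = not).
labeled_texts = [
    "vaccine microchips are a secret government plot",
    "officials certify election results after routine audit",
    "miracle cure suppressed by doctors, share before it is deleted",
    "local hospital expands vaccination clinic hours",
]
labels = [1, 0, 1, 0]

# Machine learning: generalize the reviewers' judgments to new pages.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(labeled_texts, labels)

new_pages = ["they are hiding the cure from you", "city council meets on budget"]
for page, score in zip(new_pages, model.predict_proba(new_pages)[:, 1]):
    print(f"{score:.2f}  {page}")  # probability the page is disinformation
```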

Oracle Advertising and GDI’s new partnership can help brands meet a wide range of today’s challenges: accurately defining disinformation; safeguarding campaigns on a global scale across regions, languages, and media types; validating where ads are running and optimizing for the next campaign; and defining brand goals through consultation.

“There is a lot to think about when it comes to brand safety, and there’s a lot of risk,” says Pinho. “But there are a lot of opportunities to save money and also promote your brand in the best environment for your message.”


Photography: Filippo Bacci/Getty Images

Alex Chan

Alex Chan is a writer at Oracle. She was previously a reporter for The Orange County Register and subsidiaries of the Los Angeles Times.