Ending Invalid Traffic: Publisher and Exchange Best Practices

Introduction

Late last year, the industry was captivated by Methbot, a sophisticated bot network responsible for $3-5 million in ad fraud losses every day.1 The news revived interest in the fight against invalid traffic, or IVT, which some estimates say accounts for over 40% of all traffic on the web.2

Sophisticated Invalid Traffic is disastrous for marketers — it wastes significant portions of advertisers’ budgets on impressions where no human even has an opportunity to see an ad, let alone make a purchase. An estimated $6.5 billion in advertising spend will be wasted on invalid traffic in 2017 alone.

Benefits to Publishers

Though publishers could ostensibly benefit from the inflated revenue caused by invalid traffic in the short-term, IVT lowers the value of legitimate impressions in the long-term and will become a huge liability as the industry takes steps to eliminate it. Not only are services like White Ops, IAS and MOAT monitoring and reporting on SIVT across broad swaths of inventory, but marketers are also responding by scrutinizing where ad spend is allocated and applying strict standards based on ad campaign performance. Publishers with significant amounts of invalid traffic will perform poorly on downstream metrics and are at risk of having their inventory devalued over time.

What is Sophisticated Invalid Traffic?

The Media Rating Council defines invalid traffic as “traffic that does not meet certain ad serving quality or completeness criteria, or otherwise does not represent legitimate ad traffic that could be included in measurement counts.”3 The two general categories of invalid traffic are:

General Invalid Traffic (GIVT), which consists of traffic that can be caught through routine means of filtration, such as lists and standardized parameter checks. IVT in this category can be accidental and non-malicious, and generally identifies itself proactively.

Examples include:

  • Known data center traffic (a consistent source of NHT or non-human traffic)
  • Search engine spiders and crawlers
  • Non-browser user-agents (as identified by the user-agent string in the header of a request)
  • Pre-fetch or browser pre-rendering traffic
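The routine filtration that catches GIVT can be sketched in a few lines. The token and IP lists below are illustrative placeholders, not the actual IAB/ABC International Spiders & Bots List or a real data-center range list, and the function name is hypothetical:

```python
import ipaddress

# Illustrative (not exhaustive) lists; production filtration would rely on the
# IAB/ABC International Spiders & Bots List and maintained data-center ranges.
KNOWN_BOT_UA_TOKENS = ("googlebot", "bingbot", "crawler", "spider", "python-requests")
DATA_CENTER_RANGES = [ipaddress.ip_network("203.0.113.0/24")]  # TEST-NET example range

def is_givt(user_agent: str, ip: str, headers: dict) -> bool:
    """Return True if a request matches a routine (GIVT) filter."""
    ua = user_agent.lower()
    if any(token in ua for token in KNOWN_BOT_UA_TOKENS):
        return True  # declared crawler or non-browser user agent
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in DATA_CENTER_RANGES):
        return True  # known data-center traffic
    if headers.get("Purpose") == "prefetch" or headers.get("Sec-Purpose") == "prefetch":
        return True  # pre-fetch / pre-render traffic should not be counted
    return False
```

Because these checks rely on traffic that identifies itself (a declared user-agent string, a published IP range, a prefetch header), they are cheap to run on every request — which is exactly why GIVT is considered "general" rather than "sophisticated."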

Sophisticated Invalid Traffic (SIVT) is more difficult to detect and requires advanced analytics and/or significant human intervention to analyze and identify. Examples include:

  • Bots or scrapers masquerading as legitimate users
  • Hijacked devices (i.e., devices altered to make ad requests no user generated), hijacked sessions within hijacked devices, hijacked ad tags and hijacked creative
  • Hidden, stacked, covered or otherwise intentionally obfuscated ads
  • Invalid proxy traffic
  • Adware and malware
  • Incentivized manipulation of measurements
  • Misappropriated content and falsely represented sites or impressions
  • Cookie stuffing, recycling or harvesting
  • Manipulation or falsification of location data

While much focus is placed on SIVT, any attempts publishers make to limit invalid traffic should eliminate GIVT as well.

Four Habits of Highly Successful Publishers

In 2015, White Ops conducted an analysis of members of Digital Content Next, a trade organization of high-quality digital content companies, to identify practices that have proven particularly successful in limiting levels of SIVT.4 Their findings are illuminating:

  1. Strictly vet traffic from third parties or opt out of sourced traffic.
    • While traffic sourcing from third parties can result in high bot percentages, publishers that restrict traffic purchases to reputable, well-vetted providers can do so without seeing increased bot traffic.
    • Publishers can safely source traffic while maintaining high valid impression levels in their inventory. Three of the top five DCN publishers with the lowest sophisticated bot percentages sourced traffic from large suppliers and maintained sophisticated bot percentages below 2%, with general bot percentages of 0.5 to 1.3%.
    • Another well-known publisher carefully vetted its sourced traffic options, found that the available inventory did not meet its quality standards, and opted not to source traffic from third parties at all. That publisher has a sophisticated bot rate of 2.6%, far better than average.
  2. Do not use viewability as the sole measure of whether an impression was caused by a bot.
    • Viewability and fraud are two very different metrics. Two top-volume publishers with bot rates below the average of the 32 DCN members analyzed did not bill based solely on viewability. Strategies to maintain high inventory quality for these publishers include managing all factors, including viewability, with a strong foundation of policies that limit the ingress of bots and other inventory defects through all channels.
  3. Maintain visibility and control over your inventory.
    • Inventory classified by media type, such as video, had bot percentages well below the average of unclassified inventory. This may indicate that inventory that is easier to tag and monitor is also the easiest to manage for quality factors such as humanity percentages.
  4. Use retargeting carefully.
    • High volume publishers that did not retarget site visitors had an average sophisticated bot percentage of 1.8%, compared to the higher sophisticated bot average of 2.7% for similar publishers who retarget site visitors.

Limiting SIVT: The Exchange Level

In order for the industry to seriously address ad fraud, invalid traffic must be identified and filtered out at every stage of the supply chain. At Index Exchange, we limit SIVT by screening out publishers that do not meet a standardized list of criteria. As it turns out, high quality publishers tend to share the following qualities:

  1. Engaged Viewer Base: High bounce rates are an indicator that a publisher’s inventory isn’t premium and that its traffic may not be legitimate. Visitors should be exploring multiple parts of a site, not viewing a single page and leaving quickly.
  2. Modern-Looking Site: Most premium publishers have transitioned to the more efficient, minimalist visual style of Web 2.0. Preferred publishers take pride in the presentation and organization of their own content.
  3. Seamless Ad Experience: The ad experience should be equally refined. Preferred publishers avoid disruptive ads on their pages, making it as easy as possible for users to enjoy their content.
  4. Reasonable Ad Quantity: Generally speaking, sites should have no more than five ads per page. Additional ads can run the risk of distracting from site content and devaluing the site’s inventory.
  5. Explicit Permission: Publishers should leverage initiatives like ads.txt to make clear which exchanges are authorized to sell their inventory and which are not.
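The ads.txt mechanism referenced above is simple by design: a publisher hosts a plain-text file at the root of its domain, with one comma-separated record per authorized seller. The sketch below shows the record format defined by the IAB spec (exchange domain, seller account ID, relationship, optional certification authority ID); the sample entries and account IDs are placeholders, not real authorizations:

```python
# Sample ads.txt content; entries here are illustrative placeholders.
SAMPLE_ADS_TXT = """\
# ads.txt for example-publisher.com
indexexchange.com, 12345, DIRECT
google.com, pub-0000000000000000, RESELLER, f08c47fec0942fa0
"""

def parse_ads_txt(text: str) -> list:
    """Parse ads.txt records; ignore comments, blanks and malformed lines."""
    records = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if not line:
            continue
        fields = [f.strip() for f in line.split(",")]
        if len(fields) < 3:
            continue  # fewer than three fields is malformed; skip it
        records.append({
            "domain": fields[0].lower(),
            "account_id": fields[1],
            "relationship": fields[2].upper(),
            "cert_authority": fields[3] if len(fields) > 3 else None,
        })
    return records
```

Buyers and exchanges can crawl these files and refuse inventory offered by any seller account not listed, which is what makes ads.txt effective against domain spoofing and unauthorized reselling.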

The publishers featured in this whitepaper all have exceptionally low SIVT according to data from White Ops, and they have shared their own strategies for identifying and eliminating it. Marketers will want to consider these strategies when vetting publishers and the exchanges that work with them.

Case Study: Canadian Broadcasting Corporation (CBC)

Despite being one of the largest media properties in North America, the Canadian Broadcasting Corporation and its broadcast service Radio Canada have SIVT levels far below the industry average. With an SIVT level consistently below 2% for their desktop display inventory according to data from White Ops, the CBC has done an excellent job of making sure the traffic to their sites is legitimate.

Jeff MacPherson, Director of Monetization Platforms & Services at CBC, has a simple explanation for these low SIVT numbers: The CBC does not purchase traffic merely to meet sales goals. “There’s no good way to buy traffic,” he explains. “You just have to live by the merit of your content. If that content works, you won’t need to buy traffic.”

MacPherson raises a significant concern about IVT for purchased traffic. Traffic that has been purchased or sourced through a third party has a significantly higher incidence of non-human traffic than “organic” traffic.

Rather than purchase traffic for promotional purposes, the CBC leverages their agency of record and agency trade desk for standard ad buys, strategic social media campaigns and multi-platform messaging across their owned and operated sites.

To detect IVT, the CBC employs an internal team that monitors system health, including suspicious traffic. When suspicious activity is identified, the offending user agent and IP address are blocked. The CBC also employs trusted technology solutions at key stages in the programmatic supply chain: each tech partner measures and detects IVT, and DFP is relied upon to filter a significant portion out. Publishers that take ad fraud seriously will want to devote similar resources and forge similar relationships.
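The block-on-detection workflow CBC describes can be sketched as a simple blocklist that a monitoring team feeds and an ad server consults. The class and method names below are illustrative, not CBC's actual tooling:

```python
# A minimal sketch of a UA/IP blocklist fed by traffic monitoring.
# Names and structure are illustrative assumptions, not CBC's implementation.
class TrafficBlocklist:
    def __init__(self):
        self.blocked_ips = set()
        self.blocked_ua_tokens = set()

    def block(self, ip=None, ua_token=None):
        """Record an IP and/or user-agent token flagged as suspicious."""
        if ip:
            self.blocked_ips.add(ip)
        if ua_token:
            self.blocked_ua_tokens.add(ua_token.lower())

    def allows(self, ip: str, user_agent: str) -> bool:
        """Return False for requests matching any blocked IP or UA token."""
        if ip in self.blocked_ips:
            return False
        ua = user_agent.lower()
        return not any(token in ua for token in self.blocked_ua_tokens)
```

In practice the set lookups would be backed by shared state (e.g., an edge rule set or CDN configuration) so every ad request is screened before it is counted.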

Case Study: Publishers Clearing House (PCH)

Leading interactive media and digital entertainment company PCH markets merchandise and magazine subscriptions through sweepstakes and prize-based games on its sites. As such, it is a business imperative that the users who visit its sites are human and engaged. To that end, PCH has reduced the prevalence of SIVT on its sites to more than 80% below the industry average for desktop display, according to Index Exchange data measured by White Ops. To achieve this, PCH put measures in place to carefully vet the users who view its content.

“We restrict any advertiser-supported content to registered users only,” said Sal Tripi, AVP of Digital Operations and Compliance at PCH. “Consumers must validate personal information such as email and postal address before being exposed to advertising.” Not only does this approach ensure viewers of PCH’s content are legitimate and engaged, but it gives PCH first party data they can use to target ads appropriately.

PCH also employs IP address blacklisting, both to keep visitors outside its target geography off its sites and to block IP addresses it has identified as sources of IVT. At any one time, PCH has at least three industry-leading tools active on its sites to monitor, track and act upon invalid traffic. This commitment to limiting IVT has earned PCH the Certified Against Fraud seal from the Trustworthy Accountability Group (TAG), an industry program dedicated to eliminating ad fraud.

Conclusion

Though the ad industry has moved past the particular crisis created by Methbot, it was a reminder that we must remain vigilant in recognizing and eliminating invalid traffic. Marketers continue to waste a significant amount of budget on impressions never actually seen. However, they can take steps to combat bad practices and incentivize clean inventory by being selective with their ad spend, demanding ad fraud data and partnering with publishers and exchanges actively engaged in fighting IVT.

References

1 Schiff, A. (2016, December 20). White ops blows the lid off a $1 billion-plus russian botnet. AdExchanger. Retrieved from https://adexchanger.com/online-advertising/white-ops-blows-lid-off-1-billion-plus-russian-botnet/

2 Rodgers, Z. (2015, June 10). AppNexus details how it is policing ad fraud and domain masking. AdExchanger. Retrieved from https://adexchanger.com/online-advertising/appnexus-details-how-it-is-policing-ad-fraud-and-domain-masking/

3 Media Rating Council Inc. (2015) Invalid traffic detection and filtration guidelines addendum. Retrieved from http://mediaratingcouncil.org/101515_IVT%20Addendum%20FINAL%20(Version%201.0).pdf

4 White Ops, DCN. (2015). Bot benchmark report. Retrieved from https://marketing.adobe.com/resources/etc/commerce/products/exchange/fraud_detection_andmeasurement/doc21481132941777/DCN_WO_Bot_Benchmark-Report_2015.pdf

White Ops Internal Data. (2017).
