“I hate the term header bidding,” a friend and industry resource told me over a cold beer. “It’s too catchy—it sounds like another piece of ad-tech buzzword BS.”
I’d argue “tagless tech”—the first name I heard in reference to header-based executions—was far worse (and horribly untrue). But my friend’s dislike really stems from the ad tech industry’s tendency to latch onto a term and fling it anywhere and everywhere until it’s drained of all meaning. That makes something revolutionary sound like a pittance, and encourages reactionaries to dismiss a seismic shift as merely a fad.
No, the header insurgency (and header bidding is just one flavor) is something far more substantial, banishing the contrived waterfall in favor of real, nearly level markets. For those who have been watching, that’s a first for the programmatic trading space, although it was always the ideal.
And we’re just getting rolling—the header is reaching a fascinating stage of development where efforts to curb latency, advances in server-to-server technology, and industry consolidation promise to reshape a landscape that just finished a major transformation. Buckle up, buckaroos.
Latency Slows You Down
Latency has long been the chief complaint about header integrations, and the dread that has kept some premium publishers far, far away from the space. Yes, it’s great to get smarter bids from a wider array of demand sources, but my page loads are already being weighed down by tags and third-party code… I can hear the ad blockers pounding on the door!
User experience concerns have gripped publisher revenue efforts, with particular focus on latency and data usage. Well, header bidding has issues with both of those.
Auctions occur within the user’s browser, which can only do so much simultaneously (e.g., Chrome will only make 10 concurrent requests, and just 6 per host, before queuing the rest). To fight back against latency, publishers set timeouts that sacrifice beneficial bids to keep down user ire. But even with timeouts, every second counts, considering users have become accustomed to zippy Internet speeds.
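To make the timeout trade-off concrete, here’s a minimal sketch of a publisher-side auction with a bid timeout. Everything here is illustrative: `requestBid`, `withTimeout`, and the 300ms window are hypothetical stand-ins, not any vendor’s actual API.

```javascript
const TIMEOUT_MS = 300; // bids arriving after this window are sacrificed

function requestBid(partner, latencyMs, cpm) {
  // Stand-in for a real ad call; resolves after the partner's latency.
  return new Promise((resolve) =>
    setTimeout(() => resolve({ partner, cpm }), latencyMs)
  );
}

function withTimeout(bidPromise, ms) {
  // Resolve to null if the bid misses the timeout window.
  const timer = new Promise((resolve) => setTimeout(() => resolve(null), ms));
  return Promise.race([bidPromise, timer]);
}

async function runAuction(partners) {
  const bids = await Promise.all(
    partners.map((p) =>
      withTimeout(requestBid(p.name, p.latencyMs, p.cpm), TIMEOUT_MS)
    )
  );
  // Drop late (null) bids, then take the highest CPM that made it back.
  const landed = bids.filter(Boolean);
  return landed.sort((a, b) => b.cpm - a.cpm)[0] || null;
}
```

Note the painful part: if a slow partner holds the highest bid, the timeout throws that revenue away—exactly the “sacrifice beneficial bids” dilemma publishers face.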
This gets worse on mobile, where lackluster network speeds slow loading even further. And the extra auction weight on the browser leads to data drain, an ongoing battle on the mobile web—users aren’t fond of having their rather expensive data stockpile sucked up by advertising. That’s a shame, because header integrations could greatly help the mobile web’s lackadaisical programmatic space.
Header-bidding technology providers have long been aware they are in a battle against latency—smarter players have expanded their data-center distribution to offer better coverage. They have also built server-to-server connections with DSPs to deliver bids faster, and introduced “pre-fetch” technology that holds auctions for inventory yet to appear on a page.
And they’ve introduced single-request bid architecture. Index Exchange President Andrew Casale goes into pretty amazing detail about the technology here, but I’ll take a shot at a layman’s explanation. The first generation of header bidding employed multiple-request bid architecture, in which each placement on a page is treated as an autonomous unit. That may sound tidy in theory, but it means each partner runs a separate auction for each placement.
If you have 5 placements on a page being filled by 5 header partners, suddenly you’ve got 25 auctions to run (and that’s before counting any auctions inside of auctions). Remember how we mentioned that Chrome only makes 10 requests at once? Many of your auctions are going to be delayed (or never occur—pesky timeouts!).
With single-request, on the other hand, the whole page (including potential units à la pre-fetch) is auctioned at once rather than placement by placement. In the above situation, you’d have 5 auctions for 5 partners, which is far more manageable for a browser.
Single-request opens up a bevy of opportunities (Tandem ads! Fluid placements!), but in particular it cuts latency (and data usage) by simply reducing the number of auctions that take place in the browser. It’s startlingly more efficient.
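The arithmetic above is worth spelling out. This back-of-the-envelope sketch (the function names are illustrative, not any exchange’s API) counts browser calls under each architecture and how many sequential “waves” the browser needs given its ~10-request budget:

```javascript
const BROWSER_BUDGET = 10; // concurrent requests Chrome allows, per the text

function multiRequestCalls(placements, partners) {
  // Multiple-request: one auction call per placement, per partner.
  return placements * partners;
}

function singleRequestCalls(placements, partners) {
  // Single-request: one page-wide call per partner,
  // no matter how many placements the page carries.
  return partners;
}

function requestWaves(calls) {
  // Sequential rounds the browser needs to issue all the calls.
  return Math.ceil(calls / BROWSER_BUDGET);
}
```

With 5 placements and 5 partners, multiple-request needs 25 calls (three waves through the browser’s request budget), while single-request needs 5 (a single wave)—which is why the latency and data savings are so stark.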
But there’s another header-based technology you’ve doubtless been reading about that truly kicks latency concerns to the curb: server-to-server. (Continued… Read more at AdMonsters.)