IX Perspectives

S2S Is Here, But Choice and Trade-Offs Are Key

Today’s column is written by Drew Bradstock, SVP Product at Index Exchange. 

Header tags have taken the supply side of programmatic by storm over the last 24 months and are quickly becoming a universal utility for publishers worldwide. It's exciting to see media companies take the newfound control that has been afforded to them and say: not only am I not giving this up, I want more. Publishers have accepted a fair, clean, parallel auction as their right and are demanding it become even faster and more efficient. This hunger for speed and efficiency is driving a desire to move integrations in the header from client-side (CS) to server-to-server (S2S). We don't believe it will be an either/or, at least not in the short term, so our focus is on enabling that choice and the benchmarking to go with it.

We are approaching this emerging need like we do any other product decision: by considering how we can give publishers the control, speed, and data they need to maximize the value of their inventory. We are upgrading the Index wrapper's existing partnerships with top exchanges to allow publishers to choose client-side or server-side connections for each exchange partner, empowering publishers to learn where each partner performs best.

Publishers will have direct control over critical aspects of their server-side wrapper auction, including the partners involved, timeout thresholds, and deal handling. We believe publishers and bidders need not sacrifice transparency for technology, so we will be providing robust reporting and logging on all auction dynamics. Transparency, infrastructure, and ease of use will be more important than ever in making these optimizations successful. For publishers already running the most recent version of the Index wrapper, there is no development work needed: S2S connections will simply appear in the UI, much like an iPhone update.
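To make that per-partner control concrete, here is a minimal sketch of what such a wrapper configuration could look like. The shape is an assumption for illustration only; the field names (connection, timeoutMs, passDeals) are hypothetical and do not reflect the actual Index wrapper API.

```typescript
// Hypothetical shape of a per-partner wrapper configuration.
// Field names are illustrative only; they are not the real Index wrapper API.
type ConnectionType = "client" | "server";

interface PartnerConfig {
  partnerId: string;          // exchange partner identifier
  connection: ConnectionType; // where this partner's auction call runs
  timeoutMs: number;          // per-partner timeout threshold
  passDeals: boolean;         // whether deal IDs are forwarded on this path
}

interface WrapperConfig {
  globalTimeoutMs: number;    // overall auction timeout
  partners: PartnerConfig[];
}

// Example: run one exchange server-side and another client-side,
// so their latency and revenue can be compared head to head.
const config: WrapperConfig = {
  globalTimeoutMs: 800,
  partners: [
    { partnerId: "exchangeA", connection: "server", timeoutMs: 300, passDeals: true },
    { partnerId: "exchangeB", connection: "client", timeoutMs: 450, passDeals: true },
  ],
};

console.log(JSON.stringify(config, null, 2));
```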

It is easy to get caught up in all of the potential upsides of S2S, but there are other significant revenue-impacting factors to consider when deciding whether to move exchanges server-side. No one knows exactly how CPMs and latency will be affected for each publisher when an integration is server-side versus client-side. S2S allows a theoretically unlimited number of demand sources to participate in an auction, which should create a net positive outcome, but several other variables come into play.

Transparency

Server-side integrations have existed for close to 10 years, spinning up during the advent of programmatic and RTB. The obstacle that made industry-wide adoption so challenging was never the technology, but trust. When header bidding exists in the browser, transparency is a given, since every auction is out in the open and available for inspection. This meant that when the wrapper rose in popularity, it was relatively easy to get exchanges to work together: they only had to trust each other as long as they could prove there were no shenanigans. In S2S, the trust issue remains a very real concern. What ultimately started to break down this barrier was the arrival of adtech giants, like Google and Amazon, with the muscle to get exchanges to integrate with one another via S2S. This has made the market ripe for S2S, but publishers need to be more careful than ever when choosing a solution.

Latency from Connections

We know that browsers, and mobile browsers in particular, are pretty ineffective at managing multiple connections. That's one of the main gripes about client-side header bidding. When the connections move from the browser to the server, two things happen: the user and their browser are now an additional step away from the exchanges, but the server can gather bids much faster and more reliably. By moving to S2S, publishers are making a bet that specialized adtech infrastructure is better suited to gather demand than a collection of mobile phones and desktop computers. Client-side header bidding hasn't been sitting still, however, and its move from a multi-request architecture to a single-request architecture radically reduced the connection count for any given bidder. As a result, the gain from S2S is not quite as substantial today as it would have been, say, 24 months ago.
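To make the connection-count point concrete, here is a simplified sketch of the difference between multi-request and single-request client-side architectures. The slot shapes and counts are assumptions for illustration, not any particular wrapper's protocol.

```typescript
// Simplified illustration of connection counts per page.
// Slot names and counts are hypothetical, for illustration only.
interface AdSlot { slotId: string; sizes: [number, number][]; }

const slots: AdSlot[] = [
  { slotId: "header-leaderboard", sizes: [[728, 90]] },
  { slotId: "sidebar-mpu", sizes: [[300, 250]] },
  { slotId: "footer-banner", sizes: [[320, 50]] },
];

// Multi-request architecture: one HTTP request per slot, per bidder.
// With 3 slots and 5 bidders, the browser opens 15 connections.
function multiRequestCount(slotCount: number, bidderCount: number): number {
  return slotCount * bidderCount;
}

// Single-request architecture: all slots are batched into one request per bidder,
// so the same page only opens 5 connections.
function singleRequestCount(_slotCount: number, bidderCount: number): number {
  return bidderCount;
}

console.log("multi-request connections:", multiRequestCount(slots.length, 5));   // 15
console.log("single-request connections:", singleRequestCount(slots.length, 5)); // 5
```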

Transit time

Infrastructure matters a lot in header bidding. In client-side header bidding, all interactions happen between the exchange and the end user. The farther an exchange's bidding endpoint is from the end user, the longer the latency; the more distributed the exchange, the lower the latency. With S2S, infrastructure becomes far more important, because a secondary party is introduced. The S2S solution needs to be close not just to the end user, as is the case with header bidding, but also to each participating exchange. A poorly distributed S2S solution will penalize every exchange, regardless of how diverse that exchange's own infrastructure happens to be.

For example, if a header bidder endpoint is in NYC and the end user is in NYC, network latency will be under 1 millisecond. Now suppose that same header bidding endpoint, for that same user in NYC, is called from an S2S solution whose infrastructure is in Texas. The transit time is taxed twice: once to reach the user, and again to reach the endpoint. Effectively, in this admittedly made-up example, S2S would actually be slower than header bidding. Investments in infrastructure must be made to ensure S2S is near both (a) the end user and (b) all participating exchanges for the results to even come close to mirroring the header.
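A back-of-the-envelope version of that made-up NYC/Texas example is sketched below. The millisecond values are placeholders, but they show how the extra hop can make a poorly placed S2S server slower than a direct client-side call.

```typescript
// Rough latency comparison for the hypothetical NYC/Texas example.
// All millisecond values are made up for illustration.
const userToNycEndpointMs = 1;    // user in NYC calling an exchange endpoint in NYC
const userToTexasServerMs = 35;   // user in NYC reaching an S2S server in Texas
const texasToNycEndpointMs = 35;  // that S2S server calling the NYC exchange endpoint

// Client-side: the browser talks to the exchange directly.
const clientSideTransit = userToNycEndpointMs;

// Server-side: transit is paid twice, user -> S2S server -> exchange endpoint.
const serverSideTransit = userToTexasServerMs + texasToNycEndpointMs;

console.log(`client-side transit: ~${clientSideTransit} ms`);
console.log(`server-side transit: ~${serverSideTransit} ms`);
// In this contrived case the S2S path is dramatically slower, which is why
// the S2S infrastructure needs to sit near both the user and the exchanges.
```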

User Sync

User matching is another reason the header bidding wrapper chose the browser as its original home. This one is a really big deal for publisher monetization, particularly in desktop and mweb. When buyers are purchasing inventory programmatically, it's all about the user and everything they know about her. If they can't match that user, the impression loses much of its value, driving down CPMs drastically. Sharing user data would get everyone on equal footing, but it's going to take a while for exchanges to agree to share this precious resource. There are also some huge revenue-driving partners that may have earned their place in the header and might be less willing to move to S2S because of user-matching issues. Index is already working with exchange partners on multiple methods to ensure cookie matching is as effective as possible. A major holdout would be a non-trivial concern for a publisher's bottom line.
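To illustrate why matching gets harder once the auction leaves the browser, here is a hedged sketch of the kind of ID mapping a server-side solution has to maintain. The table, IDs, and function names are hypothetical and stand in for whatever cookie-sync mechanism a given S2S solution actually uses.

```typescript
// Hypothetical server-side cookie-match table.
// In the browser, each exchange can read its own cookie directly; server-side,
// the wrapper has to translate its own user ID into each exchange's ID.
const matchTable: Record<string, Record<string, string>> = {
  // wrapperUserId -> { exchangeId -> exchangeUserId }
  "wrapper-user-123": {
    exchangeA: "exA-987",
    // no mapping for exchangeB: that bid request goes out unmatched
  },
};

function lookupExchangeUserId(wrapperUserId: string, exchangeId: string): string | undefined {
  return matchTable[wrapperUserId]?.[exchangeId];
}

// An unmatched user typically draws far lower bids, which is the revenue risk
// described above.
console.log(lookupExchangeUserId("wrapper-user-123", "exchangeA")); // "exA-987"
console.log(lookupExchangeUserId("wrapper-user-123", "exchangeB")); // undefined
```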

Choice

To us, choice is key here. It's important to understand that an S2S integration is fundamentally different from a header integration. The differences called out above are but a few of the variables to consider, meaning that results will vary. Given this, we believe the most powerful way to enable publishers is through choice. Rather than assuming what will drive the best results, we want to prove it, as sketched after the steps below:

  1. Test an exchange partner with a client-side header call
  2. Test that same partner with a server-side call
  3. Assess latency
  4. Assess revenue
  5. Choose to implement the partner in the path most optimal to your business outcome
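A minimal sketch of that final comparison follows, assuming a hypothetical metrics feed with average latency and CPM per path. The thresholds and decision logic are placeholders, not a recommendation; a real evaluation would use the publisher's own targets and significance testing.

```typescript
// Minimal sketch of comparing one exchange partner on both paths.
// The metrics source, values, and thresholds are hypothetical.
interface PathMetrics { avgLatencyMs: number; avgCpm: number; }

function pickPath(clientSide: PathMetrics, serverSide: PathMetrics): "client" | "server" {
  // Weigh revenue first, then latency.
  if (serverSide.avgCpm > clientSide.avgCpm && serverSide.avgLatencyMs <= clientSide.avgLatencyMs) {
    return "server";
  }
  if (clientSide.avgCpm >= serverSide.avgCpm) {
    return "client";
  }
  // Revenue favors server-side but latency got worse: only switch if the
  // CPM lift clears a (placeholder) threshold.
  return serverSide.avgCpm - clientSide.avgCpm > 0.10 ? "server" : "client";
}

const clientSide: PathMetrics = { avgLatencyMs: 220, avgCpm: 1.42 };
const serverSide: PathMetrics = { avgLatencyMs: 180, avgCpm: 1.38 };

console.log("recommended path:", pickPath(clientSide, serverSide)); // "client"
```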

While the technology is advancing, the wrapper still belongs to publishers, and they shouldn't have to blindly switch to a buzzy new feature without proof that it will improve their core metrics of latency, user experience, and revenue. Transparency is also not going out of style any time soon. Fully auditable logs, transparent auction dynamics, and a transparent cost structure are imperative, and the industry must remain steadfast in its refusal to settle for anything less. That's why we are building even more tools to give publishers greater visibility and control over their wrapper, so they can make data-driven decisions about what works best for their business.
