As AI presents new opportunities in advertising and marketing, IAB Tech Lab has played a central governance role as the industry looks to maximize AI’s potential.
Lately, Tech Lab’s focus has been squarely on AI agents and “agentics”.
Witness last week’s “Launching the Agentic Roadmap” webinar with Tech Lab CEO Anthony Katsur leading the charge. Mr. Katsur’s responsibilities entail working through myriad complex topics while brokering agreements among companies — and personalities — large and small, ideally in ways that serve everyone’s business interests.
Tipsheet spoke to Mr. Katsur about the latest agentic governance news at the IAB’s Annual Leadership Meeting, which kicked off this weekend in Palm Springs, California.
Topics included:
- AI in ads and “rushing” into experimentation.
- Creating the agentic definition at Tech Lab.
- “Agentifying” six standards.
- The most important agentic standard right now.
- V2 of the Agentic Real-Time Framework (ARTF).
- Goals and deadlines for current agentic efforts.
- Development of an “Agent to Agent” registry launching March 1.
- “Agentic Audiences,” formerly “User Context Protocol” (UCP).
- Agentic mobile, formerly “MobileCP”.
- Concerns about other protocol initiatives such as AdCP.
- Top milestones ahead for Mr. Katsur.
Scroll down for the interview, which has been lightly edited for clarity.
TIPSHEET: Last June, you expressed concern coming out of Cannes regarding how fast AI was coming and that the industry needed to get in front of it. How is the industry doing?
ANTHONY KATSUR: It’s a great question and… it’s tricky to answer.
The industry is rushing towards experimentation, which is great for innovation.
At the same time, the industry is thinking of “agentic” as this new gold rush, which I’m candidly skeptical of.
But I want to be clear: Agentic workflows can bring greater efficiencies within agencies and in how media is discovered, planned and bought as well as how it’s reconciled. I do think agents can facilitate and streamline better workflows, no question.
This notion of “the open web is going to compete with the walled gardens” — I’m very skeptical of that claim, and no one has yet articulated to me how that’s going to occur.
I’ve also had conversations where agentic workflows are going to activate, perhaps, certain legacy media channels. I don’t think protocols solve for those things — people do.
So, the introduction of agents and new protocols doesn’t necessarily address some of the challenges that advertising — not just digital advertising, but advertising generally — deals with.
So how has the industry dealt with it… I think we’re rushing into this experimentation phase, which is fine and healthy, but my one concern is that we’re rushing into it with very few guardrails. The expectations are incredibly high and I don’t think the reality meets the current hype of how agentic workflows are going to transform digital media.
The IAB Tech Lab’s webinar, “Launching the Agentic Roadmap,” took place last week. Given some of your hesitations, how did you plan the agentic roadmap?
The roadmap is planned based on existing Tech Lab standards. We have almost all the ingredients we need today to power agentic workflows. Agents built using Model Context Protocol (MCP) and the Agent2Agent (A2A) protocol — that’s all you need. But, those protocols need referenceable context to perform actions.
It’s appealing to say, “OK, we did this agentic transaction: a buyer agent interacted with a seller agent, and we set the CTV campaign live.” But, in order to do that millions of times with repeatable accuracy, you need a common set of referenceable objects or well-defined primitives or else your campaign budget becomes your impression goal and your impression goal becomes your campaign budget.
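The budget-versus-impression-goal mix-up above is exactly what well-defined primitives prevent. A minimal sketch of the idea, assuming hypothetical type and field names (none of this is a Tech Lab schema): give each primitive its own type, so an agent that swaps them gets an error instead of a silently corrupted campaign.

```python
from dataclasses import dataclass

# Hypothetical sketch: distinct wrapper types keep "budget" and
# "impression goal" from being silently swapped between agents.
@dataclass(frozen=True)
class BudgetUSD:
    amount: float

@dataclass(frozen=True)
class ImpressionGoal:
    count: int

@dataclass(frozen=True)
class Campaign:
    name: str
    budget: BudgetUSD
    goal: ImpressionGoal

def create_campaign(name: str, budget: BudgetUSD, goal: ImpressionGoal) -> Campaign:
    """Reject mixed-up primitives instead of guessing intent."""
    if not isinstance(budget, BudgetUSD) or not isinstance(goal, ImpressionGoal):
        raise TypeError("budget and goal must use their well-defined types")
    return Campaign(name, budget, goal)

c = create_campaign("ctv-launch", BudgetUSD(50_000.0), ImpressionGoal(2_000_000))
```

Passing `ImpressionGoal` where `BudgetUSD` is expected raises a `TypeError` — the repeatable-accuracy guardrail in miniature.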
We all hear about large language models (LLMs) hallucinating. You can see it in everyday life. Ask your favorite LLM the same question twice and you will get a subtly different answer. That’s okay when it’s a human reading the answers who understands the nuance. But when you’re integrating with native systems like ad servers, measurement systems, SSPs, DSPs and the current digital media ecosystem — those systems require precision accuracy.
This is where I go back to guardrails. We’re running off and looking at all these new protocols and reinventing ways of doing things we already have. We’ve already defined the language of advertising: what a campaign, placement, line item, impression and so on are — and those definitions have manifested themselves in the form of Tech Lab standards.
Layering MCP and Agent2Agent on top of Tech Lab standards helps create those guardrails and repeatable accuracy to do this millions of times over.
It also facilitates integration with the existing systems.
Using Tech Lab standards allows you to then use the Advertising Common Object Model (AdCOM), which OpenRTB uses. So if you want to have agentic interfaces into OpenRTB, using Tech Lab standards gives you those rails into the existing programmatic ecosystem.
So that’s what I talk about when I say “guardrails.” The fact that we’re rushing into this and exploring and experimenting is great, but in order to scale this, we’re going to need these standards.
And again, the industry will need these standards — not the Tech Lab. Tech Lab is just the steward of these standards. The industry has already defined the lingua franca of the ecosystem. Layering the agentic protocols on top of that is not that hard. Furthermore, you don’t have to reinvent the language of the ecosystem because we already have created these objects, schemas and taxonomies. We actually innovate faster in agentic because we’ve already created a common definition.
You talked about “agentifying” six standards at the beginning of last week’s webinar. What did you mean by “agentifying”?
Just to explain for the non-technical folks in Tipsheet’s audience: this concept of Model Context Protocol (MCP) resides on what’s known as an MCP server. Think of the MCP server as the database of all of those objects and defined primitives. Those primitives can be actions you want to take: create campaign, set campaign, submit creative and so on. Those object models live in this MCP server. So when we say we’re agentifying our standards, we are effectively publishing a set of MCP server reference implementations showing how the AdCOM model [and other standards] will be manifested in an MCP server.
Think of an MCP server in the simplest terms as a common referenceable database of these objects that agents can go speak to and say, “OK, the ABC agent wants to create a campaign.” What does that mean within the MCP server — where the XYZ agent, as a seller, understands what “create a campaign” is? It creates a common reference point for those agents to speak. So effectively, what we’re doing is taking all of our standards and deploying them through a set of MCP servers that the industry can use.
Of the six standards, which one do you think is the most important right now?
The Advertising Common Object Model (AdCOM) is the most important because it’s the underpinning set of objects that powers OpenRTB. It powers Open Direct, which is programmatic guaranteed — direct integration with publisher ad servers — and it also powers the Deals API.
So starting with AdCOM… that becomes the underpinning of what a campaign is, what a placement is, what a line item is — all those definitions and objects live in AdCOM, which is the foundation that powers an agentic workflow. It ties into everything else.
Let’s talk about the Agentic Real-Time Framework (ARTF) — what is the “V2” that you’ve scheduled as part of the “agentifying” process?
To be clear, ARTF started out as “The Containerization Project.” But, last summer, the task force that was working on it saw MCP maturing and said, “Wouldn’t it be cool if a model could interface with the RTB protocol? Let’s put an MCP front-end into the standard.”
That’s when we realized this was more than just a containerization standard — we actually have an agentic component to it, and that’s how ARTF was born.
On “V2”, now that we have MCP as an interface — we’re also going to standardize it with Agent2Agent (A2A) protocol. We also want to expand the standard around areas of security and interoperability with other containers. So that’s what V2 is going to be. V2 is not a heavy lift. We’ll probably have V2 out the door by end of Q2.
Are you aiming at end of Q2 for all six of them?
No. We will have effectively “agentified” our standards — deployed MCP servers for all of them for the industry to use — by April.
The Tech Lab board gave us special dispensation in the last board meeting to move fast. Traditional working group rules do not apply. “Let’s innovate. Let’s move quickly.”
We’re building and coding reference implementations of all of this. So you know, we’re investing well over seven figures into agentic initiatives this year off the Tech Lab balance sheet. We’re not asking for donations. The only thing we want from the industry — for people reading — is donated code, time and use cases. I have an army of engineers building this. So that’s what we’re looking for from the industry.
Publishers, donate your use cases. Agencies, donate your use cases. What are the most common use cases we can solve for? We’ll build reference implementations the industry can just take off the shelf — ones that get them 90% of the way there — and then they can build on top of them.
On the new Agent to Agent registry, which has a completion goal of March 1, what’s the use case there?
Discoverability. The agent registry serves a couple of purposes. One, it becomes a marketplace of agents.
So, if you’re looking for a measurement, seller, or buyer agent, what are the agent options out there? It acts as a marketplace of agents, first and foremost.
But second, there’s a security component to this: Who’s endorsed it? Who else is using the agent? Is it a reputable agent? It has to have some level of credibility within the ecosystem for folks to use it.
Also, what are the skills that the agent has? Because you may want to use an agent, but you may not want to use all of its skills.
All of that is the purpose of this agent registry — let people explore it, let other agents explore it. I don’t think we’re looking at a world where there’s just this autonomous agent discovering other agents. These relationships will still be governed by contracts and business negotiations.
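The registry attributes described above — marketplace discovery, credibility signals, per-skill granularity — can be sketched as a simple lookup. The entry fields, agent names, and thresholds here are all hypothetical illustrations, not the Tech Lab’s registry schema:

```python
# Hypothetical registry entries: who the agent is, what skills it
# exposes, and the credibility signals (endorsements, existing users).
REGISTRY = [
    {"name": "AcmeSellerAgent", "role": "seller",
     "skills": ["create_deal", "submit_avails"],
     "endorsements": ["PublisherCo"], "users": 12},
    {"name": "BuySideBot", "role": "buyer",
     "skills": ["plan_campaign"],
     "endorsements": [], "users": 2},
]

def find_agents(role=None, skill=None, min_endorsements=0):
    """Filter the marketplace by role, a specific skill, and credibility."""
    return [a for a in REGISTRY
            if (role is None or a["role"] == role)
            and (skill is None or skill in a["skills"])
            and len(a["endorsements"]) >= min_endorsements]

sellers = find_agents(role="seller", skill="create_deal", min_endorsements=1)
```

Filtering on a single skill reflects the point above: you may want an agent without wanting all of its skills.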
How will agents qualify for the registry?
We have to sort through that this month. But, building the registry is pretty trivial.
At this point, I can say the registry will be free for the industry. You do have to get a login to the Tech Lab tools portal but that’s free, too. It’s not member-only.
So, on qualifications… are you a reputable company? What does the agent do? The governance is probably harder work than building the registry itself.
Regarding User Context Protocol (UCP), where do things stand?
Well, it was UCP; now it’s “Agentic Audiences.” A “little” company named Google took the acronym UCP for its Universal Commerce Protocol.
“Agentic Audiences” was donated by LiveRamp; it is effectively a protocol by which agents can share signals around audience data using what’s known as an “embedding.”
An embedding is powered by vectors — which is a complex topic.
A vector is effectively a set of calculations that take a dataset and “vectorize” it to the point where it has an element of uniqueness about it such that it could represent an audience segment such as a “new homeowner.” But, the signal that goes into that could be unique to a particular buyer or seller.
Now, if someone else were to calculate a vector that is also defined as a “new homeowner,” there becomes this kind of fuzzy logic between the two in terms of “How closely do the vectors overlap?” There are challenges around vector interoperability that we have to explore as part of this initiative.
Taking the approach of vector embeddings within agentic audiences also has elements which are privacy preserving. I do think the agentic audience piece is probably one of the more interesting aspects of this whole agentic workflow process.
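The “how closely do the vectors overlap” question above is typically answered with a similarity measure such as cosine similarity. A minimal sketch — the vectors and the match threshold are invented for illustration, and real audience embeddings would have far more dimensions:

```python
import math

# Two parties each embed "new homeowner" from their own, different
# signals; cosine similarity quantifies the fuzzy overlap between them.
def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

buyer_vec = [0.9, 0.1, 0.4]   # buyer's "new homeowner" embedding (made up)
seller_vec = [0.8, 0.2, 0.5]  # seller's "new homeowner" embedding (made up)

similarity = cosine(buyer_vec, seller_vec)
interoperable = similarity > 0.9  # hypothetical match threshold
```

Because only the derived vectors are exchanged — not the raw signals behind them — this is also where the privacy-preserving properties come from.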
And the mobile protocol that CloudX brought to the table?
Well, as any mobile app publisher or anyone who buys a significant amount of mobile knows, advertising in mobile has certain unique properties that are distinct from CTV and open-web buying. The agentic mobile standard — formerly MobileCP — donated by CloudX takes that into account in its object structures. This is where we would augment AdCOM by introducing some of the unique object models that are specific to the mobile app ecosystem.
There are other industry initiatives going on in parallel, such as those with Prebid and AgenticAdvertising[dot]org in support of Ad Context Protocol. Does that impact your strategy at IAB Tech Lab?
No, it doesn’t.
Any concerns?
Our concern is fragmentation. We think our approach is the right approach.
Again, we don’t need to redefine the language of advertising. That’s what we’ve done already. Layering MCP and Agent2Agent on an existing set of object models and standards — agreed upon and in use for well over a decade — will allow for faster innovation than reinventing the wheel.
I’ve used this analogy publicly many times: we’re remodeling the kitchen, we’re not bulldozing the house. And I think some of those other approaches are bulldozing the house, and that is going to take a lot longer to adopt and use.
Why don’t we just build on the foundations that we already have today?
So, no, those initiatives don’t impact our strategy. We are going to forge ahead and put out the best reference implementations of seller and buyer agents, audience agents… we’re looking at a specific creative agent for creative readiness and creative submissions… those are all parallel paths that we are coding today. I am blocking out the noise and we’re going to do what we believe is right for the industry.
Finally, what are the next milestones you’re looking forward to?
Roadmap-wise, we already have a reference implementation of buyer agent and seller agent. You can take those off of our GitHub today and they would be up and running within a day. See the demo we gave at the webinar.
Seller agent and buyer agent V2… we’ll have those out later this month. We are being aggressive — in fact, I’ve got to go review some of the check-ins in GitHub later today that some of our engineers did.
I think the real thing I’m looking forward to is powering some buys this quarter with these reference architectures and where we’re effectively acting as agentic systems integrators by working with a few companies to actually power some meaningful programmatic guaranteed buys as well as Deal IDs. That’s the most exciting for me — to see the Tech Lab’s agentic frameworks being used to actually do meaningful buys.

