
Episode 88: Not Everything Is an Agent
HockeyStack CRO Emir Atli breaks down what a true AI revenue agent actually is — and why most tools using that label are just workflows in disguise.
What if most of the tools being sold as "AI agents" today are actually just workflows with better marketing copy? That's the question at the center of this episode — and the answer has significant implications for how revenue operations teams should be thinking about their technology investments right now.
In this episode of RevOpsAF, Emir Atli, Co-founder and CRO of HockeyStack, joins co-host Matthew Volm to cut through the noise around AI agents, define what a true revenue agent actually is, and explain why the data architecture decisions companies make today will determine who wins or loses in go-to-market over the next several years.
Before getting to agents, it helps to understand what they're not. Emir walks through the evolution of software in go-to-market in a way that makes the distinctions genuinely clear — and explains why so much of what's being marketed as an "agent" today is actually something else entirely.
Era 1: Systems of record. Traditional SaaS platforms like Salesforce — essentially a database you interact with through dashboards, reports, and tables. Software as storage and retrieval.
Era 2: Workflows. Deterministic automation. You set a trigger, you set a destination, and define what happens in between. A classic example: when someone visits your website, send them to HubSpot as a marketing qualified lead (MQL). The reasoning is entirely human; the software executes.
Era 3: Copilots. A human reasons and asks; the copilot retrieves or generates. Emir gives the example of asking an AI analyst why healthcare customers are expanding faster than financial services — the human already has the question, the copilot surfaces the answer. "Copilot makes things 10x, 15x, whatever, faster," he explains, "but I, as a human, still reason and ask the copilot. Copilot gives me the answer."
Era 4: Task-specific AI. A narrowly focused AI that performs one task better and faster than any human. Account research is the canonical example — an AI that delivers a comprehensive research brief on your accounts every Monday. "That is not an AI agent, that's a task-specific AI that works on one task." This is also, notably, one of the most crowded categories in go-to-market right now.
Era 5: Autonomous workers. This is where true agents live. And the distinguishing characteristic isn't the technology — it's the scope of work.
"Instead of taking a task from someone's day, you task an entire work stream of tasks that is driving an outcome." — Emir Atli
The example he uses is closing a deal. On the surface, "running a deal cycle" might look like a single task. But map it out and it's millions of decisions, actions, and touchpoints spanning three months to thirty months. That is a work stream, not a task. A true agent needs to reason without prompting, make autonomous judgment calls, complete work end to end, and — critically — know when to loop in a human.
This last point is one of HockeyStack's core philosophical positions: they will not automate customer interactions or person-to-person interactions. The agent decides when a human needs to be involved. That decision itself is a judgment call, and it's one the agent needs to make autonomously.
For RevOps practitioners thinking about where tools in their current stack fit, this framework maps directly onto the tech stack consolidation conversation that's been happening across the industry. Most of what exists today sits in eras two through four.
One of the more counterintuitive points Emir makes is about prompting: specifically, why it's a poor fit as a way of interacting with AI in go-to-market contexts.
The assumption baked into most copilot and prompt-based AI tools is that the human already knows what to ask. They have a clear question, they have time to iterate on the prompt, and they'll get a useful output. In many contexts that's fine. But go-to-market doesn't work that way.
A deal cycle involves constant, layered decision-making across multiple stakeholders — an AE communicating with a solutions engineer, a sales manager, and a prospect who has their own internal dynamics involving an economic buyer, a decision maker, and others. The decisions that need to be made "every single second of the day" don't wait for someone to type a prompt.
"A lot of times the assumption behind a copilot or assumption behind the prompting AI is the human already knows what to ask or what to prompt, and the prompt is perfect, or we have enough time to go prompt over and over again to get the right outcome. But that's usually not the case." — Emir Atli
This is part of why Emir believes the reasoning layer matters so much. An agent built for go-to-market needs to operate continuously and autonomously — not on-demand when someone has the presence of mind to ask the right question.
This connects to a broader challenge the RevOps community has been wrestling with, explored in depth in Episode 83: Why You Should Stop "Doing AI" and Start Solving Problems — the risk of adopting AI tools for their own sake rather than in service of specific, well-defined outcomes.
If revenue agents start covering the work currently spread across five, ten, or fifteen specialized tools, what does that mean for the tech stack that RevOps teams have spent years building and managing?
Emir's answer is direct: consolidation is coming, and it's going to be faster than most people expect.
"I believe there will be systems of record, especially in the short term, let's say two years, three years, and one or two, probably one super app that will do a lot of the things that five tools do." — Emir Atli
The reason specialized tools exist today is that each was built to be best-in-class for one specific vertical — forecasting, call recording, account research, prospecting. You evaluated the landscape, picked the winner for each category, and integrated them. That logic made sense when each tool was built with fundamentally different technology.
But if a single platform has the right architecture and the right reasoning layer, all of those applications can be built on the same foundation. And perhaps more importantly: every time a new AI model drops, that platform gets better overnight. A specialized forecasting tool built before the AI era improves incrementally over months. A platform with the right architecture improves instantly.
Emir is candid that consolidation hasn't actually happened yet — "Every single ops team still owns 15 different tools for 15 different purposes" — but he's confident it's within a six-month horizon. Systems of record will hold for a while longer, but even that changes within two years in his view.
For operators currently evaluating or rebuilding their stacks, the RevOps tech stack rebuilding framework from Episode 48 is worth revisiting in light of this lens — because the evaluation criteria are shifting.
Before any of this works — agents, consolidation, AI-powered go-to-market — there's a foundational problem that most teams are actively avoiding: the state of their data.
Matt frames the question honestly: a lot of operators are sitting on a "massive dumpster fire" of accumulated mess in their systems of record. Where do they even start?
Emir's answer might surprise people: you're never going to have clean data, and waiting for it is a trap.
"I think we will never have clean data. I think again, it's like an architecture problem. It's never, it was never about having the most clean data. It is the fact that these systems were built 20 years ago for humans to enter data." — Emir Atli
The structural problem is that current data systems were designed for manual human input. Humans are inconsistent. They create duplicates, they enter incorrect data, they build workflows that compound the mess. That's not a behavior problem that can be trained away — it's a design problem baked into the architecture.
The answer, in Emir's view, is to change the architecture entirely rather than continue trying to clean what exists. HockeyStack's approach has been to convert Salesforce's object and field model into an event model, stitching identities across platforms into a unified timeline. Other approaches — knowledge graphs, metadata management systems, deduplication at the infrastructure level — are all variations on the same insight: the problem requires a new foundation, not better housekeeping on the old one.
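The object-to-event conversion can be sketched in a toy form. This is an illustration of the general idea described above, not HockeyStack's actual implementation: instead of mutable records with fields, every change becomes an immutable event, platform-specific IDs are resolved to one identity, and events are sorted into a per-identity timeline. The `Event` type, field names, and `id_map` structure are all assumptions made for the example.

```python
# Toy version of an object-model -> event-model conversion:
# immutable events, identity stitching, and a unified timeline.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Event:
    identity: str          # platform-specific id (resolved to one person below)
    timestamp: int
    source: str            # e.g. "salesforce", "website"
    name: str              # e.g. "stage_changed", "page_viewed"
    properties: dict = field(default_factory=dict)

def stitch_timeline(events: list[Event],
                    id_map: dict[str, str]) -> dict[str, list[Event]]:
    """Resolve ids across platforms and build a sorted timeline per identity."""
    timelines: dict[str, list[Event]] = {}
    for e in events:
        unified = id_map.get(e.identity, e.identity)  # identity stitching
        timelines.setdefault(unified, []).append(e)
    for tl in timelines.values():
        tl.sort(key=lambda e: e.timestamp)            # unified timeline
    return timelines
```

The design choice worth noticing: because events are append-only rather than edited in place, inconsistent human entry can't silently overwrite history, which is the architectural shift Emir is pointing at.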
He also calls out a cognitive dissonance he sees regularly: the same VP of RevOps who acknowledges their data will never get clean will turn around and ask about building internal AI tooling on top of that data. "If you don't have the right architecture, how do you expect an LLM to just go build what you want? That also is not gonna happen."
The data-first imperative for AI is a theme that runs across Episode 50: Thinking of AI? Think Data First — and it's one of the most consistent findings from operators who've actually tried to deploy AI in production environments. HockeyStack's own thinking on this is worth exploring through their resources for B2B revenue teams.
With hundreds of customer implementations across mid-market and enterprise companies — and visibility into 300-plus Salesforce instances — Emir has a clear-eyed view of what actually separates high-performing revenue operations from teams that are spinning their wheels.
The answer isn't the tools. It isn't the size of the team. It's direction.
"The number one thing that I would say separates the best from good or bad is having at least one person that is like that person or that team of people need to know the direction." — Emir Atli
What dysfunction looks like in practice: no stable direction means constantly shifting priorities. One quarter territories change. Next quarter territories change again. Then compensation. Then product lines. "We need people who can set direction," he says, "and these people, especially in ops, need to be very well compensated and their manual work and tedious work needs to be eliminated."
His structural recommendation is notable: the RevOps leader should report directly to the CEO, not the CRO. The logic is incentive alignment. A CRO's incentive is closing as much revenue as possible this quarter. That's not wrong — it's their job. But a RevOps leader optimizing for the next three to five years needs to be aligned with company-level incentives, not functional ones. Reporting to the CRO creates a structural mismatch with the time horizons RevOps should be managing.
This is a recurring debate in the community. Episode 23: What CROs Really Want From Their RevOps Teams explores the tension from the CRO's perspective, and the reporting structure question comes up repeatedly in Episode 5: What is RevOps and Who Should It Report To?
Matt's shorthand for what Emir is describing: "Random acts of go-to-market accomplish random results." No framework, no sustained thesis, no direction — just activity for its own sake.
This conversation surfaces the concern that every RevOps professional has heard, thought about, or quietly worried over: is AI coming for the job?
Emir's answer is a genuine counterpoint to the doom-and-gloom narrative, and it's grounded in a specific logic rather than generic reassurance.
The premise of AI replacement assumes that agents get deployed and nothing else changes. But if agents actually work — if they generate more pipeline, close more deals, expand more customers — then the business grows. And growth creates more work, not less. More customers means more complexity, more process, more strategy, more judgment calls.
"If you have more revenue and more pipeline, that brings more work to everybody. The work changes from what we are doing today to doing: one, customer interaction; two, strategy; three, process; four, vision." — Emir Atli
The work that's genuinely at risk is the manual, repetitive work: managing objects and fields, fulfilling report requests, making the same decisions over and over inconsistently. Emir is direct that this work will be automated. But the work of setting strategy, building vision, thinking through how to incentivize and develop people — that stays human.
He draws a distinction between go-to-market and other verticals here. In engineering, banking, and finance, entry-level roles are being compressed by AI. In go-to-market, he sees the opposite: agents will accelerate onboarding and ramp time, which means companies will be able to hire more people and make them productive faster. The constraint on go-to-market team size has historically been time-to-productivity. If agents solve that, headcount can grow.
One of HockeyStack's explicit design choices reinforces this: their agents are built to loop in humans for customer and person-to-person interactions. The premise is that deals are still ultimately made between people, and no amount of AI capability changes that. Revenue agents handle the work around those interactions so humans can show up better for the interactions themselves.
"AI is still a pattern matching system. You can never outsource strategy to AI or core thinking to AI. All of those things will always stay human." — Emir Atli
For operators concerned about career trajectory in this environment, Episode 82: The Next GTM Hero: Revenue Engineer and Episode 86: Adapt or Die? RevOps in the Age of AI both address what the evolution of the RevOps role looks like — and what skills become more valuable, not less, as AI matures. HockeyStack's own perspective on how go-to-market teams are evolving is worth reading through their product updates and thought leadership.
Check out our blog, join our community and subscribe to our YouTube Channel for more insights.
Explore more from HockeyStack: Learn how revenue agents are changing the go-to-market landscape at hockeystack.com.
Our average member has more than 5 years of RevOps experience. That means you’ll have real-time access to seasoned professionals. All we ask is that you’re generous with your knowledge in return.