
Most revenue teams spend the majority of their resources chasing new logos. But as two practitioners made clear in a recent RevOps Co-op webinar, the deals and the accounts tied to them are often won or lost in the spaces between conversations.
Eli Portnoy, CEO of BackEngine, and Zach West, Founder and Principal Consultant at PatchOps, joined moderator Camela Thompson to explore what it actually takes to build an AI-ready revenue infrastructure — one that captures the qualitative signals hiding in call transcripts, email threads, and Slack messages before they become the reason your biggest customer doesn't renew.
Portnoy opened with a story that most RevOps practitioners will find uncomfortably familiar. In 2019, while in the middle of a Series B fundraise, he received a call from McDonald's, Sense360's largest customer at $630,000 annually, informing him they would not be renewing.
"With hindsight, it was incredibly obvious that they were going to churn. The question that came out from that was, okay, well if it was obvious, why wasn't it obvious to us? Why didn't we catch it?" — Eli Portnoy
The account had no shortage of coverage. Implementation, customer success, professional services, and a dedicated analytics resource were all assigned to McDonald's. But that breadth turned out to be the problem. Different team members were speaking to different parts of the organization, hearing different signals, and recording them in different systems, none of which talked to each other. Everyone, independently, believed the account was healthy. Collectively, the company was flying blind.
The second conclusion Portnoy drew was arguably more painful: it was preventable.
"If we had caught it, we could have absolutely solved it. This was not an unpreventable loss." — Eli Portnoy
That diagnosis — too many cooks, data scattered everywhere, no centralized view — maps directly to the challenge RevOps teams face when trying to build reliable customer intelligence at scale. It's also why pipeline hygiene and clean data practices matter so much earlier in the account lifecycle than most organizations treat them.
West's version of this story came from the agency side. A nine-month client engagement had escalated. The client was flagging poor quality assurance and timeline slippage. In a prior era, diagnosing the root cause would have meant manually reviewing nine months of Slack threads, over a hundred Fathom call recordings, and dozens of email threads: days of work pulled away from billable client time.
Instead, West used Claude via a Model Context Protocol (MCP) server to ingest everything at once, generate a comprehensive timeline, and produce an objective analysis of what had actually happened. The result was counterintuitive: the engagement had recorded only one minor QA miss and two minor timeline breaches across dozens of major deliverables.
"It doesn't make any sense for us to tighten up QA and tighten up timelines. We are already doing that. No, we need to meet with the client and align on what they're really unhappy about. We would have no insight into that pre-AI." — Zach West
The analysis shifted the conversation from reactive defensiveness to genuine diagnosis. And it took hours, not days.
For RevOps teams managing the sales-to-customer-success handoff or trying to understand why accounts churn, this kind of retrospective signal analysis represents a meaningful capability shift.
For those unfamiliar with the Model Context Protocol, Portnoy offered a clear breakdown. An MCP server is simply a mechanism that allows AI agents and external systems to communicate. When you connect a large language model (LLM) like Claude to an MCP server, you're giving it a pathway to access data from other tools: your CRM, your call recording platform, your email archive.
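The shape of that idea can be sketched in a few lines. The following is a toy model only, not the real MCP SDK or wire protocol (which runs over JSON-RPC); the tool name `lookup_account` and the stubbed CRM data are invented for illustration. What it shows is the core pattern: the server exposes named tools, and the LLM client sends structured requests instead of guessing at data.

```python
import json

# Registry of tools an AI client could discover and call. A real MCP
# server advertises these over JSON-RPC; here we just keep a dict.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def lookup_account(account_id: str) -> dict:
    # In a real server this would query your CRM; here it's stubbed data.
    fake_crm = {"acct-42": {"name": "McDonald's", "arr": 630_000}}
    return fake_crm.get(account_id, {})

def handle_request(request_json: str) -> str:
    """Dispatch a tool call the way an MCP server routes client requests."""
    req = json.loads(request_json)
    result = TOOLS[req["tool"]](**req["args"])
    return json.dumps({"result": result})

# The client asks for data through the protocol rather than pasting it in:
print(handle_request('{"tool": "lookup_account", "args": {"account_id": "acct-42"}}'))
```

The point of the pattern is the last line: the model's access to data is mediated, logged, and consistent, rather than dependent on whatever a user happened to copy and paste.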
But the architecture underneath that connection matters enormously. Portnoy described four layers that have to work together for an AI system to consistently produce good answers: the prompt, data access, planning, and the core reasoning engine. The LLM itself — whether Claude, ChatGPT, Grok, or Gemini — is increasingly commoditized. The real inconsistency lives in the other three layers.
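Those four layers can be made concrete with a small sketch. Everything below is hypothetical: the source names, the stubbed data, and the stand-in model are invented to show how planning, data access, and prompting wrap a swappable reasoning engine.

```python
def plan(question: str) -> list[str]:
    # Planning layer: decide which data sources matter for this question.
    return ["crm_notes", "call_summaries"] if "health" in question else ["crm_notes"]

def fetch_context(sources: list[str], account_id: str) -> list[str]:
    # Data-access layer: pull evidence from the chosen sources (stubbed here).
    store = {
        "crm_notes": {"acct-42": ["champion left in March"]},
        "call_summaries": {"acct-42": ["renewal call raised pricing concerns"]},
    }
    return [item for s in sources for item in store.get(s, {}).get(account_id, [])]

def build_prompt(question: str, evidence: list[str]) -> str:
    # Prompt layer: one template, applied the same way for every user.
    return f"Question: {question}\nEvidence:\n" + "\n".join(f"- {e}" for e in evidence)

def stub_llm(prompt: str) -> str:
    # Reasoning layer: a stand-in for the (increasingly commoditized) model.
    return f"Drafted answer from {prompt.count('- ')} evidence items."

def answer(question: str, account_id: str) -> str:
    return stub_llm(build_prompt(question, fetch_context(plan(question), account_id)))

print(answer("Is this account's health at risk?", "acct-42"))
```

Swapping Claude for Gemini changes only `stub_llm`; the other three layers are where two users asking the same question either get the same answer or don't.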
"If you just give everyone access to Claude or ChatGPT and you tell them to go use that to power their AI use cases, you'll find they'll all prompt differently. They will either give it access to data through an MCP or they won't give it at all, or they'll copy and paste stuff in." — Eli Portnoy
The implication: without structured, consistent infrastructure governing how AI accesses data, plans its approach, and interprets results, you get outputs that vary wildly and can be dangerously convincing even when they're wrong. Portnoy shared an example of a customer who thought Claude had performed a thorough account analysis, only to discover that it had based its entire response on a single Google Doc agenda the user had written that same week.
This challenge connects to a broader theme the RevOps community has been wrestling with around AI readiness and data strategy: you can't get reliable outputs without reliable inputs.
A live poll during the session revealed that a significant portion of attendees had already integrated between four and eight tools in the first quarter of the year alone, and a meaningful number had advocated for cutting between four and eight tools in that same window. Both numbers underscore a tension that is defining RevOps work right now.
Portnoy cited a striking statistic from their market research: in 57% of sales opportunities, the word "integration" came up more frequently than the word "AI." At first glance, that seems backward. But it makes sense on reflection.
"AI only functions if it has access to the data and if it has access to the tools that you need to use. So AI on its own is kind of irrelevant without those integrations." — Eli Portnoy
West echoed this from the client side. Organizations rarely come to PatchOps asking to prioritize AI over data integrity and data confidence. The demand for clean foundations is consistent, even when the language around it changes.
"It's very rare that we have to sell fundamentals and best practices over chasing the shiny new AI tool. Folks are very in tune with the need to give any sort of AI layer that they build or adopt a quality foundation first." — Zach West
This aligns with what the RevOps community has been observing more broadly. The shiny object problem with AI tools isn't that operators don't know better; it's that organizational pressure often outpaces the infrastructure work required to make AI actually useful.
Rather than presenting a binary choice, Portnoy outlined the trade-offs on both sides and made the case for a middle path that an increasing number of teams are gravitating toward.
The build risk: Whatever you build, you own, including maintenance. A vibe-coded solution that solves today's problem becomes tomorrow's technical debt, especially when team members turn over and institutional knowledge walks out the door.
The buy risk: You don't control the roadmap. Switching costs are severe. And in an era where many capabilities can be built for a fraction of what vendors charge, you may be overpaying for things you could own.
The middle path: Build the layer that is specific to your business: the workflow, the user interface, the pieces your team interacts with directly every day. Buy the core infrastructure: the harder, more technical components that need to be hardened, well-tested, and maintained by specialists pooling resources across many customers.
West connected this to a useful analogy: treating AI less like a turnkey enterprise solution and more like a very capable junior hire who still needs training.
"It's like having a very, very bright, very junior consultant. It's the training that's going to bridge that gap between them being a junior and then them having a level of experience and subject matter expertise that enables them to truly master what they're doing." — Zach West
The risk of skipping that training, or defaulting to open LLMs without structured context, is that you get inconsistent outputs from the same inputs. For teams making business decisions around account health, pipeline, or customer sentiment, that inconsistency isn't a minor annoyance. It's a liability.
As for what it takes to prepare a revenue stack for AI, the foundational work is remarkably consistent across organizations, regardless of size.
One of the sharpest exchanges in the session came when Thompson asked Portnoy directly about customer health scores and whether better systems actually make them better.
His answer was direct: health scores are an "aspirational concept that's inherently pretty flawed." The problem isn't the data sources; it's the reductiveness of collapsing a complex, multi-dimensional relationship into a single number.
"If I have to go to the doctor, I'm not gonna go to the doctor and at the end say, give me a score of my health. There's so many different components that go into it — the leading indicators, the lagging indicators, my family history. To get that into a score would not be accurate." — Eli Portnoy
The underlying issue is that the data sources feeding most health scores are themselves unreliable proxies. Survey response rates are low, so most of the customer base never contributes. CSMs are reluctant to give their own accounts poor ratings. Usage data, often treated as a proxy for satisfaction, conflates necessity with preference. Portnoy used Salesforce as the example: teams use it constantly not because they love it, but because their job requires it. High usage tells you the category is important; it tells you nothing about whether the customer is thinking about switching.
West added the inverse: sometimes high usage is actually a distress signal, with multiple users spending excessive time in a tool because it's confusing or broken.
The more useful framing, Portnoy argued, isn't a score at all. It's a holistic diagnostic: a structured review of all available information that treats account health the way a physician treats patient health, with nuance, multiple data points, and context.
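As a data shape, the diagnostic idea looks less like a single float and more like a structured record that keeps each dimension next to its evidence. The field names below are illustrative, not a real schema from either speaker.

```python
from dataclasses import dataclass, field

@dataclass
class HealthDiagnostic:
    """Account health as multiple labeled dimensions, not one number."""
    account: str
    leading: dict = field(default_factory=dict)    # leading indicators
    lagging: dict = field(default_factory=dict)    # lagging indicators
    evidence: list = field(default_factory=list)   # free-form context

diag = HealthDiagnostic(
    account="acct-42",
    leading={"exec_engagement": "declining"},
    lagging={"support_tickets_90d": 14},
    evidence=["usage is high, but the tool is required for the user's job"],
)

# A reviewer sees each dimension with its evidence; nothing is collapsed
# into a score that hides the conflicting signals.
print(diag.leading, diag.lagging)
```

The design choice is deliberate: a consumer of this record has to engage with the conflicting signals (high usage, declining engagement) rather than averaging them away.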
Toward the close of the session, Portnoy introduced a question that he argued most organizations haven't seriously confronted: have you ever run an audit on your AI infrastructure?
The live poll results were revealing: the majority of respondents answered either "no, but open to the idea" or "yes, but not that helpful." The gap between aspiration and execution here is significant, and the lack of a helpful audit experience points to a structural problem with how most audits are conducted.
West described what a genuinely useful architecture audit actually involves: not a checklist of unused features, but a conversation-first process that begins with understanding the business, maps all data ingestion sources, and then tests for consistency in how the LLM interprets the same inputs across multiple runs.
"If you give a given LLM the exact same data and the exact same prompt, how much variance do you get in the outputs? In some cases, if well-trained and properly structured, it can be minimal. In other cases it can be enormous. And that's a big problem if you're making business decisions based on the same inputs and completely different outputs." — Zach West
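The variance test West describes can be run with a very small harness. The sketch below uses stand-in callables (one deterministic, one deliberately unstable) where a real audit would wrap the actual model API with temperature and context pinned; the metric shown, the share of runs producing the modal output, is one simple way to quantify "how much variance."

```python
import random
from collections import Counter

def consistency(call_llm, prompt: str, runs: int = 10) -> float:
    """Fraction of runs that produced the modal (most common) output."""
    outputs = [call_llm(prompt) for _ in range(runs)]
    return Counter(outputs).most_common(1)[0][1] / runs

# A well-structured setup should behave like this stand-in: same input,
# same output, every run.
stable = lambda prompt: "Account at risk: champion departed."

# An ungoverned setup behaves more like this one.
rng = random.Random(0)
unstable = lambda prompt: rng.choice(["Healthy.", "At risk.", "Churned."])

print(consistency(stable, "Summarize account health"))    # prints 1.0
print(consistency(unstable, "Summarize account health"))  # well below 1.0
```

Anything meaningfully below 1.0 on a decision-bearing prompt is exactly the liability West warns about: identical inputs, divergent conclusions.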
The most common finding from PatchOps audits, West shared, is the accumulated technical debt of multiple RevOps regimes: systems stood up, partially integrated, and then abandoned by teams that have long since turned over. The tools are often all there. A Snowflake instance might be configured. A BI layer might be running. But the connections between them are partial, conflicting, or built for a prior era's requirements. Rebuilding a RevOps tech stack from that kind of accumulated complexity requires both the technical expertise to map it and the business judgment to prioritize what to fix first.
Thompson closed the session with a question she described as the one she hears most often in the community: what do you tell your chief revenue officer (CRO) to get them on board?
Both Portnoy and West pointed to the same framing: don't sell the solution, sell the cost of inaction.
West argued that the most persuasive pitch isn't a feature comparison but a breakdown of what "business as usual" actually costs: human capital spent inconsistently monitoring account health, churn that was preventable but undetected, and the compounding disadvantage against organizations that have already built this layer.
Portnoy added the organizational context: board-mandated AI readiness is a real and growing pressure. In a room of ten CEOs he'd been in earlier that same week, the full two-hour conversation had centered on how leaders felt they were "behind the eight ball" on becoming AI-native organizations.
"The only way to do that is to really make sure these different tools are talking and the data's available. So if you want to pitch it to them, I would say: I know you want to go there. These are the types of things that we need to be investing in and spending time and auditing and figuring out if we want to get there." — Eli Portnoy
For RevOps leaders working on influencing without authority, the most effective framing connects infrastructure investment to the leadership outcomes executives already care about. This is exactly that framing.
You can learn more about how BackEngine structures data access and integrations for AI-powered revenue workflows at BackEngine's website.
Check out our blog, join our community and subscribe to our YouTube Channel for more insights.