RevOpsAF: The Podcast

Episode 91: Your Next Hire Isn't Human


What if the most transformative hire you could make for your RevOps team doesn't require a LinkedIn profile, never calls in sick, and can simultaneously manage project documentation, client communication, and discovery call analysis? That's the premise Zach West is building toward at his consulting agency, and the implications for RevOps practitioners are hard to ignore.

In this episode of RevOpsAF, co-host Camela Thompson sits down with Zach West, founder of PatchOps, a tech consulting agency specializing in custom integrations, web apps, and AI-enabled tooling built primarily on HubSpot. Zach shares how he's rethinking what it means to scale a consulting team — not by hiring more humans, but by building AI agents that function like increasingly senior team members. For resource-strapped RevOps practitioners, the conversation offers both a glimpse at what's possible and practical first steps for today.

Treating AI Like Headcount

The framing Zach brings to his work is deceptively simple but genuinely reorients how most people think about AI tooling. Rather than treating AI as a productivity add-on, PatchOps treats agents as headcount: specific roles on the team.

"We really like to think of AI in many cases, especially when we build out the more sophisticated agents, as headcount. We treat them as junior roles until we've developed enough of a knowledge base and they've shown the ability to access that corpus and interpret it effectively to where they are now a senior agent, as it were." — Zach West

This framing matters because it changes how you invest in AI. When you think of an AI agent as a junior hire, you instinctively understand that you wouldn't hand a new employee a pile of undocumented, unstructured work and expect expert output. You'd onboard them, give them context, and build their knowledge base over time. The same logic applies here, and it's the foundation for everything else Zach describes in the conversation.

For RevOps leaders who have been wrestling with the question of how AI and automation are changing the work, this headcount framing offers a more useful mental model than the typical "AI as tool" approach.

The Consulting Unicorn: What an AI-Powered Operator Actually Looks Like

Scaling a consulting agency classically means hiring more people: ideally a mix of senior and junior staff across development, technical direction, account management, project management, and solutions architecture. The challenge is that finding all those skills in one person is nearly impossible. AI, Zach argues, changes the calculus entirely.

He describes the vision as a "consulting unicorn" — a construct that would have seemed like science fiction even recently.

"What you would imagine your consulting unicorn to look like. A person that you could hire that never got sick, never slept, was a joint project manager, had immaculate attention to detail, could also run calls, could also ask questions based on vertical, based on ICP, based on the pains and the opportunities that a client is voicing live, and then could take all of that and match it up against the scope of work and generate a build plan, a truly relevant build plan, not making assumptions, not operating without context but working from a massive corpus of projects that have already been delivered successfully of paths that have already been walked." — Zach West

The key qualifier here is "not making assumptions." This is where most out-of-the-box AI deployments fall short, and where the knowledge base becomes the differentiator. A general-purpose large language model (LLM) will generate plausible-sounding output. An LLM trained on hundreds of real scopes of work, discovery calls, and build plans from similar engagements will generate output grounded in what has actually worked.

From Zero-Context Prompts to Senior Agents: The Knowledge Base Difference

This is perhaps the most practically transferable insight in the episode. Zach draws a sharp distinction between using an off-the-shelf LLM and building a knowledge-backed agent, and the metaphor he uses lands well.

Without a knowledge base, you have a junior team member who can look things up on the internet. With one, you have a senior team member who has done a hundred of these engagements before and can draw on all of them.

"You can do that in any vertical where you're able to digitize knowledge and provide it to an LLM. The more knowledge, the more senior that LLM will become." — Zach West

PatchOps has accumulated hundreds of proven scopes of work, sales proposals, discovery calls, and client calls, all structured, sequenced, and stored in a vector database. That corpus transforms their AI agents from generic assistants into domain-specific senior operators.
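The "corpus in a vector database" pattern Zach describes is, at its core, retrieval-augmented generation: embed your historical documents, then pull the most relevant ones into the prompt before the LLM answers. As a rough illustration only (a toy sketch, not PatchOps' actual stack — the class names, sample documents, and the hashed bag-of-words "embedding" are all invented for demonstration; a real system would use a dedicated embedding model and a proper vector database), the retrieval step might look like:

```python
import math
from collections import Counter

def embed(text, dims=256):
    """Toy embedding: bag-of-words hashed into a fixed-size unit vector.
    Stands in for a real embedding model purely for illustration."""
    vec = [0.0] * dims
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word) % dims] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class CorpusStore:
    """Minimal in-memory stand-in for a vector database."""
    def __init__(self):
        self.docs = []  # (embedding, text) pairs

    def add(self, text):
        self.docs.append((embed(text), text))

    def top_k(self, query, k=2):
        q = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(d[0], q), reverse=True)
        return [text for _, text in ranked[:k]]

store = CorpusStore()
store.add("Scope of work: HubSpot CRM migration with deduplication and field mapping")
store.add("Scope of work: custom HubSpot web app for a client onboarding portal")
store.add("Discovery call notes: client needs lead routing automation in HubSpot")

# The retrieved documents would be prepended to the LLM prompt, so the
# agent answers from delivered projects rather than general knowledge.
context = store.top_k("plan a CRM migration from HubSpot", k=2)
```

The design point is the same one Zach makes: the LLM itself is commodity; the retrieval corpus is what makes its output grounded rather than merely plausible.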

The practical implication for in-house RevOps teams is significant. The same principle applies to any function where institutional knowledge accumulates: customer success onboarding calls, analyst reporting requirements, sales discovery transcripts. The data already exists. The question is whether it's been structured in a way that an LLM can learn from it. And that's a solvable problem. As Camela notes, "what are the successful requirements calls I'm having versus which are the ones that are weaker? That kind of context really goes a long way in aiding yourself and streamlining your own job."

This connects directly to a broader truth in RevOps: better data management isn't just about CRM hygiene. It's increasingly about building the institutional memory that makes AI actually useful.

Practical Starting Points for Resource-Strapped Teams

Not every RevOps practitioner is building custom agents with vector databases. For those who are heads-down in operations and haven't had time to deeply explore AI tooling, Zach is pragmatic about where to start.

The table stakes are already accessible:

  • Document generation: Writing technical documentation by hand when an AI with full system context can produce a first draft in seconds is, as Zach puts it, "crazy."
  • Call transcription and analysis: Tools like Fathom, Google Transcript, or Gemini are already in most people's workflows. The upgrade is feeding those transcripts — thoughtfully — into an LLM to produce outputs like sales proposals, solution designs, or client sentiment assessments.
  • Flowchart and process documentation: Dragging shapes in Miro by hand when AI can generate a structured flowchart draft is a significant time sink that's already solvable.
  • Snippet-level code: If you're still writing small pieces of code by hand for automations or integrations, that's a clear quick win.

The nuance Zach adds on call transcription is worth highlighting: the transcript quality is only as good as how deliberately the call was run. If you know the output is going to be processed by an LLM, you should guide the call with that in mind — asking explicit questions, stating action items clearly, and building context intentionally. "You have to guide the call as a human in a way that is very deliberate in generating data for the LLM that is inevitably going to process it."
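Zach's "guide the call for the LLM" advice has a concrete downstream payoff: deliberately flagged action items become trivially machine-extractable before the rest of the transcript is handed to an LLM. A hypothetical sketch (the marker phrase, speakers, and transcript are invented for illustration, not taken from the episode):

```python
MARKER = "action item:"

def extract_action_items(transcript: str) -> list[str]:
    """Pull out lines where a speaker deliberately flagged an action item.
    Assumes the call was run with an explicit spoken marker phrase."""
    items = []
    for line in transcript.splitlines():
        idx = line.lower().find(MARKER)
        if idx != -1:
            items.append(line[idx + len(MARKER):].strip())
    return items

transcript = """\
Camela: Thanks for walking through the integration requirements.
Zach: Action item: send the revised scope of work by Friday.
Zach: To be explicit for the notes, action item: schedule a HubSpot sandbox review.
Camela: Sounds good."""

print(extract_action_items(transcript))
# prints: ['send the revised scope of work by Friday.', 'schedule a HubSpot sandbox review.']
```

In practice these extracted items would be passed to the LLM as structured context alongside the full transcript; the point is that deliberate language on the call is what makes that structure cheap to recover.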

For RevOps teams thinking about where AI fits into their day-to-day, this framework for thinking about AI readiness is a useful companion to Zach's practical starting points.

Leveraging AI vs. Relying on It: The Skill Gap That's Widening

One of the most clarifying moments in the conversation comes when Zach introduces a distinction his team has started using internally: leveraging AI versus relying on it.

"Leveraging can increase your output 10, 20x and you don't have to sacrifice quality. Reliance means you don't really know what's going on. You're effectively just — it's not even strategic guidance because you don't have the foundation to make it truly strategic. It's just sort of dictation. It's vibe coding, but not just for a single application — it's your entire job." — Zach West

The difference is expertise. A skilled consultant with deep background knowledge who uses a well-trained LLM gets dramatically better outputs than someone junior who uses the same tools without foundational context. The tools are identical. The results are not.

This has direct implications for how RevOps professionals should be thinking about their own development and how leaders should be thinking about hiring. Camela notes that in her own work helping clients hire, she's shifted emphasis toward business acumen and the ability to translate requirements into what the business actually needs, because "what people ask for, or the solutions they come to the table for, rarely address the problem that they actually have."

The soft skills aren't becoming less important. They're becoming the differentiator: as AI handles more of the execution layer, the judgment layer matters more than ever. This theme runs through much of the current conversation in RevOps about what it means to be a strategic revenue operator versus a purely tactical one.

The Quiet Shift in Client Engagement

Zach raises an observation that's harder to quantify but resonates with anyone who has worked in consulting or in-house ops recently: something has changed in how clients engage with projects.

He's careful not to overstate the connection to AI, and he's explicit that his best clients remain fully dialed in and accountable. But there's a macro-level pattern worth naming.

"I just feel like I remember a time even as recently as last year where it was different. There was generally overall something has shifted in terms of the way that people approach consulting engagements." — Zach West

Camela's read on it is that everyone is drowning in requests, and the expectation that AI enables faster sprinting has, paradoxically, made the gaps more visible when collaboration slows down: "because everybody's struggling, we all feel it more when we're trying to get the other person to help us out and participate with us."

This dynamic — more output expected, more friction in cross-functional collaboration — is a thread that runs through a lot of current RevOps challenges. The answer isn't less communication or lower expectations; it's being intentional about how requirements gathering, documentation, and handoffs are structured so that neither human nor AI is left without the context they need.

Domain Expertise Still Wins

The conversation closes on a point that might seem obvious but is easy to miss in the current AI hype cycle: domain expertise still determines what's possible.

Zach uses a pointed example. PatchOps would not attempt to build a cybersecurity layer for a high-volume website containing sensitive personally identifiable information (PII) — not because the AI tools aren't powerful, but because the human expertise and the trained-model knowledge base required to do that well simply aren't there.

"We don't have the human expertise and we really don't have anything close to the AI expertise to build a cybersecurity layer over a high volume website that contains very sensitive PII. We are nowhere near qualified to do that even with all the AI tools at our disposal. Whereas conversely a firm that had done that for 10 or 15 years now that has a huge database of everything they've seen and done, all the exploits that they've identified, all the solutions that they've generated — things that aren't publicly available to train their own LLM on — would be in a different stratosphere." — Zach West

The same principle applies inside RevOps teams. An AI agent built on a rich corpus of CRM migration experience will outperform one built on general knowledge for CRM work. An agent trained on CS onboarding calls will surface better recommendations than a zero-context prompt. The expertise — human and institutional — is what separates good AI-assisted work from generically plausible AI output. And that's a problem RevOps teams should be thinking about now, not after the fact.

Key Takeaways for RevOps Practitioners

  • Treat AI agents like headcount: Invest in onboarding them with context and a knowledge base, just as you would a new hire. Junior agents become senior agents as the corpus grows.
  • Your institutional knowledge is the moat: Structured historical data — scopes of work, discovery calls, delivery outcomes — is what elevates a generic LLM to a domain-specific expert.
  • Start with the table stakes: Document generation, call transcript analysis, and process flowcharting are high-ROI starting points that don't require advanced infrastructure.
  • Guide your calls for the LLM: If a transcript is going to be fed into an AI for downstream processing, run the call with that context in mind — explicit questions, clear action items, deliberate language.
  • Leverage, don't rely: Deep expertise is what makes AI a force multiplier. Without the foundational knowledge to evaluate AI outputs, you're not leveraging AI — you're delegating blindly.
  • Domain expertise still determines ceiling: AI doesn't erase the difference between specialists and generalists. It amplifies it.
