
AI Orchestration: The Infrastructure Behind AI That (Actually) Works


Everyone wants to harness the power of AI — but few have the infrastructure to actually make it work. In this session, Matthew Volm, CEO of RevOps Co-op and Eventful, sits down with Dom Freschi, Jr., Director of Operations at Openprise, for a tactical teardown of what it takes to deploy enterprise-ready AI across your go-to-market (GTM) systems.

With most AI projects failing to deliver measurable value, Dom shares a detailed six-part orchestration framework built from lessons learned across dozens of deployments. From prompt management to hallucination mitigation, this conversation is a reality check — and a roadmap — for RevOps professionals trying to move beyond flashy demos and into scalable, integrated, AI-powered execution.

If you're building workflows that rely on AI — or planning to — this is your blueprint for getting it right the first time.

AI Isn’t Failing. Your Infrastructure Is.

The session kicks off with a poll: What percentage of AI projects fail to deliver value?

Spoiler alert — the majority of attendees believe it’s well over 50%. And Dom agrees.

“AI isn’t plug-and-play. It’s not magic. What we’re dealing with is still, fundamentally, a fragile system that breaks without the right scaffolding.” – Dom Freschi, Jr., Director of Operations at Openprise

Why is this happening?

  • Lack of clear success metrics: Many AI initiatives start with a mandate like “go implement AI” rather than a specific problem to solve or ROI target.
  • Overreliance on AI as a personal productivity tool: While AI is great at generating content or summarizing calls, these use cases rarely scale across GTM systems.
  • Missing orchestration: Without clean data, governed prompts, model flexibility and integration pipelines, AI outputs rarely drive operational outcomes.

Dom’s message is clear: Treat AI like you’d treat any enterprise system. It needs structure, not just strategy.

For more on this topic, check out AI can’t fix what your data is breaking.

A 6-Part Framework for Enterprise AI Orchestration

To move AI from sandbox experiments to revenue-impacting workflows, Openprise developed a six-part orchestration framework. Internally they call it the “beach ball,” but functionally it’s a practical operating model for RevOps teams.

Check out the full white paper on how AI orchestration makes AI ready for reliable RevOps deployment.

1. Context Orchestration (Clean Your Data First)

AI is the ultimate “garbage in, garbage out” system. Without clean, enriched, and structured data, even the best prompts and models will return low-quality results.

Examples of failure include:

  • Sending duplicate records through enrichment tools, driving up costs.
  • Feeding misspelled job titles or industry tags into generative agents, causing hallucinations in outreach messages.

Your orchestration layer should support:

  • Deduplication and normalization
  • Enrichment and segmentation
  • Field-level cleaning and formatting

“We’ve seen AI generate entire emails using completely incorrect titles — promoting someone to CEO. That’s not just embarrassing. It kills trust.” – Dom Freschi, Jr.
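The cleaning steps above can be sketched in a few lines. This is a minimal, illustrative example of deduplication and normalization before records reach any enrichment tool or LLM; the field names (`email`, `title`) are assumptions, not an Openprise schema.

```python
def normalize(record):
    """Lowercase and trim emails; collapse stray whitespace in titles."""
    return {
        "email": record["email"].strip().lower(),
        "title": " ".join(record["title"].split()),
    }

def dedupe(records):
    """Keep the first record seen for each normalized email,
    so duplicates never get sent through paid enrichment."""
    seen = {}
    for rec in map(normalize, records):
        seen.setdefault(rec["email"], rec)
    return list(seen.values())

contacts = [
    {"email": "Ana@acme.com ", "title": "VP  of Sales"},
    {"email": "ana@acme.com", "title": "VP of Sales"},
]
clean = dedupe(contacts)
```

Real pipelines would add fuzzy matching and enrichment, but even this much prevents the duplicate-record cost problem described above.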

For more on this topic, also check out 8 Steps to Creating a Data Utopia.

2. Prompt Management (Treat Prompts Like Code)

Prompts are the instruction set that guides the model’s behavior. But unlike code, they’re often untracked, inconsistent and injected haphazardly into tools.

Dom’s best practices:

  • Version your prompts like any system config.
  • Maintain a prompt library for consistent reuse.
  • Monitor for prompt injection attacks (especially in user-submitted fields).

Dom notes that operators need to understand that every prompt is made up of two components:

  • Data context: The records, metadata or examples included in the input.
  • Instruction layer: The actual language that guides the LLM’s behavior.

Managing both is essential to reduce hallucinations and ensure repeatability.
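One way to apply "treat prompts like code" is a versioned template that keeps the instruction layer separate from the data context. This is a hedged sketch, not an Openprise feature; the template name, version, and fields are invented for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptTemplate:
    name: str
    version: str       # versioned like any system config
    instructions: str  # the instruction layer

    def render(self, context: dict) -> str:
        # Data context is injected explicitly, never concatenated ad hoc,
        # which makes outputs repeatable and injection easier to audit.
        return self.instructions.format(**context)

# A minimal prompt library keyed by (name, version) for consistent reuse.
LIBRARY = {
    ("summarize_call", "1.2.0"): PromptTemplate(
        name="summarize_call",
        version="1.2.0",
        instructions="Summarize this call for {role}:\n{transcript}",
    ),
}

prompt = LIBRARY[("summarize_call", "1.2.0")].render(
    {"role": "an AE", "transcript": "Customer asked about pricing."}
)
```

Because templates are frozen and versioned, a change to the instruction layer becomes a reviewable diff rather than a silent edit inside a tool.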

3. Model Orchestration (Choose the Right Brain for the Job)

No single LLM outperforms the rest in every scenario.

OpenAI’s GPT may be best for creativity, Claude excels at long-context summarization, and Google Gemini might win on multilingual tasks.

Key orchestration needs include:

  • Routing logic to assign prompts to the right model
  • Token cost forecasting and usage management
  • Benchmarking outputs across providers
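A routing layer can start as simply as a lookup table mapping task types to a preferred model, with a fallback default. The model names below are assumptions for illustration; the actual call to a provider SDK is out of scope here.

```python
# Illustrative routing table based on the strengths described above.
ROUTES = {
    "creative_copy": "gpt-4o",
    "long_summary": "claude-3-5-sonnet",
    "multilingual": "gemini-1.5-pro",
}

def route(task_type: str, default: str = "gpt-4o") -> str:
    """Assign a prompt to a model by task type, falling back to a default."""
    return ROUTES.get(task_type, default)

model = route("long_summary")
```

In production this table would be driven by the benchmarking and cost data mentioned above, not hard-coded preferences.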

4. Hallucination Management (Validate Everything)

LLMs are designed to sound human — not to be accurate.

The result? High-confidence wrong answers. And in enterprise systems, that’s dangerous.

Dom shares tactics to mitigate risk:

  • Use dual-model validation — send the same prompt to two different models and compare responses.
  • Embed “fact checks” into your process (e.g., if you’re segmenting 200 records, verify that all 200 are accounted for in the output).
  • Routinely test AI-generated URLs, statistics, or job roles for validity.

“We’ve seen AI cite research papers that don’t exist — and then other AI tools cite those fake papers. Hallucinations spread unless you put controls in place.” – Dom Freschi, Jr.
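Two of those tactics, the record-count "fact check" and dual-model agreement, can be expressed as small gates in a workflow. This is a deliberately naive sketch; the model answers here stand in for real API responses, and a production agreement check would compare semantically, not by exact string.

```python
def all_records_accounted_for(input_ids, output_ids) -> bool:
    """Verify a segmentation step neither dropped nor invented records
    (e.g., if 200 records went in, 200 must come out)."""
    return set(input_ids) == set(output_ids)

def models_agree(answer_a: str, answer_b: str) -> bool:
    """Dual-model validation gate: only accept when two independent
    models return the same answer. Exact match is a simplification."""
    return answer_a.strip().lower() == answer_b.strip().lower()

count_ok = all_records_accounted_for([1, 2, 3], [3, 2, 1])
dropped = all_records_accounted_for([1, 2, 3], [1, 2])
agree = models_agree("Enterprise segment", " enterprise segment ")
```

Checks like these will not catch every hallucination, but they catch the cheap, common failures before they reach a customer-facing system.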

5. Action Routing (Integrate with Real Systems)

Once your AI delivers a clean output — now what?

AI doesn’t magically write to Salesforce, Outreach, or Marketo. You need a translation layer to route outputs into structured fields or trigger downstream workflows.

This is where orchestration platforms shine:

  • Mapping natural language outputs to field-based schemas
  • Integrating across CRMs, MAPs, data warehouses and sales engagement tools
  • Supporting security and access controls (especially important in regulated industries)

Think of this as the plumbing layer that turns insights into action.
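A minimal version of that translation layer parses a structured model output and maps it onto CRM field names before any sync. The JSON shape and the Salesforce-style field names below are assumptions for illustration only.

```python
import json

# Hypothetical mapping from model output keys to CRM custom fields.
FIELD_MAP = {"summary": "Call_Summary__c", "next_step": "Next_Step__c"}

def to_crm_payload(llm_output: str) -> dict:
    """Translate natural-language model output (as JSON) into a
    field-based schema; malformed output raises rather than writing
    garbage into the CRM."""
    data = json.loads(llm_output)
    return {FIELD_MAP[k]: v for k, v in data.items() if k in FIELD_MAP}

payload = to_crm_payload(
    '{"summary": "Asked about pricing", "next_step": "Send quote"}'
)
```

Keys outside the map are dropped on purpose: only fields you have explicitly governed ever reach the downstream system.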

6. Performance + Cost Monitoring (Make the ROI Clear)

AI costs are notoriously hard to predict. Most operators don’t know what a “token” is — let alone how many tokens their prompts will consume.

Dom recommends:

  • Using structured prompts to reduce variability and forecast token usage
  • Tracking prompt frequency and output accuracy to define cost-per-insight
  • Including token spend in your ROI calculations for AI-powered workflows

“It’s not just about if the AI works. It’s about whether it’s worth it — and that means measuring cost and output with the same rigor as any other GTM investment.” – Dom Freschi, Jr.
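A back-of-the-envelope token forecast makes that rigor concrete. The 4-characters-per-token heuristic and the per-token price below are rough assumptions; check your provider's tokenizer and pricing before relying on the numbers.

```python
def estimate_tokens(text: str) -> int:
    """Rough rule of thumb: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def forecast_daily_cost(prompt: str, runs_per_day: int,
                        usd_per_1k_tokens: float = 0.01) -> float:
    """Forecast daily spend for a structured prompt run at volume.
    Structured prompts keep length stable, so the estimate holds."""
    daily_tokens = estimate_tokens(prompt) * runs_per_day
    return daily_tokens / 1000 * usd_per_1k_tokens

# 400-character prompt, 1,000 runs/day at an assumed $0.01 per 1K tokens.
cost = forecast_daily_cost("x" * 400, runs_per_day=1000)
```

Folding a number like this into cost-per-insight is what turns "the AI works" into "the AI is worth it."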

The Case for Thoughtful Acceleration

Throughout the session, Dom returns to a central theme: thoughtful acceleration.

That means:

  • Start with a specific business problem.
  • Build the infrastructure to support automation and validation.
  • Use AI selectively, where it adds clear value — not just because it’s trendy.

And perhaps most importantly: Know when AI isn’t the answer.

“Not everything needs AI. Sometimes what you need is a deterministic workflow with no hallucination risk. Know the difference.” – Dom Freschi, Jr.

For more on this topic, also check out a prior digital event Taking the Sexy out of AI: AI for Ops.

Real-World Examples: From Outreach to Enrichment

Dom and Matthew walk through a common GTM use case: automating outreach. What sounds simple — generate emails with AI — is actually a multi-system problem:

  • Clean data: Ensuring contacts are properly segmented and enriched before outreach
  • Prompting: Tailoring messages based on role, industry, and recent engagement
  • Model selection: Choosing the right LLM for tone, brevity, or multilingual content
  • Validation: Catching errors in name, title, or personalization before sending
  • Action: Syncing the email into a system like Outreach or Salesloft for delivery
  • Measurement: Tracking reply rate, open rate, and cost-per-message for ROI analysis

AI without orchestration would fail at nearly every step. But with the framework in place, operators can generate, validate, and route messages with confidence.
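The outreach flow above can be sketched as one pipeline of small, checkable stages. Every function here is a stub standing in for a real system (an LLM call, a validation gate, an Outreach/Salesloft sync); the names are illustrative.

```python
def outreach_pipeline(contact, generate, validate, sync):
    """Run generate -> validate -> sync for one clean contact record,
    and refuse to sync anything that fails validation."""
    draft = generate(contact)
    if not validate(contact, draft):
        return None  # never send an unvalidated message
    return sync(contact, draft)

sent = outreach_pipeline(
    {"name": "Ana", "title": "VP of Sales"},
    generate=lambda c: f"Hi {c['name']}, congrats on the {c['title']} role!",
    validate=lambda c, d: c["name"] in d and c["title"] in d,
    sync=lambda c, d: {"to": c["name"], "body": d},
)
```

The point is structural: each orchestration concern is its own stage, so a failure in validation stops the flow instead of reaching a prospect's inbox.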

Final Thoughts: AI Is a System, Not a Sidecar

Too many companies treat AI as an add-on.

But what Dom makes clear is that AI is a system — and like any system, it requires architecture, governance, and operational oversight.

If you’re serious about using AI to power GTM, don’t start with tools. Start with a framework.

Get the Tools to Do It Right

✅ Learn more about Openprise’s orchestration platform

✅ Explore more frameworks on the RevOps Co-op blog

Join the RevOps Co-op community and connect with 18,000+ operators deploying enterprise AI the right way.

Don’t let hallucinations, hidden costs, or broken data pipelines derail your AI dreams. With the right orchestration, you can finally deliver the value your executives are asking for — and your GTM systems are begging for.
