Guide · 9 min read

How to tell if a marketing agency actually uses AI (or just claims to)

The seven questions to ask on the discovery call that separate real AI-native agencies from rebranded retainers. Includes the live-demo test most agencies cannot pass.

Every agency claims to use AI now. The category is in the ChatGPT-logos-on-the-deck phase. By 2027 the marketing language will calm down. Right now you can't take any agency's word for it.

The seven questions below are the ones that separate the agencies that genuinely re-architected around AI from the ones that bolted ChatGPT onto their 2019 service deck. None of them are technical. All of them are hard to answer dishonestly. Most agencies fail at least three.

Question 1: Can you demo the AI you're using, live, on this call?

This is the highest-signal question of the seven. A real AI marketing agency has a working voice agent calling leads, a working qualification automation, a working attribution dashboard you can poke at. They can screen-share and show it.

A hybrid-trap agency has screenshots in a deck and a future-tense roadmap. They'll deflect — “happy to set up a follow-up where we walk through it.” That follow-up never lands a real demo because there's nothing to demo.

What good looks like: agency opens a screen-share, submits a test lead through their own form, you watch a sub-60-second response trigger an AI agent that qualifies and books a meeting on a live calendar. Whole thing takes 90 seconds.
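The flow described above can be sketched in a few lines. This is a minimal illustrative model of a submit-qualify-book pipeline, not any agency's actual implementation; the field names, thresholds, and stubbed booking step are all hypothetical, and a real build would call an LLM and a calendar API where this sketch uses plain rules.

```python
# Hypothetical sketch of the 90-second demo flow: form submission in,
# qualification decision and next action out. All criteria are illustrative.
import time

def qualify_lead(lead: dict) -> dict:
    """Score an inbound form submission against simple fit criteria."""
    score = 0
    if lead.get("budget", 0) >= 2000:     # monthly budget floor (example value)
        score += 1
    if lead.get("company_size", 0) >= 5:  # team size floor (example value)
        score += 1
    if lead.get("timeline") == "now":     # buying intent signal
        score += 1
    return {"lead": lead.get("email"), "score": score, "qualified": score >= 2}

def handle_form_submission(lead: dict) -> dict:
    """Submit -> qualify -> book or nurture, with response time recorded."""
    started = time.monotonic()
    result = qualify_lead(lead)
    # In a real build: if qualified, an AI agent sends a booking link inside
    # the sub-60-second window; otherwise the lead enters a nurture sequence.
    result["action"] = "book_meeting" if result["qualified"] else "nurture"
    result["response_seconds"] = round(time.monotonic() - started, 3)
    return result

print(handle_form_submission(
    {"email": "test@example.com", "budget": 5000, "company_size": 12, "timeline": "now"}
))
```

The point of asking for the live version of this on the call is that the response-time number is measured in front of you, not quoted from a slide.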

Question 2: Is the AI proprietary or are you reselling generic platforms?

There's no shame in using OpenAI, Anthropic, or GoHighLevel as underlying infrastructure — every serious AI marketing agency does. The signal is what the agency has built on top of those platforms.

A proprietary AI agent built for lead qualification, a custom GHL workflow library refined across 40+ engagements, an attribution dashboard the agency built and maintains — these signal investment in the craft. A bare ChatGPT subscription and Zapier zaps signal a reseller who's repackaging tools you could buy yourself for $50/month.

What good looks like: a clear answer about what they've built versus what they license. They should be able to name specific custom builds and explain what those builds replace (manual labor, third-party tools, or both).

Question 3: How does AI fit into your team structure?

The honest answer separates “we use AI to make our existing team faster” from “we restructured the team because AI changed what each role needed to do.” Both are valid; they produce different agencies.

A traditional agency that uses AI for productivity still has the same role taxonomy: account managers, copywriters, designers, strategists. They're each ~30% more efficient. Pricing and headcount stay similar.

An AI-native agency has different roles — the “operator” who runs five engagements solo where five separate FTEs would have been needed; the “build engineer” who does GHL architecture work that didn't exist as a role in 2022; the editor whose job is reviewing AI output rather than producing first drafts. Pricing reflects the lower headcount; output volume per dollar is higher.

What good looks like: a specific answer about how their roles differ from a traditional agency. If the answer is “we're structured the same, we just use AI to be faster,” you're hiring a traditional agency with a paint job.

Question 4: Walk me through your attribution model.

Real AI marketing agencies have an opinionated attribution stack they deploy on every engagement. They can name the platforms (Triple Whale, HubSpot, GHL native, first-party UTM tracking), explain why they triangulate rather than trust any single tool, and show you what the dashboard looks like.

Hybrid-trap agencies still report on platform-level metrics monthly: “Meta says X conversions, Google says Y.” They cannot reconcile the platforms because they don't have a unified attribution layer.

What good looks like: the agency shows you a real dashboard from a real engagement (redacted for privacy) and walks you through how a specific lead flowed from first touch to closed revenue.
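The reason triangulation matters is that ad platforms each claim credit independently, so their totals routinely exceed what first-party tracking actually records. The sketch below illustrates that reconciliation step; the platform numbers, UTM field names, and CRM shape are all invented for illustration, not drawn from any real engagement.

```python
# Illustrative reconciliation of platform-claimed conversions against
# first-party UTM-tagged conversions recorded in a CRM. All data hypothetical.

def reconcile(platform_reported: dict, first_party: list) -> dict:
    """Compare each platform's self-reported conversions with what
    first-party UTM tracking actually observed."""
    counted = {}
    for conv in first_party:
        src = conv.get("utm_source", "direct")
        counted[src] = counted.get(src, 0) + 1
    report = {}
    for src, claimed in platform_reported.items():
        observed = counted.get(src, 0)
        report[src] = {
            "platform_claims": claimed,
            "first_party_observed": observed,
            # Positive gap usually means double-claimed or view-through credit.
            "gap": claimed - observed,
        }
    return report

platforms = {"meta": 40, "google": 35}          # each platform's own dashboard
crm = ([{"utm_source": "meta"}] * 28
       + [{"utm_source": "google"}] * 22
       + [{"utm_source": "direct"}] * 9)        # what UTM tracking recorded

print(reconcile(platforms, crm))
```

An agency with a unified attribution layer can explain every gap like this; one reporting platform numbers verbatim cannot.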

Question 5: What does the engagement leave me with on day 180?

This is the systems-vs-services question. A real AI marketing agency builds infrastructure that survives the engagement: the GoHighLevel sub-account you own, the automation workflows, the attribution dashboard, the documentation, the templates. If you fire them on day 181, you walk away with assets your next operator can run.

A traditional agency leaves you with a relationship — and a Notion doc full of campaign post-mortems. Fire them on day 181 and you start over.

What good looks like: the agency can list, by name, what artifacts you own at engagement end. If they hesitate or give a vague answer, you're renting access to their team rather than building anything.

Question 6: Show me a current client's real numbers (anonymized).

“We helped a client grow” is not a case study. “We took client X from 4 booked calls to 31 booked calls per month over 90 days using this specific approach” is.

Real agencies have specific numbers from specific engagements they can quote. They can show before/after, name the system that produced the result, and walk through what they tested and what failed before what worked.

Hybrid-trap agencies show case studies that are all results, no process. Big numbers, no narrative for how the numbers were produced. Often the numbers are flattering but unverifiable (“100x ROI”) — these are designed to impress, not to inform.

What good looks like: case studies with specific before/after numbers, named systems, honest reporting on what didn't work along the way. Bonus points if the agency can give you a reference call with a current client.

Question 7: How do you handle brand voice quality control on AI-generated content?

Every AI-generated content workflow has the same risk: output that sounds like every other piece of AI-generated content. Bland, generic, factually shaky. The agencies that have figured this out have a documented review process — what AI produces, who reviews it, what the brand-voice guardrails look like, how they catch hallucinations before publication.

Hybrid-trap agencies don't have an answer for this — or they'll claim “our AI is trained on your brand voice” (mostly meaningless; what matters is the editorial process around the AI output).

What good looks like: the agency can describe their content review workflow specifically. AI generates first draft, named human role reviews against named criteria, output gets measured on engagement (not on volume). Hallucinations get caught at a documented checkpoint.
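A documented checkpoint like the one described above can be mechanized in part. The sketch below is deliberately naive — it flags banned stock phrases and routes any sentence containing a number to human fact-check — and every name and rule in it is illustrative, not a real agency's guardrail set.

```python
# Hypothetical review checkpoint for AI-drafted content: brand-voice
# guardrails plus a crude hallucination-risk filter. All rules illustrative.
import re

BANNED_PHRASES = ["game-changer", "in today's fast-paced world"]  # voice guardrails

def review_draft(draft: str) -> dict:
    """Return the flags a human editor must clear before publication."""
    flags = []
    for phrase in BANNED_PHRASES:
        if phrase in draft.lower():
            flags.append(f"brand-voice: remove '{phrase}'")
    # Sentences containing numbers are the highest hallucination risk,
    # so route every one of them to a human fact-check.
    for sentence in re.split(r"(?<=[.!?])\s+", draft):
        if re.search(r"\d", sentence):
            flags.append(f"fact-check: {sentence.strip()}")
    return {"needs_human_review": bool(flags), "flags": flags}

print(review_draft("This game-changer grew leads by 300% in a month."))
```

The mechanical pass is the cheap part; what you're buying from the agency is the named human role that clears each flag against named criteria.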

Red flags (any one is sufficient to walk away)

  • Claims of “100x ROI” or other absurd numbers without context.
  • Cannot demo any working AI on the discovery call.
  • Pricing scales with team-hours rather than systems built.
  • The team that pitches you isn't the team that runs your account.
  • Refuses to name the underlying tools they use.
  • Promises to “own” your channels or platforms — you should always own infrastructure they build for you.
  • Pushes back on transparent pricing or refuses to put numbers in writing pre-call.

Green flags (any combination of these is a strong signal)

  • Live demo of a working AI agent or automation, no follow-up needed.
  • Published pricing on the website that they don't apologize for.
  • Specific case study numbers with named systems and process detail.
  • A clear inventory of what infrastructure you own at engagement end.
  • An opinionated, named attribution stack they can show you in production.
  • Willingness to say “not us” when asked about fit they don't serve well.
  • Documented content review workflow with named human roles.

How AI Marketing Agency answers each

We built the brand against this exact set of questions. The full rationale lives on the about page. The short version: live demo of GoHighLevel automation on every strategy call, custom AI agent builds documented case-by-case, productized retainer pricing with the numbers published openly, infrastructure that the client owns from day one, named systems with documented methodology, and a clear list of buyer profiles we're not the right fit for.

If you ask us these seven questions, you'll get specific answers. If you ask any other agency these questions, you'll learn whether they've actually done the work or whether they're selling the same retainer they sold in 2019 with new marketing language on top.


Want to see the seven answers in person? Book a 30-minute strategy call. We'll walk through them on the call. If we don't pass our own test, we'll say so.


Ready to talk specifics?

A 30-minute strategy call. We'll audit your stack and tell you whether AI Marketing Agency is the right partner.