Mar 5, 2026

How to Run an AI Search Visibility Audit in Under an Hour

Robin Pautigny

Co-founder, Refine

Summary

Most teams still have no idea whether they appear when buyers ask AI assistants for recommendations. This audit gives you a repeatable process: pick the prompts that matter, run them across ChatGPT, Perplexity, and Claude, record who gets cited and how your brand is described, then prioritize fixes. You can do the first pass in under an hour and repeat it monthly to track progress.

When to use this

Use this audit before you invest in new content or GEO tooling. It tells you whether the problem is absence (you never show up), weak positioning (you're mentioned but not recommended), or negative framing (you're described in a way that hurts conversion).

What you're auditing (and why)

An AI search visibility audit answers three questions: Are we cited for the prompts our buyers actually use? How are we described when we appear? Who gets the recommendation when we don't? Unlike a traditional SEO audit, you're not crawling your site for technical issues first — you're simulating how prospects research in AI tools.

Keep a simple spreadsheet: one row per prompt, columns for each assistant (ChatGPT, Perplexity, Claude at minimum), whether you're mentioned, position in the answer (first brand, listed, absent), short note on sentiment, and links or source types cited.
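If you'd rather keep the sheet in code, the same structure can be sketched as a CSV written from Python. The column names and example rows here are illustrative, not a fixed schema — adapt them to the assistants you actually test.

```python
import csv

# One row per (prompt, assistant) result; columns mirror the spreadsheet
# described above. Names and values are illustrative.
FIELDS = ["prompt", "assistant", "mentioned", "position", "sentiment", "sources"]

rows = [
    {"prompt": "best CRM for small agencies", "assistant": "ChatGPT",
     "mentioned": True, "position": "listed", "sentiment": "neutral",
     "sources": "g2.com; competitor-blog.example"},
    {"prompt": "best CRM for small agencies", "assistant": "Perplexity",
     "mentioned": False, "position": "absent", "sentiment": "",
     "sources": "wikipedia.org"},
]

with open("visibility_audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```

One row per prompt-assistant pair (rather than one row per prompt with many columns) makes month-over-month comparisons easier to filter and chart.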

Step 1: List your priority prompts

Aim for 10–20 prompts to start. Pull them from how people really talk: sales discovery calls, support tickets, Reddit or LinkedIn threads in your space, and "best X for Y" style questions. Don't limit yourself to branded queries — include category and comparison prompts (e.g. "best CRM for small agencies", "Tool A vs Tool B for startups").

  • High-intent: buyer is close to a decision ("best", "compare", "alternatives to").
  • Problem-led: buyer describes a pain ("how to automate invoicing for freelancers").
  • Category: buyer names the space without your brand ("top project management tools for remote teams").

Step 2: Run them in the main assistants

Use the same wording a human would type — no keyword stuffing. Run each prompt in ChatGPT (with browsing if available), Perplexity, and Claude. Use a fresh session or incognito where possible to reduce personalization bias, and note the date so you can compare next month.

Screenshot or copy the full answer. Capture which domains or pages are linked or named as sources. That tells you where models are pulling from and which competitors or publishers own the narrative for that prompt.

Step 3: Score visibility and sentiment

For each cell (prompt × assistant), use a simple score: 0 = not mentioned, 1 = mentioned but not recommended, 2 = recommended among options, 3 = primary or top recommendation. For sentiment, tag as positive, neutral, or negative based on how you're described (accurate strengths vs. vague vs. risks or caveats).
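The 0–3 scale above is easy to aggregate once it's in code. Here's a minimal sketch of scoring one prompt across assistants; the label names and the choice to average are assumptions, not part of the audit method itself.

```python
# Visibility scale from the audit: 0 = not mentioned, 1 = mentioned but not
# recommended, 2 = recommended among options, 3 = primary recommendation.
VISIBILITY = {"absent": 0, "mentioned": 1, "recommended": 2, "primary": 3}

def score_prompt(results):
    """Average visibility for one prompt across assistants.

    `results` maps an assistant name to one of the VISIBILITY labels.
    Returns 0.0 for an empty result set.
    """
    if not results:
        return 0.0
    return sum(VISIBILITY[label] for label in results.values()) / len(results)

# Example cell scores: mentioned in ChatGPT, absent in the other two.
score = score_prompt({"ChatGPT": "mentioned",
                      "Perplexity": "absent",
                      "Claude": "absent"})
```

An average works for a quick dashboard number; keep the per-assistant scores too, since a 1.0 average can mean "mediocre everywhere" or "strong in one assistant, invisible in the rest".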

Red flag

If competitors with weaker products rank higher in AI answers, the issue is often distribution and structure of content — not product quality. Fix the sources models cite, not just your homepage copy.

Step 4: Map who gets cited instead of you

When you're absent, list the brands and URLs that appear. Patterns emerge quickly: comparison roundups, G2-style reviews, Wikipedia, niche blogs, or a competitor's docs. That list is your backlog — you need presence in the same types of sources, or clearer on-site content that those sources can mirror.

  • If roundups dominate: pitch or earn placement in the same publications.
  • If reviews dominate: strengthen third-party profiles and gather structured feedback.
  • If one competitor dominates: study their cited pages and what format they use (tables, FAQs, clear positioning).

Step 5: Turn findings into a 30-day plan

Prioritize prompts where you scored 0–1 but revenue impact is high. For each, assign one content action (new comparison page, updated FAQ, guest post) and one distribution action (directory, PR, partnership). Re-run the same prompts in 30 days and update your scores. Consistency beats one-off campaigns.
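The prioritization rule above ("scored 0–1, high revenue impact") translates directly into a sort. A sketch, assuming a hypothetical 1–5 impact rating you assign yourself:

```python
# Each entry: (prompt, visibility score 0-3, estimated revenue impact 1-5).
# The impact ratings are hypothetical inputs, not derived from the audit.
audit = [
    ("best CRM for small agencies", 0, 5),
    ("Tool A vs Tool B for startups", 1, 4),
    ("top project management tools for remote teams", 2, 3),
]

# Backlog: keep prompts scored 0-1, lowest visibility first,
# then highest revenue impact within the same score.
backlog = sorted(
    (row for row in audit if row[1] <= 1),
    key=lambda row: (row[1], -row[2]),
)
```

The top of `backlog` is where the first content and distribution actions go; prompts already scoring 2–3 are maintenance, not backlog.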

When you're ready to scale, automate prompt monitoring and historical tracking so you're not manually re-running dozens of queries every week. That's where a dedicated visibility product saves time — but the one-hour audit is enough to prove the gap and align your team.

The bottom line

You don't need weeks of research to start GEO. List your prompts, run them in the assistants you care about, score visibility and sentiment, and map the sources that win when you don't. Repeat monthly. The brands that treat AI answers as a measurable channel — not a black box — pull ahead fast.