Technical SEO · April 21, 2026 · 12 min read

SEO for ChatGPT and Perplexity: What Works in 2026

SEO for ChatGPT and Perplexity in 2026: how each LLM cites sources, 7 operational tweaks, differences with AI Overviews, and how to track real mentions.


Jose Redondo Delgado

Founder & Director, Ad2Place Digital

SEO for ChatGPT and Perplexity: what works in 2026, how they cite sources and 7 operational tweaks

This is the second satellite article in the GEO cluster. The first explains how to appear in Google AI Overviews. The pillar, GEO vs SEO: what changes in 2026, defends the thesis that GEO is 80% SEO done well. Here I cover the remaining 20% specific to the two LLMs that drive the most citation traffic: ChatGPT and Perplexity.

How ChatGPT and Perplexity work when citing sources

Before talking about SEO for ChatGPT and Perplexity specifically, you need to understand how they decide who to cite. They don’t work like Google.

ChatGPT cites sources when it activates its “browse the web” mode (with SearchGPT integrated). SearchGPT relies mostly on the Bing index, although OpenAI has been adding sources of its own. If you ask ChatGPT something and it responds without citations, it is pulling from its training data — your page plays no role there. If it responds with citations, it is doing RAG (Retrieval-Augmented Generation): it searches the web in real time, selects 3-8 sources and uses them to compose the answer.

Perplexity always cites. Citation is its product model. It has its own crawler (PerplexityBot) and combines multiple sources: Bing, Google, its own index and academic sources. Every answer comes with numbered citations and links. That is why Perplexity is, by far, the LLM that sends the most referral traffic to external sites.

Differences with Google AI Overviews:

  • Google Overviews pull from the Google index. ChatGPT and Perplexity pull mostly from Bing plus their own crawlers.
  • Overviews show 3-8 sources as sidebar links. ChatGPT and Perplexity integrate citations as numbered footnotes within the answer.
  • Overviews are more conservative on YMYL (health, finance). ChatGPT and Perplexity cite more broadly.
  • Overviews depend on your Google ranking. ChatGPT and Perplexity depend on your Bing ranking + visible topical authority.

What they have in common: all four (Google, ChatGPT, Perplexity, Gemini) reward extractable structure, verifiable author, deep topical coverage and freshness. Optimizing well for one partially optimizes for all.

What each one prioritizes (practical matrix)

I’ve spent months auditing clients, cross-checking the same queries across the four main LLMs. This is the pattern:

ChatGPT (SearchGPT) prioritizes:

  • Bing ranking (not Google).
  • Overall domain authority (backlinks still weigh more than in Overviews).
  • Content with direct, structured answers.
  • Sources with identified authors.

Perplexity prioritizes:

  • Source diversity (often cites 5-10 per answer).
  • Freshness (visible dates move positions).
  • Technical articles with citable data (stats, figures).
  • Sites that don’t block PerplexityBot.

Gemini (Google) prioritizes:

  • Google ranking (90% of sources are top-10).
  • Parseable FAQPage and HowTo schema.
  • Strict E-E-A-T on YMYL.
  • Clean extractive snippets.

Claude (when using web search) prioritizes:

  • Long, well-structured content (tends to cite in-depth sources).
  • Domains with academic or journalistic reputation.
  • Data with linked sources.

If your goal is maximizing mentions across all, 80% of the work is the same: SEO done well. The specific 20% is below.

[Figure: 2026 comparison matrix — what each LLM (ChatGPT, Perplexity, Gemini, Claude) prioritizes when citing sources, measured by base index, backlink weight, extractable structure, E-E-A-T, freshness and sourced data.]

The 7 tweaks that move the needle

These are the concrete tweaks for ChatGPT and Perplexity SEO that I apply with clients when the explicit goal is increasing mentions in these LLMs. Nothing revolutionary, but different from classic 2020-era SEO.

1. Audit your robots.txt and unblock the relevant bots

First step, obvious but often ignored. If you block GPTBot, PerplexityBot, CCBot or ClaudeBot in robots.txt, you simply don’t exist for them. OpenAI also documents OAI-SearchBot as the crawler behind ChatGPT search citations, so allow it alongside GPTBot. Check your robots.txt:

User-agent: GPTBot
Allow: /

User-agent: OAI-SearchBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: CCBot
Allow: /

User-agent: ClaudeBot
Allow: /

Some cases warrant blocking them (premium-content sites that don’t want to be “digested” without clicks). If that’s not you, open up.
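To audit this programmatically, here is a minimal sketch using Python’s standard-library robotparser. The bot list is the one discussed here plus OAI-SearchBot, which OpenAI documents as the crawler behind ChatGPT search citations; adjust it to your own needs:

```python
from urllib.robotparser import RobotFileParser

# AI crawlers worth checking; not an exhaustive list
AI_BOTS = ["GPTBot", "OAI-SearchBot", "PerplexityBot", "CCBot", "ClaudeBot"]

def blocked_bots(robots_txt: str, path: str = "/") -> list[str]:
    """Return the AI bots that the given robots.txt blocks for `path`."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not parser.can_fetch(bot, path)]

# Example: a robots.txt that blocks GPTBot but allows everyone else
sample = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""
print(blocked_bots(sample))  # ['GPTBot']
```

Paste your live robots.txt into `sample` (or fetch it with any HTTP client) and any bot that shows up in the output is invisible to that LLM.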

2. Rank on Bing too, not just Google

Bing Webmaster Tools is still free and carries real weight in ChatGPT. Submit your sitemap and monitor impressions. If you rank well in Google, Bing usually follows 4-8 weeks later, but not always. Checking proactively can unlock ChatGPT citations without touching anything else.

3. Rewrite the opening of every key article with a direct answer

Same rule we saw for AI Overviews, but stricter: the first two sentences must answer the question. Perplexity and ChatGPT extract disproportionately from the article opening. An article starting with 3 paragraphs of historical intro loses to one that gets to the point.

4. Back up concrete data with linked sources

LLMs disproportionately cite pages that back their data with sources. If you claim that “70% of companies fail at this”, link to the study. If you give a date or a number, link it. It’s a rigor signal that trust filters weigh heavily. And you earn natural link building as a bonus.

5. FAQPage and Article schema with a Person author

FAQPage schema with 6-10 questions per article. Article schema with the author property pointing to a complete Person schema (name, url, sameAs, image, jobTitle). LLMs parse these schemas to identify verifiable authors — one of their main trust signals.
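As a sketch of point 5, an Article schema nesting a complete Person author might look like the JSON-LD below. All URLs and the sameAs profile are placeholders, not real addresses from this site:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "SEO for ChatGPT and Perplexity: What Works in 2026",
  "datePublished": "2026-04-21",
  "dateModified": "2026-04-21",
  "author": {
    "@type": "Person",
    "name": "Jose Redondo Delgado",
    "jobTitle": "Founder & Director, Ad2Place Digital",
    "url": "https://example.com/about/jose-redondo-delgado",
    "sameAs": ["https://www.linkedin.com/in/placeholder"],
    "image": "https://example.com/img/jose.jpg"
  }
}
```

Embed it in a script tag with type application/ld+json, and keep the Person data consistent with the visible byline on the page.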

6. Optimize for extractive citations with lists and tables

Perplexity especially: when your article has numbered lists or tables with comparable data, it extracts them literally. A page with 10 lists and tables gets cited much more than one with only paragraphs, even if both are the same length.

7. Maintain active freshness on core articles

Update dates, add sections when you detect new queries, and explicitly mark “updated YYYY-MM”. Perplexity weighs freshness signals aggressively. An unmaintained 2023 article loses to a 2025 one, even when the newer article’s content is weaker.

How to monitor if they cite you

Three levels, free to paid:

Weekly manual spot-checks. Open ChatGPT with web search enabled and Perplexity, run the top 10-15 queries for your business, and note whether you appear and which fragment is cited. Time: 30 min/week. Cost: zero.

Monitoring tools. Peec AI, Profound, Otterly, Rankscale, BrandGPT. Between 50 and 300 USD/month. They automatically detect brand mentions in LLMs and compare against competitors. Useful once you have established traffic and want to defend it.

Referral traffic analysis. In GA4, filter by session source equal to chatgpt.com, perplexity.ai or openai.com. If you see traffic from those sources, they’re citing you and people are clicking. This traffic tends to convert better than Google organic: lower volume but highly qualified users.
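As a minimal sketch: if you pull hits with their referrer out of GA4 (for example via the BigQuery export — an assumption about your setup), you can bucket LLM referrals with a few lines of Python. The hostname list mirrors the sources named above:

```python
from urllib.parse import urlparse

# Referrer hosts named above as LLM citation sources
LLM_SOURCES = {
    "chatgpt.com": "ChatGPT",
    "openai.com": "ChatGPT",
    "perplexity.ai": "Perplexity",
}

def classify_referrer(referrer_url: str) -> str:
    """Map a referrer URL to an LLM channel, or 'other'."""
    host = urlparse(referrer_url).hostname or ""
    host = host.removeprefix("www.")
    return LLM_SOURCES.get(host, "other")

hits = [
    "https://www.perplexity.ai/search?q=geo-vs-seo",
    "https://chatgpt.com/",
    "https://www.google.com/",
]
print([classify_referrer(h) for h in hits])  # ['Perplexity', 'ChatGPT', 'other']
```

Aggregating the buckets per week gives you a trend line for LLM referrals next to your organic baseline.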

How much traffic to realistically expect

Getting cited is one thing; getting clicks is quite another. Data I’ve collected from around ten Ad2Place clients over the past year suggests this approximate pattern:

Perplexity drives the most referral traffic. For every 100 appearances as a cited source, between 8 and 20 users land on your site. The range varies a lot by query type: on specific technical questions the CTR climbs, on definition questions it drops. The traffic is highly qualified: users arrive already knowing what they want and convert at a rate higher than Google organic.

ChatGPT sends fewer clicks per citation, between 2 and 8 per 100 appearances. Reason: when ChatGPT answers, it drafts longer responses that resolve the question without needing a click. However, when users do click, it’s usually to deepen or verify — high intent.

Gemini (AI Overviews) sends clicks in the 3-10 per 100 citations range. Behavior similar to ChatGPT but with slightly higher absolute volume because Google still concentrates most searches.

Claude sends less traffic because web-search usage is niche, but the traffic it does send tends to be technical and professional users with high conversion value.
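The per-100-appearance ranges above translate into a quick back-of-envelope estimate. A Python sketch, using this article’s observed ranges (not universal benchmarks):

```python
# Clicks per 100 cited appearances, per the ranges reported above
CTR_RANGES = {
    "Perplexity": (8, 20),
    "ChatGPT": (2, 8),
    "Gemini": (3, 10),
}

def expected_clicks(llm: str, appearances: int) -> tuple[float, float]:
    """Low/high click estimate for a number of cited appearances."""
    low, high = CTR_RANGES[llm]
    return (appearances * low / 100, appearances * high / 100)

# 500 cited appearances in Perplexity over a month
print(expected_clicks("Perplexity", 500))  # (40.0, 100.0)
```

Swap in your own measured CTRs once you have a few months of referral data; the ranges vary a lot by query type.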

Key takeaway: don’t optimize for ChatGPT and Perplexity expecting to replace Google traffic. Optimize to add a complementary channel with better-qualified users. The clicks you add will typically total between 5% and 20% of what Google organic delivers, depending on the niche, but with higher conversion.

Diagnosis if ChatGPT or Perplexity never cite you

Five common causes, by probability:

Your site blocks the bots. Check robots.txt now. Cause number one.

You don’t rank on Bing. Especially relevant for ChatGPT. An SEO who only looks at Google is blind to roughly 30% of the LLM citation market.

Your content isn’t structured for extraction. Very long paragraphs, no H3, no lists, no FAQ. LLMs need isolable fragments.

Anonymous author. No Person schema, no real bio, no verifiable sameAs. On YMYL this almost always disqualifies.

Young domain or low topical authority. LLMs reward consolidated topical authority even more than Google does. If you’ve only been publishing for 3 months, it’s too early. Think in terms of 6-12 months of patience.

If you’ve been optimizing for 6 months and none cite you, your base SEO probably needs work — not a specific “GEO” tweak. Back to the cluster pillar.

Why this doesn’t require hiring a “ChatGPT SEO specialist”

Everything above is an adjustment within the classic SEO frame: robots audit, Bing positioning, content structure, schema, a real author, freshness. Any serious SEO who is current in 2026 does this without charging extra. Be skeptical of anyone selling a “ChatGPT SEO Specialist certification” as a separate service; I explained this at greater length in the GEO vs SEO pillar.

If you want us to review your site together and tell you which of these 7 tweaks you’re missing so ChatGPT and Perplexity cite you more, book a free SEO consultation. In 30 minutes you leave with a concrete plan.

Want us to apply these strategies to your business?

Request a free consultation and we'll show you how to improve your digital presence with measurable results.