inspector/v2 · MIT licensed · open source · self-hostable · last audit: just now

// start here

Here's what AI reads
when it visits your site.

The open-source AEO toolkit for teams who ship. Score your site against 36 deterministic checks for ChatGPT, Perplexity, Claude, and Gemini — then generate the nine files that get you cited. Others monitor. AEOrank fixes.

~/your-site · terminal
npx aeorank-cli scan your-site.com
https://your-site.com · ·
100% what visitors see
✓ TITLE Your Site — Do Great Things
✗ LLMS.TXT 404 · not found
✗ AI.TXT 404 · not found
~ SCHEMA.ORG 1/9 types present
✗ GPTBOT blocked by robots
✗ CLAUDEBOT blocked by robots
~ SPEAKABLE 0 marked regions
✓ H1 1 · well-formed
~ AUTHOR no byline schema
✗ LAST-MODIFIED no freshness signal
extracted score // 36 checks
31 /100 F
31% what chatgpt extracts
36 checks · 9 failing · 11 partial · 16 passing
crawled 14 pages · 0.9s
fix all 9 → regenerate files → rescan
criteria checked
36
files generated
9
framework plugins
11
tests passing
288
first scan
<30s
license
MIT
SCAN

// live scan

Watch an actual audit stream past.

Below is a real aeorank-cli run — the same output you get in your terminal, your CI, or your GitHub check. No mocks, no LLM guesswork. Each line is one deterministic criterion with a signed weight delta. Same input, same score, every run.

STREAMING node@v20 · aeorank-cli@0.1.1 · https://your-site.com
title 64 chars · well-formed +2
llms.txt 404 · not found -4
ai.txt 404 · not found -3
~ schema.org 1/9 types present -5
gptbot blocked by robots.txt -6
canonical present · single +1
claudebot disallowed in /robots -4
~ speakable 0 regions marked -3
h1 1 · matches <title> +2
last-modified no freshness signal -3
og:image 1200×630 · present +1
~ author schema.Person missing -2
sitemap.xml 14 urls · fresh +2
rss feed missing -2
viewport responsive · present 0
~ citations 2 internal · weak -2
hreflang not needed 0
perplexitybot blocked by robots.txt -4
headings h1→h2→h3 · ordered +1
~ definitions 0 glossary terms -2
✓ 16 passing  ·  ~ 11 partial  ·  ✗ 9 failing
final: 31/100 · grade F · fix all 9 →
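Each line in the stream is one criterion result with a signed weight delta. As an illustrative sketch — the field names and the base value here are assumptions, not aeorank's published types — the shape and the fold look roughly like this:

```typescript
// Illustrative shape of one scan line — field names are assumptions,
// not aeorank's actual API.
type Status = "pass" | "partial" | "fail";

interface CriterionResult {
  id: string;      // e.g. "llms.txt"
  detail: string;  // e.g. "404 · not found"
  status: Status;
  delta: number;   // signed weight delta, e.g. -4
}

// Folding the deltas is plain arithmetic: same input, same sum, every run.
function applyDeltas(base: number, results: CriterionResult[]): number {
  return results.reduce((sum, r) => sum + r.delta, base);
}

const sample: CriterionResult[] = [
  { id: "title", detail: "64 chars · well-formed", status: "pass", delta: 2 },
  { id: "llms.txt", detail: "404 · not found", status: "fail", delta: -4 },
  { id: "gptbot", detail: "blocked by robots.txt", status: "fail", delta: -6 },
];
```

Because every delta is a fixed number attached to a deterministic check, a re-run can only change the sum if the site itself changed.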
01 · deterministic

No LLM. No drift.

Every check is a pure function of HTML, headers, and files. Tools that prompt an LLM to evaluate your site give a different score every run. AEOrank gives the same score every run — exactly what you want in CI.

assert.equal(scan(A), scan(A))
02 · transparent

Every criterion is readable source.

No proprietary black-box score. Each of the 36 checks is a small, tested function you can read, fork, or contribute to. No vendor can change your score by changing their model.

packages/core/src/checks/*.ts
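In the spirit of what lives in that directory — this is a hypothetical example, not the actual source — a check is a small pure function of the fetched page:

```typescript
// Hypothetical check in the spirit of packages/core/src/checks/*.ts:
// a pure function of the fetched HTML — no network, no LLM, no state.
interface Page {
  html: string;
  headers: Record<string, string>;
}

// Passes when the page has exactly one <h1>.
function checkSingleH1(page: Page): boolean {
  const h1s = page.html.match(/<h1[\s>]/gi) ?? [];
  return h1s.length === 1;
}
```

A function this small is trivial to unit-test, which is how the scorer stays pinned: change the check, and a test fails before your score does.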
03 · actionable

Each failure ships with a fix.

Other tools monitor — they tell you you're invisible and send a dashboard. AEOrank hands you the exact 9 files AI engines want, ready to drop into your site root. One command, not a roadmap.

./dist/llms.txt · ./dist/ai.txt · …
/100

// anatomy of a score

Every point is auditable.

Your AEO score is a weighted sum of 36 deterministic checks across five pillars. Every weight, every check, every line of logic is in the open-source repo. Click a pillar to read the actual subchecks.

POINTS · weighted
GRADE · A+ → F
RUN · deterministic
  1. P1

    Answer readiness

    topical authority · fact density · citation-ready prose · evidence packaging · duplicate detection

    source checks/p1.ts
  2. P2

    Content structure

    Q&A format · answer density · tables & lists · definitions · heading hierarchy

    source checks/p2.ts
  3. P3

    AI discovery

    llms.txt · ai.txt · GPTBot · ClaudeBot · PerplexityBot · licensing · freshness

    source checks/p3.ts
  4. P4

    Trust & authority

    E-E-A-T signals · author schema · meta descriptions · internal linking · answer-first framing

    source checks/p4.ts
  5. P5

    Technical foundation

    schema.org coverage · semantic HTML · speakable markup · image context · extraction friction

    source checks/p5.ts
score = Σ (check · weight)  where  check ∈ {0, 1}  and Σweights = 100
A+
90–100
ai-native
A
80–89
highly cited
B
70–79
consistently visible
C
60–69
sometimes cited
D
50–59
rare citation
F
0–49
invisible
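Under that formula, scoring and grading reduce to a few lines. This is a sketch of the math above, not aeorank's actual implementation:

```typescript
// Sketch of the formula above: each check contributes its full weight
// when it passes, nothing when it fails; weights sum to 100.
interface Check { passed: boolean; weight: number }

function score(checks: Check[]): number {
  return checks.reduce((sum, c) => sum + (c.passed ? c.weight : 0), 0);
}

// Grade bands from the table above.
function grade(s: number): string {
  if (s >= 90) return "A+";
  if (s >= 80) return "A";
  if (s >= 70) return "B";
  if (s >= 60) return "C";
  if (s >= 50) return "D";
  return "F";
}
```

No free parameters, no model in the loop — which is why the same site can never land in two different grade bands on two runs.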
FILES

// 9 generated files

Every failing check ships with a fix.

Every other AEO tool monitors — they hand you a dashboard and a todo list. AEOrank hands you the nine actual files AI engines look for, generated from your site, ready to drop into your repo root. Click a file to read its real output.

your-site.com /llms.txt
md 15 lines

Root-level policy + site map for LLMs. The emerging spec.

# Your Site

> A short manifesto AI engines read first.

## Core pages
- [Docs](/docs.md): developer reference
- [Pricing](/pricing.md): plans and costs
- [Blog](/blog.md): recent writing

## Policies
- crawl: allow
- training: disallow
- citation: preferred
- license: CC-BY-4.0
your-site.com /ai.txt
txt 17 lines

Opt-in/out signals for AI training and retrieval.

# aeorank-cli 0.1.1
User-Agent: GPTBot
Allow: /

User-Agent: ClaudeBot
Allow: /

User-Agent: PerplexityBot
Allow: /

User-Agent: Google-Extended
Allow: /

Crawl-Delay: 1
Content-Usage: ai-training=n; ai-retrieval=y
Contact: hello@your-site.com
your-site.com /schema.json
json 13 lines

JSON-LD structured data for every significant page.

{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "Your Site",
  "applicationCategory": "DeveloperApplication",
  "operatingSystem": "Web",
  "offers": { "@type": "Offer", "price": "0", "priceCurrency": "USD" },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.8",
    "reviewCount": "412"
  }
}
your-site.com /robots.patch
txt 18 lines

Surgical diff against your existing robots.txt — AI bots allowed.

--- robots.txt (current)
+++ robots.txt (aeorank)
@@
 User-agent: *
 Allow: /
+
+User-agent: GPTBot
+Allow: /
+
+User-agent: ClaudeBot
+Allow: /
+
+User-agent: PerplexityBot
+Allow: /
+
+User-agent: Google-Extended
+Allow: /
your-site.com /sitemap.xml
xml 13 lines

Freshness-aware sitemap with lastmod signals.

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://your-site.com/</loc>
    <lastmod>2026-04-19</lastmod>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://your-site.com/docs</loc>
    <lastmod>2026-04-18</lastmod>
    <priority>0.8</priority>
  </url>
</urlset>
your-site.com /answers.json
json 14 lines

Your top 20 Q&A pairs — extractable, machine-citable.

[
  {
    "q": "What does your product do?",
    "a": "One-sentence direct answer.",
    "source": "/about",
    "evidence": ["/about#mission"]
  },
  {
    "q": "How much does it cost?",
    "a": "Starts free; pro at $29/mo.",
    "source": "/pricing",
    "evidence": ["/pricing#plans"]
  }
]
your-site.com /citations.json
json 6 lines

Primary sources, numbered and canonical. Reduces hallucination risk.

{
  "citations": [
    { "n": 1, "claim": "40% of discovery is now AI", "src": "https://example.com/report" },
    { "n": 2, "claim": "AI traffic converts at 15.9%", "src": "https://example.com/conv"  }
  ]
}
your-site.com /humans.txt
txt 10 lines

Author and attribution data, surfaced in citations.

/* TEAM */
  Author: You
  Contact: hello@your-site.com
  From: Earth

/* SITE */
  Standards: HTML5, CSS, JSON-LD
  Components: schema.org
  Built with: AEOrank v0.1.1
your-site.com /feed.xml
xml 9 lines

RSS feed — AI engines still treat RSS as a primary freshness signal.

<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Your Site</title>
    <link>https://your-site.com</link>
    <description>Updates</description>
    <lastBuildDate>Sun, 19 Apr 2026 12:00:00 GMT</lastBuildDate>
  </channel>
</rss>
framework plugins

@aeorank/nextjs, @aeorank/astro, @aeorank/nuxt, and 8 more. Drop one line of config and all 9 files are served at the correct route on build — no vendor dashboard to babysit.
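For Next.js, that one line of config would look roughly like this — `withAeorank` and its options are illustrative, not the plugin's documented API; check the plugin's README for the real shape:

```typescript
// next.config.ts — hypothetical usage of @aeorank/nextjs.
// The withAeorank wrapper is illustrative, not the documented API.
import { withAeorank } from "@aeorank/nextjs";

export default withAeorank({
  /* your existing Next.js config */
});
```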

self-hostable

Every file is generated locally. No upload, no telemetry, no required account, no API key. The entire toolchain is MIT and lives in your repo — if we vanished tomorrow, you'd still be fine.

deterministic

Same HTML in → same files out. 288 tests pin output to byte-level stability, so a diff in your PR check means something actually changed — not just that an LLM was in a different mood.

OSS

// open source by default

Built in public. Owned by you.

“Your AI visibility stack shouldn't live inside a vendor dashboard you can't read, can't fork, and can't afford.”

Every AEO monitoring tool we looked at is closed-source SaaS with a $99–$499/mo floor. AEOrank is different on purpose. The core scanner, 11 framework plugins, the GitHub App, the GitHub Action, and every one of the 9 files we generate — all MIT licensed, all forkable, all self-hostable. You can stand the entire thing up inside your own infra in an afternoon.

The hosted app at app.aeorank.dev exists to make things convenient — never because the tool needs it to work. If we vanished tomorrow, your CI would keep scoring, your files would keep regenerating, and your team would keep shipping. Try getting that commitment from a venture-backed dashboard.

license

MIT

primary lang

TypeScript

signup required

no

vendor lock

zero

Early days. If you star it, you actually move the needle.

// contributors · see all on github →


INSTALL

// three ways in

Pick the surface that fits your team.

Same scoring engine under all three. Same 36 checks. Same 9 generated files. Pick the surface closest to where you already ship — AEOrank is the only AEO tool that treats CI as a first-class target instead of an afterthought.

Scan from your terminal

Zero install. Zero account. Zero API key. Colored output with per-criterion scores and the 9 fix files written locally. No upload, no vendor call.

latency ~30s per scan
# one-off audit on any site
$ npx aeorank-cli scan https://your-site.com

✔ 36 checks in 0.9s
  score: 31/100 · grade F
  failing: 9 · partial: 11 · passing: 16

✔ wrote dist/llms.txt
✔ wrote dist/ai.txt
✔ wrote dist/schema.json
✔ wrote dist/robots.patch
… (+ 5 more)

→ open report.html in your browser

Install, then forget

One click. Zero YAML. Every pull request gets a Check Run with the delta, the failing criteria, and a diff of the regenerated files. Block merges on AEO regressions — the way you already block on lint and tests.

latency ~1 click · ~0 config
# Flow
1.  Install AEOrank on your repo
     github.com/apps/aeorank

2.  Open any PR
     → Check Run appears automatically

3.  Check summary example:
     ━━━━━━━━━━━━━━━━━━━━━━━━
     AEORANK · PR #419
     ━━━━━━━━━━━━━━━━━━━━━━━━
     score 67 → 72 (+5)  grade C → B
     ✔ llms.txt restored
     ✔ speakable markup added
     ✗ author.schema still missing

Full pipeline control

Pin a version, set a fail-below threshold, commit the generated files back to your repo. AEO as code — the same way tests and coverage already work. No monitoring tool on the market does this.

latency ~8 lines of YAML
# .github/workflows/aeorank.yml
name: aeorank
on: [pull_request, push]

jobs:
  score:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: vinpatel/aeorank-action@v1
        with:
          url: https://${{ github.event.pull_request.head.ref }}.preview.example.com
          fail-below: 70
          output: ./dist/aeo
# what this adds to your repo
+ .github/workflows/aeorank.yml
+ dist/aeo/llms.txt
+ dist/aeo/ai.txt
+ dist/aeo/schema.json
+ dist/aeo/robots.patch
+ dist/aeo/report.html
// framework plugins drop-in, MIT, one line of config
next.js
astro
nuxt
sveltekit
remix
eleventy
hugo
gatsby
rails
django
hono
fastify
wordpress
RANK

// AI visibility scoreboard

100 funded startups score 42/100 on average.

We scan the public sites of 100 funded startups every week with the same deterministic CLI you can run yourself. Below is today's top 10. The full board includes 90 more — filterable by industry, funding stage, and pillar. No LLM evals, no inflated numbers.

scoreboard · week 16 · 2026 full board →
#  company  industry  score  grade  7d Δ
01 Anthropic ai lab 87 /100 A +4
02 Vercel infra 81 /100 A +2
03 Linear saas 78 /100 B +6
04 Supabase infra 74 /100 B +1
05 Cursor dev tools 71 /100 B -2
06 Resend infra 68 /100 C +3
07 Stripe payments 57 /100 D
08 Notion saas 54 /100 D -1
09 OpenAI ai lab 42 /100 F -4
10 Fly.io infra 38 /100 F -2
$

// pricing

Pricing, rendered as a report card.

Same scoring engine on every plan. We don't gate the 36 criteria, the 9 generated files, or the CLI. The product itself is MIT — you pay for convenience (history, auto-rescan, APIs), never for the scoring or the fixes. That's the whole reason this costs $29 instead of $299.

starter plan

Starter

B
$0 /mo

Try everything. No credit card.


sites
1
dashboard scans
3/mo
36-criteria scorer
full
9 generated files
all
CLI (local)
unlimited
GitHub App
included
11 framework plugins
included
score history
—
auto-rescan
—
PDF export
—
REST API
—

Good starting posture. Scans are capped, but every generated file is fully usable and the CLI stays unlimited forever. No feature you need lives behind the paywall.

Start free

agency plan

Agency

A+
$99 /mo

Manage 50 client sites at scale.


sites
50
dashboard scans
500/mo
REST API
included
bulk scan endpoints
included
webhooks
included
client-ready PDFs
included
priority support
included
everything in Pro
included

Built for teams delivering AEO as a service. Bulk endpoints, webhooks, client-ready exports — at a price where a single retainer pays the plan ten times over.

Start Agency — $99
// what the others charge · closed SaaS · monitor-only · none generate the 9 files
aeorank pro

$29/mo

score · fix · monitor

Profound

$399/mo

monitor · closed

Scrunch

$250/mo

monitor · closed

Otterly

$189/mo

monitor · closed

Semrush

$129/mo

SEO · not AEO

They're venture-backed dashboards. We're MIT-licensed, self-hostable, and the only one that writes the 9 files your AI crawlers actually want.
FAQ

// what an AI crawler sees vs. what you read

FAQ, rendered twice.

Left: the JSON-LD we emit on this page, verbatim — exactly what an AI crawler extracts. Right: the same answers for human eyes. We practice what we score.
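In outline, the emitted block could look like this — a sketch assuming a standard schema.org FAQPage shape, not this page's verbatim markup; the answer text is taken from the FAQ below:

```typescript
// Sketch of a schema.org FAQPage block — one Q&A pair shown,
// structure per schema.org, not this page's verbatim output.
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "Does AEOrank send my site to an LLM?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "No. The scanner is deterministic and never calls an external model.",
      },
    },
  ],
};

// Embedded in the page head so crawlers can extract it verbatim:
const markup =
  `<script type="application/ld+json">${JSON.stringify(faqJsonLd)}</script>`;
```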

01 Will this actually help me rank in ChatGPT answers?

No single file is a silver bullet on its own. AEOrank scores 36 things at once — technical signals like llms.txt, content signals like answer density, trust signals like schema.Author, and access signals like crawler permissions. Fix the 9 failing checks and a typical site goes from 31 → 70+ on the next scan. Citations in Perplexity and ChatGPT tend to follow within 2–4 weeks.

02 What does deterministic scoring actually mean?

Every check is a pure function of HTML, headers, and file presence — no LLM calls, no randomness. Scan the same site twice, get the same score. That's how CI works. Tools that prompt GPT-4 to rate your site give you a different number on Tuesday than they did on Monday; you cannot fail a PR on that.

03 Is it really MIT?

Yes, and this is unusual in the category. Profound, Scrunch, Otterly, Peec, and AthenaHQ are all closed-source SaaS — the scoring logic is a black box on someone else's server. AEOrank's scanner core, all 11 framework plugins, the GitHub App, the GitHub Action, and the file generators are under MIT. You can fork it, audit the checks, and self-host the entire stack.

04 How is this different from Ahrefs, Semrush, or their AI add-ons?

Classical SEO tools check ranking signals for Google's blue-link results. AEOrank checks answer-extraction signals for AI engines: llms.txt, ai.txt, schema.org coverage, speakable markup, AI crawler access (GPTBot, ClaudeBot, PerplexityBot), content structure for extraction, and 30 more criteria SEO tools don't touch. Then it generates the 9 actual files. Semrush's AI add-on and Ahrefs' Brand Radar report on visibility; neither writes a single file you can ship.

05 What's the difference vs Profound, Scrunch, Otterly, Peec, or AthenaHQ?

They monitor: they run prompts through LLMs and tell you what AI said about your brand this week. AEOrank fixes: it scores whether AI can read your site at all, then writes the 9 files that make it readable. Floor prices there run $89–$295/mo; AEOrank Pro is $29 and the CLI is free. Useful complements, but they solve the downstream question — AEOrank solves the upstream one.

06 Can I use this in CI?

AEOrank is the only AEO tool designed for CI first. Install the GitHub App for zero-YAML PR checks, or pin aeorank-action@v1 in a workflow with a fail-below threshold. Both post a check run with the score delta and the diff of the 9 regenerated files. Because the scorer is deterministic, the check is safe to block merges on — the opposite of LLM-based evals.

07 Does AEOrank send my site to an LLM?

No. The scanner is deterministic and never calls an external model. The CLI runs fully locally; the hosted app fetches your public URL over HTTPS exactly like any other crawler.

08 What AI engines is it tuned for?

GPTBot (ChatGPT), ClaudeBot (Claude), PerplexityBot (Perplexity), Google-Extended (Gemini + AI Overviews), and Cohere. If a significant engine ships a new crawler or signal, we add a check — usually within a week.