SEO Audits
The full audit framework: technical, on-page, content, backlink, local, and AI visibility. Priority-tiered checklist, deliverables, and reporting that drives action — not a 200-page PDF nobody reads.
A 200-page audit PDF that nobody implements is professional malpractice. The point of an audit is not the document — it is the prioritized list of things that, if fixed, will produce measurable lift. Everything else is performance art.
TL;DR
- Six audit pillars: technical, on-page, content, backlinks, local (if applicable), AI visibility (mandatory in 2026). Skip a pillar only if you can prove it does not apply.
- Every finding has a priority tier (P0-P3) and an effort tier (S/M/L/XL). P0/S items ship this sprint. P3/XL items go in the appendix.
- The deliverable is an action plan, not a report. A spreadsheet with the columns finding, priority, effort, expected_impact, owner, target_date outperforms any 200-page deck.
The mental model
An SEO audit is like an annual physical from a competent doctor. The point is not the chart of every blood marker — it is “your cholesterol is high, here is the change to make, follow up in three months.” A doctor who hands you a 60-page printout with no diagnosis and no plan has not earned their fee.
This matters because the failure mode of audits is bias toward comprehensiveness. A junior SEO running their first audit will list 400 findings of equal weight: a missing alt tag and an indexable login page get the same red checkmark. A senior auditor lists 30 findings, ranks them by impact, and walks the team through which three to fix this week.
The other half: audits are diagnostic, not prescriptive in detail. The audit tells you that your site has a faceted-navigation crawl waste problem. It does not write the URL parameter rules — that is engineering work that follows. Auditors who try to ship the implementation along with the audit produce neither.
Deep dive: the 2026 reality
The six-pillar framework, with priority tiers per finding.
Priority tiers.
| Tier | Definition |
|---|---|
| P0 | Demoted, deindexed, or blocking ranking. Fix this week. |
| P1 | Materially limiting traffic or conversions. Fix this quarter. |
| P2 | Optimization. Fix when capacity allows. |
| P3 | Hygiene / future-proofing. Note for next audit. |
Pillar 1: Technical.
| Finding | Priority |
|---|---|
| Robots.txt blocking critical paths | P0 |
| Indexable staging or admin URLs | P0 |
| 5xx errors on key templates | P0 |
| Sitewide noindex shipped accidentally | P0 |
| Canonical pointing to wrong URL | P1 |
| Hreflang errors on international site | P1 |
| Crawl budget waste >30% (per logs) | P1 |
| Core Web Vitals failing on top templates | P1 |
| HTTP/2 not enabled, HTTP/3 not preferred | P2 |
| Sitemap gaps (URLs in index but not sitemap) | P2 |
| Trailing-slash inconsistency | P2 |
| Schema.org missing on product/article pages | P2 |
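Several of the P0 rows above can be smoke-tested in minutes, before the full crawl finishes. A rough, stdlib-only Python sketch, assuming an illustrative domain and a short list of key templates; it checks robots.txt blocking for Googlebot, HTTP status on each template, and accidental noindex via header or meta tag (the meta-tag regex is deliberately crude):

```python
# Rough smoke test for the P0 technical findings above (robots.txt blocks,
# 5xx on key templates, accidental noindex). Site and paths are placeholders.
import re
import urllib.error
import urllib.request
import urllib.robotparser

SITE = "https://example.com"
KEY_PATHS = ["/", "/products/", "/blog/"]   # replace with your top templates

rp = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
rp.read()

for path in KEY_PATHS:
    url = SITE + path
    blocked = not rp.can_fetch("Googlebot", url)
    try:
        with urllib.request.urlopen(urllib.request.Request(url)) as resp:
            status = resp.status
            x_robots = resp.headers.get("X-Robots-Tag", "") or ""
            html = resp.read(200_000).decode("utf-8", errors="ignore")
    except urllib.error.HTTPError as e:
        status, x_robots, html = e.code, "", ""
    meta_noindex = bool(re.search(r'<meta[^>]+name=["\']robots["\'][^>]*noindex', html, re.I))
    noindex = "noindex" in x_robots.lower() or meta_noindex
    print(f"{path}: status={status} robots_blocked={blocked} noindex={noindex}")
```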
Pillar 2: On-page.
| Finding | Priority |
|---|---|
| Critical pages with duplicate <title> | P0 |
| H1 missing on commercial pages | P1 |
| Title length outside 30-60 chars on >40% of pages | P2 |
| Meta description length / missing | P2 |
| Keyword cannibalization (>1 page ranking same term) | P1 |
| Internal anchor text generic (“click here”) sitewide | P2 |
| Image alt text missing | P3 |
Pillar 3: Content.
| Finding | Priority |
|---|---|
| Pages targeting queries with no business intent | P1 |
| Thin content (>30% of indexed URLs <300 words) | P1 |
| Stale content where freshness signals matter | P2 |
| Topic gaps vs competitors | P2 |
| Content cannibalizing across multiple pages | P1 |
| Author / E-E-A-T signals missing on YMYL pages | P0 if YMYL |
Pillar 4: Backlinks.
| Finding | Priority |
|---|---|
| Toxic backlink profile (paid links, PBNs detected) | P0 if recent |
| Lost authority links (404s on previously linked URLs) | P1 |
| Internal PageRank concentrated in the wrong places (homepage hoards link equity) | P2 |
| Anchor text profile over-optimized | P2 |
| Disavow file outdated or missing | P3 |
Pillar 5: Local (if local intent applies).
| Finding | Priority |
|---|---|
| Google Business Profile unverified or wrong category | P0 |
| NAP inconsistency across citations | P1 |
| No location pages for service areas | P1 |
| GBP posts inactive | P3 |
| Review count and recency below competitors | P2 |
Pillar 6: AI visibility.
| Finding | Priority |
|---|---|
| Robots.txt blocking AI bots when intent is to be cited | P0 |
| llms.txt missing for content-led brands | P2 |
| AI Overview citation share <5% on category prompts | P1 |
| Citation share trailing competitors by >20pp | P1 |
| Schema.org incomplete (Article, Product, FAQ, Person) | P1 |
| No AI-readable structured author bios | P2 |
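If the intent is to be cited, the first row above is worth checking directly. A small stdlib Python sketch, with an illustrative domain and a representative (not exhaustive) list of AI crawler user-agent tokens; confirm the current tokens against each crawler's documentation:

```python
# Quick check for the first AI-visibility finding: does robots.txt block common AI crawlers?
import urllib.robotparser

AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended", "CCBot"]

rp = urllib.robotparser.RobotFileParser("https://example.com/robots.txt")
rp.read()

for bot in AI_BOTS:
    verdict = "allowed" if rp.can_fetch(bot, "https://example.com/blog/") else "blocked"
    print(f"{bot}: {verdict} on /blog/")
```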
Effort tiers.
| Tier | Definition |
|---|---|
| S | <1 day, single SEO or single dev |
| M | 1-5 days, single team |
| L | 1-4 weeks, cross-functional |
| XL | >1 month, multi-team initiative |
The audit deliverable. The artifact that produces action:
audit_findings.csv
finding_id, pillar, priority, effort, expected_impact_pct, expected_traffic_lift,
evidence_link, recommended_fix, owner, target_date, status
A second tab summarizes the top 10 P0/P1 findings in plain English for leadership. A third tab is the appendix of every finding for the SEO team’s reference.
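A minimal Python sketch of how that register can feed sprint planning directly: filter to P0s plus small- or medium-effort P1s and sort by priority, then effort. The file name and the sprint cut-off rule are assumptions; the column names match the schema above.

```python
# Sketch: turn audit_findings.csv into a "fix this sprint" shortlist.
import csv

PRIORITY_ORDER = {"P0": 0, "P1": 1, "P2": 2, "P3": 3}
EFFORT_ORDER = {"S": 0, "M": 1, "L": 2, "XL": 3}

with open("audit_findings.csv", newline="") as f:
    findings = list(csv.DictReader(f))

# Sprint candidates: any P0, or a P1 that is small/medium effort.
sprint = [
    row for row in findings
    if row["priority"] == "P0"
    or (row["priority"] == "P1" and row["effort"] in ("S", "M"))
]
sprint.sort(key=lambda r: (PRIORITY_ORDER[r["priority"]], EFFORT_ORDER[r["effort"]]))

for row in sprint:
    print(f'{row["finding_id"]}\t{row["priority"]}/{row["effort"]}\t{row["recommended_fix"]}')
```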
Visualizing it
flowchart TD
A[Crawl: Screaming Frog or Sitebulb] --> Z[Findings register]
B[GSC export 16 months] --> Z
C[GA4 + BigQuery export] --> Z
D[Server log sample] --> Z
E[Backlink data Ahrefs Majestic] --> Z
F[AI citation tracker Profound Otterly] --> Z
G[Manual review of templates] --> Z
Z --> H[Prioritize by P0 to P3 and S to XL]
H --> I[Action plan with owners and dates]
I --> J[Sprint planning]
I --> K[Quarterly roadmap]
Bad vs. expert
The bad approach
The agency runs a Screaming Frog crawl, exports every “issue” Screaming Frog flags, paginates the findings into a 200-slide PowerPoint, and presents it for two hours to a glazed-over client.
Slide 47: 1,238 images missing alt text.
Slide 48: 412 pages with title under 30 characters.
Slide 49: 89 internal links with nofollow.
Slide 50: 12 pages with H1 longer than 70 characters.
[...continues for 150 more slides...]
The client thanks the agency politely and never implements anything because there is no priority signal, no effort estimate, no expected impact, and no owner. The agency invoices, the site is unchanged, and three months later the relationship ends with both sides frustrated.
The expert approach
Run the same data extraction, but apply the priority/effort framework, present a one-page summary, and hand off a structured CSV the client’s team can drop straight into Jira.
# audit_summary.yaml — what leadership reads
period: 2026-04
domain: example.com
top_findings:
  - id: F-001
    pillar: technical
    priority: P0
    effort: S
    summary: "Robots.txt is blocking /products/ — 18,000 URLs deindexed since March 14"
    expected_impact: "+15-25% organic clicks within 4 weeks of fix"
    owner: dev-platform
    target_date: 2026-04-15
  - id: F-002
    pillar: ai-visibility
    priority: P1
    effort: M
    summary: "AI Overview citation share is 3.2% vs 18% for top competitor"
    expected_impact: "+8-12% incremental traffic via AIO referrals"
    owner: content
    target_date: 2026-06-30
  - id: F-003
    pillar: content
    priority: P1
    effort: L
    summary: "47 pages cannibalizing 'crm software' family of queries"
    expected_impact: "Consolidate to 3-5 hub pages, lift target queries +20-30%"
    owner: content + seo
    target_date: 2026-07-31
This works because the priority tier compresses 400 findings into the 12 that matter, the effort tier tells the engineering manager whether to scope this sprint or next quarter, and the expected impact gives leadership the ROI argument they need to allocate capacity.
Do this today
- Crawl the site with Screaming Frog SEO Spider or Sitebulb. Configure the crawl to render JavaScript (rendering mode: JavaScript), respect robots.txt as Googlebot would, and set a 10,000-URL cap for a first pass on small-to-mid sites.
- Pull GSC data for 16 months via the Search Analytics API with dimensions=["query", "page", "device", "country", "searchAppearance"] and dataState="final". Save to BigQuery for repeat queries; a sketch of the API call follows this list.
- Sample 30 days of server logs and identify Googlebot's crawl waste percentage (see Module 091). Note the % to non-200, the % to parameter URLs, and the top crawl-wasting paths; a log-parsing sketch also follows this list.
- Run a backlink audit in Ahrefs > Site Explorer > Backlink profile with the Best by links and New / Lost views. Export referring domains and any toxic patterns flagged.
- Generate the AI visibility baseline. Run a 100-prompt set through Profound or Otterly for your category. Note your citation share per LLM and per prompt cluster.
- Manually review five top-priority templates. Look for E-E-A-T signals (author bio, sources cited, schema), structured data validity (test in Schema.org Validator + Rich Results Test), and content quality.
- Compile findings into a CSV with columns: id, pillar, priority, effort, finding, evidence_url, recommended_fix, expected_impact, owner, target_date, status.
- Tag every finding with P0-P3 and S/M/L/XL. Be ruthless. If you have more than 5 P0s, you have not prioritized.
- Build a 1-page leadership summary in Notion or Google Docs: top 10 findings, expected aggregate impact, total effort estimate in person-days, and proposed delivery sequence.
- Schedule the readout meeting with the people who will own implementation in the room. Walk through the top 10 findings and stop when you have a date and an owner against each P0 and P1. The audit is not done until those commitments exist.
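The GSC pull referenced above, as a minimal Python sketch using google-api-python-client and a service account. The property URL, credential path, and date window are placeholders; page through results with startRow when the response exceeds the 25,000-row limit.

```python
# Sketch: 16-month Search Analytics pull, per the parameters in the list above.
from datetime import date, timedelta

from google.oauth2 import service_account
from googleapiclient.discovery import build

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder path
    scopes=["https://www.googleapis.com/auth/webmasters.readonly"],
)
gsc = build("searchconsole", "v1", credentials=creds)

end = date.today() - timedelta(days=3)    # fresh data lags a few days
start = end - timedelta(days=486)         # roughly GSC's 16-month retention window

body = {
    "startDate": start.isoformat(),
    "endDate": end.isoformat(),
    "dimensions": ["query", "page", "device", "country", "searchAppearance"],
    "dataState": "final",
    "rowLimit": 25000,   # API maximum per request; increment startRow to page
    "startRow": 0,
}
resp = gsc.searchanalytics().query(siteUrl="sc-domain:example.com", body=body).execute()
for row in resp.get("rows", []):
    print(row["keys"], row["clicks"], row["impressions"])
```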
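And a rough sketch of the crawl-waste numbers from the server log sample. It assumes a combined-format access log and matches Googlebot by a naive user-agent substring; verify hits via reverse DNS before drawing conclusions. The file name is a placeholder.

```python
# Sketch: crawl-waste estimate for Googlebot from a 30-day access-log sample.
import re
from collections import Counter
from urllib.parse import urlsplit

LOG_LINE = re.compile(r'"(?:GET|POST|HEAD) (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

total = non_200 = parameterized = 0
waste_paths = Counter()

with open("access_log_sample.txt") as f:
    for line in f:
        if "Googlebot" not in line:          # naive filter; confirm with reverse DNS
            continue
        m = LOG_LINE.search(line)
        if not m:
            continue
        total += 1
        path, status = m.group("path"), m.group("status")
        wasted = False
        if status != "200":
            non_200 += 1
            wasted = True
        if urlsplit(path).query:
            parameterized += 1
            wasted = True
        if wasted:
            waste_paths[urlsplit(path).path] += 1

if total:
    print(f"Googlebot hits: {total}")
    print(f"Non-200: {non_200 / total:.1%}")
    print(f"Parameter URLs: {parameterized / total:.1%}")
    print("Top crawl-wasting paths:", waste_paths.most_common(10))
```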