Module 095 · Advanced · 20 min read

SEO Audits

The full audit framework: technical, on-page, content, backlink, local, and AI visibility. Priority-tiered checklist, deliverables, and reporting that drives action — not a 200-page PDF nobody reads.

By SEO Mastery Editorial

A 200-page audit PDF that nobody implements is professional malpractice. The point of an audit is not the document — it is the prioritized list of things that, if fixed, will produce measurable lift. Everything else is performance art.

TL;DR

  • Six audit pillars: technical, on-page, content, backlinks, local (if applicable), AI visibility (mandatory in 2026). Skip a pillar only if you can prove it does not apply.
  • Every finding has a priority tier (P0-P3) and an effort tier (S/M/L/XL). P0/S items ship this sprint. P3/XL items go in the appendix.
  • The deliverable is an action plan, not a report. A spreadsheet with finding, priority, effort, expected_impact, owner, target_date outperforms any 200-page deck.

The mental model

An SEO audit is like an annual physical from a competent doctor. The point is not the chart of every blood marker — it is “your cholesterol is high, here is the change to make, follow up in three months.” A doctor who hands you a 60-page printout with no diagnosis and no plan has not earned their fee.

This matters because the failure mode of audits is bias toward comprehensiveness. A junior SEO running their first audit will list 400 findings of equal weight: a missing alt tag and an indexable login page get the same red checkmark. A senior auditor lists 30 findings, ranks them by impact, and walks the team through which three to fix this week.

The other half: audits are diagnostic, not prescriptive in detail. The audit tells you that your site has a faceted-navigation crawl waste problem. It does not write the URL parameter rules — that is engineering work that follows. Auditors who try to ship the implementation along with the audit produce neither.

Deep dive: the 2026 reality

The six-pillar framework, with priority tiers per finding.

Priority tiers.

| Tier | Definition |
| --- | --- |
| P0 | Demoted, deindexed, or blocking ranking. Fix this week. |
| P1 | Materially limiting traffic or conversions. Fix this quarter. |
| P2 | Optimization. Fix when capacity allows. |
| P3 | Hygiene / future-proofing. Note for next audit. |

Pillar 1: Technical.

| Finding | Priority |
| --- | --- |
| Robots.txt blocking critical paths | P0 |
| Indexable staging or admin URLs | P0 |
| 5xx errors on key templates | P0 |
| Sitewide noindex shipped accidentally | P0 |
| Canonical pointing to wrong URL | P1 |
| Hreflang errors on international site | P1 |
| Crawl budget waste >30% (per logs) | P1 |
| Core Web Vitals failing on top templates | P1 |
| HTTP/2 not enabled, HTTP/3 not preferred | P2 |
| Sitemap gaps (URLs in index but not sitemap) | P2 |
| Trailing-slash inconsistency | P2 |
| Schema.org missing on product/article pages | P2 |
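The P0 robots.txt finding at the top of this table can be spot-checked in seconds with the standard library. A minimal sketch — the rules and paths below are hypothetical placeholders; in practice you would fetch the live file from your domain:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content -- substitute the live file
# fetched from https://example.com/robots.txt.
robots_txt = """\
User-agent: *
Disallow: /admin/
Disallow: /products/
"""

# Paths that must be crawlable; any BLOCKED result below is a P0 finding.
critical_paths = ["/products/widget-1", "/blog/seo-audits"]

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

for path in critical_paths:
    allowed = rp.can_fetch("Googlebot", path)
    print(f"{path}: {'OK' if allowed else 'BLOCKED'}")
# -> /products/widget-1: BLOCKED
# -> /blog/seo-audits: OK
```

A blocked `/products/` path here is exactly the class of P0 finding that justifies the "fix this week" tier.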

Pillar 2: On-page.

| Finding | Priority |
| --- | --- |
| Critical pages with duplicate <title> | P0 |
| H1 missing on commercial pages | P1 |
| Title length 30-60 chars violated >40% | P2 |
| Meta description length / missing | P2 |
| Keyword cannibalization (>1 page ranking same term) | P1 |
| Internal anchor text generic (“click here”) sitewide | P2 |
| Image alt text missing | P3 |
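The two title findings above fall out of the same crawl export. A sketch of the checks, assuming a simple `url -> title` mapping (the URLs and titles here are invented; a real run would read a Screaming Frog or Sitebulb export):

```python
from collections import Counter

# Hypothetical crawl export: URL -> <title> text.
titles = {
    "/pricing": "CRM Software Pricing and Plans | Acme CRM",
    "/plans": "CRM Software Pricing and Plans | Acme CRM",  # duplicate -- P0 if commercial
    "/blog/guide": "Guide",                                 # too short -- P2
    "/features": "Acme CRM Features and Integrations",
}

# Duplicate <title> across pages.
counts = Counter(titles.values())
duplicates = {t: [u for u, v in titles.items() if v == t]
              for t, n in counts.items() if n > 1}

# Titles outside the 30-60 character window.
length_violations = [u for u, t in titles.items() if not 30 <= len(t) <= 60]

print(duplicates)        # the /pricing and /plans collision
print(length_violations) # ['/blog/guide']
```

Run the same pattern over the full export and the ">40% violated" threshold is a one-line ratio.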

Pillar 3: Content.

| Finding | Priority |
| --- | --- |
| Pages targeting queries with no business intent | P1 |
| Thin content (>30% of indexed URLs <300 words) | P1 |
| Stale content where freshness signals matter | P2 |
| Topic gaps vs competitors | P2 |
| Content cannibalizing across multiple pages | P1 |
| Author / E-E-A-T signals missing on YMYL pages | P0 if YMYL |

Pillar 4: Backlinks.

| Finding | Priority |
| --- | --- |
| Toxic backlink profile (paid links, PBNs detected) | P0 if recent |
| Lost authority links (404s on previously linked URLs) | P1 |
| Internal PageRank concentrated wrong (homepage hoards) | P2 |
| Anchor text profile over-optimized | P2 |
| Disavow file outdated or missing | P3 |

Pillar 5: Local (if local intent applies).

| Finding | Priority |
| --- | --- |
| Google Business Profile unverified or wrong category | P0 |
| NAP inconsistency across citations | P1 |
| No location pages for service areas | P1 |
| GBP posts inactive | P3 |
| Review count and recency below competitors | P2 |

Pillar 6: AI visibility.

| Finding | Priority |
| --- | --- |
| Robots.txt blocking AI bots when intent is to be cited | P0 |
| llms.txt missing for content-led brands | P2 |
| AI Overview citation share <5% on category prompts | P1 |
| Citation share losing to competitors >20pp | P1 |
| Schema.org incomplete (Article, Product, FAQ, Person) | P1 |
| No AI-readable structured author bios | P2 |

Effort tiers.

| Tier | Definition |
| --- | --- |
| S | <1 day, single SEO or single dev |
| M | 1-5 days, single team |
| L | 1-4 weeks, cross-functional |
| XL | >1 month, multi-team initiative |
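The two tiers combine into a single sort order: priority first, effort second, so quick wins surface inside each priority band and P3/XL sinks to the appendix. A sketch with invented finding IDs and the minimal fields needed:

```python
PRIORITY_ORDER = {"P0": 0, "P1": 1, "P2": 2, "P3": 3}
EFFORT_ORDER = {"S": 0, "M": 1, "L": 2, "XL": 3}

# Hypothetical findings register rows.
findings = [
    {"id": "F-003", "priority": "P1", "effort": "L"},
    {"id": "F-001", "priority": "P0", "effort": "S"},
    {"id": "F-007", "priority": "P0", "effort": "XL"},
    {"id": "F-012", "priority": "P2", "effort": "S"},
]

# Priority wins; effort breaks ties. P0/S ships this sprint,
# P3/XL goes to the appendix.
findings.sort(key=lambda f: (PRIORITY_ORDER[f["priority"]],
                             EFFORT_ORDER[f["effort"]]))

print([f["id"] for f in findings])
# -> ['F-001', 'F-007', 'F-003', 'F-012']
```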

The audit deliverable. The artifact that produces action:

audit_findings.csv
  finding_id, pillar, priority, effort, expected_impact_pct, expected_traffic_lift,
  evidence_link, recommended_fix, owner, target_date, status

A second tab summarizes the top 10 P0/P1 findings in plain English for leadership. A third tab is the appendix of every finding for the SEO team’s reference.
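Generating those tabs from one register is a few lines of stdlib. A sketch, assuming the column schema above and invented finding rows — the real rows come out of the audit itself:

```python
import csv
import io

COLUMNS = ["finding_id", "pillar", "priority", "effort", "expected_impact_pct",
           "expected_traffic_lift", "evidence_link", "recommended_fix",
           "owner", "target_date", "status"]

# Hypothetical findings register.
findings = [
    {"finding_id": "F-001", "pillar": "technical", "priority": "P0", "effort": "S"},
    {"finding_id": "F-019", "pillar": "on-page", "priority": "P3", "effort": "S"},
    {"finding_id": "F-002", "pillar": "ai-visibility", "priority": "P1", "effort": "M"},
]

# Tab 2: top P0/P1 findings, capped at 10, for leadership.
leadership = [f for f in findings if f["priority"] in ("P0", "P1")][:10]

# Tab 3: the full appendix, every finding, every schema column present.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS, restval="")
writer.writeheader()
writer.writerows(findings)

print([f["finding_id"] for f in leadership])
# -> ['F-001', 'F-002']
```

`restval=""` keeps the appendix schema-complete even while some fields (owner, target date) are still being assigned.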

Visualizing it

flowchart TD
  A[Crawl: Screaming Frog or Sitebulb] --> Z[Findings register]
  B[GSC export 16 months] --> Z
  C[GA4 + BigQuery export] --> Z
  D[Server log sample] --> Z
  E[Backlink data Ahrefs Majestic] --> Z
  F[AI citation tracker Profound Otterly] --> Z
  G[Manual review of templates] --> Z
  Z --> H[Prioritize by P0 to P3 and S to XL]
  H --> I[Action plan with owners and dates]
  I --> J[Sprint planning]
  I --> K[Quarterly roadmap]

Bad vs. expert

The bad approach

The agency runs a Screaming Frog crawl, exports every “issue” Screaming Frog flags, paginates the findings into a 200-slide PowerPoint, and presents it for two hours to a glazed-over client.

Slide 47: 1,238 images missing alt text.
Slide 48: 412 pages with title under 30 characters.
Slide 49: 89 internal links with nofollow.
Slide 50: 12 pages with H1 longer than 70 characters.
[...continues for 150 more slides...]

The client thanks the agency politely and never implements anything because there is no priority signal, no effort estimate, no expected impact, and no owner. The agency invoices, the site is unchanged, and three months later the relationship ends with both sides frustrated.

The expert approach

Run the same data extraction, but apply the priority/effort framework, present a one-page summary, and hand off a structured CSV the client’s team can drop straight into Jira.

# audit_summary.yaml — what leadership reads
period: 2026-04
domain: example.com
top_findings:
  - id: F-001
    pillar: technical
    priority: P0
    effort: S
    summary: "Robots.txt is blocking /products/ — 18,000 URLs deindexed since March 14"
    expected_impact: "+15-25% organic clicks within 4 weeks of fix"
    owner: dev-platform
    target_date: 2026-04-15

  - id: F-002
    pillar: ai-visibility
    priority: P1
    effort: M
    summary: "AI Overview citation share is 3.2% vs 18% for top competitor"
    expected_impact: "+8-12% incremental traffic via AIO referrals"
    owner: content
    target_date: 2026-06-30

  - id: F-003
    pillar: content
    priority: P1
    effort: L
    summary: "47 pages cannibalizing 'crm software' family of queries"
    expected_impact: "Consolidate to 3-5 hub pages, lift target queries +20-30%"
    owner: content + seo
    target_date: 2026-07-31

This works because the priority tier compresses 400 findings into the 12 that matter, the effort tier tells the engineering manager whether to scope this sprint or next quarter, and the expected impact gives leadership the ROI argument they need to allocate capacity.

Do this today

  1. Crawl the site with Screaming Frog SEO Spider or Sitebulb. Configure the crawl to render JavaScript (rendering mode: JavaScript), respect robots.txt as Googlebot would, and set a 10,000-URL cap for a first pass on small-to-mid sites.
  2. Pull GSC data for 16 months via the Search Analytics API with dimensions=["query", "page", "device", "country", "searchAppearance"] and dataState="final". Save to BigQuery for repeat queries.
  3. Sample 30 days of server logs and identify Googlebot’s crawl waste percentage (see Module 091). Note the % to non-200, the % to parameter URLs, and the top crawl-wasting paths.
  4. Run a backlink audit in Ahrefs > Site Explorer > Backlink profile with the Best by links and New / Lost views. Export referring domains and any toxic patterns flagged.
  5. Generate the AI visibility baseline. Run a 100-prompt set through Profound or Otterly for your category. Note your citation share per LLM and per prompt cluster.
  6. Manually review five top-priority templates. Look for E-E-A-T signals (author bio, sources cited, schema), structured data validity (test in Schema.org Validator + Rich Results Test), and content quality.
  7. Compile findings into a CSV with columns: id, pillar, priority, effort, finding, evidence_url, recommended_fix, expected_impact, owner, target_date, status.
  8. Tag every finding with P0-P3 and S/M/L/XL. Be ruthless. If you have more than 5 P0s, you have not prioritized.
  9. Build a 1-page leadership summary in Notion or Google Docs: top 10 findings, expected aggregate impact, total effort estimate in person-days, and proposed delivery sequence.
  10. Schedule the readout meeting with the people who will own implementation in the room. Walk through the top 10 findings and stop when you have a date and an owner against each P0 and P1. The audit is not done until those commitments exist.
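Step 2's GSC pull boils down to one request body per page of results. A sketch of that body for the Search Console Search Analytics API — the dates and property URL are placeholders, and note that in my experience the searchAppearance dimension typically has to be queried in a separate request rather than combined with the others:

```python
# Request body for the Search Console Search Analytics API
# (searchanalytics.query). Dates are placeholders. searchAppearance
# is queried separately because the API restricts combining it
# with other dimensions.
def gsc_request_body(start_date: str, end_date: str, start_row: int = 0) -> dict:
    return {
        "startDate": start_date,
        "endDate": end_date,
        "dimensions": ["query", "page", "device", "country"],
        "dataState": "final",
        "rowLimit": 25000,   # API maximum per response; page with startRow
        "startRow": start_row,
    }

body = gsc_request_body("2024-12-01", "2026-04-01")
# With google-api-python-client, authorized for the property (assumed):
# service.searchanalytics().query(
#     siteUrl="sc-domain:example.com", body=body).execute()
```

Loop, incrementing `start_row` by 25,000 until a page comes back empty, then land the rows in BigQuery for repeat queries.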
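Step 3's crawl-waste percentages fall out of a few lines of log parsing. A sketch over hypothetical combined-log-format lines — a real run would load 30 days of logs and verify Googlebot IPs via reverse DNS before trusting the user agent:

```python
import re

# Hypothetical access-log sample (combined log format).
log_lines = [
    '66.249.66.1 - - [01/Apr/2026:10:00:00 +0000] "GET /products/a HTTP/1.1" 200 5120 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Apr/2026:10:00:01 +0000] "GET /products/a?sort=price HTTP/1.1" 200 5120 "-" "Googlebot/2.1"',
    '66.249.66.1 - - [01/Apr/2026:10:00:02 +0000] "GET /old-page HTTP/1.1" 404 310 "-" "Googlebot/2.1"',
    '203.0.113.9 - - [01/Apr/2026:10:00:03 +0000] "GET / HTTP/1.1" 200 900 "-" "Mozilla/5.0"',
]

pattern = re.compile(r'"GET (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')

# Keep only Googlebot hits that the pattern can parse.
hits = [m.groupdict() for line in log_lines
        if "Googlebot" in line and (m := pattern.search(line))]

non_200 = sum(1 for h in hits if h["status"] != "200") / len(hits)
param_urls = sum(1 for h in hits if "?" in h["path"]) / len(hits)
print(f"non-200: {non_200:.0%}, parameter URLs: {param_urls:.0%}")
# -> non-200: 33%, parameter URLs: 33%
```

Group the parsed paths by prefix to find the top crawl-wasting paths the step asks for.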
