Module 100 Expert 22 min read

The Google Quality Rater Guidelines

What the QRGs are, how Google uses them, the key concepts (PQ rating, Needs Met, lowest quality, YMYL), and reading the 176-page document for ideas.

By SEO Mastery Editorial

The Search Quality Rater Guidelines are a 176-page document Google publishes that tells thousands of contracted human raters how to grade search results. The raters are not ranking your site; their grades train the systems that do. Understanding the QRGs means reading the rubric your work is graded against.

TL;DR

  • QRG ratings do not directly change rankings. Raters score query-result pairs along two scales: Page Quality (PQ) and Needs Met. Those scores feed back into model training and evaluation. Your page is rated indirectly via how the systems learn from cohorts of similar pages.
  • The two ratings answer two different questions. PQ asks “how high quality is this page on its own?” Needs Met asks “how well does this page answer this query for this user?” A perfect PQ page can fail Needs Met; a thin page can pass Needs Met.
  • Read the QRGs as design specifications. Section 2 (PQ) is a content-quality spec. Section 3 (Needs Met) is an intent spec. Section 4 (YMYL) is a risk spec. Read them as the engineers’ acceptance criteria for “good enough to rank.”

The mental model

The QRGs are like the rubric a graduate-school admissions committee uses to score 50,000 applications. The rubric doesn’t decide who gets in. It produces consistent training data so that downstream models — and human reviewers — apply the same standards across thousands of decisions.

Google has two things going on simultaneously. The first is the live ranking algorithm, which makes billions of decisions per day from learned patterns. The second is a continuous evaluation pipeline in which contracted raters score samples of real queries and result pages. The rater scores are the ground truth that the engineering team uses to evaluate whether a new model version, a new ranking signal, or a new core update is making the SERP better or worse.

When Google ships a new model, it runs the change past the rater pool, compares new-model scores against old-model scores on a representative query mix, and rolls back if quality drops. This is why understanding the QRGs is high-leverage: you are reading what the evaluators are looking for, which is exactly what the engineering team is optimizing for.
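That ship-or-rollback loop can be sketched in a few lines. Everything here is illustrative: the function name, the thresholds, and the score samples are invented, and Google's actual evaluation pipeline and metrics are not public.

```python
# Illustrative side-by-side evaluation on rater scores.
# All names and numbers are hypothetical.
from statistics import mean

def evaluate(old_scores, new_scores, min_gain=0.0):
    """Compare mean rater scores for the same query sample under the
    old and new ranking models; ship only if quality does not drop."""
    delta = mean(new_scores) - mean(old_scores)
    return ("ship", delta) if delta >= min_gain else ("rollback", delta)

# Needs Met scores for one query sample (0 = Fails to Meet ... 4 = Fully Meets)
old = [2, 3, 3, 4, 2, 3]
new = [3, 3, 4, 4, 2, 3]
decision, delta = evaluate(old, new)
```

The design point is simply that rater scores are the ground truth on both sides of the comparison; the models change, the rubric does not.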

The QRGs are not the algorithm. They are the specification for what the algorithm is supposed to produce.

Deep dive: the 2026 reality

The QRGs are a living document: major revisions have shipped roughly twice per year since 2018. E-E-A-T (Experience-Expertise-Authoritativeness-Trustworthiness) gained its extra "E" in the December 2022 revision, the 2024 revisions addressed AI-generated content explicitly, and the current public version is the 176-page 2026 edition.

Page Quality (PQ) rating

The PQ scale runs from Lowest to Highest with intermediate values:

| PQ Rating | Description | Example signal |
| --- | --- | --- |
| Lowest | Pages that fail to satisfy their purpose, include harmful content, or use deceptive design | Scams, malware, deeply unsatisfying content, no-information MC |
| Low | Inadequate Experience/Expertise/Authoritativeness/Trustworthiness signals; thin or unhelpful main content | Author has no demonstrable expertise, no first-hand information |
| Medium | Page achieves its purpose adequately; not lowest, not strong | Standard informational article with no notable strengths or flaws |
| High | Strong E-E-A-T, satisfying main content, clear purpose | Well-researched article with a named expert author |
| Highest | Exceptional E-E-A-T, original reporting, deep expertise | Investigative journalism, primary research, definitive reference |

Three PQ inputs the QRGs explicitly call out:

Purpose. Every page must have a beneficial purpose. Pages that exist only to harm, deceive, or generate ad revenue without value are Lowest by definition.

Main Content (MC) quality. The main content must satisfy the page’s purpose. The QRGs evaluate effort, originality, talent or skill, and accuracy.

Reputation of the website and content creators. External reputation — independent reviews, professional credentials, news coverage, expert citations — outweighs the site’s own claims. The QRGs explicitly tell raters to research outside the site itself.

Needs Met (NM) rating

The NM scale evaluates how well a result satisfies the query for a likely user:

| NM Rating | Description |
| --- | --- |
| Fully Meets | Special rating for queries with a single unambiguous answer, fully provided |
| Highly Meets | Very helpful for many or most users on this query |
| Moderately Meets | Useful, but not ideal; partial match |
| Slightly Meets | Marginally useful; small overlap with intent |
| Fails to Meet | Off-topic, unhelpful, or unusable by the user |

The QRGs include extensive guidance on query interpretation (dominant intent, common interpretations, minor interpretations) and on likely users ("could be a teenager, could be a professional; what would help most users?"). This is why generic content underperforms on long-tail queries: it does not match a specific user's likely intent profile.

YMYL: Your Money or Your Life

YMYL is the QRGs’ formal designation for queries where bad information could harm the user’s health, finances, safety, or rights. The 2026 edition organizes YMYL into:

| YMYL Category | Examples |
| --- | --- |
| Health & Safety | Medical advice, drug interactions, child safety, vehicle safety |
| Financial Security | Investing, insurance, taxes, retirement, major purchases |
| Society | Voting, government services, legal information, news |
| Other High-Stakes | Career, education choices, fitness with risk, safety aspects of parenting |

For YMYL pages, the QRGs apply an elevated standard. Lowest PQ ratings are easier to assign for YMYL pages with low E-E-A-T. A medical article without medical credentials and without citations to authoritative sources is much more likely to be rated Lowest in YMYL than a cooking article would be in non-YMYL.

What the 2024 revisions added

The most consequential 2024 update integrated scaled content and AI-generated content into the rubric:

  • The QRGs explicitly state that mass-produced content with little human involvement is Low PQ, regardless of whether it was produced by humans or AI.
  • “Lowest” rating is now applied to pages where the main content is AI-generated and lacks any meaningful human review, editing, or first-hand experience.
  • Sites that publish at extreme scale without proportionate editorial capacity are flagged as scaled content abuse candidates.

The signal: AI tools are not banned. AI without editorial process is.

What the 2026 revisions added

The current edition added two notable things:

  • Experience as the first E in E-E-A-T is given explicit weight for product reviews, travel, restaurant, and consumer-services queries. Raters are instructed to look for first-hand markers: original photography, unique descriptions, specific dates, named locations, named staff.
  • A new Forum/UGC quality section addresses how raters should evaluate forum threads, Q&A sites, and Reddit-style content. Notably, well-moderated UGC with clear expertise signals can rate Highest; unmoderated UGC defaults to Medium or below.

Visualizing it

flowchart LR
  A["176-page QRG document"] --> B["Quality Raters worldwide"]
  B --> C["Score real query-result pairs"]
  C --> D["Page Quality rating"]
  C --> E["Needs Met rating"]
  D --> F["Aggregate evaluation dataset"]
  E --> F
  F --> G["Search engineering team"]
  G --> H["Train and evaluate ranking models"]
  H --> I["Ship core update"]
  I --> J["Live SERPs"]
  J --> C
  G --> K["Helpful Content classifier"]
  G --> L["Reviews System"]
  G --> M["Spam classifiers"]

Bad vs. expert

The bad approach

Most teams skim the QRGs once and produce content like this:

<article>
  <h1>Best Term Life Insurance 2026</h1>
  <p>Looking for the best term life insurance in 2026? You've come to the right place. In this comprehensive guide, we'll cover everything you need to know about term life insurance.</p>
  <h2>What is term life insurance?</h2>
  <p>Term life insurance is a type of life insurance that provides coverage for a specific period of time...</p>
  <p>Posted by Admin on January 15, 2026</p>
</article>

This fails the QRGs’ YMYL standard. There is no named author, no credentials, no first-hand experience, no citations to authoritative sources, no original research, no methodology disclosure. A rater sees “Posted by Admin” on a financial-advice page and rates it Low without further reading. Multiply by hundreds of similar pages and the site reads as scaled content abuse to the classifier trained on rater data.

The expert approach

<article>
  <h1>Best Term Life Insurance 2026: We Compared 14 Carriers</h1>

  <div class="byline">
    <img src="/img/authors/li-yang.jpg" alt="Li Yang">
    <p>By <a href="/authors/li-yang" rel="author">Li Yang</a>,
       CFP and licensed insurance agent in NY since 2009.
       <a href="https://www.linkedin.com/in/liyang">LinkedIn</a> ·
       <a href="https://brokercheck.finra.org/individual/summary/1234567">FINRA BrokerCheck</a></p>
    <time datetime="2026-04-22">Updated April 22, 2026</time>
    <p class="reviewed">Reviewed by
       <a href="/authors/maria-okafor">Maria Okafor</a>,
       Editorial Director, formerly senior editor at NerdWallet (2018-2023)</p>
  </div>

  <section class="methodology">
    <h2>How we evaluated</h2>
    <p>Between February and April 2026 we obtained binding quotes for 8 sample
    profiles from 14 carriers, completed 6 underwriting interviews, and
    examined 4 policy contracts side by side. Our scoring rubric weighs
    pricing (35%), underwriting flexibility (25%), financial strength
    (A.M. Best rating, 20%), claims-paying history (10%), and digital
    experience (10%).</p>
    <p>Quote receipts: <a href="/data/term-2026/quotes.csv">CSV</a>.
    Underwriting notes: <a href="/data/term-2026/notes.pdf">PDF</a>.
    No carrier paid for placement; affiliate disclosure
    <a href="/disclosure">here</a>.</p>
  </section>

  <section class="findings">
    <h2>Top picks for 2026</h2>
    <table>
      <thead>
        <tr><th>Carrier</th><th>Best for</th><th>Sample 35yo $500k 20yr</th><th>A.M. Best</th></tr>
      </thead>
      <tbody>
        <tr><td>Haven Life</td><td>Fast online underwriting</td><td>$22.51/mo</td><td>A++</td></tr>
        <tr><td>Banner Life</td><td>Smokers, high coverage</td><td>$24.18/mo</td><td>A+</td></tr>
        <tr><td>Pacific Life</td><td>Conversion options</td><td>$25.07/mo</td><td>A+</td></tr>
      </tbody>
    </table>
  </section>
</article>

This works because every QRG criterion is satisfied with verifiable evidence. The author has named credentials and an external license verification link. The reviewer credentials add an editorial layer. The methodology section provides the Experience signal. The data files are downloadable. The disclosure satisfies the trust requirement. A YMYL rater scoring this page can find every signal they’re trained to look for.
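The methodology section's rubric (pricing 35%, underwriting 25%, financial strength 20%, claims history 10%, digital experience 10%) is a plain weighted average, which is worth making explicit because a published scoring formula is itself a verifiable-methodology signal. The sketch below uses those weights; the carrier sub-scores are invented for illustration.

```python
# Hypothetical weighted-rubric scoring mirroring the example
# methodology's weights; the 0-10 sub-scores are made up.
WEIGHTS = {
    "pricing": 0.35,
    "underwriting": 0.25,
    "financial_strength": 0.20,
    "claims_history": 0.10,
    "digital_experience": 0.10,
}

def carrier_score(subscores: dict[str, float]) -> float:
    """Weighted average of 0-10 sub-scores; weights must sum to 1."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return round(sum(WEIGHTS[k] * subscores[k] for k in WEIGHTS), 2)

score = carrier_score({
    "pricing": 9, "underwriting": 8, "financial_strength": 10,
    "claims_history": 7, "digital_experience": 9,
})  # -> 8.75
```

Publishing the formula alongside the data files lets a rater (or a reader) recompute every number in the findings table.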

Do this today

  1. Download the current Quality Rater Guidelines PDF from Google’s published location at static.googleusercontent.com. Read sections 2 (PQ), 3 (Needs Met), and 4 (YMYL) end to end. Block out 4 hours.
  2. Build a PQ rubric scorecard in a spreadsheet with columns: Purpose clear, Main Content quality, E-E-A-T, Reputation, Site information, Ad balance. Score 1–5 each.
  3. Pick your 20 highest-traffic pages from Google Analytics 4 and grade them yourself using the rubric. Score honestly; assume the rater knows nothing about your brand.
  4. For every page that scored 3 or below on E-E-A-T, identify the missing signal: no author, no credentials, no original evidence, no citations. Open a ticket per gap.
  5. Identify which pages are YMYL. Flag any health, finance, legal, civic, or safety content. Apply the elevated standard: every YMYL page needs a credentialed author, a reviewed-by signal, citations, and disclosure.
  6. Use the About Us, Contact, Editorial Policy triad as a site-wide reputation signal. Confirm all three are linked from every footer, named in JSON-LD, and contain real-person information.
  7. Audit your authors. Each named author needs an /authors/[slug] page with a photo, bio, credentials, links to external profiles (LinkedIn, ORCID, professional registries), and a list of their content.
  8. Add JSON-LD Article with author (linked to Person schema), datePublished, dateModified, and reviewedBy for YMYL. Validate in Schema.org Validator and Google Rich Results Test.
  9. For product, service, or local reviews, document and link the methodology. Include receipts: data files, photos with EXIF where appropriate, dates, named test conditions.
  10. Schedule a quarterly QRG read-through on the team calendar. Google updates the document twice a year on average; you want to read every revision diff within two weeks of release.
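For step 8, a minimal sketch of the Article JSON-LD with an author Person and a reviewed-by signal. The names, slug, and dates are placeholders, and `reviewedBy` usage follows the step's own recommendation; validate the output in the Schema.org Validator and the Rich Results Test as the step says.

```python
# Minimal JSON-LD builder for step 8. All values shown are
# placeholders; adapt fields to your own pages before shipping.
import json

def article_jsonld(headline, author_name, author_url,
                   published, modified, reviewer_name=None):
    """Return an Article JSON-LD string with a Person author and,
    for YMYL pages, an optional reviewedBy Person."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "author": {"@type": "Person", "name": author_name, "url": author_url},
        "datePublished": published,
        "dateModified": modified,
    }
    if reviewer_name:  # the reviewed-by signal for YMYL content
        data["reviewedBy"] = {"@type": "Person", "name": reviewer_name}
    return json.dumps(data, indent=2)

markup = article_jsonld(
    "Best Term Life Insurance 2026", "Li Yang", "/authors/li-yang",
    "2026-02-01", "2026-04-22", reviewer_name="Maria Okafor",
)
```

Embed the result in a `<script type="application/ld+json">` tag in the page head or body.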

