Continuing Education
Following Google's official channels, reading patents, experimenting on your own sites, the 30/60/90-day learning plan, and an annual SEO skills audit.
The half-life of an SEO tactic is shorter every year. Most practitioners learn passively — scroll Twitter, watch a webinar, hope something sticks — and are no more skilled at year five than they were at year two. Senior SEOs treat learning as a system: a quarterly skills audit, a deliberate reading list, an experimentation budget on a personal site, and a habit of cross-checking every claim against a primary source.
TL;DR
- Learning is a system, not a feed. A 30/60/90-day learning plan reviewed quarterly beats ad-hoc scrolling by a wide margin.
- Run experiments on a site you own. Theory you cannot test against your own data is gossip.
- Patents and primary documentation are the highest-quality fiber in the SEO diet. Most practitioners never read them; senior practitioners read at least one a quarter.
The mental model
Continuing education in SEO is like the maintenance schedule on a high-mileage car. Skip the oil change and the engine still works for a while. Skip the timing belt and one day the engine seizes on the highway. SEO careers fail the same way: a practitioner stops learning at year three, runs on inertia for two years, then gets passed over for a senior role they should have grown into.
Three modes of learning compound. Reading (others’ synthesis). Reasoning from primary sources (Google’s documentation, patents, Search Off the Record). Building (your own experiments). Most practitioners only do the first; senior practitioners do all three; the very best teach the synthesis back, which forces them to understand it deeply.
Deep dive: the 2026 reality
The 2026 SEO learner has unprecedented access to authoritative material. Google publishes more documentation, more office-hours, and more developers.google.com material than at any point in the company’s history. AI Overviews, AI Mode, and Gemini-powered SERPs introduced new ranking surfaces with their own documentation streams. The bottleneck is no longer access; it is curation and discipline.
Google’s official channels (highest-signal sources, in order of priority):
| Source | Where | What you get |
|---|---|---|
| Search Central documentation | developers.google.com/search | Authoritative; updated frequently |
| Search Central blog | developers.google.com/search/blog | Algo-update announcements, system rollouts |
| Search Status Dashboard | status.search.google.com | Confirmed ongoing updates with timestamps |
| Search Off the Record | podcast, YouTube | Authoritative team conversations |
| Search Central YouTube | channel | Lightning Talks, Office Hours archives |
| Google’s quality rater guidelines | PDF, Search Central | The closest thing to a published rubric |
Reading patents — overrated and underrated. Reading Google patents is overrated as a tactical edge (most patents are never deployed; many are deployed differently than written) and underrated as a way to understand how Google’s search team thinks about problems. The right way to read patents:
- Read the abstract and claims for the high-level idea.
- Skim the background for the framing of the problem.
- Skip implementation specifics unless they confirm a behavior you’ve seen in the SERPs.
- Cross-reference with public statements from Googlers — if the patent describes something Mueller has confirmed in a Hangout, weight it higher.
Bill Slawski’s archive at SEO by the Sea (preserved since his passing) remains the canonical commentary on the most important patents. Newer practitioner-readers include Dan Hinckley, Dawn Anderson, and Andrea Volpini for the entity / knowledge graph patents, and Olaf Kopp for entity-and-NLP-related work.
Important patent-and-document themes for 2026:
- The Helpful Content system and its role inside the core algorithm
- MUM (Multitask Unified Model) and multi-modal retrieval
- Gemini’s integration into the SERP — see public statements + Search Status Dashboard rollouts
- Quality signals (independent of links) and site-wide classifiers
- Ranking systems surfaced in updates, testimony, and reporting: Vicinity (local), NavBoost (click signals), Project Magi (AI search)
- AI Overview retrieval and citation logic
Experimenting on your own sites. Data beats theory. Every working SEO should own at least one site they can experiment on without a client veto. Acceptable experiment sites:
- A small niche site you bought or built (Module 119)
- A personal portfolio site (firstname-lastname.com)
- A "sandbox" subdomain on your existing site
- A side project where the cost of failure is low
The experiment template:
```markdown
# Experiment 47: H1 keyword variant test

## Hypothesis
Pages with H1 = exact-match query rank ≥1 position higher than
pages with H1 = brand-led query, on commercial-intent queries
with KD < 25.

## Sample
12 URLs in /tools/ subdirectory, paired by topic and KD band.
6 control (brand-led H1), 6 variant (exact-match H1).

## Metric
30-day average position from GSC, queries with impressions > 50.

## Run period
30 days; pre-period 30 days for baseline.

## Confounds to monitor
- Click-through rate change (could affect ranking separately from H1)
- Algo update during run (abort if confirmed update)

## Result
Variant: -1.4 position (improvement). Control: -0.2 position.
Sample too small for high confidence; will replicate next quarter
on /reviews/ directory (n=24).
```
The experiment notebook is the difference between a practitioner who knows things and one who has heard things.
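The analysis step in that template is a few lines of Python. Here is a minimal sketch of the control-vs-variant position comparison; the column names and the toy rows are illustrative, not GSC's actual export schema:

```python
# Compare the 30-day average position delta for control vs variant
# URL groups from a (hypothetical) GSC export. Lower position is
# better, so a negative delta means the group improved.
import statistics

def position_delta(rows, group):
    """Mean (post - pre) position for one group; negative = improvement."""
    deltas = [r["post_position"] - r["pre_position"]
              for r in rows if r["group"] == group]
    return statistics.mean(deltas)

# Toy data standing in for the 12 paired URLs in the template
rows = [
    {"group": "control", "pre_position": 8.2, "post_position": 8.0},
    {"group": "control", "pre_position": 6.5, "post_position": 6.4},
    {"group": "variant", "pre_position": 7.9, "post_position": 6.3},
    {"group": "variant", "pre_position": 9.1, "post_position": 7.8},
]

print(round(position_delta(rows, "variant"), 2))
print(round(position_delta(rows, "control"), 2))
```

With real data you would also want the per-pair deltas, not just the group means, so a single outlier URL cannot fake a result.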
The 30/60/90-day learning plan. A quarterly reset that keeps you out of the inertia trap.
| Window | Focus | Deliverable |
|---|---|---|
| Days 1-30 | One new technical skill | A working artifact (script, audit, dashboard) |
| Days 31-60 | One deep concept reading | A 1-page synthesis you publish to LinkedIn or your team |
| Days 61-90 | One experiment on your own site | A documented post-mortem in your experiment log |
Examples of 30-day technical skill targets in 2026:
- Build a Python notebook that pulls GSC data via the API and segments AI Overview citations
- Implement structured data on a personal Astro site with Schema.org validation
- Set up server-log analysis with Screaming Frog Log Analyser or Logflare
- Stand up an AI Overview monitoring system with Profound or a custom script
- Learn Looker Studio at the level of being able to build a 6-page client report from scratch
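For the structured-data target above, the artifact can be as small as a script that emits valid Schema.org JSON-LD. A minimal sketch — every name, date, and URL here is a placeholder, and the output should be validated with the Rich Results Test:

```python
# Generate a Schema.org Article JSON-LD block for a personal site.
# All values are placeholders; swap in your own before shipping.
import json

article = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "What I learned reading one Google patent a quarter",
    "author": {"@type": "Person", "name": "Firstname Lastname"},
    "datePublished": "2026-01-15",
    "mainEntityOfPage": "https://example.com/posts/patent-notes",
}

json_ld = json.dumps(article, indent=2)
# Embed in the page head as:
# <script type="application/ld+json"> ... </script>
print(json_ld)
```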
The annual SEO skills audit. Once a year, score yourself on a structured rubric. The fact that this is uncomfortable is the point.
| Skill area | Score 1-5 | Evidence |
|---|---|---|
| Technical SEO (crawling, rendering, schema, log analysis) | | |
| On-page / content strategy | | |
| Link building / digital PR | | |
| Local search | | |
| International / hreflang | | |
| E-commerce platform expertise | | |
| Analytics / measurement / data engineering | | |
| AI search / GEO / AEO | | |
| Paid + organic integration | | |
| Project management / leadership | | |
| Sales + client comms | | |
| Writing / public communication | | |
Rules:
- A score requires evidence: a project, a case study, a deliverable, a public talk, a published artifact. Don’t score yourself a 4 on schema if you haven’t shipped Schema.org markup that validates against the Rich Results Test in the last 12 months.
- Pick one area to move from 3 to 4 in the next year. Senior careers come from depth, not breadth.
- Pick one area to move from 1 to 2 — your weak spots. The 1-to-2 jump is usually fast and prevents catastrophic blind spots.
- Compare year-over-year. The audit is most useful as a longitudinal record.
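The year-over-year comparison is trivial to automate if you store each audit as a skill-to-score mapping. A small sketch (the skills and scores are illustrative):

```python
# Diff two years of skills-audit scores and flag movement.
# Scores are the 1-5 rubric values from the table above.
def audit_diff(last_year, this_year):
    """Return {skill: score change} between two audit years."""
    return {skill: score - last_year.get(skill, 0)
            for skill, score in this_year.items()}

audit_2025 = {"Technical SEO": 3, "Local search": 1, "Writing": 2}
audit_2026 = {"Technical SEO": 4, "Local search": 2, "Writing": 2}

changes = audit_diff(audit_2025, audit_2026)
improved = [skill for skill, delta in changes.items() if delta > 0]
print(improved)  # skills that moved up this year
```

Keeping the dicts in version control gives you the longitudinal record for free.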
Visualizing it
```mermaid
flowchart TD
    A[Annual skills audit] --> B[Identify 1 area: 3 to 4]
    A --> C[Identify 1 area: 1 to 2]
    B --> D[Q1 plan]
    C --> D
    D --> E[Days 1-30: Skill build]
    E --> F[Days 31-60: Deep reading + synthesis]
    F --> G[Days 61-90: Experiment on owned site]
    G --> H[Document and publish]
    H --> I[Q2 plan]
    I --> J[Repeat the cycle]
    J --> K[Year-end: Reaudit]
    K --> A
```
Bad vs. expert
The bad approach
Tuesday morning, opening Twitter:
- See a thread saying "AI Overviews are killing organic"
- Retweet it
- Spend 25 minutes scrolling reaction takes
- Tell a client over Slack: "AI Overviews are killing organic traffic, we need to pivot"
- Send no actual data, no actual recommendation
- Move on to the next inbox item
- Three weeks later: see a post saying "AI Overviews are actually a non-event"
- Forget you ever told the client otherwise
This is the modal SEO learning loop in 2026. No experiment, no primary source check, no synthesis, no record. The practitioner cycles through hot takes for years and accumulates the feeling of expertise without the substance.
The expert approach
```python
# learning_log.py
# Quarterly experiment runner — quarter 2, 2026
# Hypothesis: are AI Overviews suppressing CTR on our
# commercial-intent queries?
from datetime import datetime, timedelta

import pandas as pd
from googlesearchconsole import Client  # hypothetical wrapper

gsc = Client(site_url="sc-domain:example.com")

# Pull 90 days of query data
end = datetime.utcnow().date()
start = end - timedelta(days=90)
df = gsc.query(
    start=start,
    end=end,
    dimensions=["query", "page", "device", "date"],
    row_limit=25000,
)

# Tag queries by AI Overview presence (manual sample of the top 50
# commercial queries, checked via ChatGPT Search + google.com from a
# clean session, saved as a one-column CSV)
ai_overview_queries = pd.read_csv("ai_overview_sample.csv")["query"]
df["has_ai_overview"] = df["query"].isin(ai_overview_queries)

# Compare CTR for queries with/without an AI Overview
result = df.groupby("has_ai_overview").agg(
    avg_ctr=("ctr", "mean"),
    avg_position=("position", "mean"),
    impressions=("impressions", "sum"),
)
print(result)

# Document in experiment log:
# - 47% CTR drop on AI-Overview queries vs control
# - Position unchanged
# - Conclusion: AI Overviews compress CTR but not rank
#   on this site's commercial queries
# - Action: re-prioritize content roadmap toward bottom-funnel and
#   brand queries; defer top-funnel informational expansion
```
The expert approach is slower, harder, and more boring. It is also the only one that produces actual knowledge. Three months of this loop puts you ahead of practitioners who scroll for three years.
| Mediocre learner | Expert learner |
|---|---|
| Reads reactively | Reads systematically |
| Cites secondhand | Cites primary sources |
| Has opinions, no data | Has experiments, with data |
| No annual audit | Quarterly checkpoints |
| Avoids public writing | Publishes synthesis monthly |
| Keeps no notebook | Keeps a structured log |
Do this today
- Open developers.google.com/search and bookmark the Documentation, Blog, and Search Status Dashboard. Add the dashboard as a Chrome bookmark you check weekly.
- Subscribe to the Search Central YouTube channel and watch the most recent Search Off the Record episode on your next car ride or workout.
- Set up a personal site for experiments if you don't have one. Buy firstname-lastname.com at Cloudflare Registrar, deploy an Astro starter to Vercel, and write your first post within seven days. The constraint of having to write forces synthesis.
- Read one Google patent this month. Start with a recent Helpful-Content-related patent or one of Bill Slawski's annotated favorites at SEO by the Sea. Read the abstract, claims, and background; write a 200-word summary in your own words.
- Run the annual skills audit. Use the rubric above. Be honest. Pick one area to move from 3 to 4 and one to move from 1 to 2 over the next 12 months.
- Write the 30/60/90 plan for next quarter in Notion or a plain Markdown file. Block calendar time for it; an unscheduled plan is a wish.
- Start an experiment notebook. Create experiments.md in your site repo or a Notion database. Log every test you run — hypothesis, sample, metric, run period, result. Future you will use this for case studies, talks, and interviews.
- Publish one piece of synthesis a month. A LinkedIn post, a blog article, a Loom for your team. Teaching is how you learn deeply; publishing is how you build an audience that pays back over years.