Capstone Project
Building or auditing a real site end-to-end, documenting your process, and producing the portfolio piece that gets you the next role or the next client.
The capstone is the single deliverable that proves the previous 124 modules became practical knowledge. It is a real-site engagement, executed end-to-end, documented as a portfolio-grade case study. A finished capstone is the artifact that lands you the senior role, the agency client, or the consulting contract that pays back the time you spent on this course several times over.
TL;DR
- The capstone is real, not hypothetical. It must be a live site you have permission to work on — your own, an employer’s, a client’s, or a friend’s small business.
- Document the decisions, not just the tactics. The hireable capstone shows judgment under uncertainty, not a checklist.
- Ship a public portfolio asset. A Notion page or a firstname-lastname.com/case-studies/ page with first-party data, screenshots, and a Loom walkthrough.
The mental model
A capstone is like a residency for a new doctor. The training program has covered the textbook; the residency is whether you can keep a real patient alive. The first time you crawl a real site with 200,000 URLs, the first time you negotiate a content prune with a stakeholder who is afraid of losing pages, the first time you run a migration where the deadline is real — these are the moments where competence is forged. The classroom teaches the rules; the capstone teaches when to break them.
Three properties make a capstone valuable. Reality — a live site with real stakeholders. Stakes — a measurable outcome you commit to in advance. Documentation — a written record a stranger could audit.
Deep dive: the 2026 reality
A 2026 capstone case study has more competition for attention than ever, but also more compounding leverage. Hiring managers and prospective clients can search LinkedIn, Google, and YouTube in minutes; a portfolio piece that is honest, data-rich, and well-told gets shared in DM threads and lands inbound opportunities for years. The capstone format that works in 2026:
| Element | Bar |
|---|---|
| Live site | Real domain, real traffic, real stakeholders |
| Defined scope | Written brief, success metric, timeline |
| First-party data | GSC, GA4, Ahrefs, log file, your own measurements |
| Documented decisions | What you considered, what you chose, why |
| Outcome with delta | Before/after numbers, confidence interval, attribution discussion |
| Public artifact | Hosted publicly, shareable URL, indexable |
| Loom walkthrough | 5-10 minutes of video commentary |
Choosing the capstone. Three viable formats:
- Build (greenfield). A new niche site, a personal portfolio, or a small business launching online. Best for beginners and intermediates; the cost of missteps is your own.
- Audit + recover (existing site). An audit of a live site with the diagnostic and intervention plan executed. Best for technical specialists and consultants. Most popular with hiring managers because it mimics the work you’d actually do for them.
- Migrate + recover (large site). A platform migration, URL restructure, or recovery from a Helpful Content suppression. Best for senior-track candidates; this is the highest-stakes capstone with the highest payoff.
Capstone exam structure. Use this rubric to plan and self-grade your capstone.
| Section | Weight | What it must contain |
|---|---|---|
| Context | 5% | Vertical, scale, stack, your role, constraints |
| Diagnosis | 25% | What’s broken, evidence, prioritized list |
| Decisions | 25% | What you chose, what you rejected, why |
| Execution | 20% | What was shipped, with timestamps and changelog |
| Outcome | 15% | KPI delta, confidence interval, AI Overview citation change |
| Reflection | 10% | What worked, what didn’t, what you’d do differently |
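The rubric above can be turned into a quick self-scoring script. A minimal sketch: the section names and weights come from the table, while the 0-10 grades in the example are hypothetical.

```python
# Self-grade a capstone: each rubric section gets a 0-10 grade,
# weighted by the percentages from the table above.
WEIGHTS = {
    "Context": 0.05,
    "Diagnosis": 0.25,
    "Decisions": 0.25,
    "Execution": 0.20,
    "Outcome": 0.15,
    "Reflection": 0.10,
}

def capstone_score(grades: dict[str, float]) -> float:
    """Weighted average of 0-10 section grades, scaled to 0-100."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(WEIGHTS[s] * grades[s] for s in WEIGHTS) * 10

# Hypothetical self-assessment: strong diagnosis, weak reflection.
grades = {"Context": 9, "Diagnosis": 8, "Decisions": 7,
          "Execution": 8, "Outcome": 6, "Reflection": 5}
print(round(capstone_score(grades), 1))  # → 72.0
```

A score under ~70 usually means the Decisions or Reflection sections are thin; fix those before publishing, since they carry the most reviewer trust per word.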
Example capstone scenarios. Three realistic 2026 capstones with starting briefs.
Scenario A — Local home services SMB. A roofing company in Nashville with 14 ranked keywords, 320 monthly organic sessions, no Google Business Profile photos, a slow WordPress site, and a competitor that has 4x the local-pack visibility. Your capstone is a 90-day engagement to triple qualified leads from organic, with $0 budget and one hour per week of the owner’s time. Success metric: lead form submissions from organic > 30/month by day 90, up from 8/month.
Scenario B — B2B SaaS migration recovery. A 20-person SaaS company migrated from WordPress to Next.js eight weeks ago and lost 38% of organic traffic. Your capstone is a 60-day diagnostic and intervention to recover to within 5% of pre-migration baseline. Success metric: organic sessions within -5% of pre-migration baseline within 90 days; AI Overview citation rate restored to pre-migration levels.
Scenario C — Affiliate site rebuild. A 4-year-old kitchen-equipment site with 220 articles that lost 70% of traffic in the September 2023 Helpful Content update. Your capstone is to triage, prune, rewrite, and document a recovery path. Success metric: traffic recovery to 50% of pre-update peak within 9 months, with ≥ 30 articles surviving the prune.
Documenting the process. Three documents minimum:
# 1. The brief (signed before work starts)
- Site, scope, timeline, success metric, what success looks like, exit clause if applicable.
# 2. The decision log
- Date, decision, options considered, what was chosen, what was rejected, expected outcome; actual outcome reviewed at +30/+60/+90 days.
# 3. The case study (published artifact)
- Public-facing version of the decision log plus outcome, anonymized only as much as the client requires.
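The decision-log fields listed above map naturally onto a structured record, which makes the +30/+60/+90-day reviews hard to skip. A minimal sketch in Python; the field names follow the bullet list, and the example entry is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One decision-log entry: what was considered, chosen, and why,
    with the actual outcome reviewed at +30/+60/+90 days."""
    date: str
    decision: str
    options_considered: list[str]
    chosen: str
    rejected: list[str]
    expected_outcome: str
    # Keyed by review checkpoint: "+30", "+60", "+90".
    actual_outcome: dict[str, str] = field(default_factory=dict)

# Hypothetical entry for a content-prune decision.
entry = Decision(
    date="2026-01-12",
    decision="Prune thin templated pages",
    options_considered=["prune", "consolidate into hubs", "leave as-is"],
    chosen="prune",
    rejected=["consolidate into hubs", "leave as-is"],
    expected_outcome="Classifier unblocks within ~60 days",
)
entry.actual_outcome["+30"] = "Crawl rate up, no ranking change yet"
```

Whether you keep this in a spreadsheet, Notion, or a script is irrelevant; what matters is that every entry records the rejected options, because that is what the case study's Decisions section is built from.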
The decision log is the highest-leverage document of the year. It is also the document senior reviewers (interviewers, prospective clients) will skim first because it shows whether you think.
Stakes and timing. A capstone with no stakes is an exercise; a capstone with real stakes is a portfolio piece. Build stakes by:
- Committing to a written success metric before starting.
- Sharing the metric with at least three people who will hold you to it.
- Setting a calendar date for delivery and sticking to it.
- Publishing a public update halfway through. The pressure of a public update prevents the “I’ll work on it next month” trap that kills 80% of capstone attempts.
Visualizing it
```mermaid
flowchart TD
A[Pick capstone scenario] --> B[Sign brief with stakeholder]
B --> C[Week 1: Baseline measurement]
C --> D[Weeks 2-4: Audit and diagnostic]
D --> E[Weeks 5-8: First wave of fixes]
E --> F[Weeks 9-12: Content and links]
F --> G[Day 90: Outcome review]
G --> H{Metric hit?}
H -->|Yes| I[Write case study and publish]
H -->|No| J[Diagnose gap and document]
J --> I
I --> K[Loom walkthrough recorded]
K --> L[Portfolio artifact public]
L --> M[Use in interviews and pitches]
```
Bad vs. expert
The bad approach
A “case study” that reads like a testimonial:
We helped Acme Co grow their organic traffic by 250%
in 6 months! Through a combination of technical SEO,
content marketing, and link building, we transformed
their search presence and now they're ranking for
hundreds of new keywords.
Want results like these? Contact us today!
This is what the bottom 70% of agency case studies look like. There is no scope, no method, no decision, no honest discussion of what didn’t work. A reader cannot tell whether the engagement was 6 months or 6 years, whether the +250% was on a tiny base, or whether the agency had any role beyond watching the curve go up. It is marketing copy disguised as proof.
The expert approach
A real case study published at firstname-lastname.com/case-studies/acme-recovery/.
<article class="case-study">
<h1>Helpful Content Recovery: Acme HR-Tech, 9 Months</h1>
<section class="metadata">
<dl>
<dt>Engagement period</dt>
<dd>July 2025 – April 2026</dd>
<dt>My role</dt>
<dd>Fractional Head of SEO, ~12 hours/week</dd>
<dt>Stack</dt>
<dd>Next.js 14, Sanity CMS, Vercel</dd>
<dt>Starting state</dt>
<dd>9,400 monthly organic sessions, down from
peak of 41,000 in Aug 2023</dd>
<dt>Success metric</dt>
<dd>Recover to 25,000+ monthly organic sessions
within 9 months, with ≥ 50% of demo requests
attributed to organic</dd>
</dl>
</section>
<section class="diagnosis">
<h2>Diagnosis</h2>
<ul>
<li>340 thin templated pages from a 2023 pSEO
experiment, classifier-suppressed</li>
<li>Mobile LCP averaging 4.2s (failing) on the
/pricing/ and /compare/ directories</li>
<li>Zero Schema.org Article markup; competitors
appearing in AI Overviews while Acme was not</li>
<li>Author identity vacant — 218 of 240 published
articles attributed to "Acme Team"</li>
</ul>
</section>
<section class="decisions">
<h2>Decisions</h2>
<h3>Pruned 287 templated pages over consolidate</h3>
<p>I considered consolidating the templated pages
into 35 hub pages, which would have preserved
some link equity. I chose to prune because the
templated content was so thin that consolidation
would have produced 35 still-thin pages and would
have required four months of editorial work.
Pruning unblocked the classifier within 60 days.</p>
<h3>Hired three named subject-matter experts</h3>
<p>The single highest-impact intervention. Author
identity drove a 22% lift in average ranking
position on legacy pages within 90 days, with
no other content change.</p>
<!-- 4 more decisions documented similarly -->
</section>
<section class="execution">
<h2>What was shipped</h2>
<table>
<!-- changelog with dates -->
</table>
</section>
<section class="outcome">
<h2>Outcome at month 9</h2>
<ul>
<li>Monthly organic: 9,400 → 31,200 (+232%)</li>
<li>AI Overview citations: 4 → 67 unique queries</li>
<li>Demo requests from organic: 12 → 84/month</li>
<li>Pipeline attributed to organic: $1.4M ARR</li>
</ul>
<p class="confidence">Caveat: a competitor exited
the market in February 2026, contributing roughly
8-12% of the lift. Adjusted estimate of attributed
lift: +200%.</p>
</section>
<section class="reflection">
<h2>What I'd do differently</h2>
<ol>
<li>Hire the named experts in month one, not
month four. The single biggest delay.</li>
<li>Run the GSC log-export pull in week one to
map redirect chains; I waited until week six
and lost time.</li>
<li>Set the "no new pSEO until classifier
unblocks" rule in the brief; the dev team
shipped a new programmatic feature in month
three that I had to reverse.</li>
</ol>
</section>
<aside class="loom">
<a href="https://loom.com/share/abc123">
Watch a 7-minute walkthrough of this case study
</a>
</aside>
</article>
The case study works because every section answers a real question a hireable reader has: was the engagement real, was the scope real, was the diagnosis sound, were the decisions defensible, was the outcome honestly reported, has the operator learned anything? The “what I’d do differently” section is the single highest-trust signal. Everyone has wins; the operators who can name their misses are the operators worth hiring.
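The confound adjustment in the outcome section is simple arithmetic worth making explicit, because reviewers will check it. A sketch using the numbers from the Acme example; the 10% confound share is the midpoint of the stated 8-12% range.

```python
def adjusted_lift(baseline: float, final: float, confound_share: float) -> float:
    """Percent lift after removing the share of the absolute gain
    attributed to an external confound (e.g., a competitor exiting)."""
    raw_gain = final - baseline                     # absolute session gain
    attributable = raw_gain * (1 - confound_share)  # gain credited to the work
    return attributable / baseline * 100            # as a percentage

# Numbers from the Acme case study: 9,400 -> 31,200 monthly sessions.
print(round(adjusted_lift(9_400, 31_200, 0.0)))   # → 232 (raw lift, as reported)
print(round(adjusted_lift(9_400, 31_200, 0.10)))  # → 209 (adjusted)
```

The adjusted figure of roughly +209% is what the case study rounds down to "+200%". Reporting both numbers, with the confound named, is exactly the honesty signal the comparison table below calls "confound discussion."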
| Marketing-style case study | Portfolio-grade case study |
|---|---|
| No timeline | Engagement period named |
| No role | Specific role and hours |
| No diagnosis | Documented diagnosis with evidence |
| No decisions | 4-7 decisions logged with rationale |
| Vanity metric only | KPI with confidence and confound discussion |
| No reflection | What I’d do differently — named |
| No video | Loom walkthrough |
Do this today
These steps run in sequence. Plan for 12-16 weeks total.
- Week 1: Pick the scenario. Choose between Build, Audit, or Migrate based on your situation. Identify the specific site (your own, your employer’s, a friend’s, or a paid mini-engagement). Confirm permission in writing — even an email saying “yes, you can use this for your portfolio” is enough.
- Week 1: Write the brief. One page. Vertical, scope, timeline, primary success metric, secondary metrics, what’s in scope, what’s out of scope, exit clause if applicable. Get the stakeholder to acknowledge in writing.
- Week 2: Take the baseline. Pull screenshots and exports from GSC, GA4, Ahrefs or Semrush, PageSpeed Insights, and your AI Overview tracker (Profound or manual). Save them in a baseline/ folder with timestamps. Future you will thank present you for this.
- Weeks 3-5: Run the diagnostic. Use the audit framework from Module 123. Output a prioritized list with impact-vs-effort scoring. Share with the stakeholder for buy-in before executing.
- Weeks 6-9: Execute the first wave. Quick wins, technical fixes, prune, content rewrites. Document every decision in the decision log on the day you make it. Take screenshots before and after each meaningful change.
- Weeks 10-12: Execute the second wave. New content, schema, link building, AI-Overview citation work. Update the decision log weekly.
- Week 13: Outcome review. Pull the same set of metrics you pulled at baseline. Compute the delta. Have an honest conversation with yourself about what’s attributable to your work versus other factors (algo updates, competitor changes, seasonality).
- Week 14: Write the case study. Use the structure shown in the expert example. 1,500-3,000 words. Publish to your portfolio site at a public URL — firstname-lastname.com/case-studies/[slug]/. Add Article + Author schema and a clear date.
- Week 15: Record the Loom. 7-10 minutes. Walk through the case study with your face on camera. Make the most important point in the first 90 seconds.
- Week 16: Distribute. Publish on LinkedIn with a 5-bullet summary in the post body and the link in the first comment. Send to 10 senior practitioners in your specialization with a personal note. The case study is a compounding asset — every share, comment, and citation you earn over the next two years originates from this week’s effort.
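The Article + Author schema mentioned in the week-14 step can be generated as JSON-LD and embedded in the page head. A minimal sketch: the names, URL, and date are placeholders, and the fields shown are a reasonable subset of schema.org's Article type, not an exhaustive markup.

```python
import json

def article_schema(headline: str, author: str, url: str,
                   date_published: str) -> str:
    """Build a minimal schema.org Article JSON-LD block with a
    named author, ready to embed in a <script> tag."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "url": url,
        "datePublished": date_published,
        "author": {"@type": "Person", "name": author},
    }
    return json.dumps(data, indent=2)

# Placeholder values; swap in your own case-study details.
jsonld = article_schema(
    headline="Helpful Content Recovery: Acme HR-Tech, 9 Months",
    author="Firstname Lastname",
    url="https://firstname-lastname.com/case-studies/acme-recovery/",
    date_published="2026-05-01",
)
print(f'<script type="application/ld+json">\n{jsonld}\n</script>')
```

Validate the output with Google's Rich Results Test before publishing; a named Person author, not an organization, is the point of the exercise.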