Working with Developers
Writing SEO specs that ship, translating SEO into dev tickets, prioritization frameworks, and code review for SEO.
The hardest part of senior technical SEO is not the SEO. It is writing tickets developers willingly accept and shipping schema, redirects, and template changes through a code review process. This module is the playbook: writing specs that survive triage, prioritizing on dev cost rather than just SEO impact, and reviewing pull requests for SEO correctness.
TL;DR
- An SEO ticket without test URLs and acceptance criteria is rejected by good engineering teams. Specify the page templates affected, the expected DOM/HTTP/JSON-LD output, the test URLs, and the regression risk. Vague intent statements get punted to “later sprint.”
- Prioritize on impact ÷ effort, not impact alone. A high-impact item that requires three sprints loses to two medium-impact items that ship in one. ICE, RICE, and PXL frameworks all encode this; pick one and use it consistently.
- Code review for SEO is non-negotiable. Every PR touching templates, redirects, sitemaps, robots.txt, or rendering deserves an SEO reviewer label. Catch the regression in review, not in GSC.
The mental model
Working with developers is like writing a building permit. The architect (SEO) draws the plans. The contractor (developer) builds. The inspector (code review and QA) verifies the build matches the plan and the code. The permit (your spec) is the contract. Ambiguity in the permit produces variance in the build; variance in the build produces remediation work.
A great permit is specific, scoped, and falsifiable. It says exactly what walls go where, what materials, to what tolerance, and what tests will prove the build is correct. A bad permit says “improve the kitchen.” The contractor can build anything that vaguely matches and you have no recourse to reject the work.
The corollary for SEO: your tickets are your permits. The skill is not generic writing polish; it is writing them so the developer can implement them, QA can verify them, and the rollback plan is obvious. Once that skill is reliable, you ship 5–10× more SEO change per quarter.
Deep dive: the 2026 reality
Modern web stacks make some SEO work easier and some harder. The key shifts to internalize:
SSR/SSG by default for SEO-critical pages. Next.js 15 App Router, Astro 5, SvelteKit 2, and Remix v3 all default to server rendering or static generation for SEO routes. Client-only React (CSR) is now a smell on a money page. If your developer pushes back on SSR, the conversation has to happen at architecture review, not at SEO ticket review.
Schema lives in the framework. Next.js apps add JSON-LD via generateMetadata or <Script type="application/ld+json">. Astro apps embed schema directly in MDX or layout components. The spec must reference the framework’s idiomatic place to put schema, not a generic “add schema to the page.”
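To make “the framework’s idiomatic place” concrete, here is a minimal sketch of serializing product data into a JSON-LD string for server rendering. The `Product` type and field names are illustrative assumptions, not any particular codebase’s types:

```typescript
// Illustrative product shape — not a real codebase's type.
type Product = {
  name: string;
  sku: string;
  price: string;
  currency: string;
  inStock: boolean;
};

// Build the JSON-LD payload server-side so it ships in the initial HTML.
function productJsonLd(p: Product): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "Product",
    name: p.name,
    sku: p.sku,
    offers: {
      "@type": "Offer",
      price: p.price,
      priceCurrency: p.currency,
      availability: p.inStock
        ? "https://schema.org/InStock"
        : "https://schema.org/OutOfStock",
    },
  });
}
// In a Next.js server component this string would be rendered into a
// <script type="application/ld+json"> tag; in Astro, embedded in a layout.
```

The point of the helper is that the spec can name one function as the single source of the payload, which is what makes “renders correct values for N sampled products” testable.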
Edge rendering complicates redirects. Vercel, Cloudflare Workers, Netlify Edge, and Fastly Compute all execute redirect logic at the edge. A 301 in next.config.js is implemented differently than one in a Cloudflare Worker. Your spec must specify where the redirect lives.
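To see why “where the redirect lives” matters, here is the same 301 sketched at two layers. The Next.js `redirects()` config shape is real; the paths and the worker-style function are illustrative assumptions:

```typescript
// Layer 1 — framework config (next.config.js), shown as a comment:
//   async redirects() {
//     return [{ source: "/old-blog/:slug", destination: "/blog/:slug", permanent: true }];
//   }
//
// Layer 2 — an edge worker, sketched as a plain function so the
// mapping logic is visible. In a real Cloudflare Worker this would
// run inside the fetch handler before origin lookup.
function edgeRedirect(pathname: string): { status: number; location: string } | null {
  const m = pathname.match(/^\/old-blog\/(.+)$/);
  if (m) return { status: 301, location: `/blog/${m[1]}` };
  return null; // fall through to the origin
}
```

Same user-visible behavior, different owners, different deploy pipelines, different rollback stories — which is exactly why the spec must pick one.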
robots.txt and sitemaps are now generated, not static. Most modern stacks generate robots.txt and sitemap.xml from app routes (app/robots.ts, app/sitemap.ts in Next.js; src/pages/sitemap.xml.ts in Astro). Hand-edited static files have largely disappeared. Your spec should reference the generator, not a static file path.
AI crawler management is a 2026 line item. GPTBot, ClaudeBot, OAI-SearchBot, PerplexityBot, Google-Extended, and friends each have a distinct user agent. The robots.txt strategy is now a multi-table decision: who can train, who can search-cite, who is blocked.
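A sketch of that multi-table decision, shaped like the object a Next.js `app/robots.ts` generator returns. The specific allow/block choices below are example policy, not a recommendation:

```typescript
// One rule per user agent; shape mirrors Next.js's generated robots output.
type Rule = { userAgent: string; allow?: string; disallow?: string };

function robotsPolicy(): { rules: Rule[]; sitemap: string } {
  return {
    rules: [
      { userAgent: "Googlebot", allow: "/" },
      { userAgent: "OAI-SearchBot", allow: "/" },      // search citation: allow (example policy)
      { userAgent: "GPTBot", disallow: "/" },          // training: block (example policy)
      { userAgent: "Google-Extended", disallow: "/" }, // training: block (example policy)
      { userAgent: "*", allow: "/", disallow: "/admin/" },
    ],
    sitemap: "https://example.com/sitemap.xml",
  };
}
```

Because the file is code, the who-can-train / who-can-cite decision becomes reviewable in a PR rather than a hand edit on a static file.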
The anatomy of a shipping spec
A spec that ships includes ten sections. Anything less is a wish list.
| Section | Purpose |
|---|---|
| Goal | One sentence: what this change achieves for organic traffic |
| Pages affected | Templates, route patterns, count of URLs |
| Current behavior | What is rendered/served today; pull from a real URL |
| Expected behavior | What should be rendered/served; with code or markup samples |
| Implementation notes | Where in the codebase the change lives |
| Acceptance criteria | Testable assertions, ideally automatable |
| Test URLs | Real URLs the QA can verify against |
| Regression risk | What this change could break |
| Rollout plan | All-at-once, gradual, behind a feature flag |
| Rollback plan | How to revert in under 15 minutes if something breaks |
Prioritization frameworks
| Framework | Components | Best for |
|---|---|---|
| ICE | Impact × Confidence × Ease | Smallest teams; fastest scoring |
| RICE | Reach × Impact × Confidence ÷ Effort | Mid-size teams with clearer reach data |
| PXL | Weighted, mostly binary scoring across 12+ objective criteria | CRO/SEO hybrid teams |
| MoSCoW | Must, Should, Could, Won’t | Roadmap framing, not single-ticket scoring |
| Cost of Delay | Value of having it now vs. later | Enterprise where opportunity cost is calculable |
Pick one and stick with it. Switching frameworks every quarter is worse than using a mediocre one consistently. Most SEO teams should default to ICE and graduate to RICE when they have clean reach data.
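“Pick one and use it consistently” is easy to enforce when the scoring is code. A minimal ICE scorer, with illustrative field names and an assumed 1–10 scale for each component:

```typescript
// Illustrative backlog item shape; each component scored 1–10.
type Item = { name: string; impact: number; confidence: number; ease: number };

function iceScore(i: Item): number {
  return i.impact * i.confidence * i.ease;
}

// Highest score first — the sprint-planning order.
function rankBacklog(items: Item[]): Item[] {
  return [...items].sort((a, b) => iceScore(b) - iceScore(a));
}
```

Note how the multiplication encodes the impact ÷ effort point from the TL;DR: a high-impact item with ease 2 loses to a medium-impact item with ease 6.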
The dev-cost matrix
Effort estimates are often the largest source of friction. A frame that helps:
| Change | Typical effort |
|---|---|
| Add a meta tag in CMS | 0.5 day |
| Add JSON-LD to an existing template | 1–2 days |
| Add new page template | 5–10 days |
| Ship a URL pattern change with redirects | 3–7 days |
| Implement a pagination/canonical strategy | 5–10 days |
| Server-render previously CSR pages | 10–30 days |
| Migrate JS framework | months |
Use these as anchors when reading a developer’s estimate. A “two weeks for JSON-LD” estimate signals either an unfamiliar codebase or a deeper architectural problem; either way, dig.
Code review for SEO
The SEO reviewer’s job in a PR is to check the SEO-relevant surfaces. A practical checklist:
| Surface | Check |
|---|---|
| Title tag | Unique per page, under 60 chars, includes target query |
| Meta description | Unique per page, 140–160 chars |
| Canonical | Self-referential or correctly cross-referenced |
| Hreflang (if i18n) | Bidirectional, valid language codes, x-default present |
| H1 | Single H1, matches intent |
| Structured data | Validates, no warnings on Rich Results Test |
| Internal links | No nofollow on internal links unless intentional |
| Images | alt, width, height, lazy where appropriate |
| Pagination | rel="next"/"prev" deprecated; canonical-aware |
| Status codes | 200 for content, 301 for redirects, 404 for missing, 410 for gone |
| robots.txt | No accidental disallows, AI crawler rules correct |
| sitemap.xml | Includes only canonical, indexable URLs |
| JS rendering | SEO-critical content present in the SSR HTML |
| Performance | LCP image preloaded; no layout shift |
Reviewing a PR with this list takes 20–30 minutes once the reviewer is fluent. Letting an unreviewed template ship can cost months of recovery.
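Parts of that checklist can be pre-screened mechanically before a human ever opens the PR. A toy lint over the SSR HTML — string matching only, so treat it as a sketch of the idea, not a substitute for a DOM-aware check:

```typescript
// Flags a few surfaces from the review table: title, canonical, H1, JSON-LD.
function seoLint(html: string): string[] {
  const problems: string[] = [];

  const titles = html.match(/<title>(.*?)<\/title>/g) ?? [];
  if (titles.length !== 1) problems.push("expected exactly one <title>");
  else if (titles[0].length - "<title></title>".length > 60)
    problems.push("title over 60 chars");

  if (!/<link[^>]+rel="canonical"/.test(html)) problems.push("missing canonical");

  if ((html.match(/<h1[\s>]/g) ?? []).length !== 1)
    problems.push("expected exactly one <h1>");

  if (!/application\/ld\+json/.test(html))
    problems.push("no JSON-LD in SSR HTML");

  return problems;
}
```

Run against the server-rendered output of the test URLs (what `curl` returns, not the hydrated DOM), so the JS-rendering row of the table is checked implicitly.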
Visualizing it
sequenceDiagram
participant SEO
participant PM
participant Dev
participant QA
SEO->>SEO: Identify SEO opportunity
SEO->>SEO: Estimate impact, effort, confidence
SEO->>PM: Submit prioritized ticket
PM->>Dev: Pull into sprint
Dev->>Dev: Implement
Dev->>SEO: PR review request, label seo-review
SEO->>SEO: Review against checklist
alt Issue found
SEO->>Dev: Comment with specific fix
Dev->>SEO: Fix and re-request
else Passes
SEO->>Dev: Approve
end
Dev->>QA: Merge to staging
QA->>SEO: Validate test URLs
SEO->>QA: Confirm acceptance criteria met
QA->>Dev: Promote to production
SEO->>SEO: Monitor GSC, Lighthouse, ranks
Bad vs. expert
The bad approach
Title: SEO improvements
We need better SEO on the product pages. Add schema, fix the meta
titles, make sure pages are mobile-friendly, and improve speed.
Priority: high
Owner: dev team
This ticket is rejected on sight by any good engineering manager. There is no scope, no acceptance criteria, no test URL, no estimate of size, no rollback. The dev team cannot accept it because it cannot be marked done. Even if a developer takes it on, the work that ships will not match the SEO intent because the intent was never specified.
The expert approach
A real production-grade ticket, with the JSON-LD payload shown alongside:
# SEO-451 — Add Product schema with offers and aggregateRating to PDP template
## Goal
Make every product detail page eligible for product-snippet rich
results. Target: the Product snippets and Merchant listings reports
in Search Console showing 5,840 URLs valid within 30 days.
## Pages affected
- Template: app/products/[slug]/page.tsx
- URL pattern: /products/{slug}
- Count: 5,840 URLs (matches pages table where kind = 'product')
## Current behavior
View source of https://example.com/products/sample-widget — no
JSON-LD present in head or before closing body.
## Expected behavior
Each product page renders the JSON-LD payload below (see the JSON
block in the ticket attachments) in the document head, generated
server-side from the existing product data.
## Implementation notes
Next.js App Router. Render a <script type="application/ld+json">
tag from the page's server component (generateMetadata cannot emit
JSON-LD; use dangerouslySetInnerHTML with the serialized payload).
Reuse the existing getProduct(slug) fetch — no additional database
round-trip.
## Acceptance criteria
- JSON-LD present in initial HTML (verify via curl piped to grep)
- Validates in Schema.org Validator with zero errors
- Validates in Google Rich Results Test as Product eligible
- Renders correct values for at least 10 sampled products
- No priceValidUntil more than 12 months in the future
- Hidden when aggregateRating.reviewCount = 0 (omit field rather than show 0)
## Test URLs
- /products/sample-widget
- /products/popular-blue-thing
- /products/zero-review-edge-case
- /products/out-of-stock-product
- /products/multi-currency-eu-only
## Regression risk
- Schema markup conflicts with existing Breadcrumb schema → add both, do not replace
- priceValidUntil static strings could go stale → must be derived from data
- Out-of-stock products: availability must be OutOfStock, not omitted
## Rollout plan
Deploy behind feature flag enable-product-schema to 10% of traffic.
Monitor GSC enhancements report for 48 hours. Roll to 100% if no
warnings.
## Rollback plan
Toggle the feature flag off. Schema disappears from new responses
within 60 seconds (no edge cache TTL beyond that on this template).
The exact JSON-LD payload referenced above:
{
"@context": "https://schema.org",
"@type": "Product",
"name": "Sample Widget",
"image": ["https://cdn.example.com/p/sample-widget/main.jpg"],
"description": "Plain-text product description, max 5000 chars.",
"sku": "SW-001",
"brand": {"@type": "Brand", "name": "Example"},
"offers": {
"@type": "Offer",
"url": "https://example.com/products/sample-widget",
"priceCurrency": "USD",
"price": "29.99",
"priceValidUntil": "2026-12-31",
"availability": "https://schema.org/InStock",
"itemCondition": "https://schema.org/NewCondition"
},
"aggregateRating": {
"@type": "AggregateRating",
"ratingValue": "4.6",
"reviewCount": "412"
}
}
This works because every ambiguity is resolved before the developer touches the file. The engineer can implement in a half-day, the SEO can verify in 15 minutes, and the rollback is one toggle. Tickets in this format ship in the sprint they are pulled into, every time.
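Two of the acceptance criteria above lend themselves to automation. A sketch, with the field names mirroring the payload (the 12-month horizon and the zero-review rule come straight from the ticket):

```typescript
// Checks: priceValidUntil no more than 12 months out, and
// aggregateRating omitted (not zeroed) when there are no reviews.
function checkProductSchema(schema: any, today: Date): string[] {
  const errs: string[] = [];

  const until = new Date(schema.offers?.priceValidUntil);
  const horizon = new Date(today);
  horizon.setMonth(horizon.getMonth() + 12);
  if (isNaN(until.getTime()) || until > horizon)
    errs.push("priceValidUntil missing or more than 12 months out");

  const rating = schema.aggregateRating;
  if (rating && Number(rating.reviewCount) === 0)
    errs.push("aggregateRating present with reviewCount 0 — omit the field instead");

  return errs;
}
```

Wiring a check like this into CI against the five test URLs turns the acceptance criteria from a manual QA step into a regression gate.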
Do this today
- Audit your last 10 SEO tickets in Linear or Jira. Score each on the ten-section spec rubric. Flag any that lacked test URLs, acceptance criteria, or rollback plans.
- Build a single ticket template in your PM tool with the ten sections pre-filled. Force it as the starting point for any ticket labeled seo.
- Establish a seo-review GitHub label or equivalent. Any PR touching the listed surfaces (templates, redirects, robots.txt, sitemap, schema, hreflang, edge config) automatically requires the label and an SEO reviewer.
- Document your prioritization framework. Commit to ICE for the next quarter. Build the scoring sheet in Airtable with columns Impact, Confidence, Ease, Score (computed).
- Pull your top 20 backlog items, score them with ICE, and re-sort. Move the top 5 into the next sprint and archive anything below the median.
- Set up a PR-review checklist in GitHub or your code-host. Use the surface-by-surface table from this module. Save as a saved reply or PR template.
- Pair-review with a developer for one sprint. They explain code paths; you explain SEO surfaces. Both sides leave with vocabulary to write better tickets together.
- For each major framework in your stack (Next.js, Astro, etc.), document the canonical place for canonical tags, schema, robots, and sitemap. Save in the team Notion under /seo-engineering/.
- Build a “what shipped” report in Looker Studio or GSC showing weekly count of completed SEO tickets, segmented by surface. Velocity is the leading indicator that the system is working.
- Schedule a 60-minute quarterly retro with engineering leads on SEO collaboration. Topics: which tickets shipped, which slipped, where the spec was insufficient, where the priority frame conflicted with engineering’s. Document the action items and revisit at the next retro.