CrossGen AI

The Builder's Log
February 16, 2026

24 Commits, 677 Tests, One Dashboard: Shipping Fast Without Cutting Corners

This week I built a full MLB analytics dashboard from zero to production-ready in 7 days. 24 commits. 677 tests. Real Statcast data. WASM performance bindings. Here's how numbered deliverables and test-first discipline made it possible.

Welcome back to The Builder's Log. Every week I share what's actually happening in my lab. Not theory, not hype, just the real work of building with AI.

This week was the most productive coding week I've had in years. Let me show you what happened.

This Week in the Lab

I set out to build an MLB analytics dashboard. Not a toy. A real application with live data, WebSocket connections, offline PWA support, WASM-accelerated calculations, and a full end-to-end test suite.

Seven days later, it exists. 24 commits. 520 files. 677 tests across unit, integration, and E2E. A working Statcast data pipeline pulling real pitch-level data from Baseball Savant.

The question everyone asks about AI-assisted development is "how fast can you go?" The better question is "how fast can you go while still doing it right?" That's what this week answered.

Deep Dive: The Numbered Deliverable System

The secret wasn't some new AI trick. It was a boring organizational pattern that made everything else possible.

I broke the entire project into numbered deliverables: D1 through D14. Each one was small enough to finish in a session, big enough to be meaningful, and testable on its own.

Here's what the sequence looked like:

  • D1-D2: Core dashboard structure, PWA offline support, export and sharing
  • D3: WebSocket live data, game simulator, real-time graph propagation (207 tests)
  • D4: WASM Phase 2 bindings for performance-critical calculations (86 tests)
  • D5: Data Hub panel with import UI and progress tracking (23 tests)
  • D6: End-to-end test suite covering navigation and workflow (62 tests)
  • D7-D8: Analytics engine and chart components wired into all panels (129 tests)
  • D9-D10: Integration tests for remaining panel analytics (83 tests)
  • D11-D13: Performance, accessibility, and production readiness (43 tests)
  • D14: Real Statcast data pipeline with API proxy and URL builder (44 tests)

After D14, I added multi-season data accumulation, auto-chunking for large fetches, and Architecture Decision Records for the strategic decision features.

Why does this work so well? Three reasons.

First, each deliverable has a clear "done" definition. When D3 says "207 tests," there's no ambiguity about whether it shipped. The tests either pass or they don't.

Second, the sequence creates momentum. Each commit builds on the last. By D7, I had a live foundation that made D8 through D14 faster because the patterns were established.

Third, AI assistants thrive with bounded scope. When I tell Claude "implement D5: Data Hub panel with import UI and progress tracking," it has a clear objective. Compare that to "build me a dashboard" and you can see why scoping matters.

This is the same principle that works in operations management. When I ran logistics at Amazon, the teams that hit their numbers broke big goals into daily deliverables. Same pattern, different domain.

Tools and Techniques

Three things that made this week's speed possible:

1. WASM for Hot Paths

The dashboard does heavy statistical calculations: weighted averages, percentile rankings, trend projections. Running those in JavaScript was fine for small datasets, but Statcast data gets big fast.

I wrote the core math in Rust, compiled to WebAssembly, and bridged it into the TypeScript panels. The WASM modules handle batting averages, slugging percentages, and win probability calculations. JavaScript handles the UI. Each does what it's best at.

The key insight: you don't need WASM everywhere. Profile first, then move only the hot paths. Five WASM modules covered everything performance-critical in this app.
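The pattern behind this split can be sketched in TypeScript. This is a minimal illustration of "hot path behind an interface," not the app's actual API: in the real dashboard the fast backend is a wasm-bindgen module, while here both the `StatsBackend` interface and the pure-TypeScript fallback are assumptions made up for the example.

```typescript
// Hot path behind an interface: UI code asks for "a" backend and never
// cares whether it's WASM or plain TypeScript. All names are illustrative.
interface StatsBackend {
  battingAverage(hits: number, atBats: number): number;
  sluggingPct(singles: number, doubles: number, triples: number,
              homers: number, atBats: number): number;
}

// Pure-TypeScript fallback, used until the WASM module has loaded
// (or for small datasets where the bridge overhead isn't worth it).
const jsBackend: StatsBackend = {
  battingAverage: (hits, atBats) => (atBats > 0 ? hits / atBats : 0),
  sluggingPct: (s, d, t, hr, ab) =>
    ab > 0 ? (s + 2 * d + 3 * t + 4 * hr) / ab : 0,
};

// Swap in the WASM-backed implementation when it's available.
function pickBackend(wasm: StatsBackend | null): StatsBackend {
  return wasm ?? jsBackend;
}

const stats = pickBackend(null); // pretend WASM hasn't loaded yet
console.log(stats.battingAverage(27, 100)); // 0.27
```

Because the panels only depend on the interface, profiling can move a calculation from the fallback into WASM later without touching any UI code.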

2. Auto-Chunking Data Fetches

Baseball Savant rate-limits API calls and returns massive payloads for full-season queries. Instead of fetching a whole season at once and hoping it works, I built an auto-chunking system.

The fetcher breaks date ranges into configurable chunks (default: 7-day windows), fetches each one independently, and persists results per chunk. If a fetch fails halfway through a season, you pick up where you left off instead of starting over.

This pattern works for any large data ingestion. Break it into resumable chunks. Persist progress. Never lose work.
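The chunking-plus-resume idea can be sketched in a few lines of TypeScript. This is an illustrative sketch only: the 7-day default comes from the text, but the `fetchChunk` callback and the in-memory progress set are stand-ins for whatever fetcher and persistence layer the real app uses.

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Break [start, end] into windows of chunkDays days each.
function chunkDateRange(start: Date, end: Date, chunkDays = 7): Array<[Date, Date]> {
  const chunks: Array<[Date, Date]> = [];
  const step = chunkDays * DAY_MS;
  for (let t = start.getTime(); t <= end.getTime(); t += step) {
    chunks.push([new Date(t), new Date(Math.min(t + step - 1, end.getTime()))]);
  }
  return chunks;
}

// Resume logic: skip chunks already persisted, record each success.
// `done` stands in for durable progress storage; `fetchChunk` for the
// actual API call. Both are assumptions for this sketch.
async function fetchSeason(
  start: Date,
  end: Date,
  done: Set<number>,
  fetchChunk: (a: Date, b: Date) => Promise<void>,
): Promise<void> {
  const chunks = chunkDateRange(start, end);
  for (let i = 0; i < chunks.length; i++) {
    if (done.has(i)) continue;   // pick up where we left off
    await fetchChunk(chunks[i][0], chunks[i][1]);
    done.add(i);                 // persist progress per chunk
  }
}

const april = chunkDateRange(new Date("2025-04-01"), new Date("2025-04-30"));
console.log(april.length); // 5 seven-day windows
```

If a run dies on chunk 3 of 5, the next run re-reads the progress store, skips chunks 0 through 2, and retries from chunk 3.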

3. Architecture Decision Records

For the strategic decision features (ADR-031 and ADR-032), I wrote formal Architecture Decision Records before writing any code. These documents capture the "why" behind design choices.

ADR-031 defined how strategic questions get orchestrated. ADR-032 defined the command UI architecture. When the code was written a day later, there were zero design debates because the decisions were already documented and agreed upon.

If you're building anything complex, write your ADRs first. Your future self will thank you.
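If you haven't written one before, a minimal ADR is short. The skeleton below follows the common Nygard-style format; the section names are conventional, not necessarily the exact template used for ADR-031 and ADR-032.

```
# ADR-NNN: Short decision title

## Status
Proposed | Accepted | Superseded

## Context
What forces are at play, and why does this decision need to be made now?

## Decision
The choice made, stated in one or two sentences.

## Consequences
What becomes easier, what becomes harder, and what we're committing to.
```

A page of this per decision is usually enough to eliminate the design debates later.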

What I Learned

Speed comes from structure, not shortcuts.

677 tests across 24 commits means roughly 28 tests per commit. That's not slowing down to write tests. That's using tests as the engine that makes speed possible. Every time I pushed a new deliverable, the test suite caught regressions from the previous ones. Without that safety net, D14 would have broken D3 and I'd still be debugging.

The other lesson: numbered deliverables beat sprints for solo work. Sprints are designed for teams with ceremonies and coordination overhead. When it's just you and your AI assistant, a simple numbered list of deliverables with clear scope and test requirements moves faster than any Agile framework.

I've been in operations for 25 years. The teams that ship are the ones with the clearest definitions of "done." Whether it's a submarine maintenance cycle, a PCB manufacturing line, or an MLB dashboard, the principle is identical.

Try This

Take your next project and break it into 5 numbered deliverables before you write a single line of code. For each one, write down:

  1. What it does (one sentence)
  2. What "done" looks like (specific, testable)
  3. What it depends on (which deliverables come first)

Then build them in order. One at a time. Tests for each one before you move to the next.
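As a worked example, a first deliverable entry might look like this. The project and numbers here are hypothetical, just to show the level of specificity that makes "done" unambiguous:

```
D1: Fetch current standings from the stats API
  Does:    pulls and caches the league standings as typed records
  Done:    fetcher returns typed data; 8 unit tests pass, including
           timeout and malformed-response cases
  Depends: nothing (first deliverable)
```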

You'll be surprised how fast you move when each step is clear and small enough to finish in one sitting.


That's it for this week. Next time I'll dig into the real-time data patterns from D3, specifically how WebSocket connections and game simulation create a convincing live experience without a production data feed.

Until then, keep building.

Sean Patterson
CrossGen AI
