
Building Syntaqlite with AI — A 250-Hour Field Report

Updated 2026-04-13


Author: Lalit Maganti
Project: syntaqlite — high-quality SQLite devtools (formatter, linter, LSP)
Timeline: ~250 hours over 3 months (Jan–Mar 2026)
Tool: Claude Code (Max plan)

Maganti carried this idea for eight years. It finally shipped because of AI coding agents — but the journey included a full vibe-coding rewrite, addiction loops, and hard-won lessons on design vs implementation.

Why this project almost didn't happen

The project sits at the intersection of hard and tedious:

  • SQLite has no formal parsing spec and no stable parser API
  • Its C source is incredibly dense
  • Building an exact parser requires extracting ~400 grammar rules and mapping each to a parse tree node
  • Plus tests, editor extensions, docs, packaging, community building
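To make the grammar-extraction work concrete, here is a minimal sketch of what "mapping a grammar rule to a parse tree node" could look like. The names and the shape of the enum are purely illustrative assumptions, not syntaqlite's actual API:

```rust
// Hypothetical sketch: one SQLite grammar rule mapped to a parse-tree
// node variant. In the real project, ~400 such rules need this treatment.
#[derive(Debug, PartialEq)]
enum Expr {
    Column(String),            // expr ::= column-name
    And(Box<Expr>, Box<Expr>), // expr ::= expr AND expr
}

// Build the tree for "a AND b" by hand; a real parser would produce this.
fn example_tree() -> Expr {
    Expr::And(
        Box::new(Expr::Column("a".into())),
        Box::new(Expr::Column("b".into())),
    )
}

fn main() {
    println!("{:?}", example_tree());
}
```

Multiply this by hundreds of rules, each needing tests and exact fidelity to SQLite's behavior, and the "hard and tedious" combination becomes clear.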

For years, this was "too hard for a side project, too tedious to sustain motivation."

Phase 1: The vibe-coding month (January)

Approach: maximalist AI — act as semi-technical manager, delegate almost all design and implementation to Claude.

Result after one month:

  • Functionally reasonable: C parser extracted from SQLite, Python pipeline, formatter, web playground
  • Architecturally catastrophic: "complete spaghetti"
  • Didn't understand large parts of the Python extraction pipeline
  • Functions scattered randomly, files grew to thousands of lines
  • Extremely fragile — would never integrate into Perfetto

The saving grace: it proved the approach was viable and generated 500+ tests.

Decision: throw everything away and rewrite from scratch in Rust.

Phase 2: The disciplined rewrite (February–March)

Role change: took ownership of all decisions. Used Claude as "autocomplete on steroids" inside a tight process:

  • Opinionated design upfront
  • Review every change thoroughly
  • Fix problems eagerly
  • Invest in scaffolding (linting, validation, non-trivial testing)

Where AI excelled

1. Overcoming inertia

AI turned abstract uncertainty into concrete prototypes. Instead of "I need to understand SQLite parsing," the task became "I need to get AI to suggest an approach so I can tear it up and build something better."

2. Churning out obvious code faster than a human

If a problem can be broken down to "write a function with this behavior and these parameters," AI is faster and produces more standard, readable, well-documented code.

The double edge: "standardness" is harmful at the project's edge. For syntaqlite, the extraction pipeline and parser architecture were the differentiators — AI's instinct to normalize was actively harmful there. Maganti designed those parts in depth and often wrote them by hand.

3. Refactoring at industrial scale

The same speed that makes AI great at generation makes it great at refactoring.

"If you're using AI to generate code at industrial scale, you have to refactor constantly and continuously."

After every large batch of generated code: step back and ask "is this ugly?"

4. Teaching assistant (highest value-to-time ratio)

AI compressed what might have been days of reading into a focused conversation.

Example: Wadler-Lindig pretty printing for the formatter. Maganti had never heard of it. Claude proposed it, explained the trade-offs, and pointed to the papers.
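For flavor, here is a heavily simplified sketch of the core Wadler-style idea: build a document tree, then render each group flat if it fits within the line width, and broken otherwise. This is a toy illustration, not syntaqlite's formatter:

```rust
// Simplified Wadler-style pretty printing (toy version, no indentation).
#[derive(Clone)]
enum Doc {
    Text(String),
    Line,            // a newline, or a single space if the group fits
    Concat(Vec<Doc>),
    Group(Box<Doc>), // try to render on one line, else break every Line
}

// Width of a document if rendered flat (every Line becomes one space).
fn width(doc: &Doc) -> usize {
    match doc {
        Doc::Text(s) => s.len(),
        Doc::Line => 1,
        Doc::Concat(ds) => ds.iter().map(width).sum(),
        Doc::Group(d) => width(d),
    }
}

fn render(doc: &Doc, max: usize, col: &mut usize, out: &mut String, flat: bool) {
    match doc {
        Doc::Text(s) => { out.push_str(s); *col += s.len(); }
        Doc::Line if flat => { out.push(' '); *col += 1; }
        Doc::Line => { out.push('\n'); *col = 0; }
        Doc::Concat(ds) => for d in ds { render(d, max, col, out, flat); },
        Doc::Group(d) => {
            // Stay flat if we already are, or if the group fits from here.
            let fits = *col + width(d) <= max;
            render(d, max, col, out, flat || fits);
        }
    }
}

fn pretty(doc: &Doc, max: usize) -> String {
    let (mut out, mut col) = (String::new(), 0);
    render(doc, max, &mut col, &mut out, false);
    out
}

// A small example document: a SELECT list as one breakable group.
fn example_doc() -> Doc {
    Doc::Group(Box::new(Doc::Concat(vec![
        Doc::Text("SELECT".into()), Doc::Line,
        Doc::Text("name,".into()), Doc::Line,
        Doc::Text("age".into()),
    ])))
}

fn main() {
    println!("{}", pretty(&example_doc(), 80)); // fits: one line
    println!("{}", pretty(&example_doc(), 10)); // breaks: one token per line
}
```

At width 80 the group renders as `SELECT name, age`; at width 10 it breaks each `Line` into a newline. The real algorithm adds indentation and avoids the quadratic `width` recomputation, but the fit-or-break decision is the heart of it.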

Another: the VS Code extension would have taken 1–2 days of learning the API. With AI, he had a working extension within an hour.

5. Reacquiring context on demand

After a few days away, AI could provide:

  • Surface-level refresher: "tell me about this component"
  • Deep dive: "give me a detailed linear walkthrough"
  • Targeted audit: "audit unsafe usages in this repo"

6. The long tail of "complete" products

AI made the non-critical but important features cheap enough to ship:

  • VS Code extension
  • Python bindings
  • WASM playground
  • Docs site
  • Multi-ecosystem packaging (crates.io, PyPI, npm, Homebrew, Zed extension)

It also freed up mental energy for UX: rustc-style error diagnostics, quick-fix code actions, intuitive CLI flags.

"AI didn't just make the same project faster. It changed what the project was."

Where AI had costs

1. The addiction

Using AI coding tools felt like playing slot machines. Late nights doing "just one more prompt." Sunk cost fallacy: keep trying even when AI was clearly ill-suited.

Tiredness feedback loop:

  • Energetic → precise, well-scoped prompts → good output
  • Tired → vague prompts → worse output → try again → more tired

2. Losing touch with the codebase

Several times, Maganti lost the day-to-day mental model of what lived where, which functions called which.

This created a communication breakdown: instead of "change FooClass to do X," you end up saying "change the thing which does Bar to do X." The agent has to figure out the mapping — and sometimes gets it wrong.

"You've become the manager engineers complain about."

Fix: made it a habit to read through code immediately after implementation.

3. The slow corrosion of design discipline

Because refactoring was cheap, it was easy to procrastinate on key design decisions. The vibe-coding month was the extreme version. Deferring decisions corroded clear thinking because the codebase stayed confusing.

Tests also created false comfort: 500+ tests, yet the design of some components was completely wrong and required total rework.

"The normal rules of software still apply: if you don't have a fundamental foundation, you'll be left eternally chasing bugs."

4. No sense of time

Models don't feel time. They don't understand why an API evolved the way it did, why past decisions were made and later reversed.

This means either:

  • Repeating old mistakes and relearning lessons, or
  • Falling into traps that were successfully avoided the first time

Capturing implicit design decisions exhaustively is incredibly expensive — and AI-drafted docs still need human audit.

The relativity framework

Maganti's model for when AI helps vs hurts:

| Zone | Your state | AI usefulness |
| --- | --- | --- |
| Known knowns | Deeply understand the problem | Excellent — instant review, fast iteration |
| Known unknowns | Can describe but don't yet know | Good but requires engagement — stay actively involved |
| Unknown unknowns | Don't even know what you want | Unhelpful to harmful — may follow AI down dead ends for weeks |

Expertise alone isn't enough. Tasks with no objectively checkable answer (design, API ergonomics) are where AI struggles most.

"At the level of a function or class, there's usually a clear right answer, and AI is excellent. But architecture is what happens when local pieces interact — you can't get good global behaviour by stitching together locally correct components."

Final verdict

AI is an incredible force multiplier for implementation, but a dangerous substitute for design.

It has no sense of history, taste, or how a human will actually feel using your API. If you rely on it for the "soul" of the software, you'll just hit a wall faster than ever before.

