AI Exposes Your Slow Feedback Loops

January 23, 2026 · urjit

Everyone’s talking about how much faster they’re shipping with AI coding assistants. But here’s what I’ve noticed: the teams seeing 10x gains aren’t the ones with the fanciest AI setup. They’re the ones who already had fast feedback loops.

AI is an amplifier. It amplifies your velocity when things are working, and it amplifies your bottlenecks when they’re not.

The Math Doesn’t Lie

Say your AI assistant generates code in 30 seconds that would have taken you 20 minutes. Great. But if your test suite takes 8 minutes to run, you’ve just shifted the bottleneck. The AI is waiting. You’re waiting. The cognitive context you held while writing the code is evaporating.
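
Rough math: ten generate-and-test cycles at 30 seconds of generation plus 8 minutes of tests comes to about 85 minutes, most of it waiting. The same ten cycles against a 10-second suite finish in under 7 minutes.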

flowchart LR
    subgraph "Fast Feedback Loop"
        A1[AI Writes Code] --> B1[Tests Run: 10s]
        B1 --> C1[Immediate Fix]
        C1 --> A1
        C1 --> D1[Ship in Hours]
    end

    subgraph "Slow Feedback Loop"
        A2[AI Writes Code] --> B2[Tests Run: 10min]
        B2 --> C2[Context Lost]
        C2 --> E2[Manual Debug]
        E2 --> A2
        E2 --> D2[Ship in Days]
    end

With fast tests, the AI stays in a tight loop: generate, test, fix, ship. With slow tests, you’re back to the old cadence—just with more code generated per cycle.

Long-Running Agents Are Overhyped

Hot take: the current obsession with “how long can I let my agent run autonomously” is solving the wrong problem.

I see demos where agents run for hours, making hundreds of changes across a codebase. Impressive for toy projects. But in production systems with real constraints—security, compliance, existing architecture, actual users—you don’t want an AI making 200 unsupervised decisions.

What actually works: break tasks into smaller chunks. Let the AI execute. Review. Iterate. This is the same workflow that worked before AI, just faster. The human stays in the loop not because AI is incompetent, but because feedback loops are how you catch errors early.

The agents that impress me aren’t the ones that run longest. They’re the ones that can run tests mid-generation and course-correct before going too far down a wrong path.

Tests and Builds as Part of the Thought Process

What I’ve found works really well across many projects is treating test execution as part of the AI’s reasoning loop, not a step that happens afterward.

When an agent can write code, run tests, see failures, and adjust—all within a single “thought”—it behaves fundamentally differently than one that writes a bunch of code and hopes for the best. The former is doing something closer to how experienced developers actually work: write a little, verify, adjust, repeat.

This only works if your tests are fast enough to fit inside that loop. A 10-second test run is part of the thought process. A 10-minute test run is a context switch.

The same applies to builds. If your agent has to wait 3 minutes to see if its changes even compile, it’s flying blind between iterations.
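
Concretely, the checks that fit inside the loop are scoped ones. Here’s a rough sketch assuming a Node project with Jest and TypeScript; the file path is just a placeholder for whatever the agent is actually touching:

# Run only the test file you’re working on instead of the whole suite
npx jest src/parser.test.ts

# Typecheck without producing a build
npx tsc --noEmit

If these finish in seconds, the agent can treat verification as part of the same thought rather than a context switch.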

Making Watch Modes Work for AI

Here’s a practical pattern I’ve been experimenting with: run your test watcher or build watcher in a separate terminal, pipe its output to a file, and point your AI agent at that file.

# Terminal 1: watch mode writing to file
npm run test:watch 2>&1 | tee /tmp/test-output.log

# Tell your AI agent:
# "Test output is in /tmp/test-output.log - tail the last 30 lines when you need to check results"

Why this works: the AI doesn’t need the full history of every test run filling up its context window. It just needs to check the current state when relevant. A quick tail -n 30 /tmp/test-output.log gives it what it needs without the noise.

This keeps the feedback loop tight while preserving context space for actual code reasoning.
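
The same pattern works for build output. Assuming a TypeScript project (swap in your own build watcher):

# Terminal 2: typecheck watcher writing to its own file
npx tsc --noEmit --watch --preserveWatchOutput 2>&1 | tee /tmp/build-output.log

The --preserveWatchOutput flag stops tsc from clearing the screen between runs, so the log stays readable when the agent tails it.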

Your Infrastructure Is Now a Bottleneck

If AI makes coding 10x faster, everything else needs to keep up:

  • Test suites that took an “acceptable” 5 minutes now feel painful
  • Build times that were fine at 3 minutes now block every iteration
  • CLI tools with multi-second startup times are suddenly irritating
  • CI pipelines become the critical path

I wrote recently about investigating why Gemini CLI takes 35 seconds to start up. That kind of latency was annoying before. Now it’s a dealbreaker. Every slow tool in your chain gets multiplied by how many times your AI wants to invoke it.
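
Measuring that is cheap. Assuming the binaries are on your PATH and support --version (hyperfine is a nice-to-have; plain time is enough):

# Compare startup costs of the CLIs your agent invokes most often
hyperfine --warmup 2 'rg --version' 'gemini --version'

# No hyperfine installed? A single rough measurement:
time gemini --version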

This creates real demand for faster tooling across the board. The Rust rewrites of common CLI tools (ripgrep, fd, bat, etc.) were nice-to-haves before. Now they’re starting to feel necessary. Same goes for build systems—tools like Turbopack, esbuild, and oxc exist because webpack-era build times don’t cut it anymore.

If you’re evaluating your dev infrastructure, the question isn’t “is this fast enough for humans?” anymore. It’s “is this fast enough to not bottleneck an AI-augmented workflow?”

What To Do About It

Audit your feedback loops. Time them. Find the slowest one—that’s your actual shipping speed now.
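
A quick way to start, assuming an npm project (substitute your own commands):

# Time the loops your agent actually sits inside
time npm test
time npm run build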

Some concrete things that help:

  • Parallelize your test suite aggressively
  • Use watch modes everywhere and pipe output to files your agent can check
  • Run relevant tests only, not the full suite on every change (see the sketch after this list)
  • Invest in faster hardware for CI (it pays for itself quickly now)
  • Profile your CLI tools—some have surprising startup costs
  • Consider whether your build tooling was chosen in a pre-AI era
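
For parallelizing and for running only relevant tests, if you’re on Jest the flags below cover it; other runners have equivalents, and the file path is just a placeholder:

# Parallelize across half your CPU cores
npx jest --maxWorkers=50%

# Only run tests related to files changed since the last commit
npx jest --onlyChanged

# Only run tests that touch a specific file
npx jest --findRelatedTests src/parser.ts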

The teams getting the most out of AI coding tools aren’t the ones with the best prompts. They’re the ones whose infrastructure can keep up.


This is the first of two posts. The next one covers why test-driven development matters more than ever when AI is writing your code—and who should be writing the tests.
