Claude Code's Silent Truncation Bug: Why AI Couldn't Fix a 5-Second Regex

February 16, 2026 · urjit

I had five files in a folder. A bookmarklet script, two HTML files for reference, a screenshot, and an error log. The task: fix a regex bug in the bookmarklet. Claude Code should have knocked this out in one turn.

Instead, I watched it spin for over 20 minutes across multiple attempts, burning through 43% of its context window on a project smaller than most README files.

Attempt 1: The Infinite Loop

I gave Claude a clear prompt: here’s the script, here are the HTML files showing the page structure, here’s a screenshot, the error is in error.log. Fix it.

Claude started reading files. Then it said “The header HTML file wasn’t found.” It searched for the file. Found it — with a slightly different name than it expected. Read it. Then… started over. Read, search, read, search. The same cycle, over and over.

I interrupted: “You keep getting stuck at this prompt. Take it step by step.”

It tried again. Same loop.

The Silent Truncation

The root cause turned out to be a quiet failure mode in Claude Code’s Read tool. In my testing, lines longer than 2,000 characters were silently truncated — no error, no warning. That behavior matches an internal tool spec excerpt and is consistent with ongoing community reports, but the clearest proof is the repro later in this post. Every file in my project happened to be a single line:

File                               Size           Lines
single_row.html                    37,611 bytes   1
fare_category_column_header.html   9,394 bytes    1
script.js (bookmarklet)            7,470 bytes    1

The HTML was minified React output scraped from United Airlines’ search page. The bookmarklet was URL-encoded JavaScript — single line by nature.

When Claude read single_row.html, it got the first 2,000 characters — roughly 5% of the file — and nothing else. No indication that 95% was missing. It knew the content seemed incomplete, so it searched for more, which led it back to the same file, which gave it the same truncated result. An infinite loop of trying to read a file it could never fully see.
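To make the failure mode concrete, here’s a minimal Python sketch of what a per-line truncating reader does to a single-line file. The 2,000-character cap is my observed value, not a documented constant:

```python
def truncating_read(path, max_line_chars=2000):
    """Simulate a reader that silently caps each line, as the Read
    tool appeared to do in my testing (observed limit, not a spec)."""
    out = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            out.append(line.rstrip("\n")[:max_line_chars])
    return "\n".join(out)

# A 37,611-byte single-line file collapses to 2,000 visible characters:
with open("/tmp/minified.html", "w") as f:
    f.write("x" * 37611)
print(len(truncating_read("/tmp/minified.html")))  # 2000
```

The same content with line breaks every few hundred characters would come through intact, because the cap is per line — which is exactly why reformatting the files is an effective workaround.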

Attempt 2: Reformat and Retry

I added line breaks to the HTML files:

perl -pi -e 's/></>\n</g' single_row.html fare_category_column_header.html

776 lines and 159 lines, respectively. Reasonable file sizes. Tried again.

Still slow. The conversation was poisoned — full of failed reads, search loops, and partial results. Claude spent more time reconciling its past failures than working on the actual bug.

Attempt 3: Clear Context

/clear to wipe the conversation. Fresh prompt with the reformatted files. This time, Claude made progress. It correctly identified the regex bug, analyzed the HTML structure, and started rewriting the script.

But each turn of “thinking” took 3 to 4 minutes. I watched the spinner:

Accomplishing… (2m 58s · ↓ 1.8k tokens · thinking)
Waddling… (4m 4s · ↓ 10.4k tokens)

The final turn — generating the actual fix — took 5 minutes and 20 seconds.

And it kept getting worse. On a follow-up turn where I asked it to add a feature to the script, the model spent 503 seconds thinking — over 8 minutes of pure “reasoning” before producing a single character of output. Total turn time: 9 minutes. For one response.

Wrangling… (9m 0s · ↓ 12.7k tokens · thought for 503s)

The next turn? 10 minutes and 44 seconds.

Crunched for 10m 44s

Ten minutes staring at a spinner. That’s not an AI assistant — that’s a loading screen.

This changes how you use the tool. You can’t stay in flow when every interaction costs you 5-10 minutes of dead time. You start batching your prompts to avoid round-trips. You hedge your instructions, stuffing multiple asks into one message because you can’t afford the cost of a follow-up. You open another terminal, start doing something else, lose your train of thought on the original task. Context-switch, come back, re-read what Claude produced, realize it misunderstood something, and now you’re looking at another 10-minute wait to correct it.

The promise of AI coding tools is tight feedback loops — describe what you want, see the result, iterate. When each loop takes 10 minutes, you’re back to the cadence of waiting for CI pipelines, except you’re waiting for the thing that was supposed to eliminate the waiting.

The Context Burn Rate

Here’s what really stung. After just a few turns on this five-file project:

  • 68% context remaining after the analysis turn
  • 57% context remaining after the first code generation turn
  • 40% context remaining after the second turn — 60% of the window gone

That’s 60% of the context window consumed in three turns on a project with less than 60KB of text content. On a real codebase with dozens of files and back-and-forth iteration, you’d be hitting context limits and auto-compression within a handful of turns.

A ~950KB screenshot was part of the problem — images are expensive in the context window. The dense HTML didn’t help either, with every line packed with CSS-module hashes like app-components-Shopping-PriceCard-styles__priceCardWrapper--FAW0m. But the fundamental issue is that a simple five-file task shouldn’t take 20+ minutes and nearly half the context window.
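For rough intuition on the text side, the common ~4-characters-per-token heuristic (an approximation, not Anthropic’s actual tokenizer) suggests the files themselves were cheap:

```python
def rough_tokens(n_chars, chars_per_token=4):
    """Crude token estimate using the ~4 chars/token rule of thumb."""
    return n_chars // chars_per_token

# All the text content in the project, ~60KB:
print(rough_tokens(60_000))  # 15000
```

Fifteen thousand tokens is a small fraction of the context window, so the burn had to be coming from elsewhere: the image, the repeated tool results from the read loops, and the model’s own reasoning.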

What This Means for Real Work

This wasn’t a complex codebase. It was five files in /tmp. If you hit this kind of friction on a toy project, what happens on a production repo?

The answer is: you learn the workarounds. You pre-format files. You resize screenshots. You keep prompts tight. You recognize when a session is poisoned and /clear before wasting more turns. You become a prompt-aware developer who shapes their project to fit the tool’s constraints.

That’s… fine, I guess. But it’s the kind of tax that erodes the productivity gains AI tools promise. The 10x speedup from AI-generated code gets eaten by the 5-minute thinking pauses, the context management, and the debugging-the-debugger sessions.

The Hidden Tax of MCP Plugins

There’s another context drain that’s easy to miss: MCP servers. Every plugin you have enabled injects its tool definitions into the system prompt — whether you use them or not. Here’s what a few common ones cost:

mcp__claude_ai_Mermaid_Chart__validate_and_render  342 tokens
mcp__claude_ai_Mermaid_Chart__get_diagram_title    306 tokens
mcp__claude_ai_Mermaid_Chart__get_diagram_summary   265 tokens
mcp__claude_ai_Mermaid_Chart__list_tools             65 tokens
mcp__claude_ai_GoDaddy__domains_suggest             321 tokens
mcp__claude_ai_GoDaddy__domains_check_availability  260 tokens

That’s ~1,560 tokens just for Mermaid diagrams and domain name suggestions — tools I wasn’t using at all in this session. Every MCP server adds its own chunk. If you have five or six servers enabled, you’re easily losing thousands of tokens from every single turn to tool definitions that sit there doing nothing.
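Summing the figures listed above (nothing more than the arithmetic):

```python
# Per-tool definition costs as reported in this session
mermaid_tools = [342, 306, 265, 65]   # Mermaid Chart MCP server
godaddy_tools = [321, 260]            # GoDaddy MCP server

total = sum(mermaid_tools) + sum(godaddy_tools)
print(total)  # 1559 -- the ~1,560 tokens paid on every single turn
```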

This compounds with everything else. The dense HTML, the screenshot, the tool definitions — they’re all competing for the same context window. And unlike the HTML, the MCP tax is paid on every turn, not just when files are read.

Keep a sharp eye on which MCP servers you leave enabled by default. It’s easy for tool lists to grow over time, and that context creep gets paid on every turn whether you use those tools or not. If you realize mid-session that you need a plugin, enable it, restart Claude Code, and use /resume to continue with the same context. Check your token usage with /cost — if the system prompt alone is eating thousands of tokens before you’ve even typed anything, your default MCP set is probably too broad for the kind of work you’re doing.

It’s Not Just Me

A quick search reveals this is a widespread frustration. The community has been tracking and debating Claude Code’s performance for months:

  • A Hacker News thread asks whether Opus performance degrades by time of day, with users reporting tasks stretching from 3-4 minutes to 10+ minutes and Claude “going down extreme rabbit holes” — the same broad class of failure I hit.

  • Another HN discussion covers daily benchmarks for tracking Claude Code degradation. SWE-bench’s co-author pointed out that Anthropic’s own 50-task test suite, run once daily, isn’t statistically reliable enough to catch the variance users experience.

  • GitHub issue #19452 documents performance degradation on macOS and user frustration with slow sessions. The exact trigger in that thread differs from mine, but the symptom rhymes: turn times stretching enough to break flow.

  • GitHub issue #3477 reports the Read tool taking 2+ minutes on a short markdown file. Different root cause, same theme: once local state gets weird, the tool can become unpredictably slow.

  • A Symmetry Breaking post from February 11, 2026 argues that version 2.1.20 replaced detailed file paths and search patterns with generic summaries like “Read 3 files” — stripping away diagnostic information users relied on to understand what the tool was actually doing.

  • On social media, the sentiment is raw: “Why is Claude code so fucking dumb now?! I’m cancelling my subscription and moving to Codex or Gemini.” A month prior, the same user was a “lead evangelist at the church of Claude.”

Anthropic’s official position is that they “never reduce model quality due to demand, time of day, or server load.” But the Opus 4.6 release notes acknowledge the tradeoff directly: the model “thinks more deeply and more carefully revisits its reasoning,” which “can add cost and latency on simpler ones.” The suggestion is to dial down effort with /effort for simpler tasks — which is reasonable, except you don’t always know in advance how much thinking a task needs.

The pattern across all these reports is the same: the tool works brilliantly when it works, and when it doesn’t, you have almost no visibility into why, and no lever to pull except waiting or starting over.

Specific Gotchas

For anyone hitting similar issues:

Single-line files are invisible poison. Minified HTML, bookmarklets, compact JSON, SVGs with long path data, transpiler output — anything without line breaks will be silently truncated. Format them first: prettier for JS/HTML, jq . for JSON, s/></>\n</g for quick HTML splits.
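A pre-flight check catches these before they poison a session. This helper is my own sketch (not part of Claude Code), built around the 2,000-character limit observed in this post:

```python
import sys

LIMIT = 2000  # observed Read-tool truncation point, per my testing

def long_lines(path, limit=LIMIT):
    """Return (line_number, length) for every line exceeding limit."""
    hits = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for n, line in enumerate(f, 1):
            length = len(line.rstrip("\n"))
            if length > limit:
                hits.append((n, length))
    return hits

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for n, length in long_lines(path):
            print(f"{path}:{n}: {length} chars -- will be truncated")
```

Run it over your project files before starting a session; anything it flags should go through prettier, jq, or the perl one-liner first.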

Large images eat context. Resize before attaching. On macOS: sips -Z 800 screenshot.png. You rarely need full resolution for an AI to understand a UI.

Poisoned context is unrecoverable. If Claude gets into a loop, don’t try to correct it mid-conversation. The failed attempts stay in context and make everything harder. /clear or start a new session immediately.

Watch the context percentage. It’s in the status bar. If you’re burning through context faster than expected, something about your input is expensive — usually images or dense single-line content.

Reproduce It Yourself

Don’t take my word for it. First, a diagnostic that proves the truncation exists, then three self-contained demos that show real consequences. Run the setup, open Claude Code in the directory, and watch.

Update (March 2026): The original experience described in this post and the demos below were tested on Claude Opus 4.5 in mid-February 2026. I retested on Opus 4.6 in March 2026, and newer models were better at working around the truncation by choosing alternative tools. But the underlying Read tool limit still appears to remain, and non-interactive mode (claude -p) still hung on the bookmarklet and SVG demos in my retest.

Diagnostic: Proving the 2,000-Character Limit

Before anything else, verify the truncation is real on your setup:

mkdir -p /tmp/claude-truncation-demo
python3 -c "print('x' * 10000)" > /tmp/claude-truncation-demo/longline.txt
cd /tmp/claude-truncation-demo && claude -p \
  "Use ONLY the Read tool to read longline.txt. How many characters are on the first line?"

Claude will report 2,000 characters. The file has 10,000. The Read tool silently dropped 80% of the content — no error, no warning.

Why doesn’t this always cause failures? Claude Code has multiple tools at its disposal. On many tasks, it reaches for Grep or Bash (cat, jq, wc) alongside or instead of the Read tool, and those aren’t subject to the line-length limit. When I first hit this bug, the model got stuck in a Read-only loop. Newer models are better at choosing alternative tools — but the truncation is still there, waiting for the case where Claude relies on Read alone. The demos below show scenarios where that’s most likely to happen.

Demo 1: The Infinite Read Loop

A minified HTML page (~11,400 chars, single line) with a broken onclick handler on row 35 of a table. The first 2,000 characters are all CSS — Claude never reaches the <table> element, let alone row 35.

#!/bin/bash
# Setup: generates /tmp/claude-html-loop-demo/dashboard.html
DIR="/tmp/claude-html-loop-demo"
rm -rf "$DIR" && mkdir -p "$DIR"

python3 -c '
html_parts = ["<!DOCTYPE html><html><head><title>Product Dashboard</title>"]
html_parts.append("<style>")
for i in range(50):
    color = "%06x" % (i*41000)
    html_parts.append(
        f".component-module__item{i}--hash{i}abc"
        f"{{padding:{i}px;margin:{i+1}px;"
        f"background-color:#{color};"
        f"border:1px solid #ccc;font-family:sans-serif;}}"
    )
html_parts.append("</style></head><body>")
html_parts.append("<div class=\"app-components-ProductList-styles__wrapper--Xk2m\"><table>")
html_parts.append("<thead><tr><th>Product</th><th>Price</th><th>Stock</th><th>Actions</th></tr></thead>")
html_parts.append("<tbody>")
for i in range(40):
    if i == 35:  # THE BUG: missing closing paren
        html_parts.append(
            f"<tr><td>Widget {i}</td><td>${i*10+99}.99</td><td>{100-i}</td>"
            f"<td><button onclick=\"deleteProduct({i}\">Delete</button></td></tr>"
        )
    else:
        html_parts.append(
            f"<tr><td>Widget {i}</td><td>${i*10+99}.99</td><td>{100-i}</td>"
            f"<td><button onclick=\"deleteProduct({i})\">Delete</button></td></tr>"
        )
html_parts.append("</tbody></table></div></body></html>")

with open("'"$DIR"'/dashboard.html", "w") as f:
    f.write("".join(html_parts))
'

cat > "$DIR/bug-report.txt" << 'REPORT'
Bug: Clicking "Delete" on one of the product rows throws a SyntaxError in the console.
The error is: "SyntaxError: missing ) after argument list"
It only happens on a specific row — most rows work fine.
Please find the broken onclick handler in dashboard.html and fix it.
REPORT

Truncation point (char ~2,000 — still in the <style> block):

...module__item14--hash14abc{padding:14px;margin:15px;background-color:#08c230;border:1px solid #ccc;fo

Bug location (char ~10,900 — the <tbody>, table rows, and broken handler):

onclick="deleteProduct(35">Delete</button>

Claude reads the file, gets CSS, knows it needs the table, searches for it, finds the same file, reads it again — same truncated CSS. This is the read-search-read loop from the blog post, reproducible in 30 seconds.

Run: cd /tmp/claude-html-loop-demo && claude

Prompt: Read bug-report.txt and fix the issue in dashboard.html.

Fix: perl -pi -e 's/></>\n</g' dashboard.html

Demo 2: The Bookmarklet Bug

A URL-encoded bookmarklet (~2,500 chars, single line) — this is the closest to my original scenario. The bug is a malformed regex where \+ got mangled to a bare + during URL encoding. The error log points straight at the issue, but Claude can’t see the relevant code.

#!/bin/bash
# Setup: generates /tmp/claude-bookmarklet-demo/fare-scraper.js
DIR="/tmp/claude-bookmarklet-demo"
rm -rf "$DIR" && mkdir -p "$DIR"

python3 -c '
import urllib.parse

script = r"""
(function() {
    var rows = document.querySelectorAll("tr.fare-row");
    var results = [];
    for (var i = 0; i < rows.length; i++) {
        var row = rows[i];
        var airline = row.querySelector(".airline-name").textContent.trim();
        var price = row.querySelector(".price-cell").textContent.trim();
        var baseMatch = price.match(/\$(\d+(?:\.\d{2})?)/);
        var taxMatch = price.match(/\+\s*\$(\d+(?:\.\d{2})?)\s*tax/);
        var baseFare = baseMatch ? parseFloat(baseMatch[1]) : 0;
        var tax = taxMatch ? parseFloat(taxMatch[1]) : 0;
        var classText = row.querySelector(".class-badge")
            ? row.querySelector(".class-badge").textContent : "economy";
        var isPremium = /business|first|premium\+/.test(classText.toLowerCase());
        results.push({
            airline: airline, baseFare: baseFare, tax: tax,
            total: baseFare + tax, class: classText, premium: isPremium
        });
    }
    results.sort(function(a, b) { return a.total - b.total; });
    var output = "FARE COMPARISON\n" + "=".repeat(50) + "\n";
    for (var j = 0; j < results.length; j++) {
        var r = results[j];
        output += r.airline + " | $" + r.total.toFixed(2) + " (" + r.class + ")\n";
    }
    navigator.clipboard.writeText(output).then(function() {
        alert("Fare comparison copied!\n\n" + output);
    });
})();
"""

encoded = "javascript:" + urllib.parse.quote(script.strip(), safe="")
# Introduce the bug from error.log: strip the backslash from the first
# encoded \+ ("%5C%2B"), turning /\+\s*.../ into the malformed /+\s*.../
encoded = encoded.replace("%5C%2B", "%2B", 1)
with open("'"$DIR"'/fare-scraper.js", "w") as f:
    f.write(encoded)
'

cat > "$DIR/error.log" << 'LOG'
Console output from Chrome DevTools:

Uncaught SyntaxError: Invalid regular expression: /+\s*\$(\d+(?:\.\d{2})?)\s*tax/:
    Nothing to repeat
    at fare-scraper.js:1:1

The bookmarklet throws when trying to match the tax portion of "$299.00 + $45.50 tax".
The regex is malformed — the + is being interpreted as a quantifier instead of a literal plus.
LOG

Run: cd /tmp/claude-bookmarklet-demo && claude

Prompt: The bookmarklet in fare-scraper.js is throwing an error. See error.log. Fix the regex bug.

Expected: Claude reads the URL-encoded JS, gets truncated output, and either guesses at the fix from the error log alone (sometimes correctly, sometimes not) or asks for more context it can never get from the file.

Demo 3: The Invisible SVG Legend

An SVG chart (~36,600 chars, single line) with long <path> coordinate data. A color mismatch between a legend label and its corresponding line is at the very end of the file. Claude only ever sees M652,303 C648.2,299.1,647.3,298.4... — raw path coordinates with zero semantic meaning.

#!/bin/bash
# Setup: generates /tmp/claude-svg-demo/chart.svg
DIR="/tmp/claude-svg-demo"
rm -rf "$DIR" && mkdir -p "$DIR"

python3 << 'PYEOF'
import random
random.seed(42)
DIR = "/tmp/claude-svg-demo"

svg_parts = []
svg_parts.append('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 800 600">')
svg_parts.append('<rect width="800" height="600" fill="#f8f9fa"/>')

colors = ["#2563eb", "#dc2626", "#16a34a", "#9333ea", "#ea580c"]
for p in range(5):
    points = []
    x, y = random.randint(50, 750), random.randint(50, 550)
    points.append(f"M{x},{y}")
    for _ in range(200):
        dx, dy = random.uniform(-15, 15), random.uniform(-15, 15)
        x = max(10, min(790, x + dx))
        y = max(10, min(590, y + dy))
        cx1 = x + random.uniform(-10, 10)
        cy1 = y + random.uniform(-10, 10)
        points.append(f"C{cx1:.1f},{cy1:.1f},{x:.1f},{y:.1f},{x+dx:.1f},{y+dy:.1f}")
    joined = " ".join(points)
    svg_parts.append(f'<path d="{joined}" fill="none" stroke="{colors[p]}" stroke-width="2"/>')

# THE BUG: legend says green (#16a34a) but the Revenue line is actually purple (#9333ea)
svg_parts.append('<g transform="translate(600,550)">')
svg_parts.append('<rect x="0" y="0" width="12" height="12" fill="#16a34a"/>')
svg_parts.append('<text x="16" y="11" font-size="11">Revenue (Q4)</text>')
svg_parts.append('</g></svg>')

with open(f"{DIR}/chart.svg", "w") as f:
    f.write("".join(svg_parts))
PYEOF

cat > "$DIR/issue.txt" << 'ISSUE'
The legend color for "Revenue (Q4)" in chart.svg doesn't match its line in the chart.
Please fix the legend to match the actual path color.
ISSUE

Run: cd /tmp/claude-svg-demo && claude

Prompt: Read issue.txt and fix the color mismatch in chart.svg.

Expected: Claude reads 2,000 characters of path coordinate data, never reaches the <g> legend element at the end. It may attempt to search for “Revenue” in the file using Grep (which works — Grep isn’t subject to the line-length limit), but the Read tool alone will never show it.


All three demos target the same fundamental problem: the Read tool’s 2,000-character line truncation is silent, and files that are naturally single-line — minified HTML, URL-encoded scripts, SVGs — trigger it constantly. Newer models sometimes work around this by choosing Grep or Bash instead of Read, but you can’t count on it — and when the model does get stuck in a Read loop, there’s no escape. The workaround is always the same: add line breaks before handing the file to Claude Code.

# Quick fixes
jq . compact.json > formatted.json           # JSON
prettier --write file.html                   # HTML/JS
perl -pi -e 's/></>\n</g' minified.html      # quick HTML split
xmllint --format chart.svg > formatted.svg   # SVG/XML

The Actual Bug

After all that, the bug was a regex with + signs that got mangled during URL encoding in the bookmarklet. A five-second fix, once you can read the file.
