AI's Hidden Cost: Why More Code Doesn't Mean Better Software
AI boosts code output, but real innovation requires human strategic thinking, ruthless editing, and prioritizing quality. This article debunks the myth that more code equals better software.
When AI generates low-quality code unchecked, humans still have to debug and maintain it, which makes the net effect unproductive for businesses.

A pervasive and dangerous myth in software development suggests that sheer output directly correlates with successful outcomes: the belief that simply increasing hours or lines of code will inevitably solve a problem.
Gergely Orosz, renowned for The Pragmatic Engineer, recently debunked this myth with surgical precision. Orosz made a compelling observation regarding the "996" work culture – the schedule of working 9 a.m. to 9 p.m., six days a week, popularized by Chinese tech giants. He noted, "I struggle to name a single 996 company that produces something worth paying attention to that is not a copy or rehash of a nicer product launched elsewhere." This intensive schedule and pace are not only inhumane but ultimately counterproductive.
Brute force yields volume but seldom fosters differentiation, and arguably, never innovation.
Before critiquing "996" practices abroad, we should examine our own equivalent. Founders often label it as "hardcore," "all in," or "grind culture," yet it embodies the same principle: overwhelm people with hours, hoping brilliance emerges. Now, we are attempting to instantiate this same idea in code, or rather, through GPUs. Some presume that if large language models (LLMs) can operate for the equivalent of thousand-hour weeks, generating code at superhuman speeds, we will magically achieve superior software.
We will not. Instead, we will merely amplify what we already contend with: derivative, bloated, and increasingly unmanageable codebases.
The High Cost of Code Churn
I have previously highlighted how the internet is being choked by low-value, high-volume content due to frictionless production. The same phenomenon is now impacting our software development.
Data supports this trend. As observed when covering GitClear’s 2024 analysis of 153 million lines of code, "code churn" – lines of code changed or discarded within two weeks of being written – is spiking rapidly. The research also found more copy-pasted code and a marked decline in refactoring.
In essence, while AI assists in coding faster (up to 55% faster, according to GitHub’s analysis), it isn't necessarily helping us build better. We are generating more code, understanding it less, and fixing it more frequently. The real risk of AI isn't its ability to write code, but its encouragement to write too much code. Bloated codebases are inherently harder to secure, more complex to reason about, and significantly more challenging for humans to manage. Less code is, indeed, better.
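As a toy illustration of the churn metric described above (not GitClear’s actual methodology), churn can be computed as the share of newly added lines that are changed or deleted within two weeks of landing. The data below is hypothetical:

```python
from datetime import datetime, timedelta

# Each record tracks when a line was added and when (if ever) it was
# next changed or deleted. Hypothetical data for illustration only.
line_events = [
    {"added": datetime(2024, 1, 1), "modified": datetime(2024, 1, 5)},
    {"added": datetime(2024, 1, 1), "modified": None},
    {"added": datetime(2024, 1, 2), "modified": datetime(2024, 2, 10)},
]

CHURN_WINDOW = timedelta(days=14)

def churn_rate(events):
    """Fraction of added lines changed or deleted within the
    churn window (here, two weeks of being written)."""
    churned = sum(
        1 for e in events
        if e["modified"] is not None
        and e["modified"] - e["added"] <= CHURN_WINDOW
    )
    return churned / len(events)

print(f"churn rate: {churn_rate(line_events):.0%}")  # 1 of 3 lines churned
```

A rising value of this ratio means teams are rewriting or throwing away fresh code, which is exactly the pattern GitClear flagged.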
This mirrors the "996" trap, but transferred to machines. The "996" mindset assumes that the primary constraint on innovation is the number of hours worked. The "AI-native" mindset, conversely, presumes the constraint is the number of characters typed. Both assumptions are flawed. The true constraint has always been, and will always be, clarity of thought.
Code Is a Liability, Not an Asset
Let’s return to fundamental principles. As experienced engineers understand, software development is not a typing contest; it is fundamentally a decision-making process. The primary objective is less about writing code and more about discerning what code not to write. As Honeycomb founder and CTO Charity Majors puts it, a senior software engineer's role "has far more to do with your ability to understand, maintain, explain, and manage a large body of software in production over time, as well as the ability to translate business needs into technical implementation."
Every line of code shipped introduces a liability. Each line must be secured, debugged, maintained, and eventually refactored. When we leverage AI to brute-force the "construction" phase of software, we multiply this liability, creating vast surface areas of complexity that might resolve an immediate Jira ticket but ultimately mortgage the future stability of the platform.
Orosz’s observation about "996" companies primarily producing copies is highly telling. Genuine innovation necessitates "slack"—the mental space to think without constant interruptions from meetings. Given a quiet moment, a developer might realize that a feature they were about to build is actually superfluous. If developers are consumed by reviewing an avalanche of AI-generated pull requests, they have no such slack. They cease to be architects and instead become janitors, cleaning up after a perpetually active robot.
None of this implies that AI is detrimental to software development; quite the opposite. As Harvard professor Karim Lakhani stressed, "AI won’t replace humans," but we will increasingly see that "humans with AI will replace humans without AI." AI is an exceptionally effective tool, but only if utilized as a tool and not as a means to replicate the false promise of the "996" culture.
The Human Part of the Stack
So, how do we prevent constructing a "996" culture on silicon? We must cease treating AI as a "developer replacement" and instead embrace it as a tool to reclaim the very thing "996" culture destroys: time.
If AI can manage the drudgery—unit tests, boilerplate code, documentation updates—this should not become an excuse to cram more features into a sprint. Instead, it should be an opportunity to slow down and concentrate on the inherently "human" components of the development stack, such as:
- Framing the problem: Asking "What are we actually trying to do?" sounds simplistic, yet it is where most software projects fail. Choosing the correct problem is a high-context, high-empathy task. An LLM can provide five ways to build a widget, but it cannot ascertain if that widget is the wrong solution for the customer’s workflow.
- Editing ruthlessly: If AI makes code writing nearly free, then the most valuable skill shifts to deleting it. Humans must exercise the "no." We should reward developers not for the velocity of their commits, but for the elegance and simplicity of their designs. We need to celebrate "negative code" commits—those that reduce complexity rather than adding to it.
- Owning the blast radius: When issues inevitably arise (and they will!), your name will appear on the incident report, not the LLM’s. A deep understanding of the system, sufficient to debug it during an outage, is a skill that atrophies if one never writes the code personally. We must ensure that "AI-assisted" does not devolve into "human-detached." It is crucial to ensure junior developers do not default to LLM outputs and that engineers of all skill levels receive adequate training to effectively use AI.
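The "negative code" idea from the list above can be made concrete. A minimal sketch, assuming a team wants to spot complexity-reducing commits from the summary line that `git show --shortstat` prints (the `net_lines` helper and the sample summary are illustrative, not a standard tool):

```python
import re

def net_lines(shortstat: str) -> int:
    """Net lines a commit adds: insertions minus deletions,
    parsed from a `git show --shortstat` summary line."""
    ins = re.search(r"(\d+) insertion", shortstat)
    dels = re.search(r"(\d+) deletion", shortstat)
    return (int(ins.group(1)) if ins else 0) - (int(dels.group(1)) if dels else 0)

# A "negative code" commit removes more than it adds:
summary = " 3 files changed, 12 insertions(+), 240 deletions(-)"
print(net_lines(summary))  # -228
```

A commit with a negative result here deleted more than it added; line count is a crude proxy for complexity, but celebrating such commits is one way to reward simplification rather than raw output.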
Pushing back against robot-generated drivel is not a Luddite stance; it is a pursuit of quality.
Orosz’s critique of the "996" culture is that it produces exhausted people and forgettable products. If we are not vigilant, our adoption of AI could yield the exact same outcome: exhausted humans maintaining a mountain of forgettable, brittle code generated by machines.
We do not need more code. We need better code. And superior code originates from human minds granted the quiet, uncluttered space to invent it. Let AI handle the brute force, thereby freeing people to innovate.