Technical Deflation: How AI Changes Startup Development Timelines and Competitive Advantage
Explore technical deflation, a phenomenon in which AI makes software development steadily easier and cheaper, nudging startups to delay building. This shift impacts competitive strategies and core business models.
In economics, deflation, the inverse of inflation, signifies a decrease in prices. Generally perceived as detrimental, it often stems from severe economic contractions and can trigger a downward spiral in consumer behavior. A core issue is that anticipating further price drops, consumers delay purchases and increase savings, expecting to acquire goods for less later. This reduced spending leads to decreased demand, lower revenue, job losses, and ultimately, a deepening deflationary spiral.
For this reason, economies typically target an annual inflation rate of 2%. This rate is sufficiently low to mitigate inflation's negative impacts while encouraging spending and providing a buffer against deflation. This economic dynamic presents a political challenge, as public frustration over inflation often seeks price reductions, which, if realized, could lead to broader economic distress.
This discussion, however, pivots from traditional economics to a parallel phenomenon observed in startups, which I term 'technical deflation.' The underlying mechanism is straightforward: it is progressively easier and cheaper to develop software, and this trend is expected to continue. Consequently, there's a growing inclination to defer development, postponing projects until they become even more accessible and cost-effective to build.
The Driving Forces
Technological advancement is a constant, but historically breakthroughs did not compound at this pace: Moore's Law never consistently doubled software development speed year over year, and a new framework like React did not make building web applications drastically easier than Rails had. In software today, a unique confluence of forces is generating a novel sense of rapid technological evolution.
Firstly, improved models are simplifying AI-based application development by enabling simpler architectures. Much of the complex work can be offloaded to large language models (LLMs), which can now reliably follow rules such as producing valid JSON. Workflows require fewer steps and less retry logic, prompts can be less intricate, and extended context windows demand less selectivity about which information to include. A Jevons paradox effect does exist: better models also facilitate more ambitious applications, reintroducing complexity through tool calls, sub-agents, and advanced computer interaction. Still, building existing functionality has become undeniably simpler.
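As a toy illustration of this simplification, consider the validate-and-retry scaffolding that less reliable models once required for something as basic as emitting JSON. The `call_llm` function below is a hypothetical stand-in stubbed with a canned reply, not a real client library; the point is the shape of the wrapper that reliable structured output makes unnecessary.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical LLM client, stubbed with a canned reply for illustration."""
    return '{"name": "Acme Corp", "employees": 42}'

def extract_json_with_retries(prompt: str, max_retries: int = 3) -> dict:
    """Older workflow: validate the model's output and retry on malformed JSON."""
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            # Feed the failure back and ask the model to try again.
            prompt += "\nYour last reply was not valid JSON. Respond with JSON only."
    raise ValueError(f"No valid JSON after {max_retries} attempts")

# With models that reliably emit valid JSON, the wrapper collapses to one call:
def extract_json(prompt: str) -> dict:
    return json.loads(call_llm(prompt))

print(extract_json("Extract the company name and employee count."))
```

The retry loop, the error-feedback prompt, and the validation step are exactly the kind of scaffolding that quietly disappears as models improve.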
Secondly, AI has significantly eased the process of writing functional application code. This is not an endorsement of AI-generated code as flawless, but in the post-Claude-Code era its utility for simple to moderately complex tasks is undeniable. It can often produce code that largely achieves the desired outcome, even if sometimes overly complex or inefficient (e.g., numerous nested try-catch blocks that cause memory issues under moderate concurrent load). For startups, achieving basic functionality quickly is often sufficient in the near term, aligning with the "do things that don't scale" philosophy.
This accelerated development velocity is critical. It empowers startups to swiftly challenge incumbents with extensive product suites. Previously, a startup needed to target customers experiencing severe pain, willing to adopt a niche solution while the rest of the product was incrementally built. Now, startups can still excel in a specific area while rapidly developing a suite of 'table stakes' features, making their product a more compelling choice for adoption.
This dynamic is 'technical deflation': the increasing ease and decreasing cost of doing things for startups, a trend likely to persist for several years. This holds true regardless of debates about whether pretraining or RLHF (Reinforcement Learning from Human Feedback) have "hit a wall," as continuous improvements in speed, cost, context length, and tool use are sufficient to sustain this trend. What, then, are the consequences of technical deflation?
The Micro: The Elusive Desktop App
During my career as an engineer, working primarily on web applications and even teaching a web applications course, desktop applications remained outside my expertise. Despite this, requests for desktop versions of web apps were frequent, sometimes driven by valid needs like privacy or offline functionality.
In 2024, justifying the investment for a desktop app felt impossible. Even with tools like Electron and Tauri simplifying development, building, testing, and maintaining an entirely separate application with a small team seemed a suboptimal use of resources compared to enhancing the core web application.
However, in the context of technical deflation, my thought process shifts: "With [latest model], I could likely develop a desktop app in 2-3 weeks—a feasible timeframe. But if I wait for [next model], it might be even easier, perhaps 1-2 weeks. And with [model after next], it might even be a 'one-shot' task. Perhaps not, but possibly. I'll just wait." Similar to economic deflation, non-essential "purchases" (or developments) are postponed.
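This waiting calculus can be made concrete with a toy model. Suppose, purely illustratively, that each new model generation multiplies the build time for a fixed feature by some factor, and that generations arrive on a fixed cadence; time-to-ship is then the wait plus the shrunken build time. Every number below is invented to mirror the "3 weeks, then 2, then one-shot" intuition, not a measurement.

```python
def time_to_ship(build_weeks: float, wait_generations: int,
                 speedup_per_gen: float = 0.6, weeks_per_gen: float = 12.0) -> float:
    """Total weeks until shipped if we wait N model generations before building.

    Assumes (invented numbers) that each generation multiplies build time by
    `speedup_per_gen` and that a new generation ships every `weeks_per_gen` weeks.
    """
    return wait_generations * weeks_per_gen + build_weeks * speedup_per_gen ** wait_generations

# Compare shipping now versus deferring for future generations.
for n in range(4):
    print(f"wait {n} generations: ship in {time_to_ship(3.0, n):.1f} weeks")
```

Under these made-up numbers, waiting never wins for a 3-week feature: the calendar cost of waiting dwarfs the build time saved. The deferral instinct is only rational when the build cost is large relative to the release cadence, or when waiting is free because the team is shipping something else in the meantime, which is part of why it resembles a deflationary psychology more than a calculation.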
The Macro: Late Entry as an Advantage
It is a well-known axiom in the startup world that being first does not guarantee success. Later entrants can learn from competitors' mistakes, avoid them, and ultimately surpass them. Timing is paramount; being too early can be akin to being wrong, albeit with the solace of having been technically correct. Examples include DoorDash's dominance over GrubHub and Lyft successfully sharing the rideshare market with Uber, even as new players like Waymo emerge.
Naturally, there are disadvantages to late entry. Yet, rapid AI advancements now amplify the benefits of joining the market later. A company founded in 2023 faced significant limitations: GPT-3.5-Turbo had limited capabilities, GPT-4 was slow and expensive, structured outputs were nascent, and LangChain was considered cutting-edge. Fine-tuning on a single GPU was restricted to relatively small token counts (e.g., 512-2048 tokens).
Many companies launched during that period struggled, often entering "pivot hell" or relying on complex, temporary architectural scaffolding. In contrast, companies entering 6-12 months later, attempting identical initiatives, often succeeded on their first try with significantly less effort. This dynamic is effectively illustrated by recent observations within the tech community.
Conclusion: Don't Just Do Something, Stand There?
Given that what can be built today will likely be easier to build in six months, what is the optimal course of action? Should one proceed with current development? Or perhaps engage in a six-month strategic pause, focusing on fundamental B2B SaaS principles?
One common response is to "focus on distribution." If the development aspect becomes commoditized, then competitive advantage must lie elsewhere—perhaps through viral social media engagement, aggressive marketing, or even unconventional guerrilla tactics. A more serious interpretation involves prioritizing sales and customer understanding over raw development. A deep understanding of customer problems is a genuine early-mover advantage, one that persists no matter how capable future Claude models become.
Another approach might leverage the increasing disposability of software. Demos could become fully functional, full-stack applications. Consulting and custom software solutions could scale in new ways. For instance, Giga AI, a company developing AI customer support agents, reportedly eschews the traditional "forward-deployed engineer" model for custom software in favor of self-customizing software, a paradigm made feasible by advanced coding agents.
The long-term implications remain uncertain. Much like economic forecasts awaiting the next Federal Reserve meeting, the future of this "technical deflation" awaits the next significant AI model launch (e.g., Opus 4.5).