Read the Tea Leaves
Software and other dark arts, by Nolan Lawson
The Fate of “Small” Open Source
Posted November 16, 2025 by Nolan Lawson in software engineering.
By far, the most popular npm package I've ever authored is blob-util. Approximately ten years old, it still garners over 5 million weekly downloads. It's a modest collection of JavaScript utilities designed for working with Blob objects. I created it after observing that PouchDB users frequently struggled with Blob manipulation, particularly conversions to strings, ArrayBuffers, and similar types.
Considering that some 80% of developers now integrate AI into their regular workflow, blob-util is precisely the kind of functionality most developers would likely prefer an LLM to generate for them. While one could use blob-util, doing so introduces an additional dependency with potential unknowns regarding performance, maintenance, and supply-chain risks.
Indeed, when prompted, Claude readily provides the necessary Blob utilities:
Write me a utility function in TypeScript to convert a Blob to an ArrayBuffer. It should return a Promise.
function blobToArrayBuffer(blob: Blob): Promise<ArrayBuffer> {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => {
      if (reader.result instanceof ArrayBuffer) {
        resolve(reader.result);
      } else {
        reject(new Error('Failed to read Blob as ArrayBuffer'));
      }
    };
    reader.onerror = () => {
      reject(reader.error);
    };
    reader.readAsArrayBuffer(blob);
  });
}
Claude's version is quite similar to the blob-util implementation (unsurprising, given that blob-util was likely part of its training data!). It's more verbose, though: it unnecessarily checks whether readAsArrayBuffer actually produced an ArrayBuffer, a check that exists mainly to satisfy TypeScript. To its credit, it improves on my original by rejecting with the underlying reader.error rather than a vaguer failure from the onerror event.
Note: For those curious, yes, Claude did suggest the newer Blob.arrayBuffer() method but also provided the above for "older environments."
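That built-in method is worth seeing, because it makes the whole FileReader dance optional in current runtimes. A minimal sketch (the wrapper function and its name are mine, not blob-util's; assumes a modern browser or Node 18+, where Blob is global):

```typescript
// Blob.arrayBuffer() is a standard Promise-returning method in
// modern browsers and Node 18+, so the FileReader boilerplate above
// becomes a one-liner.
async function blobToArrayBuffer(blob: Blob): Promise<ArrayBuffer> {
  return blob.arrayBuffer();
}

// Example: 'hello' is five bytes of UTF-8.
blobToArrayBuffer(new Blob(['hello'])).then((buf) => {
  console.log(buf.byteLength); // 5
});
```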
Some might view this as progress: fewer dependencies, more robust code (even if slightly verbose), and a quicker turnaround compared to the traditional "search npm, find a package, read the docs, install it" process.
I hold no excessive pride in this library, nor do I particularly care if its download numbers fluctuate. However, I believe something fundamental is lost with the AI-driven approach. When I developed blob-util, I adopted a teacher's mindset; the README included a whimsical tutorial featuring Kirby in his blobby glory (I had a penchant for incorporating Nintendo characters into my projects back then). The objective wasn't merely to provide a utility (though it achieves that) but also to educate users on effective JavaScript practices, empowering them to solve future problems independently.
While I'm uncertain of the ultimate direction AI will take us (though ~80% of us are on board; to the remaining holdouts, I salute your resolve!), I suspect it's a future that prioritizes instant answers over genuine teaching and understanding. This diminishes the incentive to use libraries like blob-util, consequently reducing the motivation to create them in the first place, and ultimately, lessening the educational discourse around specific problem domains.
There's even a burgeoning movement advocating for llms.txt files to house documentation, enabling agents to parse it directly, thereby saving human brains the effort of deciphering English prose. (Is this truly documentation anymore? What is documentation?)
Conclusion
I remain a believer in open source and continue to contribute to it (albeit intermittently). Yet one truth has become clear: the era of small, low-value libraries like blob-util is drawing to a close. They were already fading as Node.js and the browser gradually absorbed their functionality (e.g., Node's built-in glob support in node:fs, the structuredClone global). LLMs, however, are the final nail in their coffin.
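structuredClone illustrates that absorption well: deep-cloning an object, once a common reason to reach for a small npm package, is now a built-in one-liner (available in modern browsers and Node 17+):

```typescript
// structuredClone deep-copies plain objects, arrays, Maps, Sets,
// Dates, and more -- no utility library required.
const original = { user: { name: 'Kirby' }, tags: ['blob', 'util'] };
const copy = structuredClone(original);

copy.user.name = 'Dedede';
console.log(original.user.name); // 'Kirby' -- the nested object was copied, not shared
```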
This shift means fewer opportunities to leverage these libraries as springboards for user education (a philosophy Underscore.js also embraced), but perhaps that's acceptable. If there's no longer a need to find a library to, say, group array items, then perhaps learning the underlying mechanics of such libraries becomes unnecessary. Many software developers argue that asking a candidate to reverse a binary tree is pointless because it rarely arises in daily work; the same logic might apply to utility libraries.
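The grouping example is telling: the helper a utility library (or an LLM) would hand you is a few lines of reduce. A minimal sketch (the groupBy name and signature here are illustrative, not any particular library's API; newer runtimes also ship Object.groupBy, making even this unnecessary):

```typescript
// A minimal groupBy -- the sort of helper an LLM generates on demand.
function groupBy<T, K extends string>(
  items: T[],
  key: (item: T) => K
): Record<K, T[]> {
  return items.reduce((acc, item) => {
    const k = key(item);
    (acc[k] ??= []).push(item); // create the bucket on first use
    return acc;
  }, {} as Record<K, T[]>);
}

const byParity = groupBy([1, 2, 3, 4, 5], (n) => (n % 2 === 0 ? 'even' : 'odd'));
console.log(byParity); // { odd: [1, 3, 5], even: [2, 4] }
```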
I am still contemplating what kinds of open source are truly valuable in this new era (hint: those an LLM cannot simply generate on command), and where educational gaps are most pronounced. My current thinking leans towards larger, more inventive projects, or highly niche topics not adequately covered in LLM training data. For example, reflecting on my work on fuite and various memory-leak-hunting blog posts, I'm confident an LLM couldn't reproduce such efforts, as they demand novel research and creative techniques. (Though, who knows? Perhaps someday an agent will simply "bang its head" against Chrome heap snapshots until a leak is found. I'll believe it when I see it.)
There's been considerable concern recently about open source's place in an LLM-dominated world, yet I still observe individuals pushing boundaries. For instance, many naysayers believe there's no point in developing a new JavaScript framework, given LLMs' heavy training on React. Still, the indefatigable Dominic Gannaway is creating Ripple.js, another JavaScript framework (and one with some fresh ideas, to boot!). This is precisely the spirit I admire: humans playfully defying the machine, continuing their uniquely human endeavors.
So, if there's a conclusion to this meandering blog post (pardon my squishy human brain; I didn't use an LLM to write this), it's this: yes, LLMs have rendered certain forms of open source obsolete, but a wealth of open source remains to be created. I eagerly anticipate the novel and unexpected contributions you all will conceive.