A CTO's Return to Coding: Impact, Challenges, and AI-Driven Opportunities
A CTO shares insights on his return to coding after a decade, exploring the value of manager coding, how AI tools facilitate it, and crucial judgment rules for impactful contributions.
Returning to Code: A Decade Later
Published on November 19, 2025
Since joining Imprint over six months ago as the CTO of a team of approximately 50 engineers, I've merged 104 pull requests, averaging roughly four per week. While many of these were minimal configuration and documentation tweaks, and none represented the most challenging or time-sensitive tasks, I operate more as a "pull request scavenger" – identifying opportunities that don't disrupt the operating teams' workflows.
Nonetheless, a significant portion of these represent meaningful software development tasks. These 104 pull requests surpass the combined total of all pull requests I completed in the preceding decade; my last substantial coding contributions were pull requests (or whatever Phabricator called its equivalent) during my time at Uber. Since then, my role has predominantly involved managing managers rather than hands-on software development.
This shift has been fascinating, prompting me to reflect on its implications: Is it beneficial for software quality? Is it good for me personally? Is it enjoyable? And how have I adapted to the current technological landscape? (Relatedly, I also enjoyed Peter Seibel's Coders at Work (2009).)
The Dubious Return-on-Effort of Manager Coding
As a computer science major who worked as a software engineer before transitioning into engineering management, I continued to write software as a line manager and pursued small personal projects in my free time. The idea of writing software at work always appealed to me.
However, upon becoming a manager of managers, I ceased writing software professionally. While I wanted to code, it simply felt like a lower return on time compared to other activities. For instance, focusing my time on hiring another engineer for the team would undoubtedly yield more output than my individual coding efforts. Similarly, improving our strategic plans would have a greater impact than me writing a single piece of software.
Ultimately, I found myself in a trap where coding was always recognized as valuable, but consistently less valuable than alternative endeavors. Each hour spent coding felt detrimental to the overall business and indicative of questionable judgment. This wasn't true for understanding the codebase, and I've tried, with varying success, to build a workable degree of awareness of the codebases I've managed. That said, since leaving line management roles, I've never truly succeeded beyond a superficial level.
My experience suggests that only hands-on software development can cultivate a truly effective understanding of "unreasonable software" – a term I apply to most startup software, given its iterative design by a shifting cast of characters with an evolving grasp of the domain they are modeling.
One can achieve a lot with a minimal understanding by surrounding oneself with strong engineers who provide honest answers to useful questions, but this approach only suffices for big-picture issues. It hinders engagement with smaller questions: you must either consume engineers' time pulling context or risk making low-judgment decisions.
Whether manager coding is valuable hinges on whether you believe that making large decisions more quickly with fewer interruptions for context-pulling – coupled with the ability to make numerous small decisions that would otherwise be too costly – translates into meaningfully more impactful management. This is not a universal proclamation, but for me, I've genuinely enjoyed returning to coding at work, largely because the time commitment required has significantly decreased over the past couple of years.
Finding Small Pockets of Time to Write Software
The biggest specific challenge I faced when coding as a manager was finding dedicated blocks of thinking time to either understand a problem or implement a solution. Even simple fixes require an effective mental model of the codebase being worked on. This becomes undeniably true when focusing on the long-term impact of your work.
For example, adding a bunch of tests might be useful, but poorly designed tests are often discarded over time because they overlap with existing tests, are flaky, or collectively slow down the system. Traditionally, much of manager coding has fallen into this category: optically useful but with somewhat dubious long-term value. Delivering high-quality work simply demands too complete a mental model for individuals frequently jumping in and out of software development.
However, the new wave of AI tooling, such as Claude Code or OpenAI Codex, while susceptible to generating low-quality commits, can also provide several opportunities for useful code contributions when used effectively. They excel at:
- Answering questions about a codebase (e.g., "What are our most common patterns for working with authentication and authorization? What are good examples of each?")
- Writing code that aligns with the existing codebase's patterns and structure, especially with guidance from a well-written `AGENTS.md` file.
- Revising approaches based on general feedback (e.g., "Look for existing utilities that solve date math within our codebase and reuse those.")
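To make the second point concrete, here is a minimal sketch of what an `AGENTS.md` file might contain. The specific directories, conventions, and commands below are illustrative assumptions, not Imprint's actual guidelines; the idea is simply to hand the agent the same context a new engineer would want on day one.

```markdown
# AGENTS.md

## Project layout (illustrative)
- `services/` — one directory per backend service, each with its own README.
- `libs/shared/` — cross-service utilities; prefer reusing these over writing new helpers.

## Conventions
- All date math goes through `libs/shared/dates`; do not hand-roll timezone logic.
- Authentication uses the middleware in `libs/shared/auth`; see `services/payments` for a canonical example.

## Workflow
- Run `make test` before proposing changes; tests must pass.
- Keep pull requests small and focused on a single change.
```

Tools that support the `AGENTS.md` convention read a file like this from the repository root at the start of a session, so a few minutes invested in writing it pays off across every subsequent task.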
Most importantly, each of these tasks can be accomplished in just a few minutes at a time. Between meetings at work, I often pop back into one of several Claude Code sessions to check progress on a given task, review the code, and suggest next steps.
It's important to acknowledge a significant learning curve to doing this effectively. I've dedicated a considerable amount of time over the past year to learning this new way of writing software, and each month brings new caveats to understand. Slowly but surely, I've built a mental model for both how AI-assisted software development works and how Imprint's codebases function.
As my knowledge in both areas has grown, I've rediscovered my ability to write software at work. I can now make progress in small chunks of time between other projects, supplementing this with an hour or two over the week for deeper thought on more complex issues.
Judgment and Problem Selection
While new AI tools make it easier than ever for managers to write useful software at work, they simultaneously make it easier to write unhelpful software. Particularly in senior roles, it's very easy to contribute code that makes you feel helpful, but is genuinely unhelpful to the team, leaving them busier than they were before.
Here are a handful of rules I've found useful for myself:
- Never contribute to anything truly time-sensitive unless it's a very constrained task I can solve end-to-end today. Otherwise, I'm likely to slow things down despite my best intentions.
- Prioritize projects that are difficult for teams to get to but are obviously valuable over time, such as technical debt cleanup, small user-requested features, or missing instrumentation.
- Infrequently take on a strategic company project that lacks an owner and that I am capable of completing myself. A real example was leading the first pass of a new Interactive Voice Response (IVR) system built on an AI agent, which was clearly valuable but hard to prioritize against the team's existing workload.
- Hold myself to a higher bar than I would hold others in terms of fully releasing my software, monitoring its release, and promptly solving any bugs it creates. If I lack the time for these responsibilities, then I am effectively stealing time from the team rather than helping them.
- Err towards implementing feedback in pull requests, even if I consider it generally neutral. The cost for colleagues to provide feedback to me is high, so I am responsible for going out of my way to incorporate it when given.
These rules have meant I didn't work on some projects I wanted to, but I believe they've done a fairly good job of allowing me to build my judgment about how our software works without getting in the way of the teams who are doing the vast majority of the heavy lifting. I'm sure better versions of these rules exist, but generally, I'd guess that managers ignoring them are close to the border of being unhelpful.
Should You Be Coding at Work?
I'm fairly certain that "Should managers be coding at work?" isn't nearly as interesting a question as people make it out to be, and a meaningful answer depends entirely on the specifics of each situation. However, what has been undeniably true for me is that the overhead of writing software at work is substantially lower than it was a few years ago.
If you weren't writing software at work because it simply took too much time away from directly managing your team, then the constraints have profoundly shifted once you learn to leverage this new wave of tooling. The learning curve for writing software with AI agents is certainly not zero, but it's an investment you can make for a few dollars of tokens and a couple dozen hours spent on personal projects that are safe to discard afterward. That feels like a worthwhile investment to remain effective in one's chosen profession.
Hi folks. I'm Will Larson.
I wrote An Elegant Puzzle, Staff Engineer, The Engineering Executive's Primer, and Crafting Engineering Strategy.
