Programming Principles for Self-Taught Front-End Developers
Like many front-end developers, my journey into this field wasn't through a formal computer science degree. I transitioned from design, seeking greater control over the final product. While I earned a Bachelor of ICT, its focus on fundamental computer science was, to put it mildly, 'light'. Consequently, all my knowledge of Software Development has been acquired on the job, from various sources over two decades. If your path mirrors mine, this article aims to fast-track your learning by several years.
After more than 20 years in the industry, what truly impacts my daily work isn't the ability to model OOP systems with UML or understand Monads. Instead, it's a collection of practical 'programming principles' – sometimes concise, sometimes profound.
There's a vast array of programming principles. Some act more like 'laws' describing system and human behavior (e.g., Hofstadter’s Law: "It always takes longer than you expect, even when you take into account Hofstadter’s law."). While these offer broad utility, I find them less actionable for writing 'good code'. This article focuses on the rules-of-thumb that directly enhance my code as I write it. These principles don't demand a complete system blueprint upfront; they guide better decision-making in real-time.
From Pithy Statements to Truly Useful Habits
Early in your career, you might hear a senior developer declare, "Premature optimization is the root of all evil." This sounds serious but also perplexing. Optimization is generally positive, while 'premature' is usually not. The dilemma is being told your actions are 'evil' without clear guidance on what to do next. While avoiding premature optimization is a sound principle, it doesn't directly help you write better code.
A helpful mentor might introduce You Aren't Gonna Need It (YAGNI). This principle cautions against writing code for anticipated future needs that aren't present now. The reasoning is that plans often change between the present and that imagined future, rendering the anticipated code unnecessary. It's better to build things 'just in time' – when you genuinely need them.
Conversely, you'll also be advised to follow the Don’t Repeat Yourself (DRY) principle. This means avoiding duplicate code, as it complicates maintenance. Following DRY strictly might lead you to consolidate every repeated logic into a single function or module, called from multiple places.
These acronyms generally offer good advice. When building functionality, some foresight is logical, right? If logic is needed in multiple (future) scenarios, optimizing or refactoring early seems sensible to ensure current code accounts for it and avoids duplication.
YAGNI and premature optimization typically arise when, instead of focusing on functionality needed now, you begin building a general system that addresses both current and potential future needs. This often results in significantly more code, increased complexity, and slower progress compared to experienced colleagues.
These principles don't inherently guide your coding as you write. While you might try omitting parts you think you'll need later, a principle that helps you discern what to leave out would be far more beneficial.
Enter the "Rule of Three" — an actionable, pragmatic principle that elegantly synthesizes YAGNI, DRY, and premature optimization.
The rule of three states that you should only refactor (or optimize) code once you’ve written the same code three times. The first time, just write the code. It does the thing and only the thing. The second time you need the same code, simply copy-and-paste it, making any necessary minor changes. It's only when you face doing it a third time that you review your three existing implementations and generalize them into a single solution capable of handling all cases.
The core idea is that after writing code three times, you gain a clear understanding of the general functionality truly required, and which parts can be simplified and optimized. Three implementations provide insight into the necessary 'level of abstraction', ensuring the generalized solution effectively covers all three scenarios. I apply this principle constantly due to its simplicity (I can count to three!) and its effectiveness in preventing premature over-engineering.
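As a hypothetical illustration (the variable names and limits here are invented), suppose the same truncation logic has been written for two different components, each copy slightly tweaked. The third occurrence is the signal to generalize:

```javascript
// First use: a card title (written inline, does the thing and only the thing).
// const title = post.title.length > 40 ? post.title.slice(0, 40) + "…" : post.title;

// Second use: a preview snippet, copy-pasted with a different limit.
// const preview = post.body.length > 120 ? post.body.slice(0, 120) + "…" : post.body;

// Third use is where the rule of three kicks in: by now the required
// abstraction is clear — a string, a limit, and an optional suffix.
function truncate(text, limit, suffix = "…") {
  return text.length > limit ? text.slice(0, limit) + suffix : text;
}

console.log(truncate("Hello, world", 5)); // "Hello…"
console.log(truncate("Hi", 5));           // "Hi"
```

Only after seeing three concrete uses do you know which parts vary (the limit, the suffix) and which are fixed, so the abstraction fits all three without speculative extra options.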
When studying programming principles, you'll encounter many serious-sounding Laws and Rules that require substantial context to apply. Hopefully, the "rule of three" example demonstrates how transitioning from the general "premature optimization is the root of all evil" to this specific, actionable principle significantly impacts your daily coding and code quality.
Writing the Right Thing
Let's now focus on that initial implementation. It's often tempting to optimize code from the outset, ensuring every line is fast and efficient. However, fast and efficient code isn't always the most readable.
In coding, we spend surprisingly little time writing. Instead, we spend significant time reading existing code or our recently written code, reasoning about it to decide the next step. Therefore, the easier code is to read and comprehend, the faster we can write correct code. While fast, optimized code is valuable, if it's difficult to read and reason about, it will ultimately slow us down.
We want it all: correct code written quickly, and code that is fast. How do we choose?
I prefer to apply another principle: "Make it work, make it right, make it fast," attributed to Kent Beck, the creator of Extreme Programming.
At any point while coding, you can ask yourself:
- Does it work? If no, focus solely on making it functional. Ignore everything else. Once it works, great!
- Is it right? Code that works isn't always correct. If your code isn't behaving as intended, fails tests, or doesn't accept expected input, focus on ensuring it works correctly. Only then, when it both works and is right, proceed to the final question.
- Is it fast? If performance isn't adequate, now you can concentrate on optimization.
This approach helps prioritize your efforts. Optimizing code that doesn't work or is incorrect is a waste of time; you'll likely rewrite it, losing any optimization. If the code is broken, worrying about its correctness is moot; fix its functionality first.
The power of this principle lies in its constant applicability. Simply observe your code's behavior, ask these three questions in order, and focus on the immediate priority. You can defer other concerns until they become relevant.
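As a hypothetical sketch of the three stages applied to one small task (counting how often each tag appears in a list of posts; the data shape is assumed for illustration):

```javascript
// 1. Make it work: handles the happy path only. A post without a
//    `tags` array would crash this version.
function countTagsV1(posts) {
  const counts = {};
  for (const post of posts) {
    for (const tag of post.tags) {
      counts[tag] = (counts[tag] || 0) + 1;
    }
  }
  return counts;
}

// 2. Make it right: posts without tags no longer break the function.
function countTagsV2(posts) {
  const counts = {};
  for (const post of posts) {
    for (const tag of post.tags ?? []) {
      counts[tag] = (counts[tag] || 0) + 1;
    }
  }
  return counts;
}

// 3. Make it fast: only if profiling shows this is a bottleneck —
//    e.g. switching the plain object to a Map for very large inputs.
//    Until then, there is nothing to do here.
```

Note that stage 3 may legitimately end with "no change needed" — reaching it at all means the code already works and is correct.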
If you look closely, this is essentially the same principle as the rule of three, applied to a different aspect of programming. Both help you focus on the current task and avoid distractions from irrelevant concerns.
The Crappy First Version
Several principles encourage the mindset of creating a first implementation that isn't perfect. These include advice to throw away your first implementation or to build the best simple system for now. Indeed, the "make it work" step in "make it work, make it right, make it fast" embodies this same idea.
A more academic version of this is Gall’s Law, which states: "a complex system that works is invariably found to have evolved from a simple system that worked." The implication is that attempting to account for everything from the start often results in a non-functional system. A classic example is the 'full rewrite' that typically ends up worse than the original and takes significantly longer than planned (this also has a name: the Second-system effect).
Reading this, you might recall Keep It Simple, Stupid (KISS). However, KISS struggles when designing solutions for complex tasks where some complexity is unavoidable. For such scenarios, KISS offers no clear answers.
Gall’s Law, conversely, at least affirms that complexity can exist if you commit to starting simply and iterating towards your goal.
Still, this isn't very actionable as you're writing code. So, let's explore principles that do help you write better code in the moment.
Idempotency
I strive to make the functions I write as idempotent as possible. A big word, but it simply means a function always produces the same result when given the same arguments. This seemingly basic concept has significant implications for code comprehensibility.
If a function is idempotent, you can call it multiple times with identical arguments, and it will consistently return the same outcome. For instance, getting a string's length is idempotent; "hello".length will always return 5, regardless of how many times it's called. Knowing a function is idempotent means you don't have to worry about it secretly returning a new string or accessing external variables. Same arguments in, same result out.
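To make the contrast concrete, here is a small invented example of an idempotent function next to one that isn't:

```javascript
// Idempotent: same argument in, same result out, every single call.
function slugify(title) {
  return title.toLowerCase().replace(/\s+/g, "-");
}

// Not idempotent: the result depends on hidden state that changes per call.
let counter = 0;
function nextId(prefix) {
  counter += 1;
  return `${prefix}-${counter}`;
}

slugify("Hello World"); // always "hello-world"
nextId("user");         // "user-1" now, "user-2" on the next call
```

Reading code that calls `slugify` requires no extra context; reading code that calls `nextId` forces you to track how many times it has been called so far.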
FYI: There's a subtle distinction between idempotent and pure functions, though not critical here. Pure functions are guaranteed to have no side effects – they don't access or modify global data, nor do they alter their arguments in-place. An idempotent function can have side effects, provided those effects are also idempotent. For example, changing global state or a database entry can be idempotent if multiple calls always result in the same final value being set.
If idempotency isn't maintained, calling a function twice (e.g., due to a user double-clicking or server retry logic on a flaky connection) might alter the resulting value, making your code much harder to reason about. If a side-effect is necessary, consider splitting it into its own function and making that idempotent. This results in two smaller functions, each with a single concern.
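A minimal sketch of that distinction, using an invented `settings` object: a side effect that *sets* a value is idempotent, while one that *toggles* a value is not.

```javascript
const settings = { theme: "light" };

// Idempotent side effect: repeating the call leaves the same final state,
// so a double-click or a retry is harmless.
function setTheme(theme) {
  settings.theme = theme;
}

// Not idempotent: every extra call changes the outcome.
function toggleTheme() {
  settings.theme = settings.theme === "light" ? "dark" : "light";
}

setTheme("dark");
setTheme("dark"); // still "dark" — safe to call again
toggleTheme();
toggleTheme();    // back to "dark" — the retry silently undid the change
```

This is the same reason HTTP treats `PUT` (set this value) as idempotent but not, say, an increment-style operation: retries are only safe when repeating the effect cannot change the final state.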
Idempotent functions can be treated like black boxes, allowing you to 'forget' implementation details while reasoning about your larger system. This mental shortcut simplifies higher-level reasoning; since the function always acts predictably, you can mentally collapse it into a single step.
Single Responsibility Principle
Closely related to idempotency is the Single Responsibility Principle (SRP). This principle states that a function (or module or class) should have "only one reason to change."
In practice, this means one function should be dedicated to one aspect of your system. A good example is using an Object-Relational Mapping (ORM) to handle all database access. The ORM module is solely responsible for interacting with the database; the rest of your code should have no idea how that interaction works. If you ever need to change your database interaction method, you only update the ORM, not other parts of your codebase.
Again, this enables you to mentally simplify parts of the system when reasoning about it. You don't need to understand how it constructs SQL queries; that's not the concern of the code you're currently working on. You can treat an ORM as a black box for database access.
The Single Responsibility Principle is also described as "a function/module/class should only have one reason to exist." If that single reason vanishes, the function/module/class should be entirely removable. For instance, if you switch databases and require a new ORM, the old one should be completely disposable.
Often, an ORM's functionality expands beyond just fetching data; it might also format that data into a specific structure expected by the rest of the code. Now, if you switch databases, you also need to update all code dependent on that specific data format. Your ORM now has multiple reasons to change, making it harder to reason about.
Instead, you should have one module solely for fetching data from the database, and another solely for data formatting. Each module now has a single responsibility, allowing you to change (or delete!) one without impacting the other – that's the SRP in action.
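A hypothetical sketch of that split (the `db.query` call and column names are invented stand-ins for whatever database client and schema are in use):

```javascript
// Single responsibility: talking to the database. Swapping databases
// only touches this function.
async function fetchUser(db, id) {
  const rows = await db.query("SELECT * FROM users WHERE id = ?", [id]);
  return rows[0];
}

// Single responsibility: shaping raw rows into the structure the rest
// of the app expects. Knows nothing about where the row came from.
function formatUser(row) {
  return { id: row.id, fullName: `${row.first_name} ${row.last_name}` };
}
```

Because `formatUser` takes a plain object, it can be tested and reasoned about without any database at all, and deleting either function leaves the other fully intact.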
A simple trick to check for single responsibility is to describe a function/module/class's purpose in a single sentence. If you find yourself using "and," it likely has multiple responsibilities and should be refactored.
One Level of Abstraction
Related to the Single Responsibility Principle is the idea that a function should not only do one thing but also operate at only one level of abstraction. 'Level of abstraction' is itself quite an abstract concept. It means that when you read through a function's code, all operations should be at a consistent level of detail.
For example, consider this function:
```javascript
async function processUsers() {
  const users = await database.fetchAllUsers();
  users.forEach((user) => {
    if (user.isActive) {
      sendEmail(user.email, "Hello active user!");
    }
  });
}
```
In this function, we observe three levels of abstraction:

- Fetching data from the database (`database.fetchAllUsers()`).
- Looping over users and filtering for active ones (`if (user.isActive)`).
- Performing an action based on that data (`sendEmail(...)`).
Not only would you use multiple "and"s to describe this function – it connects to the database to get users and filters them and sends emails only to the filtered users – but you also have to consider three different levels of detail (database, active users, email sending) when reasoning about it. In real-world code, this might also involve more await calls, validation, and error handling.
As you read the function, your focus constantly shifts, making it harder to follow. For instance, it first concerns fetching users, then filtering, then emailing. Signs of multiple abstraction levels include:
- Multiple "and"s when describing the function's purpose.
- Multiple loops or iterations over data.
- Mixing 'low-level' operations (like database access) with business logic.
By splitting this, you can create three functions, each operating at a single level of abstraction:
```javascript
async function getActiveUsers() {
  const users = await database.fetchAllUsers();
  return users.filter((user) => user.isActive);
}

function sendEmailsToUsers(users) {
  users.forEach((user) => {
    sendEmail(user.email, "Hello active user!");
  });
}

async function processUsers() {
  const activeUsers = await getActiveUsers();
  sendEmailsToUsers(activeUsers);
}
```
The getActiveUsers function exclusively handles fetching and filtering for active users. The sendEmailsToUsers function deals only with sending emails. The processUsers function orchestrates by getting all active users and then emailing them. Each function becomes simpler to understand because it focuses on one level of abstraction, allowing for separate reasoning about each part.
Notice how in the refactored version, sendEmailsToUsers is indifferent to the users' origin or state; it simply sends emails to the provided list. Each function now adheres to a single responsibility and operates at a single level of abstraction.
Are They All the Same Thing?
By now, you might have observed how interconnected many of these principles are. To butcher a classic quote for software engineering: "All good code is alike; each bad code is bad in its own way." – Tolstoy (if he were a software engineer).
Good code is easy to reason about, and clarity about what each part does makes things easy to reason about. The relationships between parts are also clear. Thus, all these principles highlight different facets of writing code that facilitates comprehension.
Conversely, pinpointing why some code is bad can be challenging. Is it due to multiple responsibilities? Is it optimized but incorrect? Does it force you to jump through various levels of abstraction line-by-line? Does it perform superfluous actions? All these factors contribute to the difficulty of working with poor code.
It's significantly easier to produce good code by (dogmatically) adhering to the principles outlined above than it is to rectify bad code. The next time you write a new function, I hope you'll find yourself considering these principles: resisting optimization for non-functional code, ensuring each function has a single purpose, and avoiding complex system designs before a simple, working system is established. Good luck!
Bibliography and Further Reading
As mentioned, I absorbed much of this knowledge on the fly. Here are some resources that proved invaluable:
- Refactoring by Martin Fowler. This classic, available in a JavaScript edition, offers universal principles despite its Java-flavored code.
- Making Impossible States Impossible by Richard Feldman. A superb talk on modeling data structures to enhance code reasoning. While focused on data structures rather than business logic, I apply its rules almost daily.
- Make It Work Make It Right Make It Fast on the C2 wiki, attributing the quote to Kent Beck.
- Software Design Principles: Single Level of Abstraction
- hacker-laws.com is a collection of programming principles, laws, and rules.
Author
Kilian Valkhof, founder of Polypane, a browser for developers, builds tools for developers and designers.