From Cloudwashing to O11ywashing: Reclaiming the True Meaning of Observability
This article exposes "o11ywashing," where monitoring is rebranded as observability. It argues true observability requires a unified, customer-centric view of product quality, addressing a systems problem beyond mere operational uptime.
On November 24, 2025, an industry veteran observed a panel on observability featuring several executives and experts. Though their identities are withheld here, the views they voiced represented a prevalent mainstream position among engineering leadership that the author found deeply concerning.
The discussion began with a standard question about the executives' satisfaction with their observability investments. One executive noted that traditional observability tools, designed to identify system faults, "generally work well." However, he emphasized that their primary concern was observing product quality from each customer's perspective.
He then highlighted that numerous factors—such as dependency hiccups or mobile app rendering latency—could disrupt service or damage customer experience without impacting traditional "nines of availability." Unaware of the author's growing distress, he concluded by explaining their investment in a custom solution to measure key workflows, from startup to payment success.
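The custom solution the executive describes amounts to tracking key workflows per customer rather than aggregate uptime. A minimal sketch of that idea, using hypothetical event names and fields (nothing here reflects the panelist's actual system), might compute workflow success rates per customer from raw events:

```python
from collections import defaultdict

# Hypothetical workflow events: each records a customer, a step in a key
# workflow (e.g. startup -> payment), and whether that step succeeded.
EVENTS = [
    {"customer_id": "c1", "step": "startup", "ok": True},
    {"customer_id": "c1", "step": "payment", "ok": True},
    {"customer_id": "c2", "step": "startup", "ok": True},
    {"customer_id": "c2", "step": "payment", "ok": False},
]

def per_customer_success(events):
    """Fraction of workflow steps that succeeded, per customer."""
    totals, oks = defaultdict(int), defaultdict(int)
    for e in events:
        totals[e["customer_id"]] += 1
        oks[e["customer_id"]] += int(e["ok"])
    return {c: oks[c] / totals[c] for c in totals}

print(per_customer_success(EVENTS))  # → {'c1': 1.0, 'c2': 0.5}
```

Note that overall availability here is 75%, yet customer c2 is failing half the time: exactly the kind of per-customer degradation that "nines of availability" hides.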
This scenario, unfolding in 2025, highlights a critical disconnect: true observability, focused on customer experience, is perceived as an unsolved problem requiring bespoke solutions, even though it aligns with the original definition of observability.
Observability: A Billion-Dollar Term Lacking Meaning
The author expresses profound disappointment that high-performing tech executives still equate "traditional observability" with basic system uptime (up/down status) and consider observing service quality from a customer's perspective as an unexplored frontier. This misunderstanding often leads companies to build custom tooling from scratch.
What these executives call "traditional observability tools" are, in fact, conventional monitoring tools. They refer to the "three pillars" model—metrics, logging, and tracing—treated as separate entities. While these tools excel at delivering basic operational outcomes like system availability, they are ill-equipped to solve the problem of understanding product quality from a customer's viewpoint. Solving this requires unifying application, business, and system telemetry in a traceable manner, allowing data to be sliced and diced by customer ID, location, device ID, and other contextual factors. This holistic approach, crucial for deep insights, remains largely unaddressed by current market offerings.
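The "wide event" model described above — one record per request carrying application, business, and system context together — is what makes arbitrary slicing possible. A minimal sketch, with invented fields and values purely for illustration, shows how the same events can be grouped by any dimension after the fact:

```python
import statistics
from collections import defaultdict

# Wide structured events: one per request, unifying system telemetry
# (latency) with business and customer context (customer, region, device).
events = [
    {"customer_id": "c1", "region": "eu", "device": "ios",     "latency_ms": 120},
    {"customer_id": "c1", "region": "eu", "device": "android", "latency_ms": 480},
    {"customer_id": "c2", "region": "us", "device": "ios",     "latency_ms": 95},
    {"customer_id": "c2", "region": "us", "device": "ios",     "latency_ms": 310},
]

def slice_latency(events, dimension):
    """Median latency grouped by an arbitrary event attribute."""
    groups = defaultdict(list)
    for e in events:
        groups[e[dimension]].append(e["latency_ms"])
    return {key: statistics.median(vals) for key, vals in groups.items()}

print(slice_latency(events, "region"))
print(slice_latency(events, "device"))
```

Because every event carries the full context, no new instrumentation is needed to ask a new question; by contrast, pre-aggregated metrics pipelines discard these dimensions at write time, which is why the three-pillars model struggles with per-customer questions.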
From Cloudwashing to O11ywashing
The author draws a parallel to "cloudwashing," a term learned from Rick Clark, describing how companies like IBM reclassified existing offerings as "cloud" during the cloud computing boom. A similar phenomenon, termed "o11ywashing," is now pervasive in the observability space. With observability becoming a multi-billion dollar market, vendors—from legacy behemoths to new players—are rebranding any telemetry-related product as "observability" to capitalize on the trend, often without delivering true customer-centric insights.
Pushing Back Against X-washing
The problem of o11ywashing is escalating because traditional vendors are inherently unable to solve the core issue. While industry analysts might eventually help users navigate this complexity, genuine change requires winning in the market. To counter o11ywashing effectively, the industry must educate engineering executives, emphasizing tangible results and business outcomes over technical data structures and algorithms.
Exhibit A: "How to Spot Cloudwashing"
Communicating with executives differs significantly from interacting with engineers. Having co-founded Honeycomb nearly ten years ago, the author initially believed clear technical explanations would naturally highlight business consequences. However, years in an executive role fostered empathy for the distinct pressures faced by leadership. This is not about operational stress but a profound, systemic challenge.
Observability: A Systems Problem, Not an Operational One
Amid the unprecedented stresses of technological shifts like AI, executives seek sound decisions. Companies are investing heavily in "observability" tools, yet they remain far from a true solution because they continue to view observability through an operational lens. Observability is fundamentally a systems problem—the most potent lever for transforming software development from reactive firefighting to proactive, positive feedback loops.
As Fred Hebert might suggest, being excellent at firefighting is commendable, but perhaps it's time to understand the root causes by reading the "city fire codes"—the underlying system dynamics.
Executives often don't know what they don't know, largely because the technical community hasn't effectively communicated with them. This is now changing, with a renewed focus on executive education.
Looking ahead, the next step in solving this massive problem involves clear differentiation: just as with cloudwashing, identifying genuine observability solutions requires looking for specific capabilities that distinguish them from "o11ywashed" offerings.
In conclusion:
If your "observability" tooling does not help you understand the quality of your product from each customer's perspective, it is not true observability. It is merely monitoring, disguised with marketing dollars. Call it o11ywashing.