My First Real Rust Project: A Journey into Production Development

Rust

Discover a developer's journey using Rust for a production monitoring component, detailing the rationale, crate selections, and overcoming Windows compilation challenges.

Nov 23, 2025

For years, I've been immersed in learning Rust, applying it to various pet projects and demos. Working primarily in a JVM-heavy environment, I often assumed this would be my permanent professional landscape. However, last week brought a pleasant surprise: I successfully advocated for using Rust on a specific project, marking my first official foray into a "real" Rust application.

While not a massive undertaking, I'm eager to share my experience leveraging Rust for this production component.

The Project

Our main software platform integrates health sensors that expose their status via HTTP APIs. The challenge is that most customers don't actively monitor these endpoints. From an engineering perspective, this represents a maturity gap in observability. From a customer success standpoint, it's an opportunity to provide support.

The objective is straightforward: develop a component that polls the state of these sensors and dispatches email notifications based on their status. Users can configure which sensors to poll, the severity level for alerts (e.g., warning and above), the recipient email, and other parameters.

Why Rust?

While I genuinely enjoy Rust, I also appreciate Kotlin and am keen to explore languages like Gleam. However, the decision to use Rust for this project was driven by objective reasons rooted in our specific context and desired solution design.

My initial advocacy focused on designing the component outside the main platform. There's a common inclination to integrate everything within the existing ecosystem – a classic case of "when all you have is a hammer, everything looks like a nail." Yet, if the platform itself is experiencing issues or crashing, a tightly coupled reporting component would likely be unavailable precisely when it's most needed. Once this architectural decision was approved, the component's tech stack became independent of the platform's.

Next, I challenged another prevalent reflex. When tasked with scheduling in a JVM application, Quartz was always my go-to:

Quartz is a richly featured, open source job scheduling library that can be integrated within virtually any Java application - from the smallest stand-alone application to the largest e-commerce system. Quartz can be used to create simple or complex schedules for executing tens, hundreds, or even tens of thousands of jobs; jobs whose tasks are defined as standard Java components that may execute virtually anything you may program them to do. The Quartz Scheduler includes many enterprise-class features, such as support for JTA transactions and clustering.

What is the Quartz Job Scheduling Library?

Quartz works reliably. However, it mandates that the application remains continuously running, as in a typical web application. What if the application crashes? How does it restart? These are questions Quartz doesn't inherently address.

Over time, my approach evolved. I now prefer to design applications that run to completion and delegate scheduling to the operating system, for example, cron. This aligns with the single responsibility principle: both the application and cron perform their distinct jobs independently. This design results in a smaller application and a more resilient overall solution.
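Concretely, the run-to-completion design reduces scheduling to a single crontab line; the path and the five-minute interval below are made up for illustration:

```
# Run the sensor poller every five minutes; it exits when done
*/5 * * * * /opt/monitoring/sensor-reporter
```

If the binary crashes, nothing needs restarting: cron simply launches a fresh process at the next tick.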

While the JVM excels in long-lived applications, optimizing performance over time, it's less ideal for short-lived, run-to-completion processes. It consumes significant memory, and the application often finishes executing before the bytecode can be fully compiled to native code.

At this juncture, the choice narrowed to Go or Rust. I personally dislike Go, particularly its approach to error handling – or more precisely, what I perceive as a lack of proper error handling.

Finally, considering that different customers would deploy on various operating systems, Rust's cross-platform compilation was a decisive advantage:

Rust supports a great number of platforms. For many of these platforms The Rust Project publishes binary releases of the standard library, and for some the full compiler. rustup gives easy access to all of them. When you first install a toolchain, rustup installs only the standard library for your host platform - that is, the architecture and operating system you are presently running. To compile to other platforms you must install other target platforms.

Cross-compilation

These arguments collectively helped me convince management that Rust was the optimal choice for this project.

Choosing Crates

In Rust, a crate is either a library or a binary (an application). For simplicity, I'll refer to all libraries as "crates" here. The Java ecosystem, now 30 years old, has matured significantly: when I need a JVM dependency, I have a ready catalog in my head. Rust is a more recent language, and my experience with its ecosystem is still growing. In this section, I'll detail my crate selections.

HTTP Requests

My requirements included TLS support, adding request headers, and deserializing response bodies. I've successfully used reqwest in previous projects, and it perfectly fulfills all these needs. Check!
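For reference, wiring in reqwest with those capabilities is mostly a matter of Cargo features; the version and feature selection below are an assumption for illustration, not the project's actual manifest:

```toml
[dependencies]
# "json" enables deserializing response bodies via serde;
# "rustls-tls" provides TLS without a system OpenSSL dependency
reqwest = { version = "0.12", features = ["json", "rustls-tls"] }
```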

Configuration

The component requires several configuration parameters, such as the list of sensors to query and authentication credentials for the platform. Sensor lists represent structured configuration, while credentials are secrets. To accommodate both, I needed to support multiple configuration sources: a configuration file and environment variables. The config crate proved ideal for this purpose.
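As an illustration only (the real schema isn't shown here), the layered setup might read a file like the following, with environment variables supplying or overriding the sensitive parts:

```toml
# settings.toml — hypothetical keys, not the component's actual schema
severity_threshold = "warning"
recipient = "ops@example.com"

[[sensors]]
name = "disk"
url = "https://platform.example.com/api/sensors/disk"
```

Credentials stay out of the file entirely and arrive via environment variables, which the config crate merges on top of the file-based values.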

SMTP

While the previous two crates were already on my radar, I hadn't sent email with Rust before. A quick search led me to the lettre crate:

Lettre is an email library that allows creating and sending messages. It provides:

  • An easy-to-use email builder
  • Pluggable email transports
  • Unicode support
  • Secure defaults
  • Async support

Crate lettre on docs.rs

lettre handles the job effectively, though it does introduce a substantial number of dependencies.

Other Crates of Interest

Macros for the Win

None of the other programming languages I'm familiar with possess macros in the same way Rust does. In Java, meta-programming is typically achieved through reflection, often with the aid of annotations. Rust's macros provide a similar capability, but critically, they operate at compile time.

For instance, I wanted to sort sensor results by severity in descending order before emailing them. Programming languages generally offer two ways to sort a collection: by natural order or with a dedicated comparator, and Rust's Vec is no different. In my case, the natural order makes sense, as I'm unlikely to ever compare severities differently. For this, the Severity enum needed to implement the Ord trait:

pub trait Ord: Eq + PartialOrd {
    // Required method
    fn cmp(&self, other: &Self) -> Ordering;

    // Provided methods
    fn max(self, other: Self) -> Self
    where
        Self: Sized,
    {
        ...
    }
    fn min(self, other: Self) -> Self
    where
        Self: Sized,
    {
        ...
    }
    fn clamp(self, min: Self, max: Self) -> Self
    where
        Self: Sized,
    {
        ...
    }
}

Trait Ord

One might mistakenly think only cmp needs implementation. However, Ord requires both Eq and PartialOrd to be implemented as well; PartialOrd in turn requires PartialEq, which Eq also extends, so four traits are involved in total.

While it's possible to implement these functions manually for the Severity enum, much of it can be inferred: an enum is equal only to itself, and its natural order corresponds to its declaration order. This is precisely what the derive macro achieves. The standard library's built-in derives cover the comparison traits concisely:

#[derive(PartialEq, Eq, PartialOrd, Ord)]
enum Severity {
    // Declare levels
}

I also needed Severity to be displayable (courtesy of strum's Display derive), deserializable from the configuration (serde's Deserialize), cloneable, and usable as a key in a HashMap. The full declaration became:

#[derive(Debug, Display, Deserialize, PartialEq, Eq, PartialOrd, Ord, Copy, Clone, Hash)]
enum Severity {
    // Declare levels
}
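A minimal, standard-library-only sketch of how the derived traits behave; the variant names are hypothetical, and the strum Display and serde Deserialize derives are omitted to keep it self-contained:

```rust
use std::collections::HashMap;

// Hypothetical levels — the real enum's variants are not shown in the post.
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Copy, Clone, Hash)]
enum Severity {
    Info, // declared first, so lowest in the natural order
    Warning,
    Error,
    Critical,
}

fn main() {
    // The derived natural order follows declaration order.
    assert!(Severity::Critical > Severity::Warning);

    // Sort sensor results by severity, descending, with a reversed comparator.
    let mut results = vec![Severity::Warning, Severity::Critical, Severity::Info];
    results.sort_by(|a, b| b.cmp(a));
    assert_eq!(results[0], Severity::Critical);

    // Hash + Eq allow Severity to serve as a HashMap key.
    let mut counts: HashMap<Severity, u32> = HashMap::new();
    *counts.entry(Severity::Error).or_insert(0) += 1;
    assert_eq!(counts[&Severity::Error], 1);
}
```

Because the natural order tracks declaration order, descending sorts need nothing beyond flipping the arguments to cmp.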

Compilation on Windows

My development journey was smooth until I introduced the lettre crate. At that point, Rust compilation on Windows halted, citing a linker issue with an excessive number of symbols. My initial attempt involved setting default-features = false for as many crates as possible. This pushed the limit slightly but didn't resolve the core problem.
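In Cargo.toml the key is hyphenated; the lettre version and feature names below are an assumption to illustrate the trimming, so check the crate's documentation for the exact features your transport needs:

```toml
[dependencies]
lettre = { version = "0.11", default-features = false, features = ["smtp-transport", "builder", "rustls-tls"] }
```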

Interestingly, Rust still compiled successfully in --release mode, which performs aggressive code optimization. However, developing solely in release mode is impractical; breakpoints are fundamental for effective debugging.

My investigation revealed that the default Rust toolchain on Windows utilizes Microsoft Visual C++ (MSVC), which was the source of the issue. The alternative is to switch to the GNU toolchain, specifically x86_64-pc-windows-gnu. This requires installing MSYS2:

MSYS2 is a collection of tools and libraries providing you with an easy-to-use environment for building, installing and running native Windows software.

rustc will complain if it can't find the necessary commands, which you then install via MSYS2. I can't provide detailed, step-by-step instructions because: I experimented back and forth; I prioritized fixing the issue over taking notes; and the project resides on my work laptop. Nevertheless, I hope this information is sufficient to guide others facing similar Windows compilation challenges.
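For completeness, the toolchain switch itself is two standard rustup commands; the MSYS2 package installation is the part I can't reproduce from notes:

```
rustup toolchain install stable-x86_64-pc-windows-gnu
rustup default stable-x86_64-pc-windows-gnu
```

A per-project alternative is rustup override set stable-x86_64-pc-windows-gnu, which leaves the machine-wide default untouched.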

Conclusion

This project has been an invaluable learning experience. On one hand, it validated my Rust skills as adequate for a straightforward component. On the other, it significantly deepened my knowledge of the Rust library ecosystem. Finally, I gained practical experience with Rust development on Windows, navigating its unique challenges.


Nicolas Fränkel

Nicolas Fränkel is a technologist focusing on cloud-native technologies, DevOps, CI/CD pipelines, and system observability. His focus revolves around creating technical content, delivering talks, and engaging with developer communities to promote the adoption of modern software practices. With a strong background in software, he has worked extensively with the JVM, applying his expertise across various industries. In addition to his technical work, he is the author of several books and regularly shares insights through his blog and open-source contributions.
