Why Docker Compose Remains My Go-To for Production Deployments
Discover why Docker Compose is still a preferred choice for production deployments after 10+ years, offering scalability, simplified workflows, and robust solutions for diverse applications.

I started using Docker around 2015 and, even back then, I was leveraging Docker Compose in production. Over a decade later, I continue to use it successfully for the majority of applications I build and deploy.
This article isn't a comprehensive how-to guide. For practical examples, you can explore my Docker starter apps for Flask, Rails, Django, Node, and Phoenix, which are configured for both development and production with Docker Compose.
Interestingly, approximately 95% of the effort involved in deploying an app with Docker Compose is actually unrelated to Docker Compose itself. It primarily revolves around setting up a Linux server and selecting a mechanism to trigger deployments (e.g., git push).
The main objective of this post is to advocate for and build trust in Docker Compose, highlighting the specific features I appreciate and explaining why it remains my default choice for deploying applications to production.
While Kubernetes has its place, and I've used it for various projects over the years, I typically reserve it for scenarios where it genuinely offers significant advantages, rather than using it as a default solution.
Success Stories
Over the past decade, I've had the opportunity to help clients deploy various types of applications. While I can't list every single one, here are a few examples with some project details:
- Rails data analysis application: This app, critical for a multi-million dollar business, performed extensive background data processing. At its peak, it utilized 40 AWS EC2 instances and its stack included Postgres, Nginx, Rails, and Redis.
- Fintech company's Python + PHP services: Running on individual servers, this setup included MySQL, Apache, PHP FPM, Flask, Redis, and Memcached. The company thrived for about 12 years before being acquired by a ~$500 million company.
- Flask data collection and sales service: This application collected, unified, and sold data from various sources. Built by a solo developer, it grew into a profitable side business, covering his NYC rent. The stack involved Postgres, Nginx, Flask, and Redis.
- Flask dropshipping item finder: Another solo developer project, this app helped users find good items for dropshipping. Within 6 months, it was generating $90,000 USD/month with a 95% profit margin, all on a $100/month hosting bill. Its stack was Postgres, Nginx, Flask, and Redis.
The Flask success stories are particularly gratifying because both developers had taken my "Build a SAAS App with Flask" course. They had excellent ideas, executed them well, and reaped the rewards. My course merely provided the framework; their hard work did the rest.
All these applications, and many more, share a common deployment pattern:
- git push code to a Linux server.
- Run a few lines of shell script after the code push.
- Wait a few seconds.
- The application is deployed.
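The "few lines of shell script" step usually amounts to little more than pulling the new image and recreating containers. Here's a minimal sketch; the app directory and service layout are hypothetical, and a DRY_RUN default (enabled here so the sketch is safe to run as-is) makes it only print what it would do:

```shell
#!/usr/bin/env bash
# Hypothetical server-side deploy steps for a Docker Compose project.
# With DRY_RUN=1 (the default here), each command is printed, not executed.
set -euo pipefail

APP_DIR="${APP_DIR:-/srv/hello}"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run cd "$APP_DIR"                         # where the compose file lives
run docker compose pull                   # fetch the newly built image
run docker compose up -d --remove-orphans # recreate only changed containers
```

On a real server you'd set DRY_RUN=0 (or drop the wrapper entirely) and let your git push trigger this script.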
Some projects ran their database within Docker as part of the Docker Compose setup, while others opted for managed database services. Some chose to separate their web and worker processes onto different servers. Using Docker Compose doesn't restrict you to a single server setup.
Questions and Answers
Over the years, many individuals who have reached out or taken my Flask/Docker courses have asked common questions about Docker Compose. I'll do my best to address them here:
- Why use Docker Compose if it's meant for development?
- Does it scale?
- How are deployments handled?
- What has the experience been like?
Let's delve into these questions.
Isn’t Docker Compose Meant for Development?
Early in Docker's history, some documentation might have suggested Docker Compose wasn't production-ready, but that information is long outdated.
For me, it has always been about the results. If a tool works effectively, I use it. That said, I'm not reckless. I rigorously tested Docker Compose for the applications I was building and deploying, and it consistently performed well. This gave me enough confidence to use it for client work fairly early on.
My definition of "works" includes:
- Can I reliably spin up multiple containers that remain stable?
- Has it worked consistently over time?
- Does it avoid unpredictable crashes?
- Is it stable across different systems?
- Does it provide all the necessary features?
By 2015, all these criteria were met for me. One of the very first applications I deployed was an early-stage portal to sell the v1 of my Flask course. It handled distributing a ~1GB zip file containing the video course after payment confirmation. This was quite different from the current video streaming platform, but it was a real application processing $199 payments for a digital product. (I didn't include this in the earlier success stories.)
I emphasize this because I firmly believe in using and testing tools on my own projects before risking them on client work. Of course, exceptions are made if a client specifically requests me to learn and use something on the job.
In summary, Docker Compose functions perfectly well in production, pre-production, CI, or any other environment you wish to set up. This environmental parity is a significant advantage of Docker Compose, and I plan to write a separate post about it in the future.
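As a concrete illustration of that parity, a single compose file can drive every environment, with the per-environment differences pushed into environment variables. This is a hypothetical sketch, not the exact files from my starter apps:

```yaml
# Hypothetical compose.yaml used unchanged in dev, CI, and production.
# Only the .env file differs per environment (image tag, secrets, etc.).
services:
  web:
    image: "example/hello:${DOCKER_WEB_IMAGE_TAG:-latest}"
    env_file: ".env"
    ports:
      - "127.0.0.1:8000:8000"
    restart: "unless-stopped"
  worker:
    image: "example/hello:${DOCKER_WEB_IMAGE_TAG:-latest}"
    env_file: ".env"
    command: "./run worker"
    restart: "unless-stopped"
```

In development DOCKER_WEB_IMAGE_TAG might point at a locally built image; in production, at whatever tag your CI pipeline pushed.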
Does it Scale?
This is a frequently asked question.
For a large category of applications, horizontal scaling isn't strictly necessary. Vertical scaling—adding more CPU and memory resources to a single server—can take you a long way.
The typical use case involves deploying your Docker Compose project on a single server. Whether you use a managed database or not is up to you. Most web applications are I/O bound, primarily waiting for responses from a database.
If you can keep your p95 (95th percentile) response time under 100-150 ms on a single $20/month server, that's often sufficient, isn't it?
Whether your application handles 200 or 20,000 requests a day, you can always increase vertical scaling as needed. Providers like DigitalOcean offer servers with up to 32 vCPUs and 256 GB of memory. Most applications I deploy comfortably handle production traffic on servers with 2-4 CPUs and 4-8 GB of memory.
For higher-end resource needs, if DigitalOcean becomes too costly, providers like Hetzner can offer 48 vCPUs, 192 GB of memory, and an NVME SSD for around $320/month (prices may vary; this was observed in mid-2025). That's an enormous amount of computational power!
Consider a scenario with 10,000 paying customers. Depending on your tech stack and traffic patterns, you might host this on a server with just 2 CPU cores and 4 GB of memory. Even if you needed to double that, it's still entirely reasonable if you're earning $20/month per customer, totaling $200,000/month.
What About Unpredictable Large Traffic Spikes?
The vertical scaling approach works well for reasonably predictable traffic where you want to avoid over-provisioning and increased costs. However, if your app typically gets 1,000 requests a day but needs to handle sudden spikes of up to a million requests at random times, you might consider more cost-effective solutions than running an expensive machine 24/7 at max load.
It's important to note that something like Kubernetes isn't a silver bullet for this. If you run Kubernetes on a cloud provider like AWS, you still need nodes to spin up and join your cluster before your application can handle the increased load. This applies whether you use managed nodes with auto-scaling groups, Karpenter, or even Fargate. Compute resources must become available in your cluster, and then your application needs to be rolled out to them, which takes time.
The out-of-the-box experience might involve several minutes before everything is ready. Achieving a balance between costs and the lag time before resources become available requires a significant engineering effort. The key takeaway is not to expect to pick Kubernetes, fumble through it, and have a perfect solution in a couple of hours of casual coding.
How Do You Handle Deployments?
In my opinion, one of the simplest methods involves using Git with a post-receive hook.
Essentially, you set up a bare Git repository on your server, allowing you to git push prod main from your development machine or CI pipeline. This push triggers a script of your choice on the server to perform necessary actions, such as pulling a new Docker image, bringing up your project, and potentially running database migrations. You could even send a Slack notification with the deployment results.
The elegance of this process lies in its near-universal consistency across projects (approximately 95% the same). This consistency offers enormous benefits.
Because we're working with Docker Compose commands, the type of application (Flask, Rails, Django, Node, etc.) is irrelevant. Similarly, it doesn't matter if your app is a simple web server or a complex stack involving web, worker, websocket, database, cache, and full-text search. These are merely implementation details within your Docker Compose file. The core deployment process remains largely the same, and you can incorporate custom hooks for specific actions before or after a deployment.
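Here's a minimal, self-contained sketch of that wiring, with hypothetical paths: a bare repository whose post-receive hook checks out the pushed code and runs a few Docker Compose commands.

```shell
#!/usr/bin/env bash
# Sketch: create a bare repo whose post-receive hook deploys with Compose.
# /srv/hello, the "main" branch, and the migrate command are hypothetical.
set -euo pipefail

REPO_DIR="$(mktemp -d)/hello.git"   # on a real server: e.g. ~/hello.git
git init --bare "$REPO_DIR"

cat > "$REPO_DIR/hooks/post-receive" <<'HOOK'
#!/usr/bin/env bash
set -euo pipefail
# Check the pushed code out into the app directory (git sets GIT_DIR
# to the bare repo while hooks run, so only the work tree is needed).
GIT_WORK_TREE=/srv/hello git checkout -f main
cd /srv/hello
docker compose pull        # fetch the image built by CI
docker compose up -d       # recreate only the containers that changed
./run rails db:migrate     # optional post-deploy command
HOOK
chmod +x "$REPO_DIR/hooks/post-receive"
```

From your development machine you'd add the server as a remote (git remote add prod user@server:hello.git) and deploy with git push prod main.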
My preferred deployment stack consists of:
- Terraform: For creating infrastructure resources (servers, DNS records, etc.).
- Ansible: For configuring and provisioning Linux servers (Debian/Ubuntu).
- Docker: For running applications.
- Git: For performing deployments.
How’s the Experience Been?
The experience has been exceptionally positive.
Not only have dozens of applications I've deployed over the years worked flawlessly, but the overall complexity has remained remarkably low. If I want to deploy a completely new application to a server and secure it with HTTPS, the only code I need to write involves configuring a few Ansible settings. After about 5 minutes for everything to spin up, I can simply git push prod main. Here's a glimpse of the configuration:
```yaml
git_deploy__projects:
  - name: "hello"
    local_git_repo_path: "~/src/open-source/docker-rails-example"
    post_deploy_commands: [
      "./run rails db:migrate"
    ]

pki__realms:
  - domains:
      - "hello.example.com"

nginx__vhosts:
  - server_name: "hello.example.com"
    root: "/srv/hello/public"
    proxy_pass_url: "http://127.0.0.1:8000"
    cache_assets: true
```
Deploying a Flask app is essentially the same; I just modify the repository path and change post_deploy_commands to ["./run flask db migrate"].
Incidentally, I prefer running Nginx directly on my host, a topic I've covered in the past.
The general-purpose Ansible playbooks and roles I've developed over the last decade have facilitated this streamlined process. They aren't magic, but they automate the installation and configuration of components, ensuring your server remains updated, secure, and healthy. This approach isn't limited to Ansible; it could be achieved with other tools as well.
I may release these playbooks if I ever complete my "Deploy to Production" course. If you're interested in that course, please let me know, as the current impact of AI on organic traffic makes me uncertain about the sustainability of creating new courses. This uncertainty has partly led me to focus more on contract work in recent years.
In any case, I highly recommend considering Docker Compose for all your environments; it's an excellent tool!
The video below further elaborates on the points discussed in this blog post, providing additional details.