For years, OpenMRS implementers have faced a tough reality: scaling digital health systems, especially across large programs or national rollouts, was tedious, fragile, and often too dependent on manual work. Spinning up dozens of instances meant managing everything by hand, from deployment to upgrades and monitoring. And without native support for clustering or high availability, many teams had to make trade-offs between performance and resilience.
Well, all that’s changing.
Over the past year, the OpenMRS community has made major strides toward enabling modern, cloud-ready, and scalable deployments. While full support for features like horizontal scaling and clustered caching is landing in the upcoming Platform 2.8 release, many of the key building blocks are already available for testing.
Thanks to Helm charts, a new distributed caching layer, a new storage service with distributed-storage support, and Elasticsearch integration, implementers can now start experimenting with cloud-based and on-prem Kubernetes setups that bring consistency, performance, and resilience to OpenMRS deployments.
What used to take deep DevOps expertise and days of trial and error can now be done with a single command. This story highlights how far we’ve come and how close we are to making scalable OpenMRS deployments easier, faster, and more reliable for everyone.
As noted earlier, scaling OpenMRS for large or national programs wasn’t just difficult; it was a manual, time-consuming, and often frustrating process. Implementers had to manage, upgrade, and monitor dozens of individual instances by hand, each with its own slightly different configuration. There were no shared tools for automation, no consistent patterns to follow, and very little built-in support for running OpenMRS at scale.
There was also no out-of-the-box support for clustering, multi-tenancy, replication, or high availability, features that are essential for building resilient health systems. To make things more complex, many implementers needed to achieve these goals in on-premise environments, not just on cloud platforms like AWS or Microsoft Azure. That meant we couldn’t rely on cloud-native services alone; we had to build solutions that could flex across very different infrastructure realities.
Even for teams exploring Kubernetes, getting started was daunting. YAML-heavy setups required deep DevOps knowledge and a lot of trial and error. For most implementers, it was too much overhead for too little reliability.
A modern, cloud-native, developer-friendly deployment model!
To meet the growing demands of implementers and enable scalable, resilient deployments, OpenMRS is embracing a modern, flexible infrastructure approach. Here’s how the new model works:
OpenMRS now supports Kubernetes deployments on all major cloud providers, including AWS, Azure, and Google Cloud, as well as on private or on-premise clusters. Kubernetes provides container orchestration that automatically handles scheduling, scaling, self-healing restarts, and rolling updates.
Because it’s vendor-agnostic, implementers have the freedom to deploy where it makes the most sense, whether that be in the public cloud or on their own servers.
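To give a flavor of what that orchestration buys you: once the Platform 2.8 clustering work allows multiple backend replicas, adding capacity becomes a one-line operation. The deployment name below is an illustrative assumption; use whatever name your chart or manifests define.

```sh
# Ask Kubernetes for three OpenMRS backend replicas; it schedules the
# new pods and load-balances traffic across them automatically.
# "openmrs-backend" is an assumed deployment name, not a fixed one.
kubectl scale deployment/openmrs-backend --replicas=3
```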
To reduce the complexity of setup, the OpenMRS team provides Helm charts and Terraform scripts that automate the provisioning and configuration of OpenMRS environments.
This approach makes it easier for teams to adopt standard deployment patterns across regions or programs.
With the maintained Helm chart, users can now deploy a complete OpenMRS 3 cluster using a single helm install command.
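In practice that looks something like the sketch below. The repository URL and chart name are placeholders, not confirmed values; check the community Helm chart documentation for the published ones.

```sh
# Register the chart repository (URL is illustrative) and refresh.
helm repo add openmrs https://openmrs.github.io/helm-charts
helm repo update

# One command brings up a complete OpenMRS 3 stack.
helm install my-openmrs openmrs/openmrs --namespace openmrs --create-namespace
```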
For those who prefer a graphical interface, Rancher provides an easy, no-code way to launch and manage clusters visually.
Database options include running a database inside the cluster or connecting to an externally managed one. Users can also customize ingress settings and storage classes to fit their infrastructure needs.
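Overrides like these usually live in a values file passed to helm install. The keys below are illustrative assumptions rather than the chart’s actual schema; consult its values.yaml for the real options.

```yaml
# my-values.yaml -- illustrative overrides; key names are assumptions
ingress:
  className: nginx               # match your cluster's ingress controller
  host: openmrs.example.org
persistence:
  storageClass: longhorn         # any storage class your cluster provides
database:
  external:
    host: mariadb.example.org    # or use a database deployed in-cluster
# deploy with: helm install my-openmrs openmrs/openmrs -f my-values.yaml
```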
The new architecture supports running multiple OpenMRS instances in a single Kubernetes cluster, each with its own isolated configuration and data.
This sets the stage for future tenant-level upgrades, isolation, and scaling, making it possible to support many clinics or regions from one unified deployment.
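One way this pattern can look with the Helm chart is a release per tenant, each in its own namespace. The release names, namespaces, and values files below are examples, not a prescribed layout.

```sh
# Two isolated OpenMRS instances in one cluster, one per namespace.
helm install clinic-a openmrs/openmrs -n tenant-clinic-a --create-namespace -f clinic-a.yaml
helm install clinic-b openmrs/openmrs -n tenant-clinic-b --create-namespace -f clinic-b.yaml
```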
The system also includes a number of features that improve reliability and ease of management, reducing the operational burden and making OpenMRS more resilient in real-world use.
The shift to a cloud-native, cluster-ready deployment model has already begun to reshape how OpenMRS is implemented and managed. For many implementers, what used to be a high-effort, low-confidence process is now faster, more reliable, and easier to scale.
These improvements mark a meaningful leap toward making OpenMRS more scalable, dependable, and sustainable for partners around the world, no matter the size or setting of their health programs.
With many foundational changes already landing in Platform 2.8, the path ahead is all about deepening and extending OpenMRS’s support for scalable, cloud-ready deployments.
Recent work has added a pluggable StorageService, available since OpenMRS Core 2.8, which allows implementers to store large files (such as MRI scans) in scalable storage backends like S3, Longhorn, or MinIO. This supports distributed storage with built-in replication, high availability, and automated backups.
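To give a feel for what a pluggable storage layer enables, here is a minimal sketch of module code saving and reading a large file through such a service. The interface and method names (saveData, getData) are illustrative assumptions, not the confirmed Core 2.8 API; the point is that calling code stays the same no matter where the bytes actually live.

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative only: this interface approximates the idea of the new
// pluggable storage layer; it is NOT the actual OpenMRS 2.8 API.
interface StorageService {
    String saveData(InputStream in, String suggestedKey) throws Exception;
    InputStream getData(String key) throws Exception;
}

public class MriStorageExample {
    // Calling code is identical whether the configured backend is
    // local disk, S3, MinIO, or a Longhorn-backed volume.
    public void storeAndFetch(StorageService storage) throws Exception {
        String key;
        try (InputStream in = Files.newInputStream(Path.of("mri-scan.dcm"))) {
            key = storage.saveData(in, "radiology/mri-scan.dcm");
        }
        try (InputStream out = storage.getData(key)) {
            // stream the stored file back to the client ...
        }
    }
}
```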
In the area of observability, the team plans to integrate Grafana dashboards for unified log monitoring, metrics, and performance alerts across services. This will help implementers proactively manage performance and detect stability issues early.
OpenMRS’s Kubernetes-based architecture already supports early patterns for multi-tenancy, where different implementations can run on shared infrastructure with isolated data schemas.
Phase 3 will build on this with tooling for tenant-level upgrades, isolation, and scaling.
A new area of exploration is support for asynchronous messaging and queuing between OpenMRS and external systems like FHIR servers or Master Patient Indexes (MPIs). Plans are forming around using Apache Camel, either as standalone middleware or as an OpenMRS module, to route, transform, and queue messages between systems.
This work aims to offer a pluggable, standards-based way to integrate OpenMRS into health information exchange environments, without sacrificing performance or consistency.
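As a rough illustration of the queuing pattern under discussion (not the community’s actual integration code; the queue name and endpoint URL are invented for the example), a Camel route could buffer outbound events and retry delivery to a FHIR server:

```java
import org.apache.camel.builder.RouteBuilder;

// Sketch of an asynchronous outbound route: OpenMRS events are buffered
// in a queue so a slow or briefly unavailable FHIR server or MPI never
// blocks the EMR itself. Endpoint URIs are illustrative.
public class OutboundFhirRoute extends RouteBuilder {
    @Override
    public void configure() {
        onException(Exception.class)
            .maximumRedeliveries(5)      // retry transient failures
            .redeliveryDelay(10_000);    // wait 10s between attempts

        from("activemq:queue:openmrs.patient.events")
            .to("https://fhir.example.org/fhir/Patient");
    }
}
```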
Foundational pieces such as the pluggable StorageService and the distributed caching layer are already complete and available in the 2.8-SNAPSHOT builds.
OpenMRS is not only becoming easier to deploy and manage; it’s also becoming more future-proof, modular, and scalable than ever before. The foundation is strong, and the community is already putting it to use. What comes next will only build on that momentum.
Get Involved
Whether you’re an implementer, engineer, or simply curious about what’s next, there are several ways to engage with the OpenMRS community as we continue improving support for cloud hosting, scaling, and clustering.
Your feedback, questions, and use cases help drive progress. Whether you’re deploying OpenMRS for the first time or scaling to support thousands, your voice is essential in shaping how we build a stronger, more scalable platform, together.