How OpenMRS Is Embracing Cloud Hosting and Clustering

For years, OpenMRS implementers have faced a tough reality: scaling digital health systems, especially across large programs or national rollouts, was tedious, fragile, and often too dependent on manual work. Spinning up dozens of instances meant managing everything by hand, from deployment to upgrades and monitoring. And without native support for clustering or high availability, many teams had to make trade-offs between performance and resilience.

Well, all that’s changing.

Over the past year, the OpenMRS community has made major strides toward enabling modern, cloud-ready, and scalable deployments. While full support for features like horizontal scaling and clustered caching is landing in the upcoming Platform 2.8 release, many of the key building blocks are already available for testing.

Thanks to tools like Helm charts, a new distributed caching layer, a new storage service with distributed storage support, and ElasticSearch integration, implementers can now start experimenting with cloud-based and on-prem Kubernetes setups that bring consistency, performance, and resilience to OpenMRS deployments.

What used to take deep DevOps expertise and days of trial and error can now be done with a single command. This story highlights how far we’ve come and how close we are to making scalable OpenMRS deployments easier, faster, and more reliable for everyone.

The Challenge

As noted earlier, scaling OpenMRS for large or national programs wasn’t just difficult; it was a manual, time-consuming, and often frustrating process. Implementers had to manage, upgrade, and monitor dozens of individual instances by hand, each with its own slightly different configuration. There were no shared tools for automation, no consistent patterns to follow, and very little built-in support for running OpenMRS at scale.

There was also no out-of-the-box support for clustering, multi-tenancy, replication, or high availability, features that are essential for building resilient health systems. To make things more complex, many implementers needed to achieve these goals in on-premise environments, not just on cloud platforms like AWS or Microsoft Azure. That meant we couldn’t rely on cloud-native services alone; we had to build solutions that could flex across very different infrastructure realities.

Even for teams exploring Kubernetes, getting started was daunting. YAML-heavy setups required deep DevOps knowledge and a lot of trial and error. For most implementers, it was too much overhead for too little reliability.

The Solution

A modern, cloud-native, developer-friendly deployment model!

To meet the growing demands of implementers and enable scalable, resilient deployments, OpenMRS is embracing a modern, flexible infrastructure approach. Here’s how the new model works:

a. Cloud-Agnostic Kubernetes Support

OpenMRS now supports Kubernetes deployments across all major cloud providers, including AWS, Azure, and Google Cloud, as well as on private or on-premise clusters. Kubernetes provides container orchestration that automatically handles the following:

  • High availability through node replication
  • Fault tolerance and self-healing
  • Resource-efficient scaling

Because it’s vendor-agnostic, implementers have the freedom to deploy where it makes the most sense, whether that be in the public cloud or on their own servers.
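
As a small illustration of what that orchestration buys you, the kubectl commands below scale a backend deployment out and let Kubernetes replace a failed pod automatically. The deployment name, namespace, and pod placeholder are illustrative, not the actual chart's names.

    # Scale the backend out to three replicas (deployment and namespace names are placeholders)
    kubectl scale deployment/openmrs-backend --replicas=3 -n openmrs

    # Simulate a failure by deleting one pod; the ReplicaSet recreates it automatically
    kubectl delete pod <one-of-the-backend-pods> -n openmrs

    # Watch the replacement pod come back up to restore the desired replica count
    kubectl get pods -n openmrs -w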

b. Helm Charts and Terraform Automation

To reduce the complexity of setup, the OpenMRS team provides Helm charts and Terraform scripts that automate the provisioning and configuration of OpenMRS environments.

  • Helm charts eliminate the need to manually write or manage YAML files
  • Terraform handles infrastructure setup, whether in the cloud or on-prem
  • Together, they enable production-grade O3 deployments with fewer steps and more reliability

This approach makes it easier for teams to adopt standard deployment patterns across regions or programs.
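
A typical workflow with these tools, sketched below under assumed paths, starts with Terraform provisioning the cluster and then hands the application install over to Helm (covered in the next section).

    # Provision the Kubernetes cluster and supporting infrastructure with Terraform
    # (the directory path is a placeholder for wherever the Terraform scripts live)
    cd infrastructure/
    terraform init
    terraform plan -out=tfplan
    terraform apply tfplan

    # With the cluster in place, the Helm chart takes over the OpenMRS deployment itself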

c. One-Command Deployment (via Rancher or CLI)

With the maintained Helm chart, users can now deploy a complete OpenMRS 3 cluster using a single helm install command. For those who prefer a graphical interface, Rancher provides an easy, no-code way to launch and manage clusters visually.

Database options include:

  • MariaDB with one primary node and a read replica
  • Galera for full clustering support and high availability

Users can also customize ingress settings and storage classes to fit their infrastructure needs.
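
Put together, an install might look like the sketch below. The chart reference and value keys are illustrative only; the chart's own values.yaml is the authoritative list of options.

    # Deploy OpenMRS 3 with a clustered Galera database, custom ingress, and a chosen storage class
    # (chart repository, chart name, and value keys are illustrative; check the chart's values.yaml)
    helm repo add openmrs https://example.org/openmrs-helm-charts
    helm install openmrs openmrs/openmrs \
      --namespace openmrs --create-namespace \
      --set database.engine=galera \
      --set ingress.enabled=true \
      --set ingress.hostname=emr.example.org \
      --set persistence.storageClass=longhorn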

d. Multi-Tenancy Foundations

The new architecture supports running multiple OpenMRS instances in a single Kubernetes cluster, each with:

  • A separate database schema
  • Shared storage and compute resources
  • A dedicated backend and frontend per tenant

This sets the stage for future tenant-level upgrades, isolation, and scaling, making it possible to support many clinics or regions from one unified deployment.
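
In practice this could look like installing the same chart once per tenant, with each release in its own namespace and pointed at its own schema while sharing the cluster's nodes and storage. The release names and value keys below are hypothetical.

    # Tenant A: its own namespace, release, and database schema (names and keys are hypothetical)
    helm install clinic-a openmrs/openmrs -n tenant-a --create-namespace \
      --set database.schema=openmrs_clinic_a

    # Tenant B: same chart and cluster, but an isolated schema and its own backend/frontend pods
    helm install clinic-b openmrs/openmrs -n tenant-b --create-namespace \
      --set database.schema=openmrs_clinic_b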

e. Built-in Monitoring, Recovery, and Upgrades

The system includes several features that improve reliability and ease of management:

  • Health checks and auto-restarts keep services running smoothly
  • A future integration with Grafana will allow for log aggregation and metrics dashboards
  • The HTTP gateway will enable zero-downtime upgrades by routing traffic to maintenance pages during system updates

These improvements reduce the operational burden and make OpenMRS more resilient in real-world use.
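
For the curious, the health checks are standard Kubernetes liveness and readiness probes, so you can inspect them with ordinary kubectl commands. The namespace, label, and deployment name below are placeholders.

    # Inspect the probes configured on the running backend pods (label and namespace are placeholders)
    kubectl describe pod -n openmrs -l app=openmrs-backend | grep -E 'Liveness|Readiness'

    # Failed liveness checks show up as automatic container restarts
    kubectl get pods -n openmrs

    # During an upgrade, watch the rolling update proceed while healthy replicas keep serving traffic
    kubectl rollout status deployment/openmrs-backend -n openmrs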

The Impact

The shift to a cloud-native, cluster-ready deployment model has already begun to reshape how OpenMRS is implemented and managed. For many implementers, what used to be a high-effort, low-confidence process is now faster, more reliable, and easier to scale.

  • Faster, easier deployment: With standardized Helm charts and Terraform scripts, implementers can spin up the full OpenMRS 3 stack in minutes, dramatically reducing setup time and early-stage friction.
  • Lower technical barrier: Tools like Rancher make it possible to deploy and manage Kubernetes clusters visually, removing the need for deep DevOps expertise or complex YAML management.
  • Greater reliability: Support for clustered databases, pod replicas, and auto-recovery mechanisms means OpenMRS can now be deployed with high availability, reducing downtime during updates or system failures.
  • Scalable architecture: The new setup lays the groundwork for multi-site and multi-tenant rollouts, where national programs can manage many OpenMRS instances within a single, coordinated infrastructure.
  • Consistent deployments across regions: By using shared, cloud-agnostic patterns, implementers no longer need to reinvent the wheel. Whether running in AWS, Azure, GCP, or on-prem, they can rely on the same foundation and deployment logic.

These improvements mark a meaningful leap toward making OpenMRS more scalable, dependable, and sustainable for partners around the world, no matter the size or setting of their health programs.

Looking Ahead

With many foundational changes already landing in Platform 2.8, the path ahead is all about deepening and extending OpenMRS’s support for scalable, cloud-ready deployments.

Phase 2: Better Observability and Extensibility

Recent work has added a pluggable StorageService, available as of OpenMRS Core 2.8, which allows implementers to store large files (such as MRI images) in scalable storage backends like S3, Longhorn, or MinIO. This enables distributed storage with built-in replication, high availability, and automated backups.
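
As one example of preparing such a backend, the MinIO client commands below create an S3-compatible bucket that the StorageService could be pointed at. The endpoint, credentials, and bucket name are placeholders, and the OpenMRS-side configuration is not shown here.

    # Create an S3-compatible bucket on MinIO for OpenMRS file storage
    # (endpoint, credentials, and bucket name are placeholders)
    mc alias set openmrs-minio https://minio.example.org ACCESS_KEY SECRET_KEY
    mc mb openmrs-minio/openmrs-storage

    # Enable object versioning so overwritten files remain recoverable
    mc version enable openmrs-minio/openmrs-storage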

In the area of observability, the team plans to integrate Grafana dashboards for unified log monitoring, metrics, and performance alerts across services. This will help implementers proactively manage performance and detect stability issues early.

Phase 3: Multi-Tenancy and Coordinated Upgrades

OpenMRS’s Kubernetes-based architecture already supports early patterns for multi-tenancy, where different implementations can run on shared infrastructure with isolated data schemas.

Phase 3 will build on this with tooling for:

  • Coordinated tenant-level upgrades
  • Schema isolation per tenant
  • Infrastructure that supports national-scale deployments with consistent reliability

Phase 4: Integration Middleware

A new area of exploration is support for asynchronous messaging and queuing between OpenMRS and external systems such as FHIR servers or a Master Patient Index (MPI). Plans are forming around using Apache Camel, either as standalone middleware or as an OpenMRS module, to:

  • Queue and route patient/person updates
  • Aggregate and translate data before sending it to external systems
  • Support retry logic and dead letter queues for improved reliability

This work aims to offer a pluggable, standards-based way to integrate OpenMRS into health information exchange environments, without sacrificing performance or consistency.
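
As a rough sketch of the idea rather than a committed design, a Camel route run with the Camel JBang CLI could consume queued patient updates and forward them to an external FHIR endpoint. The queue name, endpoint URL, and broker setup are placeholders, and retry and dead-letter handling are omitted here.

    # Write a minimal Camel route (YAML DSL); every name here is a placeholder
    cat > patient-updates-route.yaml <<'EOF'
    - from:
        uri: "activemq:queue:patient-updates"
        steps:
          - log: "Forwarding patient update: ${body}"
          - to: "https://fhir.example.org/fhir/Patient"
    EOF

    # Camel JBang resolves the required components and runs the route locally
    camel run patient-updates-route.yaml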

What’s Already Landed in Platform 2.8

These features are already complete and available in the 2.8-SNAPSHOT:

  • ElasticSearch/OpenSearch: A distributed replacement for Lucene search that improves speed and availability while offloading search from the API.
  • Infinispan Distributed Caching: Caching of DB entities, queries, and the service layer, boosting performance and supporting clustering.
  • StorageService: A pluggable abstraction for storing files across local or distributed backends (disk, S3, etc.).
  • Load Balancing Readiness: Supported for DB, UI, gateway, and ElasticSearch. Load-balanced OpenMRS API replicas are next.

OpenMRS is not only becoming easier to deploy and manage, but also more future-proof, modular, and scalable than ever before. The foundation is strong, and the community is already putting it to use. What comes next will only build on that momentum.

Get Involved

Whether you’re an implementer, engineer, or simply curious about what’s next, there are several ways to engage with the OpenMRS community as we continue improving support for cloud hosting, scaling, and clustering.

  • Try out Platform 2.8: The 2.8-SNAPSHOT is available now. It includes the latest features for cloud readiness, distributed caching, and storage. You can deploy it using Helm or Docker Compose.
  • Spin up a cluster: Use the Helm charts and Terraform scripts to try deploying OpenMRS in Kubernetes. We’d love your feedback on what works well or what’s still tricky.
  • Share your experience: Post in the Talk forum using the cloud tag to share your setup, questions, or ideas. Your insights help shape what we build next.
  • Join community calls: The Platform Team hosts weekly calls where contributors collaborate on everything from horizontal scaling to middleware. Join via the OpenMRS calendar.

Your feedback, questions, and use cases help drive progress. Whether you’re deploying OpenMRS for the first time or scaling to support thousands, your voice is essential in shaping how we build a stronger, more scalable platform, together.
