Deployment Flexibility: Cloud, Private Cloud, or On-Premise
Choose the deployment model that fits your security, compliance, and operational requirements without changing how your topologies are built.
Integration platforms often force you to make a tough choice up front: take the convenience of a SaaS product and accept that your data leaves your perimeter, or self-host an open-source tool and accept that you carry the full operational burden alone. Orchesty is built around the idea that the deployment model is a runtime decision, not an architectural one. The same topologies, the same SDK, and the same APIs run unchanged across every option.
1. Why Deployment Flexibility Matters #
Integrations typically sit at the most sensitive seam of an organisation, between core systems (ERP, CRM, billing, warehouse) and external partners. The "right" place to run them depends on factors that have nothing to do with how the topology is designed:
- Regulatory and compliance constraints (GDPR, HIPAA, SOC 2, PCI DSS, sector-specific rules).
- Data residency requirements that pin storage and processing to a specific country or region.
- Network topology: whether the systems being integrated are public, in a private VPC, or only reachable from inside a corporate network.
- Operational maturity: some teams want a managed service from day one, others have a strong platform team that prefers to own the stack.
Orchesty separates what you build from where it runs, so any of these constraints can be satisfied without rewriting your integrations.
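The separation can be sketched in a few lines. This is an illustrative model only; `Topology`, `DeploymentTarget`, and `plan` are hypothetical names for the sake of the sketch, not Orchesty's actual SDK types.

```python
from dataclasses import dataclass, asdict
import json

# Illustrative sketch only: none of these names are Orchesty's real SDK types.
# The point is the shape: the exported definition carries no deployment
# information, so the same spec is valid for any target.

@dataclass(frozen=True)
class Topology:
    """What you build: nodes and the connections between them."""
    name: str
    nodes: tuple
    edges: tuple  # (source, target) pairs

    def export(self) -> str:
        # The exported definition says nothing about where it will run.
        return json.dumps(asdict(self), sort_keys=True)

@dataclass(frozen=True)
class DeploymentTarget:
    """Where it runs: chosen at deploy time, outside the topology."""
    kind: str    # e.g. "shared-cloud", "dedicated", "on-premise"
    region: str

def plan(topology: Topology, target: DeploymentTarget) -> dict:
    """A deployment plan wraps the unchanged spec with a chosen target."""
    return {"spec": json.loads(topology.export()), "target": asdict(target)}

orders = Topology(
    name="order-sync",
    nodes=("crm-webhook", "transform", "erp-upsert"),
    edges=(("crm-webhook", "transform"), ("transform", "erp-upsert")),
)

cloud = plan(orders, DeploymentTarget("shared-cloud", "eu-west-1"))
onprem = plan(orders, DeploymentTarget("on-premise", "local-dc"))

# The topology spec is identical across targets; only the target differs.
assert cloud["spec"] == onprem["spec"]
assert cloud["target"] != onprem["target"]
```

Because the spec and the target never mix, satisfying a new constraint (a new region, a move on-premise) changes the target, never the topology.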
2. Shared Cloud: Fastest Time to Value #
The shared cloud option is the default for teams that want to start building immediately. Sign up, create a topology, and your integrations are running in minutes on Orchesty-managed infrastructure.
- Zero infrastructure work: no Kubernetes cluster, no message broker, no monitoring stack to operate.
- Automatic upgrades: platform improvements roll out continuously without your intervention.
- Multi-tenant isolation: each tenant runs in an isolated namespace with its own queues, topologies, and credentials.
- Predictable pricing: pay for the topology slots and compute units you actually use.
This is the right starting point for startups, internal automation projects, and teams that want to validate the platform before committing to a more involved deployment.
3. Dedicated Environment: Enterprise Isolation #
For organisations that need the convenience of a managed service but cannot share infrastructure with other tenants, Orchesty offers a dedicated environment. The platform is provisioned in a single-tenant cluster, in a region of your choice, and managed by Orchesty.
- Single-tenant by construction: your data, queues, and compute never share runtime resources with another customer.
- Region pinning: choose where data is stored and processed to satisfy data residency rules.
- VPC peering and private networking: connect the platform directly to your private networks without traffic touching the public internet.
- Custom SLAs: backup cadence, recovery targets (RTO/RPO), support response times, and uptime guarantees defined per contract.
- Compliance-aligned operations: audit trails, access controls, and key management aligned with SOC 2 / ISO 27001 expectations.
The developer experience is identical to the shared cloud (same UI, same APIs, same SDK); the only difference is that the underlying infrastructure is dedicated to you.
4. On-Premise: Full Data Sovereignty #
When data simply cannot leave your perimeter (public sector, finance, healthcare, defence, or any environment with strict on-prem mandates), Orchesty can be installed inside your own infrastructure.
- Runs on your Kubernetes cluster (or via Docker Compose for smaller installs).
- No outbound dependency on Orchesty cloud: the platform is fully functional in air-gapped or restricted-network environments.
- You own everything: encryption keys, secrets, message queues, persistent storage, and observability data never leave your stack.
- Bring your own components: backups, monitoring (Prometheus / Grafana / Datadog / New Relic), logging (ELK / Loki), and secret stores (Vault / KMS) plug into the existing platform.
- Source-available core under the Elastic License: you can read, audit, and patch the code that runs your integrations.
Upgrades follow your maintenance windows, deployments your change-management process. Orchesty provides release artefacts, upgrade playbooks, and (optionally) hands-on engineering support.
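For the Docker Compose path, a small single-host install could look roughly like the fragment below. Every service name, image, and port here is a placeholder assumption, not an official Orchesty artefact; consult the release documentation for the real names.

```yaml
# Hypothetical sketch of a minimal single-host install.
# Image names and ports are placeholders, not official artefacts.
services:
  engine:
    image: example/orchesty-engine:latest   # placeholder image name
    depends_on: [broker, db]
    ports:
      - "8080:8080"                         # UI / API on your host only
  broker:
    image: rabbitmq:3-management            # message queues stay inside your stack
    volumes:
      - broker-data:/var/lib/rabbitmq
  db:
    image: mongo:7                          # persistent storage on your volumes
    volumes:
      - db-data:/data/db
volumes:
  broker-data:
  db-data:
```

The design point the sketch illustrates: all state lives on volumes you own, and nothing requires an outbound connection to reach Orchesty cloud.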
5. Hybrid and Multi-Region Topologies #
Real-world architectures rarely fit neatly into a single box. Orchesty supports hybrid setups where different parts of the same logical integration landscape live in different deployment models:
- Sensitive workloads on-premise, less sensitive workloads in the cloud, connected over secure tunnels.
- Region-local processing to keep customer data in-region while a central control plane provides unified observability.
- Edge ingestion with downstream processing in a dedicated cluster.
Because every topology is self-contained and every connection between nodes is an asynchronous channel, splitting workloads across environments is a deployment concern, not a redesign.
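The claim above can be illustrated with a toy model: node code talks only to a channel interface, so moving a node to another environment means swapping the channel implementation, not rewriting the node. The `Channel` protocol and both implementations are hypothetical, not Orchesty APIs.

```python
import queue

# Illustrative model: a node depends only on an abstract channel, so the
# transport (in-process, broker in another cluster, secure tunnel) is a
# deployment detail. None of these names are Orchesty APIs.

class Channel:
    def send(self, msg): raise NotImplementedError
    def receive(self): raise NotImplementedError

class InProcessChannel(Channel):
    """Both endpoints in the same environment."""
    def __init__(self):
        self._q = queue.Queue()
    def send(self, msg): self._q.put(msg)
    def receive(self): return self._q.get()

class TunnelChannel(Channel):
    """Stand-in for a channel that crosses environments over a tunnel."""
    def __init__(self):
        self._q = queue.Queue()
        self.crossed = 0          # count messages that left the environment
    def send(self, msg):
        self.crossed += 1
        self._q.put(msg)
    def receive(self): return self._q.get()

def anonymise_node(inp: Channel, out: Channel):
    """Node logic is identical regardless of where its peers run."""
    record = inp.receive()
    out.send({**record, "email": "redacted"})

# Same node code, two deployment shapes:
for ch_cls in (InProcessChannel, TunnelChannel):
    inp, out = ch_cls(), ch_cls()
    inp.send({"id": 1, "email": "a@example.com"})
    anonymise_node(inp, out)
    assert out.receive() == {"id": 1, "email": "redacted"}
```

Because the node never learns which channel it was given, relocating it across the on-premise/cloud boundary is a wiring change at deploy time.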
6. The Same Platform, Wherever It Runs #
The most important guarantee Orchesty makes about deployment flexibility is what doesn't change between options:
- Same topology format: export from cloud, run on-prem, no transformation needed.
- Same SDK and APIs: components written for one deployment run on any other.
- Same observability model: metrics, traces, and logs follow the same conventions everywhere.
- Same migration path: start in the cloud, move to dedicated, move on-premise, or run all three side by side.
You make the deployment decision based on business and compliance needs, not because you're locked into a particular architecture. As your requirements evolve (a new market, a new regulation, a new acquisition), the same platform supports the new shape.
Where next #
- Hub overview: Five Core Principles
- Managed cloud option: Orchesty Platform
- Self-hosted with support: Enterprise Edition
- Self-hosted, no support: Community Edition
- Reference (deployment models): Documentation: Deployment Models