# Deployment Models
Orchesty runs the same code in every deployment. What differs is who operates the platform, whether infrastructure is shared or dedicated, and whether workers can be co-hosted with the platform. None of it changes what you build on top.
## Orchesty Cloud — Starter / Pro
We host the platform. Each customer gets their own Orchesty instance: own Admin UI, own topologies, own credentials, isolated runtime.
The underlying datastores — MongoDB and RabbitMQ — are shared across Starter / Pro instances and operated by Orchesty.
Workers run outside the platform, on your infrastructure (laptop, container, VPC, on-prem box). They connect to the platform over outbound HTTPS using the HTTP or Tunnel worker type — no inbound ports on your side.
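To make the external-worker setup concrete, here is a minimal sketch of an HTTP-type worker: a plain Node HTTP service that accepts a JSON payload from the platform, transforms it, and returns it. The `/process` path and the payload shape are illustrative assumptions, not the actual Orchesty SDK contract — in practice you would build this with the SDK.

```typescript
import * as http from "http";

// Pure transform, kept separate from the HTTP wiring so the
// business logic is testable in isolation. The field it adds
// (processedAt) is an illustrative example, not a platform field.
function enrich(payload: Record<string, unknown>): Record<string, unknown> {
  return { ...payload, processedAt: new Date().toISOString() };
}

// Plain HTTP server the platform would call; the worker itself
// opens no inbound firewall rules beyond what you expose here.
const server = http.createServer((req, res) => {
  if (req.method === "POST" && req.url === "/process") {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      res.setHeader("Content-Type", "application/json");
      res.end(JSON.stringify(enrich(JSON.parse(body))));
    });
  } else {
    res.statusCode = 404;
    res.end();
  }
});

// server.listen(8080); // start when deploying as a worker
```

Because the worker is just a service you run, the same image works on a laptop during development and in your VPC in production.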
Best for:
- Trying Orchesty quickly.
- Teams that want zero ops on the platform itself.
## Orchesty Cloud — Enterprise
We still host the platform, but each customer gets a dedicated environment. The whole stack is sized and configured to the customer's workload.
In addition to the standard external-worker setup, Enterprise allows workers to be co-hosted alongside the platform (still operated by Orchesty). Shared plans don't allow this.
Best for:
- Teams whose workload needs dedicated datastores or specific configuration.
- Teams that want their workers operated by Orchesty rather than running them themselves.
## Self-hosted
You run the platform inside your own infrastructure. We ship container images plus Helm / Compose definitions; operating the stack is your responsibility.
Two flavours, same shape:
- Enterprise self-hosted — same code as Orchesty Cloud, with a commercial agreement and support. Required for air-gapped environments, strict data residency, or integrating with on-prem-only systems where opening outbound traffic isn't acceptable.
- Community Edition — same self-hosted shape, without commercial support. Free to run; you maintain it yourself.
Best for:
- Strict data sovereignty / air-gapped environments (Enterprise).
- Teams that want to evaluate or run Orchesty without a commercial contract (Community).
## Workers in every model
A worker is a separate microservice that connects to the orchestration layer over outbound HTTPS. The connection model is identical across deployments — see Workers and SDK.
| Deployment | Where workers run by default | Co-hosted workers? |
|---|---|---|
| Cloud Starter / Pro | Customer's infrastructure | No |
| Cloud Enterprise | Customer's infrastructure | Yes (operated by Orchesty) |
| Self-hosted (Enterprise / Community) | Customer's infrastructure | Yes (operated by customer) |
This is why "where do my workers run?" is never a deployment-model decision: workers always connect over outbound HTTPS, and you can mix locations as needed (e.g. most workers in your VPC, one worker close to a legacy on-prem system).
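The mix-and-match setup above can be sketched as data. The field names here (`name`, `url`, `type`) are hypothetical, not the Orchesty worker schema; the point is that every worker, wherever it runs, is just an HTTPS endpoint of one of the two worker types.

```typescript
type WorkerType = "http" | "tunnel";

interface WorkerDescriptor {
  name: string;
  url: string; // HTTPS endpoint; only this differs per location
  type: WorkerType;
}

// Most workers in your VPC, one next to a legacy on-prem system.
// Hostnames are placeholders.
const workers: WorkerDescriptor[] = [
  { name: "crm-sync", url: "https://workers.vpc.example.com/crm", type: "http" },
  { name: "legacy-erp", url: "https://edge.example.com/erp", type: "tunnel" },
];

// The same shape check applies regardless of deployment model.
function hasConnectableShape(w: WorkerDescriptor): boolean {
  return w.url.startsWith("https://") && w.name.length > 0;
}
```

Moving a worker between locations changes only its URL, never its contract with the platform.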
## Picking one
| Constraint | Best fit |
|---|---|
| Move fast, no platform ops | Cloud Starter / Pro |
| Dedicated datastores, sized to workload, optional co-hosted workers | Cloud Enterprise |
| Data must never leave your infrastructure (air-gapped, on-prem only) | Enterprise self-hosted |
| Self-hosted without a support contract | Community Edition |
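The decision table above can be read as a small lookup helper. The constraint names below are paraphrased from the table for illustration; they are not an official Orchesty API.

```typescript
type Model =
  | "Cloud Starter / Pro"
  | "Cloud Enterprise"
  | "Enterprise self-hosted"
  | "Community Edition";

// Mirrors the table: data residency dominates, then support,
// then whether the workload needs dedicated resources.
function pickModel(c: {
  dataMustStayInHouse: boolean; // air-gapped / on-prem only
  commercialSupport: boolean;   // support contract wanted
  dedicatedResources: boolean;  // dedicated datastores, sized to workload
}): Model {
  if (c.dataMustStayInHouse) {
    return c.commercialSupport ? "Enterprise self-hosted" : "Community Edition";
  }
  return c.dedicatedResources ? "Cloud Enterprise" : "Cloud Starter / Pro";
}
```

For example, a team with no residency constraint and no need for dedicated datastores lands on Cloud Starter / Pro.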
## See also
- Architecture — how the orchestration and integration layers fit together.
- Workers and SDK — how workers connect across all deployment models.
- Reference: Platform / Environment variables — what to configure per deployment.