
Why Streaming Integration Beats Synchronous Workflows

A comparison of synchronous workflows and Orchesty's asynchronous streaming model: throughput, persistence, parallelism, and reliability at scale.


When designing integrations, we often encounter two different worlds. The first is represented by traditional iPaaS platforms built on Workflows. The second is Orchesty, with its core concept of Asynchronous Streaming. While they might look similar on the surface, they represent two diametrically opposed approaches to data, performance, and reliability.

1. The Courier vs. The Pipeline: A Difference in Existence

A traditional workflow acts like a courier. When a trigger occurs (an event), the system "starts" a process instance, executes a series of steps, and once the message is delivered, the instance vanishes. If a thousand messages arrive, the system must start and stop that entire machinery a thousand times.

Orchesty acts as a pipeline. The topology is active infrastructure that is always ready. It doesn't care whether a single drop or a thousand gallons per second flows through it. As long as the pipeline has sufficient diameter (throughput), the data simply flows. This model eliminates the massive overhead of constantly spinning up and tearing down process instances.
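The difference can be sketched in a few lines. Below, a single long-lived worker consumes from a channel: it exists before any message arrives and handles any number of messages without per-message startup or teardown. All names here are illustrative, not Orchesty API.

```python
import queue
import threading

# Minimal sketch of the "pipeline" model: one long-lived worker consumes
# from a channel, so there is no per-message startup or teardown.

channel = queue.Queue()
results = []

def pipeline_worker():
    # Always-on infrastructure: alive before, during, and after any traffic.
    while True:
        msg = channel.get()
        if msg is None:                  # sentinel: shut the pipeline down
            return
        results.append(msg.upper())      # stand-in for real processing

worker = threading.Thread(target=pipeline_worker)
worker.start()

for m in ["a", "b", "c"]:
    channel.put(m)                       # messages just flow into the pipe
channel.put(None)
worker.join()
```

A courier-style system would instead construct and destroy the equivalent of `pipeline_worker` once per message; the pipeline pays that cost exactly once.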

2. Throughput and the "Parallel Universe"

In synchronous workflows, messages often wait for each other. The number of concurrent instances is usually limited by the platform's core performance, creating bottlenecks during traffic spikes.

In Orchesty's asynchronous stream, nothing waits for anything:

  • Massive Parallelism: Every node in the topology works independently. While the first node is processing the 100th message, the second node might be transforming the 50th, and the third node could be saving the 10th.
  • Performance without Barriers: Asynchronous message channels let each step process work as soon as it has capacity. Many processes flow through the system simultaneously without overwhelming the control layer.
Each step in the pipeline processes work as soon as it has capacity and scales independently; a bottleneck at one step does not block the others.
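This stage-by-stage independence can be sketched with threads connected by asynchronous channels. The stage names and transformations below are hypothetical stand-ins; the point is that the producer and each stage run concurrently, and a message moves downstream the moment the next stage has capacity.

```python
import queue
import threading

# Sketch (all names hypothetical): independent stages connected by
# asynchronous channels. Each stage is a long-lived worker; a slow stage
# never prevents earlier stages from accepting new work.

SENTINEL = object()          # marks the end of the stream
q1, q2 = queue.Queue(), queue.Queue()
saved = []                   # stand-in for the final "save" step's storage

def stage(inbox, handle, emit):
    # Read a message, process it, pass the result downstream.
    while True:
        msg = inbox.get()
        if msg is SENTINEL:
            emit(SENTINEL)   # propagate shutdown to the next stage
            return
        emit(handle(msg))

def sink(msg):
    if msg is not SENTINEL:
        saved.append(msg)

transform = threading.Thread(target=stage, args=(q1, lambda m: m * 2, q2.put))
save = threading.Thread(target=stage, args=(q2, lambda m: m + 1, sink))
transform.start()
save.start()

for m in [1, 2, 3, 4, 5]:
    q1.put(m)                # the producer never waits on downstream stages
q1.put(SENTINEL)
transform.join()
save.join()
```

While `transform` is doubling the fifth message, `save` may still be persisting the first: each node works at its own pace, coordinated only by the channels between them.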

3. Native Persistence: Stateful by Design

The biggest weakness of synchronous workflows is their fragility. If a process fails in the middle of a long execution, its state is often lost unless the developer has explicitly programmed complex "save-points" into a database, a workaround that drastically slows down execution.

Orchesty streams are naturally persistent. Since every connection between nodes is an asynchronous channel, the state of every message is safely stored at every moment.

  • Atomic Failure: If one message fails (e.g., due to corrupt data), that specific message stops at the point of failure. The other thousands of messages continue to flow completely unaffected.
  • Reliability without Penalty: In synchronous systems, reliability is an "add-on" that slows down execution. In Orchesty, persistence is a fundamental property of the architecture that actually enables high performance.
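Atomic failure can be illustrated with a stream containing one corrupt message. In this sketch (queue names are illustrative, not Orchesty API), the bad message is routed aside at its point of failure while every other message flows through untouched:

```python
import json
import queue

# Sketch of atomic, per-message failure: one corrupt message is set aside
# while the rest of the stream continues unaffected.

inbox = queue.Queue()
processed, failed = [], []

for raw in ['{"id": 1}', "corrupt!!", '{"id": 3}']:
    inbox.put(raw)

while not inbox.empty():
    msg = inbox.get()
    try:
        processed.append(json.loads(msg))   # normal path
    except json.JSONDecodeError:
        failed.append(msg)                  # this one message stops here, alone
```

The failed message remains available for inspection and replay; nothing about its failure touches the other messages in flight.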

4. Escaping the "Timeout" Trap

Synchronous workflows are almost always bound by time. If a target API responds slowly, the workflow instance eventually hits a timeout (e.g., 60 or 300 seconds) and kills the connection to free up resources. Data is lost, and the state becomes inconsistent.

The asynchronous stream in Orchesty has no concept of an "expiry date." A process can "live" in the pipeline for as long as necessary. If a target system requires slow ingestion (throttling), the data simply waits in the channel and processes at the rate the target allows. Nothing expires, nothing is lost.
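Throttled ingestion can be sketched as a durable channel drained at whatever rate the target accepts. The rate and tick loop below are illustrative, not Orchesty API; the point is that undelivered messages simply wait, with no deadline after which they expire:

```python
import collections

# Sketch of slow ingestion (throttling): messages wait durably in the
# channel and are handed over only at the rate the target allows.

channel = collections.deque(range(10))   # 10 messages waiting in the channel
TARGET_RATE = 3                          # target accepts 3 messages per tick
delivered = []
ticks = 0

while channel:
    # Drain only what the slow target can take this tick; the rest waits.
    batch = [channel.popleft() for _ in range(min(TARGET_RATE, len(channel)))]
    delivered.extend(batch)
    ticks += 1
```

Ten messages against a capacity of three per tick take four ticks, and all ten arrive; a timeout-bound workflow would instead have to hold a live connection open for the whole duration or abort.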

5. Summary: Choosing the Right Paradigm

Synchronous workflows can be efficient for simple, immediate events ("if A happens, send email B"). As soon as data volume grows or the business logic becomes critical, the synchronous model hits a wall: it becomes expensive to run, difficult to debug, and unreliable during peak loads.

Asynchronous streaming is built for the opposite case: integrations that must keep flowing under volume, must preserve state through failure, and must remain operable as part of the business infrastructure. When that is the requirement, the streaming pipeline is the right primitive.
