Operating Modes
1. Overview
Technically, WebSemaphore is a hybrid of a semaphore and a queue acting in tandem to enable safe operations on critical and/or limited resources. It is scalable and fully serverless by design, thereby avoiding many of the operational pitfalls of alternative solutions while staying lightweight and easily accessible.
WebSemaphore provides both synchronous and asynchronous modes of lock acquisition:
- Synchronous: In this mode the behavior is identical to that of a classic semaphore, minus the scalability issues.
- Asynchronous: In this mode the semaphore acts as a funnel, limiting the traffic rate without skipping or losing messages. This allows for virtually unlimited request volume while keeping the load on processing capacity at an acceptable level.
Where the asynchronous mode is applicable, WebSemaphore provides lossless amortization of traffic to comply with a parallel request/execution limit. The concept of amortization is illustrated in Fig 2.2 below, which depicts WebSemaphore's effect on traffic when used in asynchronous mode.
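To make the amortization idea concrete, here is a minimal sketch of the funnel behavior modeled by the charts below. It is an illustrative simplification rather than WebSemaphore code, and it assumes for simplicity that every granted task completes within one tick:

```typescript
// Simplified model of asynchronous-mode amortization (illustrative only;
// not the WebSemaphore implementation). Arrivals above `maxValue` are
// queued instead of being rejected and drain when capacity frees up.
function simulateFunnel(arrivalsPerTick: number[], maxValue: number) {
  let waiting = 0;                     // messages queued by the semaphore
  const processedPerTick: number[] = [];

  for (const arrivals of arrivalsPerTick) {
    waiting += arrivals;                            // everything is accepted
    const processed = Math.min(waiting, maxValue);  // at most maxValue run in parallel
    waiting -= processed;
    processedPerTick.push(processed);
  }
  return { processedPerTick, leftWaiting: waiting };
}

// A spike of 50 requests against a limit of 10 drains over five ticks
// with zero loss, at the cost of latency for the queued messages.
console.log(simulateFunnel([50, 0, 0, 0, 0], 10));
// -> { processedPerTick: [10, 10, 10, 10, 10], leftWaiting: 0 }
```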
In the next sections we demonstrate the flow for each of the primary modes of operation and the traffic pattern that emerges from it. Note that when the synchronous mode is used exclusively, the traffic pattern is essentially equivalent to calling the provider directly; the comparison therefore effectively demonstrates one of the key benefits of preferring WebSemaphore over direct resource access.
Notes on charts:
- For illustrative purposes, we assume that the maximum resource count is constant and that the traffic exhibits periodic spikes with a Gaussian-distributed amplitude. In a real-life scenario: (1) the distribution may vary more drastically, and (2) the semaphore's maxValue may be updated in real time, e.g. when backend instances are added at peaks or removed at lows.
- For better comparability of the charts, both simulations were performed on the same input dataset, i.e. the distribution of inbound requests over time is identical in both cases.
2. Synchronous lock acquisition
This is the most basic semaphore behavior. Upon sending a lock request, the requesting system immediately gets a grant or a rejection. This means the client must handle the case where the operation cannot be completed; however, for some use cases a consistent, immediate grant-or-reject decision is exactly the goal.
Note that rejections can occur only in this pattern; the asynchronous pattern accepts all messages and eventually processes them.
Applicable when:
- denial of service (i.e. immediate rejection of excess requests) is acceptable
- the client has its own retry / failover mechanism implemented
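The snippet below sketches what a client in this mode might look like. It is a minimal, hypothetical example: the endpoint URLs, request fields and response shape are assumptions for illustration only and do not reflect the actual WebSemaphore API; refer to the API reference for real signatures.

```typescript
// Hypothetical client-side handling of a synchronous acquisition attempt.
// Endpoints, payload fields and response shape are assumptions, not the real API.
async function handleJobSynchronously(jobId: string): Promise<void> {
  // Ask for a lock; in synchronous mode the response is an immediate grant or rejection.
  const res = await fetch("https://example.com/websemaphore/acquire?mode=sync", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ semaphoreId: "my-semaphore", jobId }),
  });
  const { granted, lockId } = await res.json();

  if (!granted) {
    // Rejections are possible only in this mode; the caller must either fail
    // the operation or run its own retry / failover logic.
    throw new Error(`Capacity exhausted, job ${jobId} rejected`);
  }

  try {
    await processJob(jobId); // safe: we are within the semaphore's maxValue
  } finally {
    // Release the lock so another request can proceed.
    await fetch("https://example.com/websemaphore/release", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ semaphoreId: "my-semaphore", lockId }),
    });
  }
}

declare function processJob(jobId: string): Promise<void>; // application-specific work
```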
Fig 1.1 Sync request flow
The two diagrams below present the traffic pattern emerging from this flow. The consumer is a plain, stateless client with no notion of queues or retries, so only requests that arrive while the lock count is below the semaphore's maxValue are accepted for processing; all others are rejected. If the client implemented a retry mechanism, the requests might (depending on its quality) have been accepted eventually, but it would take extra work to keep track of each request and even more effort to process them in the order they initially arrived.
Legend: lockValue, waiting, request rate per second
Fig 1.2 Sync request simulation - semaphore performance over time
Legend: processed, rejected
Fig 1.3 Sync request simulation - totals over time. The final result is 42.9% rejections. Note that this figure is specific to the simulation parameters and execution environment.
3. Asynchronous lock acquisition
In this pattern, a system requests a resource lock and suspends processing until the lock is granted. The suspension typically means terminating the handler that issued the request.
When the resource becomes available, WebSemaphore invokes the callback associated with the requested semaphore, at which point processing can resume with high confidence that usage of the requested resource is within its allocation limits.
In this pattern the semaphore still delivers on its promise, yet requests are never rejected. The trade-off is that access to the critical resource is granted eventually rather than immediately.
Applicable when:
- processing occurs in a downstream system or involves a local resource that has limited capacity.
- all accepted requests must eventually be processed
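The sketch below illustrates how this flow might be wired on the client side: one handler only submits the lock request and terminates, while a separate callback handler performs the work once the lock is granted. As in the synchronous example, the endpoints, payload fields and callback shape are hypothetical and serve only to illustrate the pattern.

```typescript
// Hypothetical asynchronous flow. The handler that receives the original request
// only enqueues a lock request and terminates; processing resumes in a separate
// callback handler once WebSemaphore grants the lock. All names are illustrative.

// Step 1: request the lock and suspend (typically by ending this handler's execution).
async function enqueueJob(jobId: string): Promise<void> {
  await fetch("https://example.com/websemaphore/acquire?mode=async", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      semaphoreId: "my-semaphore",
      payload: { jobId }, // correlation data echoed back when the lock is granted
    }),
  });
  // No grant/reject decision here: the message is accepted and will be
  // delivered to the callback when capacity is available.
}

// Step 2: callback invoked when the lock is granted
// (e.g. a webhook or queue-triggered serverless function).
async function onLockGranted(event: { lockId: string; payload: { jobId: string } }) {
  try {
    await processAsyncJob(event.payload.jobId); // within the configured maxValue
  } finally {
    // Releasing lets the semaphore hand the capacity to the next waiting message.
    await fetch("https://example.com/websemaphore/release", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ semaphoreId: "my-semaphore", lockId: event.lockId }),
    });
  }
}

declare function processAsyncJob(jobId: string): Promise<void>; // application-specific work
```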
Fig 2.1 Async request flow
Alt a and b show optional alternative integration flows. See the next section for more details.
Legend: waiting, lockValue, request rate per second
Fig 2.2 Async request simulation - semaphore performance over time.
To note:
- lockValue (red) never exceeds 10, the maximum configured for this semaphore.
- The peaks of waiting messages (yellow) closely follow the spikes in traffic rate (purple) and are compensated by keeping lockValue high long after the peak is over. This exemplifies both the traffic amortization and the eventual delivery aspects of WebSemaphore.
- The timeline displays a very dense pattern of relatively frequent spikes, and the task performed while the lock is held takes between 1.5 and 2.5 seconds. This allows us to demonstrate the behavior with a 4-minute simulation. Real use cases may, for example, have daily periodicity, a higher maximum lockValue and tasks ranging from a few seconds to tens of minutes or even hours; however, just as in this small-scale model, low-traffic periods compensate for high-traffic periods, while near-real-time processing is provided whenever possible.
Legend: processed, lockValue
Fig 2.3 Async request simulation - totals over time. Note there is zero loss in this scenario.
The next section provides more information on asynchronous mode usage.
Check out our interactive simulation where you can model your own asynchronous solution with charts like these and compare its performance to the synchronous approach.