Simple but fast Redis-backed distributed rate limiter. Lets you cap the number of distributed operations performed within a given time interval.
Features:

- extremely simple
- free from race conditions thanks to Lua scripting
- fast
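The race-free behaviour comes from performing the whole read-modify-write inside Redis as a single Lua script. Rapidity's actual script is not reproduced here; purely as an illustration (the key and argument layout are assumptions), an atomic fixed-window "obtain" could look like this:

```ruby
# Illustration only: an atomic fixed-window counter in Lua.
# KEYS[1] = counter key, ARGV[1] = requested count,
# ARGV[2] = threshold, ARGV[3] = interval in seconds.
FIXED_WINDOW_LUA = <<~LUA
  local requested = tonumber(ARGV[1])
  local threshold = tonumber(ARGV[2])
  local current = redis.call('INCRBY', KEYS[1], requested)
  if current == requested then
    -- first increment in this window: start the interval timer
    redis.call('EXPIRE', KEYS[1], ARGV[3])
  end
  if current > threshold then
    -- grant only what still fits under the threshold, refund the rest
    local granted = math.max(0, threshold - (current - requested))
    redis.call('DECRBY', KEYS[1], requested - granted)
    return granted
  end
  return requested
LUA

# Against a live Redis it would run as a single atomic EVAL, e.g.:
#   granted = redis.eval(FIXED_WINDOW_LUA, keys: ['requests'], argv: [3, 10, 5])
```

Because the increment, the threshold check, and the refund all happen in one script, no other client can interleave between them.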
Rapidity has two variants:

- simple: `Rapidity::Limiter` handles a single distributed counter
- complex: `Rapidity::Composer` handles multiple counters at once
## Rapidity::Limiter

```ruby
require 'redis'
require 'connection_pool'
require 'rapidity'

pool = ConnectionPool.new(size: 10) do
  Redis.new(url: ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379'))
end

# allow no more than 10 requests within 5 seconds
limiter = Rapidity::Limiter.new(pool, name: 'requests', threshold: 10, interval: 5)

loop do
  # try to obtain 3 requests at once
  # (Integer#times returns its receiver, so quota is the obtained count)
  quota = limiter.obtain(3).times do
    make_request
  end

  if quota == 0
    # no more requests allowed within interval
    sleep 1
  end
end
```

## Rapidity::Composer

```ruby
pool = ConnectionPool.new(size: 10) do
  Redis.new(url: ENV.fetch('REDIS_URL', 'redis://127.0.0.1:6379'))
end

LIMITS = [
  { interval: 1,     threshold: 2 },      # no more than 2 requests per second
  { interval: 60,    threshold: 200 },    # no more than 200 requests per minute
  { interval: 86400, threshold: 10_000 }  # no more than 10k requests per day
]

limiter = Rapidity::Composer.new(pool, name: 'requests', limits: LIMITS)

loop do
  # try to obtain 3 requests at once
  quota = limiter.obtain(3).times do
    make_request
  end

  if quota == 0
    # no more requests allowed within interval
    puts limiter.remains # inspect current limits
    sleep 1
  end
end
```

## Shared limits

If your message producer and message sender are independent services, and you want the sender to be agnostic of the business rules for rate limiting, use the classes in the Share module. The producer is responsible for initializing and configuring the rate limits (e.g., token bucket) with the correct business parameters in Redis. The sender then only consumes these pre-defined limits without knowing the underlying rules.
```mermaid
flowchart LR
  G(producer)
  B[Redis]
  A(["message broker"])
  T(sender)
  E["external system with request limiting"]
  G-- init limit -->B
  B-- acquire limit -->T
  G-- "message [limit1, limit2]" -->A
  A --> T
  T-- limited request -->E
```
The producer ONLY creates limits and sends messages to the sender. The sender actually uses the limits to achieve overall rate limiting.
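The payload format is up to your broker; as a sketch (the field names here are assumptions, not something Rapidity prescribes), a producer could tag each message with the fully-qualified limit names the sender must consume:

```ruby
require 'json'

# Hypothetical payload: the producer attaches limit names to the work item.
payload = {
  'body'   => { 'url' => 'https://example.com/api' },
  'limits' => ['api_v2:day_limit', 'api_v2:hour_limit']
}.to_json

# The sender decodes the message and learns which limits to acquire
# without knowing the thresholds behind them.
message = JSON.parse(payload)
message['limits'] # => ["api_v2:day_limit", "api_v2:hour_limit"]
```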
Beyond basic rate limiting, the Share module offers optional queue management capabilities that enable sophisticated Feedback-Driven Flow Control. This feature allows systems to handle temporary load spikes more gracefully while maintaining communication between producers and consumers.
In this scenario, the producer additionally checks the optional `max_queue` attribute on the limit to decide whether it makes sense to send requests to the sender or whether it is already loaded with previous requests.
The producer MUST obtain the `max_queue` semaphore on the limits, and the sender MUST release the `max_queue` semaphore after the actual request is sent.
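The obtain/release contract can be sketched with a plain in-memory counter standing in for the semaphore (in Rapidity the counter lives in Redis so both services see it; this toy class is not the gem's API):

```ruby
# Toy stand-in for the max_queue semaphore shared between services.
class QueueSemaphore
  def initialize(max_queue)
    @max_queue = max_queue
    @in_flight = 0
  end

  # Producer side: MUST obtain a slot before enqueueing a message.
  def obtain
    return false if @in_flight >= @max_queue
    @in_flight += 1
    true
  end

  # Sender side: MUST release the slot after the actual request is sent.
  def release
    @in_flight -= 1 if @in_flight > 0
  end
end

sem = QueueSemaphore.new(2)
sem.obtain  # => true, producer enqueues the first message
sem.obtain  # => true, second message
sem.obtain  # => false, sender is backed up: stop producing
sem.release # sender finished one request
sem.obtain  # => true again
```

If the sender forgot to release, the producer would eventually see a permanently "full" queue, which is exactly why both MUSTs above are paired.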
- Initializing limits and optional queues (producer side): the message producer initializes rate limits with specific business rules. Queues can be added for handling traffic spikes.

```ruby
@pool = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }
@producer = Rapidity::Share::Producer.new(@pool)

api_day_limit  = Limit.new("day_limit",  max_tokens: 1000, period: 86400, namespace: 'api_v2')
api_hour_limit = Limit.new("hour_limit", max_tokens: 100,  period: 3600, max_queue: 100, namespace: 'api_v2')

@producer.init(api_day_limit)
@producer.init(api_hour_limit)
```

- Each message is tagged with the limits it should consume when processed.
- Messages flow through your message broker to the sender service.
- The message sender attempts to acquire tokens before sending. If they are unavailable, it waits according to the token bucket algorithm.

```ruby
@pool = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }
@sender = Rapidity::Share::Producer.new(@pool)

# limit names are usually taken from the incoming message
@sender.acquire(['api_v2:day_limit', 'api_v2:hour_limit'], tokens: 1)
```

- For queue-backed limits, senders can release tokens back to the queue to signal `max_queue` availability.
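For reference, the token bucket algorithm the sender waits on can be sketched without Redis. This is an illustration of the refill-and-wait rule only, not Rapidity's implementation:

```ruby
# Minimal token bucket: `capacity` tokens, refilled at `refill_rate` tokens/sec.
class TokenBucket
  def initialize(capacity:, refill_rate:, clock: -> { Time.now.to_f })
    @capacity = capacity
    @refill_rate = refill_rate.to_f
    @clock = clock
    @tokens = capacity.to_f
    @updated_at = clock.call
  end

  # Try to take `n` tokens; returns true on success.
  def acquire(n = 1)
    refill
    return false if @tokens < n
    @tokens -= n
    true
  end

  # Seconds a caller would have to wait until `n` tokens are available.
  def wait_time(n = 1)
    refill
    missing = n - @tokens
    missing > 0 ? missing / @refill_rate : 0.0
  end

  private

  def refill
    now = @clock.call
    @tokens = [@capacity, @tokens + (now - @updated_at) * @refill_rate].min
    @updated_at = now
  end
end

# Deterministic clock for the example: 5 tokens, refilled at 1 token/sec.
t = 0.0
bucket = TokenBucket.new(capacity: 5, refill_rate: 1, clock: -> { t })
bucket.acquire(5)   # => true, bucket drained
bucket.acquire(1)   # => false, empty
bucket.wait_time(1) # => 1.0 second until the next token
t += 2.0            # two seconds pass
bucket.acquire(1)   # => true
```

The `wait_time` value is what lets a sender sleep for exactly as long as needed instead of busy-polling the limit.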
## Installation

It's a gem:

```shell
gem install rapidity
```

There's also the wonders of the Gemfile:

```ruby
gem 'rapidity'
```

## Credits

- WeTransfer/prorate for Lua examples