Proposal: Incremental Kafka Producer support in Keploy (design & PR breakdown) #3512
Replies: 3 comments 1 reply
I’ve already worked on a couple of PRs in Keploy and, in parallel, gained a better understanding of Kafka.
Hi @Syedowais312, thanks for sharing the proposal, and I really appreciate it. The high-level approach sounds right to me. However, we had decided to reserve full Kafka support for our paid enterprise version for now, and so the original issue was created in the wrong repo. If you are still interested in working on something similar, I'm happy to work with you on async support (starting with HTTP). The goal would be to allow creating test cases for async consumers.
Hi @slayerjain, I spent some time analyzing the HTTP/1.1 recording flow in Keploy to understand how request–response pairing currently works. From my understanding, the current implementation assumes a strict one-request → one-response model at the connection level. This effectively serializes the flow and breaks support for HTTP/1.1 pipelining, where multiple requests can be in flight on the same connection before any responses are received.

Based on this, I’ve been thinking about an alternative approach that stays aligned with HTTP/1.1 semantics and keeps the change scoped to the HTTP integration. The core idea is to decouple request and response I/O, explicitly track in-flight requests per connection, and pair responses using FIFO ordering (as guaranteed by the HTTP/1.1 specification). The approach would be guarded behind a feature flag.

Proposed solution (per TCP connection):
- Request handling loop:
- Response handling loop:
- Termination / cleanup:

This approach relies on the HTTP/1.1 guarantee that responses are sent in the same order as requests, avoids modifying user traffic, and keeps all state scoped to a single connection. Before starting with a PR, I wanted to share this proposal and get your thoughts on whether this direction makes sense for Keploy, or if there are any edge cases or constraints I might be missing. Please take a look whenever you’re free.
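To make the FIFO-pairing idea concrete, here is a minimal Go sketch of the per-connection state described above. This is not Keploy code; the names (`connState`, `OnRequest`, `OnResponse`) are illustrative, and requests/responses are stood in for by strings:

```go
// Sketch: pairing pipelined HTTP/1.1 requests with responses on one
// TCP connection, relying on the spec's guarantee that responses
// arrive in request order.
package main

import (
	"container/list"
	"fmt"
	"sync"
)

// connState tracks in-flight requests on a single connection.
type connState struct {
	mu       sync.Mutex
	inFlight *list.List // FIFO queue of requests awaiting a response
}

func newConnState() *connState {
	return &connState{inFlight: list.New()}
}

// OnRequest is called by the request-handling loop once a full
// request has been parsed off the wire.
func (c *connState) OnRequest(req string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.inFlight.PushBack(req)
}

// OnResponse pairs a response with the oldest in-flight request.
// ok is false if a response arrives with no pending request.
func (c *connState) OnResponse(resp string) (pair [2]string, ok bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	front := c.inFlight.Front()
	if front == nil {
		return pair, false
	}
	c.inFlight.Remove(front)
	return [2]string{front.Value.(string), resp}, true
}

func main() {
	cs := newConnState()
	// Two pipelined requests are sent before any response arrives.
	cs.OnRequest("GET /a")
	cs.OnRequest("GET /b")
	p1, _ := cs.OnResponse("200 a-body")
	p2, _ := cs.OnResponse("200 b-body")
	fmt.Println(p1[0], "->", p1[1])
	fmt.Println(p2[0], "->", p2[1])
}
```

All state lives in one `connState`, so the termination/cleanup step reduces to discarding the queue (and optionally flagging any still-unpaired requests) when the connection closes.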
This proposal is about implementing Kafka producer support in Keploy, as discussed in issue #3474.
I’m planning to work on adding Kafka support to Keploy and would like to align on the approach and scope before opening large PRs.
After understanding Keploy’s interception model and Kafka’s role in microservice architectures, I’d like to propose an incremental, producer-only approach, as suggested in the issue.
Motivation
In many microservices, producing Kafka events is a core side-effect (similar to HTTP calls or DB writes).
Currently, Keploy records HTTP and database interactions, but Kafka producer interactions are not captured, which makes tests incomplete for event-driven services.
The goal here is not to test Kafka itself, but to record and validate how a service interacts with Kafka.
Scope (Producer-only)
This proposal intentionally focuses only on Kafka producers.
Included
- ProduceRequest (api_key = 0)

Explicitly out of scope
- Consumer support would be a separate design and implementation effort.
High-level Approach
Keploy already sits between the service and the network in record mode.
Kafka support follows the same pattern as other dependencies:
Kafka is treated similarly to HTTP / SQL — as an external system interaction, not as a component to be tested.
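As a rough illustration of the interception pattern (and of what PR 1 below might do), here is a hedged Go sketch of a Kafka traffic-detection heuristic: peek at the first bytes on a connection and check whether they parse as a plausible Kafka request frame. The thresholds are assumptions, not Keploy's actual logic:

```go
// Sketch: heuristic check for Kafka traffic based on the standard
// request frame layout: int32 size, int16 api_key, int16 api_version.
package main

import (
	"encoding/binary"
	"fmt"
)

// looksLikeKafka reports whether the peeked bytes are consistent
// with the start of a Kafka request frame.
func looksLikeKafka(peek []byte) bool {
	if len(peek) < 8 {
		return false
	}
	size := int32(binary.BigEndian.Uint32(peek[0:4]))
	apiKey := int16(binary.BigEndian.Uint16(peek[4:6]))
	apiVersion := int16(binary.BigEndian.Uint16(peek[6:8]))
	// Heuristics (assumed bounds): a sane frame size, an api_key in
	// the known range, and a plausible api_version.
	return size > 0 && size < 100*1024*1024 &&
		apiKey >= 0 && apiKey <= 80 &&
		apiVersion >= 0 && apiVersion <= 20
}

func main() {
	kafka := []byte{0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x09}
	http := []byte("GET / HT")
	fmt.Println(looksLikeKafka(kafka), looksLikeKafka(http))
}
```

An HTTP request fails the check because its ASCII bytes decode to an implausibly large frame size, which is exactly the kind of cheap discrimination a traffic detector needs.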
Proposed PR Breakdown
To keep reviews small and focused, I plan to split the work into multiple PRs:
PR 1 – Kafka traffic detection
PR 2 – Kafka protocol header decoding
PR 3 – ProduceRequest decoding (Producer only)
PR 4 – Recording Kafka producer interactions
Each PR is independently reviewable and builds on the previous one.
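To give a sense of the scope of PR 2, here is a minimal Go sketch of decoding the fixed part of a Kafka request header, following the publicly documented wire format (int32 size, int16 api_key, int16 api_version, int32 correlation_id, then a length-prefixed client_id). The type and function names are illustrative only:

```go
// Sketch: decoding the Kafka request header from a raw frame.
package main

import (
	"encoding/binary"
	"fmt"
)

type requestHeader struct {
	APIKey        int16 // 0 = Produce
	APIVersion    int16
	CorrelationID int32
	ClientID      string
}

// decodeHeader parses the request header from the start of a frame,
// including the leading 4-byte size field.
func decodeHeader(frame []byte) (requestHeader, error) {
	var h requestHeader
	if len(frame) < 14 {
		return h, fmt.Errorf("frame too short: %d bytes", len(frame))
	}
	h.APIKey = int16(binary.BigEndian.Uint16(frame[4:6]))
	h.APIVersion = int16(binary.BigEndian.Uint16(frame[6:8]))
	h.CorrelationID = int32(binary.BigEndian.Uint32(frame[8:12]))
	// client_id: int16 length followed by that many bytes.
	n := int(int16(binary.BigEndian.Uint16(frame[12:14])))
	if n >= 0 && len(frame) >= 14+n {
		h.ClientID = string(frame[14 : 14+n])
	}
	return h, nil
}

func main() {
	// Hand-built header: api_key=0 (Produce), version=9,
	// correlation_id=42, client_id="svc".
	frame := []byte{
		0x00, 0x00, 0x00, 0x0d, // size
		0x00, 0x00, // api_key
		0x00, 0x09, // api_version
		0x00, 0x00, 0x00, 0x2a, // correlation_id
		0x00, 0x03, 's', 'v', 'c', // client_id
	}
	h, err := decodeHeader(frame)
	if err != nil {
		panic(err)
	}
	fmt.Printf("api_key=%d version=%d corr=%d client=%s\n",
		h.APIKey, h.APIVersion, h.CorrelationID, h.ClientID)
}
```

PR 3 would then branch on `APIKey == 0` and decode the ProduceRequest body, whose layout varies with `APIVersion`; that version handling is most of the real work there.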
Open Questions / Feedback
Thanks for your time, and I’m happy to adjust the approach based on feedback.