Description
Is your feature request related to a problem? Please describe.
Google Cloud Logging enforces a hard limit of 256 KB on a single LogEntry. When Fluent Bit's out_stackdriver plugin encounters a log entry exceeding this limit, the entry is rejected or truncated by the Stackdriver API, resulting in lost data. Currently, the plugin has no built-in way to handle oversized logs gracefully by splitting them into smaller, compliant chunks that preserve the original log's continuity.
Describe the solution you'd like
Add native support to the out_stackdriver plugin to automatically detect oversized log entries (approaching the 256KB limit) and split them into multiple LogEntry objects according to the Cloud Logging LogSplit API documentation.
Specifically, for a large log entry, the plugin should:
- Divide the large payload (e.g., in `jsonPayload` or `textPayload`) into smaller chunks that fit within the size limit.
- Generate a unique `split.uid` for the group of split entries.
- Assign a sequential `split.index` (starting at 0) to each chunk.
- Set the `split.totalSplits` field to the total number of chunks.
Implementing this will allow Cloud Logging (and the Logs Explorer) to properly group and reassemble the original oversized log natively.
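The four steps above can be sketched roughly as follows. This is a minimal Python illustration of the splitting logic, not the plugin itself (out_stackdriver is written in C); the `split_log_entry` helper and the 200 KB chunk size are assumptions for the example, and the `split` field names follow the Cloud Logging v2 LogEntry/LogSplit schema.

```python
import math
import uuid

# Cloud Logging rejects a single LogEntry above 256 KB; we split below
# that with headroom for the entry's other fields (assumed margin).
CHUNK_BYTES = 200 * 1024

def split_log_entry(entry, chunk_bytes=CHUNK_BYTES):
    """Split an oversized textPayload into multiple LogEntry dicts
    carrying the LogSplit fields (uid, index, totalSplits).
    Hypothetical helper for illustration only."""
    data = entry.get("textPayload", "").encode("utf-8")
    if len(data) <= chunk_bytes:
        return [entry]  # already compliant, no split needed

    total = math.ceil(len(data) / chunk_bytes)
    uid = str(uuid.uuid4())  # shared by every chunk of this entry
    chunks = []
    for i in range(total):
        piece = data[i * chunk_bytes:(i + 1) * chunk_bytes]
        chunk = dict(entry)  # copy shared fields (timestamp, resource, ...)
        # Naive byte split may cut a multi-byte UTF-8 character at a
        # boundary; a real implementation would split on rune boundaries.
        chunk["textPayload"] = piece.decode("utf-8", errors="ignore")
        chunk["split"] = {"uid": uid, "index": i, "totalSplits": total}
        chunks.append(chunk)
    return chunks
```

With a shared `split.uid` and sequential `split.index` values, the Logs Explorer can group the chunks back into one logical entry.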
Describe alternatives you've considered
- Configuring `buffer_chunk_limit` or using generic splits or multiline filters across Fluent Bit. However, these are workarounds and don't natively align with Cloud Logging's built-in `LogSplit` semantics, making it difficult to use the Logs Explorer for automatic reassembly of large logs (e.g., huge stack traces or JSON payloads).
- Dropping or truncating logs via custom scripts, which results in data loss.
Additional context
Cloud Logging LogSplit API reference: https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#logsplit