PS-9773 [8.4]: Fix audit_log_read() always returning null#5910

Open
jakub-nowakowski-percona wants to merge 1 commit into percona:8.4 from jakub-nowakowski-percona:PS-9773-8.4-audit_log_read-return-null

Conversation

@jakub-nowakowski-percona
Contributor

https://perconadev.atlassian.net/browse/PS-9773

Replace full-document JSON parsing when discovering each log file's first event timestamp with a streaming SAX handler that stops after the first "timestamp" string. That avoids loading large arrays into memory and keeps init() usable on valid-but-growing files.
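
As an illustration of the streaming approach (the actual plugin uses a SAX-style handler; the scan below is a simplified stand-in with assumed names, not the real code), the key point is that discovery stops as soon as the first "timestamp" value is found, without building a DOM for the whole array:

```cpp
#include <cassert>
#include <optional>
#include <string>
#include <string_view>

// Simplified stand-in for the streaming SAX handler: scan the raw JSON
// text for the first "timestamp" key and return its string value without
// materializing the whole (possibly huge, still-growing) event array.
std::optional<std::string> first_timestamp(std::string_view json) {
  static constexpr std::string_view key = "\"timestamp\"";
  const auto key_pos = json.find(key);
  if (key_pos == std::string_view::npos) return std::nullopt;
  const auto colon = json.find(':', key_pos + key.size());
  if (colon == std::string_view::npos) return std::nullopt;
  const auto open = json.find('"', colon + 1);
  if (open == std::string_view::npos) return std::nullopt;
  const auto close = json.find('"', open + 1);
  if (close == std::string_view::npos) return std::nullopt;
  return std::string(json.substr(open + 1, close - open - 1));
}
```

Because the scan bails out at the first match, a truncated or still-open file simply yields "no timestamp yet" rather than a parse failure.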

When reading events, treat EOF reached during incremental parsing as normal and stop the read loop instead of failing. The active audit log file is a top-level JSON array that is not closed until rotation, so treating EOF as a hard parse error broke audit_log_read() on the current file.
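
A minimal sketch of that reading policy (illustrative only, assuming events are top-level objects inside the unterminated array): complete events are extracted, and running out of input mid-event ends the loop cleanly instead of raising an error.

```cpp
#include <cassert>
#include <string>
#include <string_view>
#include <vector>

// Pull complete top-level objects out of a JSON array that has no closing
// ']' yet (the active log file is only closed on rotation). Hitting EOF
// is treated as "no more complete events", not as a parse error.
std::vector<std::string> read_available_events(std::string_view file_text) {
  std::vector<std::string> events;
  int depth = 0;
  std::size_t start = 0;
  bool in_string = false, escaped = false;
  for (std::size_t i = 0; i < file_text.size(); ++i) {
    const char c = file_text[i];
    if (in_string) {
      if (escaped) escaped = false;
      else if (c == '\\') escaped = true;
      else if (c == '"') in_string = false;
      continue;
    }
    if (c == '"') { in_string = true; continue; }
    if (c == '{') {
      if (depth++ == 0) start = i;
    } else if (c == '}' && --depth == 0) {
      events.emplace_back(file_text.substr(start, i - start + 1));
    }
  }
  return events;  // a trailing partial object (EOF mid-event) is dropped
}
```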

Refactor reader arguments: replace the close_read_sequence flag with an explicit Command enum (continue, read from bookmark, read from timestamp, close sequence). Seek-to-start now depends on the command: bookmark mode requires matching timestamp and id; timestamp-only mode starts at the first event whose timestamp is on or after the requested time. Add LogBookmark::operator== for the bookmark comparison.
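
The refactored interface could look roughly like this (a sketch: the enumerator names and field types are assumptions, only the four commands and the equality operator come from the description):

```cpp
#include <cassert>
#include <cstdint>

// The boolean close_read_sequence flag becomes an explicit command.
enum class Command {
  kContinue,           // keep reading from the current position
  kReadFromBookmark,   // seek to the event matching timestamp AND id
  kReadFromTimestamp,  // seek to first event with timestamp >= requested
  kCloseSequence       // finish the read sequence
};

// LogBookmark gains operator== so bookmark-based seeking can require an
// exact (timestamp, id) match before reading resumes.
struct LogBookmark {
  std::uint64_t timestamp{0};
  std::uint64_t id{0};
  friend bool operator==(const LogBookmark &a, const LogBookmark &b) {
    return a.timestamp == b.timestamp && a.id == b.id;
  }
};
```

An explicit enum makes the four seek behaviors mutually exclusive at the call site, where the old flag could only distinguish two.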

Adjust set_files_to_read_list() to choose the log file using the interval between consecutive files' first timestamps relative to the next event bookmark, so the correct segment is selected when resuming.
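
The selection rule can be sketched as follows (an illustrative helper, not the actual set_files_to_read_list() code): resuming at a bookmark picks the file whose first-timestamp interval contains the bookmark, i.e. first_ts[i] <= bookmark_ts < first_ts[i + 1], with the last file covering everything from its first timestamp onward.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// first_ts holds the first event timestamp of each log file, in rotation
// order. Returns the index of the file whose interval contains the
// bookmark timestamp (file 0 if the bookmark precedes all files).
std::size_t pick_start_file(const std::vector<std::uint64_t> &first_ts,
                            std::uint64_t bookmark_ts) {
  std::size_t chosen = 0;
  for (std::size_t i = 0; i < first_ts.size(); ++i)
    if (first_ts[i] <= bookmark_ts) chosen = i;
  return chosen;
}
```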

FileWriterCompressing now flushes the gzip stream after each logged event, so that FileReaderDecompressing can read events from the currently open log file.

With encryption enabled, log events in JSON and JSONL log files are padded with a whitespace block to prevent events from sitting in the encryption context's internal buffer for a prolonged time.
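
The padding idea can be sketched like this (a hedged illustration: the block size and helper name are assumptions, not the plugin's actual values): the cipher buffers input until it has a full block, so an event could otherwise linger unencrypted-on-disk-invisible in that buffer. Padding the serialized event with spaces up to the next block boundary forces the block out, and whitespace is legal JSON inter-token filler, so readers are unaffected.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Pad a serialized event with spaces so its length is a multiple of the
// cipher block size, forcing the encryption context to emit the block
// immediately instead of buffering a partial one.
std::string pad_to_block(std::string event, std::size_t block_size) {
  const std::size_t rem = event.size() % block_size;
  if (rem != 0) event.append(block_size - rem, ' ');
  return event;
}
```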
