Enhance logging and task execution handling in Android #1
Conversation
Pull request overview
This PR updates the core multiplatform logger to queue log/appender operations on a single-threaded coroutine worker, adds lazy message overloads for log calls, and tweaks platform appenders and build toolchain settings.
Changes:
- Replaces mutex/runBlocking synchronization in `LOG` with a coroutine/Channel-backed serial work queue and adds lambda-based logging overloads.
- Adjusts Apple/Android appenders' formatting behavior (stack trace concatenation; chunk splitting output).
- Updates Gradle JVM toolchain from 8 to 11.
Reviewed changes
Copilot reviewed 3 out of 4 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| `src/commonMain/kotlin/com/mindera/lodge/LOG.kt` | Serializes add/remove/log operations via a coroutine worker; adds lazy-message overloads; deduplicates appenders by `loggerId`. |
| `src/appleMain/kotlin/com/mindera/lodge/appenders/OSAppender.kt` | Combines message + stack trace into a single OS log call. |
| `src/androidMain/kotlin/com/mindera/lodge/appenders/LogcatAppender.kt` | Simplifies chunk splitting logic for long messages. |
| `build.gradle.kts` | Moves compilation toolchain to JDK 11. |
```diff
 override fun log(severity: SEVERITY, tag: String, t: Throwable?, log: String) {
     with(severity.toLevel()) {
         val prefix = severity.prefix(tag)
-        log(prefix + log)
-        t?.let {
-            log(prefix + it.stackTraceToString())
-        }
+        log(t?.let { prefix + "$log\n" + it.stackTraceToString() } ?: (prefix + log))
     }
```
When t is non-null, the new implementation logs prefix + log + "\n" + stackTrace but does not prefix the stack trace (previously it was logged as a separate prefixed line). If consumers rely on the prefix for filtering/reading logs, consider keeping the prefix on the stack trace as well (or keep two log calls) and avoid the unnecessary "$log" interpolation.
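A sketch of the comment's suggestion, reusing the identifiers from the diff above (`SEVERITY`, `toLevel()`, `prefix(tag)`): keep two log calls so the stack trace line carries the same prefix as the message line.

```kotlin
override fun log(severity: SEVERITY, tag: String, t: Throwable?, log: String) {
    with(severity.toLevel()) {
        val prefix = severity.prefix(tag)
        // Message and stack trace as separate, equally prefixed lines,
        // so consumers filtering on the prefix still see both.
        log(prefix + log)
        t?.let { log(prefix + it.stackTraceToString()) }
    }
}
```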
```diff
 val chunks = ceil((1f * length / maxLineLength).toDouble()).toInt()
 for (i in 1..chunks) {
     val start = maxLineLength * (i - 1)
     val max = maxLineLength * i
-    if (max < length) {
-        it.add("[Chunk $i of $chunks] " + substring(maxLineLength * (i - 1), max))
-    } else {
-        it.add("[Chunk $i of $chunks] " + substring(maxLineLength * (i - 1)))
-    }
+    it.add(if (max < length) substring(start, max) else substring(start))
 }
```
toChunks() no longer includes any chunk marker (the previous "[Chunk i of chunks]" prefix was removed), which makes multi-part messages hard to reconstruct in Logcat. Consider restoring a chunk indicator (or otherwise tagging chunks) so long messages remain debuggable; also consider whether the Throwable should be logged only once rather than repeated for every chunk.
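A self-contained sketch of a chunker that keeps the marker (the extension-function shape and the integer-ceiling trick are illustrative, not the PR's code):

```kotlin
// Sketch: split a long message into fixed-size chunks, each tagged with
// "[Chunk i of n]" so multi-part messages can be reconstructed in Logcat.
fun String.toChunks(maxLineLength: Int): List<String> {
    val chunks = (length + maxLineLength - 1) / maxLineLength // integer ceil, no float math
    return (1..chunks).map { i ->
        val start = maxLineLength * (i - 1)
        val end = minOf(maxLineLength * i, length)
        "[Chunk $i of $chunks] " + substring(start, end)
    }
}
```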
```kotlin
fun add(appender: Appender) = tasks.trySend {
    if (appenders.none { it.loggerId == appender.loggerId }) {
        appenders.add(appender)
    } else {
        log("LOG", WARN, null) { "Appender '${appender.loggerId}' discarded: an appender with that id is already registered." }
```
add(appender: Appender) now returns the result of tasks.trySend(...) (a ChannelResult<Unit>), which changes the public API from returning Unit and makes it inconsistent with the other add/remove methods. Consider switching back to a block body returning Unit (and optionally handling a failed send explicitly) to avoid a breaking signature change for consumers.
Suggested change:

```diff
-fun add(appender: Appender) = tasks.trySend {
-    if (appenders.none { it.loggerId == appender.loggerId }) {
-        appenders.add(appender)
-    } else {
-        log("LOG", WARN, null) { "Appender '${appender.loggerId}' discarded: an appender with that id is already registered." }
+fun add(appender: Appender) {
+    tasks.trySend {
+        if (appenders.none { it.loggerId == appender.loggerId }) {
+            appenders.add(appender)
+        } else {
+            log("LOG", WARN, null) { "Appender '${appender.loggerId}' discarded: an appender with that id is already registered." }
+        }
```
```kotlin
private val tasks = Channel<() -> Unit>(UNLIMITED)
```
Using Channel.Factory.UNLIMITED for the work queue means logging spikes can grow the queue without bound and potentially exhaust memory. For a logger, consider a bounded capacity with an explicit onBufferOverflow strategy (drop oldest/latest) or a non-blocking backpressure approach, and treat appender add/remove separately if they must be lossless.
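One possible shape for this suggestion (a sketch — the channel names and the 1024-entry bound are assumptions, not part of the PR): bound the log queue with an explicit overflow strategy, and keep lossless control operations on their own channel.

```kotlin
import kotlinx.coroutines.channels.BufferOverflow
import kotlinx.coroutines.channels.Channel

// Sketch: bound the log queue and drop the oldest entries under pressure,
// so a logging spike cannot grow memory without limit.
private val logTasks = Channel<() -> Unit>(
    capacity = 1024,                              // assumed bound; tune per app
    onBufferOverflow = BufferOverflow.DROP_OLDEST
)

// Appender add/remove must be lossless, so keep them on a separate
// unbounded channel (their volume is tiny compared to log calls).
private val controlTasks = Channel<() -> Unit>(Channel.UNLIMITED)
```

With `DROP_OLDEST`, `trySend` on the log channel never fails, so callers stay non-blocking while memory stays bounded.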
```kotlin
private var delayMillis = 0L

/**
 * A dedicated coroutine that pulls lambdas out of `tasks` and executes
 * them one-by-one, preserving the exact order in which they were queued.
 */
@Suppress("unused")
private val job = CoroutineScope(SupervisorJob() + Default.limitedParallelism(1)).apply {
    launch(start = UNDISPATCHED) {
        for (task in tasks) {
            task()
            delay(delayMillis)
        }
```
delayMillis is only used in the worker loop but is never written anywhere in the codebase, so the delay(delayMillis) currently adds complexity without providing behavior. Either remove the delay entirely, or add an explicit (documented) API for configuring it so the field isn’t dead code.
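If the throttle is meant to stay, a minimal sketch of an explicit, documented knob (the property name is hypothetical):

```kotlin
/**
 * Optional pause between queued tasks, e.g. to throttle appenders that
 * rate-limit their output. 0 (the default) disables the delay.
 */
var taskDelayMillis: Long = 0L // hypothetical replacement for the unused delayMillis field
```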
```kotlin
@Suppress("unused")
private val job = CoroutineScope(SupervisorJob() + Default.limitedParallelism(1)).apply {
    launch(start = UNDISPATCHED) {
        for (task in tasks) {
            task()
            delay(delayMillis)
        }
    }
}
```
The job property is not a Job (it’s a CoroutineScope returned from apply) and is only kept for side effects. Renaming it to reflect what it actually is (e.g., workerScope/worker) will reduce confusion for future maintainers.
```diff
 private fun log(
     tag: String,
     severity: SEVERITY,
     t: Throwable?,
-    text: String
-) = mutex.withLock {
-    if(appenders.isNotEmpty()) {
-        val log = "[T#$threadName] | $text"
-        appenders.forEach {
-            if (it.minLogLevel.ordinal > severity.ordinal) return@forEach
-            it.log(severity, tag, t, log)
+    message: () -> String,
+) {
+    val originalThread = threadName
+    tasks.trySend {
+        if (appenders.isNotEmpty()) {
+            val log = "[T#$originalThread] | ${message()}"
+            appenders.forEach {
+                if (it.minLogLevel.ordinal > severity.ordinal) return@forEach
+                it.log(severity, tag, t, log)
+            }
         }
     }
 }
```
add/remove/log are now fire-and-forget enqueues (trySend) rather than completing synchronously as they did with the previous Mutex.withLock(runBlocking) approach. This is a behavior change for callers (e.g., add() returning before the appender is actually registered, logs potentially not flushed before process exit). Consider documenting this explicitly and/or providing a synchronous/flush option, and handle trySend failures so logs aren’t silently dropped if the channel is closed.
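If a synchronization point is wanted, one hedged sketch (the `flush` name and the `CompletableDeferred` sentinel are illustrative, not part of the PR) is to enqueue a marker task and await it — the worker drains the channel strictly in order, so completion of the marker implies everything queued before it has run:

```kotlin
import kotlinx.coroutines.CompletableDeferred

// Sketch: suspends until every task queued before this call has executed.
suspend fun flush() {
    val done = CompletableDeferred<Unit>()
    tasks.trySend { done.complete(Unit) }
        .getOrThrow() // surface a closed/full channel instead of silently dropping
    done.await()
}
```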