
feat(tui): OpenCode-style TUI with agent integration #178

Draft
5queezer wants to merge 4 commits into master from feat/tui-opencode-parity

Conversation

5queezer (Owner) commented Apr 12, 2026

Summary

  • Upgrade ratatui 0.29→0.30, crossterm→0.29, add pulldown-cmark for markdown rendering
  • Restructure src/tui/ into 7-module architecture: chat, sidebar, input, command palette, events, theme
  • Implement OpenCode-style layout: toggleable sidebar (Ctrl+B) + scrollable chat with inline tool blocks + command palette (Ctrl+P)
  • Wire TUI to real agent via agent::run_tui() calling turn_streamed() with TurnEvent bridge
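
The TurnEvent bridge in the last bullet can be sketched roughly as follows. The variant names and fields are illustrative, not the exact hrafn types; the `[tool:NAME]`/`[result:NAME]` tag format follows the tags a later commit in this PR describes:

```rust
// Hypothetical sketch of the TurnEvent → display-string bridge.
// Variant names/fields are assumptions; only the tag scheme comes from the PR.
enum TurnEvent {
    Chunk { delta: String },
    ToolCall { name: String, args: String },
    ToolResult { name: String, output: String },
}

// Convert one streamed event into the string the TUI drain loop receives.
fn bridge(event: &TurnEvent) -> String {
    match event {
        TurnEvent::Chunk { delta } => delta.clone(),
        TurnEvent::ToolCall { name, args } => format!("[tool:{name}] {args}"),
        TurnEvent::ToolResult { name, output } => format!("[result:{name}] {output}"),
    }
}
```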

Layout

Sidebar hidden (default):          Sidebar visible (Ctrl+B):
┌────────────────────────┐         ┌──────────┬─────────────┐
│ Chat Area              │         │ Sidebar  │ Chat Area   │
│ (markdown-rendered)    │         │ • Agent  │             │
│ - user messages        │         │ • Tools  │             │
│ - assistant (markdown) │         │ • Chans  │             │
│ - tool call blocks     │         │ • Memory │             │
│ - tool result blocks   │         │ • Periph │             │
├────────────────────────┤         ├──────────┼─────────────┤
│ Input (tui-textarea)   │         │          │ Input       │
└────────────────────────┘         └──────────┴─────────────┘

Key decisions

  • pulldown-cmark instead of tui-markdown (type conflict with ratatui 0.30's ratatui-core)
  • tui-textarea-2 fork (maintained, supports ratatui 0.30)
  • agent::run_tui() replicates subsystem setup from run() rather than modifying the existing function

Test plan

  • cargo check --features tui passes
  • cargo clippy --features tui -- -D warnings passes
  • cargo check (default features) passes
  • Launch hrafn with no args → TUI opens in alternate screen
  • Type a message → agent responds with streamed markdown
  • Ctrl+B toggles sidebar with agent/tools/channels sections
  • Ctrl+P opens command palette, arrow keys navigate, Enter executes
  • Tool calls render as styled box-drawing blocks with elapsed timers
  • PageUp/PageDown scrolls chat, auto-scroll resumes on new output
  • /quit exits cleanly, terminal state restored
  • ESC cancels in-progress request

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Redesigned TUI into a structured chat interface with streaming assistant responses and cancel support
    • New sidebar showing agent status, tools, channels, memory activity, and peripherals (Ctrl+B)
    • Command palette (Ctrl+P) for searchable command navigation
    • Improved tool execution views with call/result boxes and timing
  • UI Improvements

    • Markdown rendering for assistant messages
    • Enhanced theming and clearer message styling
    • Better error and agent-event feedback in the TUI bridge

5queezer and others added 3 commits April 12, 2026 02:02
…layout

Upgrade ratatui 0.29→0.30, crossterm 0.28→0.29, tui-textarea→tui-textarea-2
0.10, and add tui-markdown 0.3 for markdown rendering support.

Restructure src/tui/ from 3 files to 7-module architecture:
- mod.rs: App state with ChatMessage enum, sidebar/palette state, new keybindings
- chat.rs: chat area rendering with message variants and tool status display
- sidebar.rs: toggleable sidebar (Ctrl+B) with Agent/Tools/Channels/Memory/Peripherals
- input.rs: extracted textarea creation and rendering
- command.rs: command palette overlay (Ctrl+P) with fuzzy filtering
- events.rs: TurnEvent/Observer mapping stubs for agent wiring
- theme.rs: expanded with ACCENT/WARN/ERROR colors and new style functions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…nd palette

Chat area now renders markdown via pulldown-cmark (bold, italic, code
blocks, headings, lists, block quotes, links). Tool calls display as
styled box-drawing blocks with live elapsed timers. Streaming text
shows a block cursor indicator.

Sidebar sections use ACCENT-colored headers, animated spinners for
active tools, and colored status dots for channels. Content scrolls
when it overflows.

Command palette supports keyboard navigation (Up/Down), selection
highlight, and executes slash commands on Enter. Default items
populated with /quit, /clear, /help.

Note: tui-markdown replaced with pulldown-cmark due to ratatui-core
type incompatibility with ratatui 0.30.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the echo-back placeholder in run_interactive_tui() with full
agent integration. The TUI now loads config, initializes all agent
subsystems (observer, memory, tools, MCP, provider), and processes
user messages through Agent::turn_streamed().

Add agent::run_tui() function (cfg(feature = "tui")) that performs
subsystem setup and enters a channel-driven message loop. A bridge
task converts TurnEvent variants to display strings for the TUI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
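In outline, the channel-driven message loop described above might look like this — sketched with blocking std channels rather than the actual tokio async code, and with a stand-in closure where `agent.turn_streamed` would run:

```rust
use std::sync::mpsc;

// Minimal sketch of run_tui's message loop, under these assumptions:
// std::sync::mpsc stands in for tokio::mpsc, and the echo line stands in
// for agent.turn_streamed(&msg, event_tx.clone()).
fn message_loop(user_rx: mpsc::Receiver<String>, event_tx: mpsc::Sender<String>) {
    while let Ok(msg) = user_rx.recv() {
        if msg == "__CANCEL__" {
            // Cancel sentinel: nothing is in flight in this blocking sketch,
            // so there is nothing to interrupt here.
            continue;
        }
        // Forward output to the display bridge; a closed receiver ends the loop.
        if event_tx.send(format!("echo: {msg}")).is_err() {
            break;
        }
    }
}
```

Dropping the user-side sender ends the loop cleanly, which mirrors the shutdown coordination the review below asks for.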
coderabbitai bot (Contributor) commented Apr 12, 2026

📝 Walkthrough

Added a TUI feature: updated TUI dependencies, introduced an agent-facing async run_tui entrypoint that streams TurnEvents, rewired main to bridge TurnEvents → TUI strings, and refactored the TUI into submodules (chat, sidebar, command, input, events, theme) with structured chat/message state.

Changes

Cohort / File(s) Summary
Dependency Updates
Cargo.toml
Bumped ratatui (0.29→0.30) and crossterm (0.28→0.29); replaced tui-textarea with tui-textarea-2; added pulldown-cmark; added "tui" to ci-all feature.
Agent TUI Entrypoint
src/agent/.../loop_.rs, src/agent/mod.rs
Added #[cfg(feature="tui")] pub async fn run_tui(...) that reads user messages from a tokio::mpsc::Receiver<String>, calls agent.turn_streamed, emits TurnEvents to an output sender, handles cancel sentinel and error-flush logic; re-exported behind tui feature.
Main Integration
src/main.rs
Replaced temporary echo loop with runtime/config init; created turn_event channel and a bridge task that converts TurnEvent variants into formatted strings (including pretty tool args and truncated results), forwards them to the TUI, and coordinates shutdown on errors/close.
TUI Core & State
src/tui/mod.rs
Refactored TUI into submodules; replaced flat output: Vec<String> with messages: Vec<ChatMessage> (User/Assistant/ToolCall/ToolResult/System), sidebar and palette state, active tools tracking, and updated input/submit and key handling (palette, sidebar toggles, scroll/submit behavior).
Chat Rendering
src/tui/chat.rs
New chat renderer with Spinner/SpinnerState, markdown rendering via pulldown_cmark, per-message rendering (user, assistant, tool call/result, system), pending chunk streaming, wrapping/scrolling logic.
Sidebar Rendering
src/tui/sidebar.rs
New sidebar renderer showing agent info, active tools with elapsed time, channels, memory ops, and peripheral status with width-truncation and scroll offset handling.
Command Palette & Input
src/tui/command.rs, src/tui/input.rs
Command palette popup renderer with case-insensitive filtering, navigation and selection; helpers to create/render TextArea using tui-textarea-2.
Events & Stubs
src/tui/events.rs
Added handle_turn_event and handle_observer_event stubs (todo! placeholders) to map incoming event lines into App state.
Theme & Widgets Migration
src/tui/theme.rs, src/tui/widgets.rs
Added color constants and style helpers (ACCENT, WARN, ERROR, SYSTEM, sidebar/tool/system/error styles). Deleted widgets.rs — spinner and render helpers moved into new modules.

Sequence Diagram(s)

sequenceDiagram
    participant User as User (Terminal)
    participant TUI as TUI (UI task)
    participant Bridge as Bridge Task
    participant Agent as Agent Loop (agent::run_tui)
    participant Render as Renderer (chat/sidebar)

    User->>TUI: enter text / submit
    TUI->>Agent: user_tx.send(String)
    Agent->>Agent: process message, stream turn
    Agent->>Bridge: send TurnEvent (Chunk/Thinking/ToolCall/ToolResult)
    Bridge->>TUI: send formatted display String
    TUI->>Render: update messages/state and draw frame
    Render->>User: updated terminal UI

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Suggested labels

enhancement

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title clearly summarizes the main change: implementing an OpenCode-style TUI with agent integration, which directly reflects the major structural and feature additions in the pull request.
Description check ✅ Passed The description covers the key aspects: dependency upgrades, architectural restructuring, layout design, and agent integration. It includes a visual layout diagram, key technical decisions, and a comprehensive test plan with specific verification steps.
Docstring Coverage ✅ Passed Docstring coverage is 81.40% which is sufficient. The required threshold is 80.00%.



coderabbitai bot left a comment

Actionable comments posted: 10

🧹 Nitpick comments (1)
src/tui/events.rs (1)

6-17: Don't leave the event adapters as silent no-ops.

If either helper gets wired up before the parser exists, agent/observer updates will be dropped with no signal. I'd rather make this fail loudly (todo!(), unimplemented!(), or a Result) until the real mapping lands.
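
One way to make the stubs fail loudly, as this comment suggests. The signature here is a simplified assumption — the real helpers in src/tui/events.rs presumably take App state:

```rust
// Loud stub: panics with a descriptive message instead of silently dropping
// events, so accidental wiring before the parser exists fails fast.
// The &str parameter is an assumption for illustration.
pub fn handle_turn_event(_line: &str) {
    todo!("handle_turn_event not implemented - replace with real mapping")
}
```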

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tui/events.rs` around lines 6 - 17, The current stub implementations of
handle_turn_event and handle_observer_event silently drop updates; replace their
no-op bodies with a loud failure (e.g., call todo! or unimplemented! with a
descriptive message) so wiring them up before the real parser fails fast;
specifically update the function bodies for handle_turn_event and
handle_observer_event to call todo!("handle_turn_event not implemented - replace
with real mapping") and todo!("handle_observer_event not implemented - replace
with real mapping") (or convert them to return Result and return Err with a
clear error) so agent/observer updates do not get silently ignored.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/agent/loop_.rs`:
- Around line 5181-5184: The code currently forwards the raw error value e into
the TUI stream via event_tx.send(TurnEvent::Chunk { delta: format!("\n[error:
{e}]") }), which can leak sensitive provider/runtime details; instead, transform
e into a sanitized description before sending (e.g., implement or call a
sanitize_error(&e) helper that returns a short, non-sensitive message such as
"[error]" or a safe error category), and use that sanitized string in the
TurnEvent::Chunk payload; update the send site in loop_.rs (the event_tx send of
TurnEvent::Chunk) to use the sanitized result rather than the raw e.
- Around line 5174-5179: The loop currently treats control and data as raw
strings on user_rx and only checks for "__CANCEL__" before calling
agent.turn_streamed, so cancellation cannot interrupt an in-flight turn; change
the user_rx message type to a typed enum (e.g., UserCommand { Message(String),
Cancel }) and update handling in the loop and callers, or add a cancellation
token mechanism passed into agent.turn_streamed so the turn can be aborted
mid-flight (update the turn_streamed signature and call sites accordingly).
Sanitize errors before forwarding to the TUI: replace direct format!("\n[error:
{e}]") with a safe wrapper that maps internal errors to user-friendly messages
(use a helper like sanitize_error or ErrorDisplay) and send that. Finally, check
the Result from event_tx.send(...) wherever used and, on Err, break the
loop/clean up (so a closed TUI channel stops the agent loop) instead of silently
ignoring send failures.
- Around line 5181-5194: The code ignores send failures on event_tx (sending
TurnEvent::Chunk) so the loop continues even if the display bridge is closed;
modify the send calls on event_tx.send(...) in the task handling loop to check
the Result, and if Err(_) is returned (channel closed), break/return to stop the
loop/task instead of continuing; do this for both the error chunk send and the
final empty-chunk end-of-turn send so the loop exits cleanly when event_tx is
closed.

In `@src/main.rs`:
- Around line 99-114: The bridge currently accumulates chunks in current_text
and only sends when an empty sentinel arrives (TurnEvent::Chunk handling), which
collapses streamed output; change the bridge_agent_tx semantics to forward
incremental updates: on each non-empty delta send that delta (or a small Update
enum variant) immediately via bridge_agent_tx.send(...) instead of appending
only, and keep the empty-sentinel behavior for final-flush/EndOfTurn (or send a
distinct End marker) so the TUI can render incremental fragments from
turn_streamed() as they arrive; update any receiver type of
bridge_agent_tx/agent_tx to accept deltas or an Update enum to preserve
compatibility.
- Around line 131-136: The truncation uses byte slicing (&output[..500]) which
can panic on UTF-8 multi-byte boundaries; update the TurnEvent::ToolResult
handling to truncate by characters not bytes: compute a 500-character prefix of
`output` (e.g., using chars().take(500).collect into a String or find a safe
char boundary with char_indices/is_char_boundary) and append "..." only when the
original `output` exceeds that character count, otherwise use the original
`output` for `display`.
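
The char-based truncation this item asks for can be written as a small helper (the name `truncate_chars` is illustrative, not from the PR):

```rust
// Truncate by characters, not bytes, so multi-byte UTF-8 sequences can never
// be split mid-codepoint; append "..." only when something was cut.
fn truncate_chars(output: &str, limit: usize) -> String {
    if output.chars().count() > limit {
        let prefix: String = output.chars().take(limit).collect();
        format!("{prefix}...")
    } else {
        output.to_string()
    }
}
```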

In `@src/tui/command.rs`:
- Around line 43-60: The list always renders from filtered[0] so the selected
index can be off-screen; compute a scrolling window start based on selected and
max_visible (e.g. let half = max_visible / 2; let mut start =
selected.saturating_sub(half); start =
start.min(filtered.len().saturating_sub(max_visible).saturating_add(0)); then
iterate filtered.iter().enumerate().skip(start).take(max_visible) so the visible
page is centered on (or otherwise contains) selected; keep using the original
enumerate index (i) for marker/selection comparisons and update the loop to use
skip(start).take(max_visible) instead of just .take(max_visible).
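
The windowing arithmetic described above, pulled out as a standalone helper (name and parameters are illustrative):

```rust
// Compute the first visible index of a scrolling window so that `selected`
// stays on screen, roughly centered when possible. Saturating arithmetic
// keeps the window clamped at both ends of the list.
fn window_start(selected: usize, len: usize, max_visible: usize) -> usize {
    let half = max_visible / 2;
    let start = selected.saturating_sub(half);
    start.min(len.saturating_sub(max_visible))
}
```

The renderer would then iterate `filtered.iter().enumerate().skip(start).take(max_visible)`, keeping the original index for selection comparison.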

In `@src/tui/mod.rs`:
- Around line 356-360: The code currently flattens every inbound rx.try_recv()
string into ChatMessage::Assistant via app.push_assistant, losing structured
tool/result events; update the drain loop to parse or deserialize the incoming
payload (the bridge in main that tags tool calls/results) and route it to the
proper structures: create/dispatch ChatMessage::ToolCall and
ChatMessage::ToolResult instead of Assistant when the tag indicates a tool
event, and update pending_chunk and active_tools state when streaming tool
chunks arrive; locate the loop reading rx, the call to app.push_assistant, and
the ChatMessage enum handling to implement this branching logic so tool events
populate the new blocks and streaming state rather than being flattened.
- Around line 282-285: The code mixes a u16::MAX sentinel with real line offsets
for scroll_offset, which breaks wrapped/multiline scrolling; change
scroll_offset from a raw u16 sentinel to Option<u16> (None = anchored at
bottom/auto-scroll, Some(n) = explicit line offset), update places that set it
(e.g. where self.messages.push(...) and where auto_scroll is toggled) to set
None when auto_scroll is true and Some(pos) when user scrolls, and update all
comparisons/usages (including the code in src/tui/chat.rs that passes
scroll_offset into Paragraph::scroll and the blocks around the other occurrences
at the ranges corresponding to lines ~295-306 and ~453-465) to handle Option
properly (treat None as bottom and map Some(n) to the numeric offset) so
PageUp/PageDown and multiline wrapping behave correctly.
- Around line 282-290: Currently the code appends ChatMessage::User to
self.messages and clears the input (via scroll_offset/autoscroll) before
attempting tx.try_send, so when try_send fails the UI shows a sent message that
was never delivered; change the flow in the send logic (the block that currently
references self.messages, ChatMessage::User, tx.try_send, self.spinner and
self.push_system) to attempt tx.try_send(text.clone()) first, and only on Ok do
the append to self.messages, set self.spinner =
Some(SpinnerState::new("pondering")) and adjust scroll_offset; on Err avoid
appending and instead call self.push_system(...) and restore the input field
state (or mark the message as failed) so the textarea isn’t cleared while the
message wasn't sent.

In `@src/tui/sidebar.rs`:
- Around line 108-119: The scroll calculation uses lines.len() but wrapping is
enabled, so long logical lines can occupy multiple visual rows and break
scrolling; before computing inner_height/scroll, pre-wrap each entry to visual
lines using the available width (area.width minus 2 for left/right borders) and
produce wrapped_lines, then compute scroll = wrapped_lines.len() > inner_height
? try_from(wrapped_lines.len() - inner_height) : 0 and pass
Paragraph::new(wrapped_lines) (keep or remove Wrap as appropriate), or
alternatively disable Wrap and truncate each logical line to fit the width so
lines.len() matches visual rows.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 1e3bb7f1-ec6b-4811-9918-8aaf25714f6c

📥 Commits

Reviewing files that changed from the base of the PR and between 6c6f971 and a819f7d.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock, !**/Cargo.lock
📒 Files selected for processing (12)
  • Cargo.toml
  • src/agent/loop_.rs
  • src/agent/mod.rs
  • src/main.rs
  • src/tui/chat.rs
  • src/tui/command.rs
  • src/tui/events.rs
  • src/tui/input.rs
  • src/tui/mod.rs
  • src/tui/sidebar.rs
  • src/tui/theme.rs
  • src/tui/widgets.rs
💤 Files with no reviewable changes (1)
  • src/tui/widgets.rs

Comment on lines +5174 to +5179
loop {
    match user_rx.recv().await {
        Some(msg) if msg == "__CANCEL__" => {}
        Some(msg) => {
            match agent.turn_streamed(&msg, event_tx.clone()).await {
                Ok(_) => {}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify control/data multiplexing and in-flight cancellation behavior in TUI path.
rg -n --type=rust -C3 '__CANCEL__|run_tui\(|turn_streamed\('

Repository: 5queezer/hrafn

Length of output: 4867


🏁 Script executed:

# 1. Get the full run_tui function to examine error handling and send patterns
sed -n '4945,5201p' src/agent/loop_.rs | cat -n

Repository: 5queezer/hrafn

Length of output: 12219


🏁 Script executed:

# 2. Check the full turn_streamed signature to confirm cancellation support
sed -n '980,990p' src/agent/agent.rs | cat -n

Repository: 5queezer/hrafn

Length of output: 597


🏁 Script executed:

# 3. Examine the TUI code that sends messages to see if user input is validated
rg -n -A 10 -B 3 "CANCEL_SENTINEL|__CANCEL__" src/tui/mod.rs

Repository: 5queezer/hrafn

Length of output: 1043


🏁 Script executed:

# 4. Check if there are any cancellation tokens passed to turn_streamed in any context
rg -n "turn_streamed" src/ -A 2 -B 2 | grep -i "cancel\|token"

Repository: 5queezer/hrafn

Length of output: 132


"__CANCEL__" cannot interrupt an in-flight turn and design conflates control/data on a string channel.

Because turn_streamed(...).await blocks the message loop (line 234), cancel is only handled after the active turn finishes (line 232 checks before entry). Additionally, any user message equal to "__CANCEL__" is silently swallowed instead of being sent to the agent. Consider using a typed enum channel (e.g., enum UserCommand { Message(String), Cancel }) or adding a true cancellation token to turn_streamed for mid-turn interruption.

Raw error messages should be sanitized before forwarding to the TUI. Line 239 formats the raw error object directly as format!("\n[error: {e}]"), which may expose sensitive provider or runtime details. Filter or wrap errors before sending to event_tx.

Channel send failures are silently ignored (lines 237, 246). If the TUI closes the event channel, the agent loop continues running. Check send results and break the loop on failure to clean up resources.
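
One shape the suggested typed channel could take. Names are illustrative, and the shared flag stands in for a real cancellation token that `turn_streamed` would poll mid-turn:

```rust
// Typed command channel replacing the "__CANCEL__" string sentinel, so a
// user who literally types "__CANCEL__" can no longer trigger the control path.
enum UserCommand {
    Message(String),
    Cancel,
}

// Dispatch sketch: Cancel flips a shared flag (a stand-in for a proper
// cancellation token); Message yields the payload for processing.
fn dispatch(cmd: UserCommand, cancelled: &std::sync::atomic::AtomicBool) -> Option<String> {
    use std::sync::atomic::Ordering;
    match cmd {
        UserCommand::Cancel => {
            cancelled.store(true, Ordering::Relaxed);
            None
        }
        UserCommand::Message(text) => Some(text),
    }
}
```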


src/tui/mod.rs Outdated
Comment on lines +282 to +285
self.messages.push(ChatMessage::User { text: text.clone() });
if self.auto_scroll {
    self.scroll_offset = u16::MAX;
}

⚠️ Potential issue | 🟠 Major

The scroll state mixes a sentinel with real line offsets.

src/tui/chat.rs passes scroll_offset straight into Paragraph::scroll, so this value is line-based. Using u16::MAX as “bottom” and comparing it against messages.len() breaks multiline content: PageUp stays pinned at the bottom, and PageDown can re-enable auto-scroll far too early once a few wrapped/markdown lines are on screen.

Also applies to: 295-306, 453-465
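
The Option-based scroll state this review proposes could reduce to a helper like the following; the function name and the `total_lines`/`viewport` parameters are assumptions for illustration:

```rust
// None = anchored at the bottom (auto-scroll); Some(n) = explicit line offset,
// clamped so it never scrolls past the last full page of content.
fn effective_offset(scroll: Option<u16>, total_lines: u16, viewport: u16) -> u16 {
    match scroll {
        Some(n) => n.min(total_lines.saturating_sub(viewport)),
        None => total_lines.saturating_sub(viewport), // bottom anchor
    }
}
```

The value returned here is what would feed `Paragraph::scroll`, replacing the `u16::MAX` sentinel comparison.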


- Fix UTF-8 panic: use char_indices().nth(500) instead of byte slicing
- Stream chunks incrementally instead of batching entire turns
- Send user input before pushing ChatMessage::User to avoid phantom messages
- Sanitize agent errors before forwarding to TUI (no raw internals)
- Break agent loop when event_tx channel closes
- Parse [tool:NAME] and [result:NAME] tags in TUI drain loop so tool
  blocks actually render instead of flattening to Assistant messages
- Scroll command palette window with selection (skip/take around selected)
- Disable sidebar wrapping, truncate long lines for accurate scroll math
- Make events.rs stubs loud (todo!()) instead of silent no-ops

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
src/tui/mod.rs (1)

281-292: Differentiate try_send full vs closed errors.

The current message always says “channel full”, but a closed channel is a different failure mode and should be surfaced distinctly.

Minimal refinement
-            _ => match tx.try_send(text.clone()) {
+            _ => match tx.try_send(text.clone()) {
                 Ok(()) => {
                     self.messages.push(ChatMessage::User { text });
                     if self.auto_scroll {
                         self.scroll_offset = u16::MAX;
                     }
                     self.spinner = Some(SpinnerState::new("pondering"));
                 }
-                Err(_) => {
-                    self.textarea.insert_str(&text);
-                    self.push_system("[send failed — channel full]".into());
+                Err(err) => {
+                    self.textarea.insert_str(&text);
+                    match err {
+                        mpsc::error::TrySendError::Full(_) => {
+                            self.push_system("[send failed — channel full]".into());
+                        }
+                        mpsc::error::TrySendError::Closed(_) => {
+                            self.push_system("[send failed — agent channel closed]".into());
+                        }
+                    }
                 }
             },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/tui/mod.rs` around lines 281 - 292, The error handling for tx.try_send in
the send branch conflates a full channel and a closed channel; change the Err(_)
arm to match on tokio::sync::mpsc::error::TrySendError (or TrySendError<...>)
and handle TrySendError::Full(_) by restoring textarea, pushing the existing
"[send failed — channel full]" system message and leaving behavior intact, and
handle TrySendError::Closed(_) by restoring textarea and pushing a distinct
"[send failed — channel closed]" system message (use the existing
self.textarea.insert_str and self.push_system calls for both paths, matching on
tx.try_send and preserving spinner/auto_scroll logic).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/tui/mod.rs`:
- Around line 267-270: The "/clear" command currently only clears self.messages
and resets self.scroll_offset, leaving streaming/tool state (self.pending_chunk,
self.active_tools, self.spinner) intact; update the "/clear" branch to also
reset self.pending_chunk to None or empty, clear/stop self.active_tools (e.g.,
clear the collection and ensure any tool state is stopped), and reset
self.spinner to its idle/default state so no stale UI elements remain after
clearing messages.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: f08c7bc4-34a7-484a-8e59-643a79006b42

📥 Commits

Reviewing files that changed from the base of the PR and between a819f7d and 91714c7.

📒 Files selected for processing (6)
  • src/agent/loop_.rs
  • src/main.rs
  • src/tui/command.rs
  • src/tui/events.rs
  • src/tui/mod.rs
  • src/tui/sidebar.rs
🚧 Files skipped from review as they are similar to previous changes (4)
  • src/tui/events.rs
  • src/main.rs
  • src/tui/sidebar.rs
  • src/agent/loop_.rs

Comment on lines 267 to 270
"/clear" => {
    self.output.clear();
    self.messages.clear();
    self.scroll_offset = 0;
}

⚠️ Potential issue | 🟡 Minor

/clear leaves residual streaming/tool state.

Line 268 clears messages, but pending_chunk, active_tools, and spinner are retained, so stale UI can remain after a clear.

Minimal patch
             "/clear" => {
                 self.messages.clear();
+                self.pending_chunk.clear();
+                self.active_tools.clear();
+                self.spinner = None;
                 self.scroll_offset = 0;
             }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
"/clear" => {
    self.output.clear();
    self.messages.clear();
    self.scroll_offset = 0;
}
"/clear" => {
    self.messages.clear();
    self.pending_chunk.clear();
    self.active_tools.clear();
    self.spinner = None;
    self.scroll_offset = 0;
}

5queezer marked this pull request as draft April 12, 2026 18:08
