feat(tui): OpenCode-style TUI with agent integration #178
Conversation
…layout

Upgrade ratatui 0.29→0.30, crossterm 0.28→0.29, tui-textarea→tui-textarea-2 0.10, and add tui-markdown 0.3 for markdown rendering support.

Restructure src/tui/ from 3 files to a 7-module architecture:
- mod.rs: App state with ChatMessage enum, sidebar/palette state, new keybindings
- chat.rs: chat area rendering with message variants and tool status display
- sidebar.rs: toggleable sidebar (Ctrl+B) with Agent/Tools/Channels/Memory/Peripherals
- input.rs: extracted textarea creation and rendering
- command.rs: command palette overlay (Ctrl+P) with fuzzy filtering
- events.rs: TurnEvent/Observer mapping stubs for agent wiring
- theme.rs: expanded with ACCENT/WARN/ERROR colors and new style functions

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…nd palette

Chat area now renders markdown via pulldown-cmark (bold, italic, code blocks, headings, lists, block quotes, links). Tool calls display as styled box-drawing blocks with live elapsed timers. Streaming text shows a block cursor indicator. Sidebar sections use ACCENT-colored headers, animated spinners for active tools, and colored status dots for channels. Content scrolls when it overflows. Command palette supports keyboard navigation (Up/Down), selection highlight, and executes slash commands on Enter. Default items populated with /quit, /clear, /help.

Note: tui-markdown replaced with pulldown-cmark due to ratatui-core type incompatibility with ratatui 0.30.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Replace the echo-back placeholder in run_interactive_tui() with full agent integration. The TUI now loads config, initializes all agent subsystems (observer, memory, tools, MCP, provider), and processes user messages through Agent::turn_streamed().

Add agent::run_tui() function (cfg(feature = "tui")) that performs subsystem setup and enters a channel-driven message loop. A bridge task converts TurnEvent variants to display strings for the TUI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

Added a TUI feature: updated TUI dependencies, introduced an agent-facing async entry point (agent::run_tui()), and restructured src/tui/ into a multi-module architecture.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as User (Terminal)
    participant TUI as TUI (UI task)
    participant Bridge as Bridge Task
    participant Agent as Agent Loop (agent::run_tui)
    participant Render as Renderer (chat/sidebar)
    User->>TUI: enter text / submit
    TUI->>Agent: user_tx.send(String)
    Agent->>Agent: process message, stream turn
    Agent->>Bridge: send TurnEvent (Chunk/Thinking/ToolCall/ToolResult)
    Bridge->>TUI: send formatted display String
    TUI->>Render: update messages/state and draw frame
    Render->>User: updated terminal UI
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 10
🧹 Nitpick comments (1)
src/tui/events.rs (1)
6-17: Don't leave the event adapters as silent no-ops. If either helper gets wired up before the parser exists, agent/observer updates will be dropped with no signal. I'd rather make this fail loudly (`todo!()`, `unimplemented!()`, or a `Result`) until the real mapping lands.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/events.rs` around lines 6 - 17, The current stub implementations of handle_turn_event and handle_observer_event silently drop updates; replace their no-op bodies with a loud failure (e.g., call todo! or unimplemented! with a descriptive message) so wiring them up before the real parser fails fast; specifically update the function bodies for handle_turn_event and handle_observer_event to call todo!("handle_turn_event not implemented - replace with real mapping") and todo!("handle_observer_event not implemented - replace with real mapping") (or convert them to return Result and return Err with a clear error) so agent/observer updates do not get silently ignored.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/agent/loop_.rs`:
- Around line 5181-5184: The code currently forwards the raw error value e into
the TUI stream via event_tx.send(TurnEvent::Chunk { delta: format!("\n[error:
{e}]") }), which can leak sensitive provider/runtime details; instead, transform
e into a sanitized description before sending (e.g., implement or call a
sanitize_error(&e) helper that returns a short, non-sensitive message such as
"[error]" or a safe error category), and use that sanitized string in the
TurnEvent::Chunk payload; update the send site in loop_.rs (the event_tx send of
TurnEvent::Chunk) to use the sanitized result rather than the raw e.
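A minimal sketch of the sanitizer the comment asks for. `sanitize_error` and the keyword-based categories are assumptions for illustration, not an existing API in the crate; the point is that only a classified, fixed string reaches the TUI, never the raw error text.

```rust
// Hypothetical sketch: map internal errors to safe display strings.
// The categories and function name are assumptions, not the crate's API.
fn sanitize_error(e: &dyn std::error::Error) -> String {
    let msg = e.to_string();
    // Classify by keyword rather than echoing the raw message, so
    // provider URLs, keys, or payloads never reach the TUI stream.
    if msg.contains("timed out") || msg.contains("timeout") {
        "[error: request timed out]".to_string()
    } else if msg.contains("401") || msg.contains("unauthorized") {
        "[error: authentication failed]".to_string()
    } else {
        "[error: turn failed]".to_string()
    }
}

fn main() {
    let io_err = std::io::Error::new(std::io::ErrorKind::TimedOut, "connection timed out");
    assert_eq!(sanitize_error(&io_err), "[error: request timed out]");
    // A message carrying sensitive detail collapses to the generic category.
    let other = std::io::Error::new(std::io::ErrorKind::Other, "secret-key=abc123 rejected");
    assert_eq!(sanitize_error(&other), "[error: turn failed]");
}
```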
- Around line 5174-5179: The loop currently treats control and data as raw
strings on user_rx and only checks for "__CANCEL__" before calling
agent.turn_streamed, so cancellation cannot interrupt an in-flight turn; change
the user_rx message type to a typed enum (e.g., UserCommand { Message(String),
Cancel }) and update handling in the loop and callers, or add a cancellation
token mechanism passed into agent.turn_streamed so the turn can be aborted
mid-flight (update the turn_streamed signature and call sites accordingly).
Sanitize errors before forwarding to the TUI: replace direct format!("\n[error:
{e}]") with a safe wrapper that maps internal errors to user-friendly messages
(use a helper like sanitize_error or ErrorDisplay) and send that. Finally, check
the Result from event_tx.send(...) wherever used and, on Err, break the
loop/clean up (so a closed TUI channel stops the agent loop) instead of silently
ignoring send failures.
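The typed-channel alternative can be sketched as follows. `UserCommand` and `dispatch` are hypothetical names; the real fix would also thread the enum through the channel types and call sites.

```rust
// Sketch of the typed channel the review suggests; `UserCommand`
// is a hypothetical name, not something the crate defines today.
enum UserCommand {
    Message(String),
    Cancel,
}

// Replaces `msg == "__CANCEL__"` string matching with exhaustive dispatch,
// so control and data can no longer collide on the same string channel.
fn dispatch(cmd: UserCommand) -> &'static str {
    match cmd {
        UserCommand::Cancel => "cancelled",
        UserCommand::Message(_) => "turn",
    }
}

fn main() {
    // A user who literally types "__CANCEL__" is now just a message.
    assert_eq!(dispatch(UserCommand::Message("__CANCEL__".into())), "turn");
    assert_eq!(dispatch(UserCommand::Cancel), "cancelled");
}
```

Mid-turn interruption would still need a cancellation token (or a `tokio::select!` racing the turn against the control channel), since the enum alone only fixes the multiplexing.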
- Around line 5181-5194: The code ignores send failures on event_tx (sending
TurnEvent::Chunk) so the loop continues even if the display bridge is closed;
modify the send calls on event_tx.send(...) in the task handling loop to check
the Result, and if Err(_) is returned (channel closed), break/return to stop the
loop/task instead of continuing; do this for both the error chunk send and the
final empty-chunk end-of-turn send so the loop exits cleanly when event_tx is
closed.
In `@src/main.rs`:
- Around line 99-114: The bridge currently accumulates chunks in current_text
and only sends when an empty sentinel arrives (TurnEvent::Chunk handling), which
collapses streamed output; change the bridge_agent_tx semantics to forward
incremental updates: on each non-empty delta send that delta (or a small Update
enum variant) immediately via bridge_agent_tx.send(...) instead of appending
only, and keep the empty-sentinel behavior for final-flush/EndOfTurn (or send a
distinct End marker) so the TUI can render incremental fragments from
turn_streamed() as they arrive; update any receiver type of
bridge_agent_tx/agent_tx to accept deltas or an Update enum to preserve
compatibility.
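The delta-vs-end distinction the comment describes could look like this. `Update` and `bridge` are hypothetical names; the sketch only shows the classification, not the channel plumbing.

```rust
// Sketch of an Update enum for the bridge channel; names are
// assumptions. Each non-empty delta is forwarded immediately, and
// the empty sentinel becomes an explicit End marker.
#[derive(Debug, PartialEq)]
enum Update {
    Delta(String),
    End,
}

fn bridge(delta: &str) -> Update {
    if delta.is_empty() {
        Update::End
    } else {
        Update::Delta(delta.to_string())
    }
}

fn main() {
    // Streamed fragments reach the TUI as they arrive, not batched.
    assert_eq!(bridge("hel"), Update::Delta("hel".into()));
    assert_eq!(bridge(""), Update::End);
}
```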
- Around line 131-136: The truncation uses byte slicing (&output[..500]) which
can panic on UTF-8 multi-byte boundaries; update the TurnEvent::ToolResult
handling to truncate by characters not bytes: compute a 500-character prefix of
`output` (e.g., using chars().take(500).collect into a String or find a safe
char boundary with char_indices/is_char_boundary) and append "..." only when the
original `output` exceeds that character count, otherwise use the original
`output` for `display`.
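The character-based truncation described above can be sketched as:

```rust
// Truncate by characters, not bytes, so multi-byte UTF-8 sequences
// can never be split; byte slicing `&output[..500]` panics if index
// 500 falls inside a code point.
fn truncate_chars(output: &str, max: usize) -> String {
    if output.chars().count() > max {
        let prefix: String = output.chars().take(max).collect();
        format!("{prefix}...")
    } else {
        output.to_string()
    }
}

fn main() {
    // "é" is 2 bytes; byte slicing at an odd index would panic here.
    let s = "é".repeat(10);
    assert_eq!(truncate_chars(&s, 4), format!("{}...", "é".repeat(4)));
    // Short output passes through untouched, with no "..." appended.
    assert_eq!(truncate_chars("short", 500), "short");
}
```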
In `@src/tui/command.rs`:
- Around line 43-60: The list always renders from filtered[0] so the selected
index can be off-screen; compute a scrolling window start based on selected and
max_visible (e.g. let half = max_visible / 2; let mut start =
selected.saturating_sub(half); start =
start.min(filtered.len().saturating_sub(max_visible).saturating_add(0)); then
iterate filtered.iter().enumerate().skip(start).take(max_visible) so the visible
page is centered on (or otherwise contains) selected; keep using the original
enumerate index (i) for marker/selection comparisons and update the loop to use
skip(start).take(max_visible) instead of just .take(max_visible).
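The windowing arithmetic from that prompt, isolated as a pure function (the name `window_start` is illustrative):

```rust
// Compute the first visible row so `selected` stays inside a
// `max_visible`-row window, centered where possible and clamped
// at both ends of the list.
fn window_start(selected: usize, len: usize, max_visible: usize) -> usize {
    let half = max_visible / 2;
    let start = selected.saturating_sub(half);
    start.min(len.saturating_sub(max_visible))
}

fn main() {
    // 20 filtered items, 5 visible rows.
    assert_eq!(window_start(0, 20, 5), 0);   // top of list
    assert_eq!(window_start(10, 20, 5), 8);  // centered on selection
    assert_eq!(window_start(19, 20, 5), 15); // clamped at the end
}
```

Rendering then iterates `filtered.iter().enumerate().skip(start).take(max_visible)`, keeping the original index for the selection comparison.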
In `@src/tui/mod.rs`:
- Around line 356-360: The code currently flattens every inbound rx.try_recv()
string into ChatMessage::Assistant via app.push_assistant, losing structured
tool/result events; update the drain loop to parse or deserialize the incoming
payload (the bridge in main that tags tool calls/results) and route it to the
proper structures: create/dispatch ChatMessage::ToolCall and
ChatMessage::ToolResult instead of Assistant when the tag indicates a tool
event, and update pending_chunk and active_tools state when streaming tool
chunks arrive; locate the loop reading rx, the call to app.push_assistant, and
the ChatMessage enum handling to implement this branching logic so tool events
populate the new blocks and streaming state rather than being flattened.
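A sketch of that routing, assuming the `[tool:NAME]` / `[result:NAME]` wire format the follow-up commit describes. The `Routed` enum and `route` helper are illustrative stand-ins for the real `ChatMessage` handling:

```rust
// Route bridge payloads to structured variants instead of flattening
// everything to Assistant. The tag format is an assumption based on
// the bridge described in the PR's follow-up commit.
#[derive(Debug, PartialEq)]
enum Routed {
    ToolCall(String),
    ToolResult(String),
    Assistant(String),
}

fn route(payload: &str) -> Routed {
    if let Some(rest) = payload.strip_prefix("[tool:") {
        if let Some(name) = rest.strip_suffix(']') {
            return Routed::ToolCall(name.to_string());
        }
    }
    if let Some(rest) = payload.strip_prefix("[result:") {
        if let Some(name) = rest.strip_suffix(']') {
            return Routed::ToolResult(name.to_string());
        }
    }
    Routed::Assistant(payload.to_string())
}

fn main() {
    assert_eq!(route("[tool:shell]"), Routed::ToolCall("shell".into()));
    assert_eq!(route("[result:shell]"), Routed::ToolResult("shell".into()));
    assert_eq!(route("plain text"), Routed::Assistant("plain text".into()));
}
```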
- Around line 282-285: The code mixes a u16::MAX sentinel with real line offsets
for scroll_offset, which breaks wrapped/multiline scrolling; change
scroll_offset from a raw u16 sentinel to Option<u16> (None = anchored at
bottom/auto-scroll, Some(n) = explicit line offset), update places that set it
(e.g. where self.messages.push(...) and where auto_scroll is toggled) to set
None when auto_scroll is true and Some(pos) when user scrolls, and update all
comparisons/usages (including the code in src/tui/chat.rs that passes
scroll_offset into Paragraph::scroll and the blocks around the other occurrences
at the ranges corresponding to lines ~295-306 and ~453-465) to handle Option
properly (treat None as bottom and map Some(n) to the numeric offset) so
PageUp/PageDown and multiline wrapping behave correctly.
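The `Option<u16>` scheme resolves to a concrete offset at render time; a sketch (`resolve_offset` is an illustrative name):

```rust
// None = anchored at the bottom (auto-scroll); Some(n) = explicit
// line offset set by the user. Resolved against the rendered line
// count, so wrapped lines no longer break the sentinel comparison.
fn resolve_offset(scroll: Option<u16>, total_lines: u16, viewport: u16) -> u16 {
    match scroll {
        Some(n) => n.min(total_lines.saturating_sub(viewport)), // clamp user scroll
        None => total_lines.saturating_sub(viewport),           // pin to bottom
    }
}

fn main() {
    assert_eq!(resolve_offset(None, 100, 20), 80);      // auto-scroll to bottom
    assert_eq!(resolve_offset(Some(10), 100, 20), 10);  // user scrolled up
    assert_eq!(resolve_offset(Some(999), 100, 20), 80); // clamped to max
    assert_eq!(resolve_offset(None, 5, 20), 0);         // content fits viewport
}
```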
- Around line 282-290: Currently the code appends ChatMessage::User to
self.messages and clears the input (via scroll_offset/autoscroll) before
attempting tx.try_send, so when try_send fails the UI shows a sent message that
was never delivered; change the flow in the send logic (the block that currently
references self.messages, ChatMessage::User, tx.try_send, self.spinner and
self.push_system) to attempt tx.try_send(text.clone()) first, and only on Ok do
the append to self.messages, set self.spinner =
Some(SpinnerState::new("pondering")) and adjust scroll_offset; on Err avoid
appending and instead call self.push_system(...) and restore the input field
state (or mark the message as failed) so the textarea isn’t cleared while the
message wasn't sent.
In `@src/tui/sidebar.rs`:
- Around line 108-119: The scroll calculation uses lines.len() but wrapping is
enabled, so long logical lines can occupy multiple visual rows and break
scrolling; before computing inner_height/scroll, pre-wrap each entry to visual
lines using the available width (area.width minus 2 for left/right borders) and
produce wrapped_lines, then compute scroll = wrapped_lines.len() > inner_height
? try_from(wrapped_lines.len() - inner_height) : 0 and pass
Paragraph::new(wrapped_lines) (keep or remove Wrap as appropriate), or
alternatively disable Wrap and truncate each logical line to fit the width so
lines.len() matches visual rows.
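The second option (disable wrapping and truncate) is the simpler one; a sketch of the truncation, using character counts so the ellipsis never lands mid-code-point:

```rust
// Force one visual row per logical line: truncate to the sidebar's
// inner width (area minus borders) so lines.len() matches rendered
// rows and the scroll math stays exact.
fn truncate_to_width(line: &str, inner_width: usize) -> String {
    if line.chars().count() > inner_width {
        // Reserve one cell for the ellipsis marker.
        line.chars().take(inner_width.saturating_sub(1)).collect::<String>() + "…"
    } else {
        line.to_string()
    }
}

fn main() {
    assert_eq!(truncate_to_width("short", 20), "short");
    assert_eq!(truncate_to_width("a very long sidebar entry", 10), "a very lo…");
}
```

This ignores double-width glyphs (a cell-width crate would be needed for full correctness), which seems acceptable for sidebar labels.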
---
Nitpick comments:
In `@src/tui/events.rs`:
- Around line 6-17: The current stub implementations of handle_turn_event and
handle_observer_event silently drop updates; replace their no-op bodies with a
loud failure (e.g., call todo! or unimplemented! with a descriptive message) so
wiring them up before the real parser fails fast; specifically update the
function bodies for handle_turn_event and handle_observer_event to call
todo!("handle_turn_event not implemented - replace with real mapping") and
todo!("handle_observer_event not implemented - replace with real mapping") (or
convert them to return Result and return Err with a clear error) so
agent/observer updates do not get silently ignored.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 1e3bb7f1-ec6b-4811-9918-8aaf25714f6c
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`, `!**/Cargo.lock`
📒 Files selected for processing (12)
- Cargo.toml
- src/agent/loop_.rs
- src/agent/mod.rs
- src/main.rs
- src/tui/chat.rs
- src/tui/command.rs
- src/tui/events.rs
- src/tui/input.rs
- src/tui/mod.rs
- src/tui/sidebar.rs
- src/tui/theme.rs
- src/tui/widgets.rs
💤 Files with no reviewable changes (1)
- src/tui/widgets.rs
src/agent/loop_.rs
Outdated
```rust
loop {
    match user_rx.recv().await {
        Some(msg) if msg == "__CANCEL__" => {}
        Some(msg) => {
            match agent.turn_streamed(&msg, event_tx.clone()).await {
                Ok(_) => {}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Verify control/data multiplexing and in-flight cancellation behavior in TUI path.
rg -n --type=rust -C3 '__CANCEL__|run_tui\(|turn_streamed\('
```

Repository: 5queezer/hrafn
Length of output: 4867
🏁 Script executed:
```bash
# 1. Get the full run_tui function to examine error handling and send patterns
sed -n '4945,5201p' src/agent/loop_.rs | cat -n
```

Repository: 5queezer/hrafn
Length of output: 12219
🏁 Script executed:
```bash
# 2. Check the full turn_streamed signature to confirm cancellation support
sed -n '980,990p' src/agent/agent.rs | cat -n
```

Repository: 5queezer/hrafn
Length of output: 597
🏁 Script executed:
```bash
# 3. Examine the TUI code that sends messages to see if user input is validated
rg -n -A 10 -B 3 "CANCEL_SENTINEL|__CANCEL__" src/tui/mod.rs
```

Repository: 5queezer/hrafn
Length of output: 1043
🏁 Script executed:
```bash
# 4. Check if there are any cancellation tokens passed to turn_streamed in any context
rg -n "turn_streamed" src/ -A 2 -B 2 | grep -i "cancel\|token"
```

Repository: 5queezer/hrafn
Length of output: 132
"__CANCEL__" cannot interrupt an in-flight turn and design conflates control/data on a string channel.
Because turn_streamed(...).await blocks the message loop (line 234), cancel is only handled after the active turn finishes (line 232 checks before entry). Additionally, any user message equal to "__CANCEL__" is silently swallowed instead of being sent to the agent. Consider using a typed enum channel (e.g., enum UserCommand { Message(String), Cancel }) or adding a true cancellation token to turn_streamed for mid-turn interruption.
Raw error messages should be sanitized before forwarding to the TUI. Line 239 formats the raw error object directly as format!("\n[error: {e}]"), which may expose sensitive provider or runtime details. Filter or wrap errors before sending to event_tx.
Channel send failures are silently ignored (lines 237, 246). If the TUI closes the event channel, the agent loop continues running. Check send results and break the loop on failure to clean up resources.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/agent/loop_.rs` around lines 5174 - 5179, The loop currently treats
control and data as raw strings on user_rx and only checks for "__CANCEL__"
before calling agent.turn_streamed, so cancellation cannot interrupt an
in-flight turn; change the user_rx message type to a typed enum (e.g.,
UserCommand { Message(String), Cancel }) and update handling in the loop and
callers, or add a cancellation token mechanism passed into agent.turn_streamed
so the turn can be aborted mid-flight (update the turn_streamed signature and
call sites accordingly). Sanitize errors before forwarding to the TUI: replace
direct format!("\n[error: {e}]") with a safe wrapper that maps internal errors
to user-friendly messages (use a helper like sanitize_error or ErrorDisplay) and
send that. Finally, check the Result from event_tx.send(...) wherever used and,
on Err, break the loop/clean up (so a closed TUI channel stops the agent loop)
instead of silently ignoring send failures.
src/tui/mod.rs
Outdated
```rust
self.messages.push(ChatMessage::User { text: text.clone() });
if self.auto_scroll {
    self.scroll_offset = u16::MAX;
}
```
The scroll state mixes a sentinel with real line offsets.
src/tui/chat.rs passes scroll_offset straight into Paragraph::scroll, so this value is line-based. Using u16::MAX as “bottom” and comparing it against messages.len() breaks multiline content: PageUp stays pinned at the bottom, and PageDown can re-enable auto-scroll far too early once a few wrapped/markdown lines are on screen.
Also applies to: 295-306, 453-465
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/mod.rs` around lines 282 - 285, The code mixes a u16::MAX sentinel
with real line offsets for scroll_offset, which breaks wrapped/multiline
scrolling; change scroll_offset from a raw u16 sentinel to Option<u16> (None =
anchored at bottom/auto-scroll, Some(n) = explicit line offset), update places
that set it (e.g. where self.messages.push(...) and where auto_scroll is
toggled) to set None when auto_scroll is true and Some(pos) when user scrolls,
and update all comparisons/usages (including the code in src/tui/chat.rs that
passes scroll_offset into Paragraph::scroll and the blocks around the other
occurrences at the ranges corresponding to lines ~295-306 and ~453-465) to
handle Option properly (treat None as bottom and map Some(n) to the numeric
offset) so PageUp/PageDown and multiline wrapping behave correctly.
- Fix UTF-8 panic: use char_indices().nth(500) instead of byte slicing
- Stream chunks incrementally instead of batching entire turns
- Send user input before pushing ChatMessage::User to avoid phantom messages
- Sanitize agent errors before forwarding to TUI (no raw internals)
- Break agent loop when event_tx channel closes
- Parse [tool:NAME] and [result:NAME] tags in TUI drain loop so tool blocks actually render instead of flattening to Assistant messages
- Scroll command palette window with selection (skip/take around selected)
- Disable sidebar wrapping, truncate long lines for accurate scroll math
- Make events.rs stubs loud (todo!()) instead of silent no-ops

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/tui/mod.rs (1)
281-292: Differentiate `try_send` full vs closed errors. The current message always says "channel full", but a closed channel is a different failure mode and should be surfaced distinctly.
Minimal refinement
```diff
 _ => match tx.try_send(text.clone()) {
     Ok(()) => {
         self.messages.push(ChatMessage::User { text });
         if self.auto_scroll {
             self.scroll_offset = u16::MAX;
         }
         self.spinner = Some(SpinnerState::new("pondering"));
     }
-    Err(_) => {
-        self.textarea.insert_str(&text);
-        self.push_system("[send failed — channel full]".into());
+    Err(err) => {
+        self.textarea.insert_str(&text);
+        match err {
+            mpsc::error::TrySendError::Full(_) => {
+                self.push_system("[send failed — channel full]".into());
+            }
+            mpsc::error::TrySendError::Closed(_) => {
+                self.push_system("[send failed — agent channel closed]".into());
+            }
+        }
     }
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/tui/mod.rs` around lines 281 - 292, The error handling for tx.try_send in the send branch conflates a full channel and a closed channel; change the Err(_) arm to match on tokio::sync::mpsc::error::TrySendError (or TrySendError<...>) and handle TrySendError::Full(_) by restoring textarea, pushing the existing "[send failed — channel full]" system message and leaving behavior intact, and handle TrySendError::Closed(_) by restoring textarea and pushing a distinct "[send failed — channel closed]" system message (use the existing self.textarea.insert_str and self.push_system calls for both paths, matching on tx.try_send and preserving spinner/auto_scroll logic).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/tui/mod.rs`:
- Around line 267-270: The "/clear" command currently only clears self.messages
and resets self.scroll_offset, leaving streaming/tool state (self.pending_chunk,
self.active_tools, self.spinner) intact; update the "/clear" branch to also
reset self.pending_chunk to None or empty, clear/stop self.active_tools (e.g.,
clear the collection and ensure any tool state is stopped), and reset
self.spinner to its idle/default state so no stale UI elements remain after
clearing messages.
---
Nitpick comments:
In `@src/tui/mod.rs`:
- Around line 281-292: The error handling for tx.try_send in the send branch
conflates a full channel and a closed channel; change the Err(_) arm to match on
tokio::sync::mpsc::error::TrySendError (or TrySendError<...>) and handle
TrySendError::Full(_) by restoring textarea, pushing the existing "[send failed
— channel full]" system message and leaving behavior intact, and handle
TrySendError::Closed(_) by restoring textarea and pushing a distinct "[send
failed — channel closed]" system message (use the existing
self.textarea.insert_str and self.push_system calls for both paths, matching on
tx.try_send and preserving spinner/auto_scroll logic).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f08c7bc4-34a7-484a-8e59-643a79006b42
📒 Files selected for processing (6)
- src/agent/loop_.rs
- src/main.rs
- src/tui/command.rs
- src/tui/events.rs
- src/tui/mod.rs
- src/tui/sidebar.rs
🚧 Files skipped from review as they are similar to previous changes (4)
- src/tui/events.rs
- src/main.rs
- src/tui/sidebar.rs
- src/agent/loop_.rs
```rust
"/clear" => {
    self.output.clear();
    self.messages.clear();
    self.scroll_offset = 0;
}
```
/clear leaves residual streaming/tool state.
Line 268 clears messages, but pending_chunk, active_tools, and spinner are retained, so stale UI can remain after a clear.
Minimal patch
```diff
 "/clear" => {
     self.messages.clear();
+    self.pending_chunk.clear();
+    self.active_tools.clear();
+    self.spinner = None;
     self.scroll_offset = 0;
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
"/clear" => {
    self.output.clear();
    self.messages.clear();
    self.scroll_offset = 0;
}
```

```rust
"/clear" => {
    self.messages.clear();
    self.pending_chunk.clear();
    self.active_tools.clear();
    self.spinner = None;
    self.scroll_offset = 0;
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/tui/mod.rs` around lines 267 - 270, The "/clear" command currently only
clears self.messages and resets self.scroll_offset, leaving streaming/tool state
(self.pending_chunk, self.active_tools, self.spinner) intact; update the
"/clear" branch to also reset self.pending_chunk to None or empty, clear/stop
self.active_tools (e.g., clear the collection and ensure any tool state is
stopped), and reset self.spinner to its idle/default state so no stale UI
elements remain after clearing messages.
Summary
- Restructured src/tui/ into a 7-module architecture: chat, sidebar, input, command palette, events, theme
- agent::run_tui() calling turn_streamed() with TurnEvent bridge

Layout

Key decisions

- agent::run_tui() replicates subsystem setup from run() rather than modifying the existing function

Test plan

- cargo check --features tui passes
- cargo clippy --features tui -- -D warnings passes
- cargo check (default features) passes
- hrafn with no args → TUI opens in alternate screen

🤖 Generated with Claude Code
Summary by CodeRabbit
New Features
UI Improvements