Bug Description
Starting from v1.3.4, assistant text sometimes does not appear in the TUI when MCP servers are enabled. Disabling any MCP server immediately restores the response. Upstream logs and older versions (v1.3.3) confirm the assistant response exists — this is a local regression, not an upstream issue.
Root Cause
v1.3.4 upgraded from AI SDK 5 ([email protected]) to AI SDK 6 ([email protected]).
When the model finishes a generation step, it sends back a "finish reason" — a short label explaining why it stopped. Common ones are stop (done talking), tool-calls (wants to use a tool), or length (hit token limit).
OpenCode has a loop that keeps running when the model needs to do more work. It checks the finish reason to decide: should I keep going, or stop?
Before v1.3.4 (AI SDK v5):
- Unknown/unmapped finish reasons came back as "unknown"
- The loop said: keep going on "tool-calls" or "unknown"
- So if the model returned something unexpected, the loop kept running and eventually got the final text
After v1.3.4 (AI SDK v6):
- The same unmapped finish reasons now come back as "other" instead of "unknown"
- But nobody updated the loop — it still only continued on "tool-calls" and "unknown"
- "other" was not in the list, so the loop stopped immediately
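The before/after difference can be sketched in TypeScript (illustrative names only — not OpenCode's actual identifiers):

```typescript
// Illustrative sketch of the loop-continuation check described above.
type FinishReason = "stop" | "length" | "tool-calls" | "unknown" | "other";

// v5-era check: unmapped reasons surfaced as "unknown", which kept the loop alive.
const CONTINUE_V5 = new Set<FinishReason>(["tool-calls", "unknown"]);

// v1.3.4 check: "unknown" was removed, but the new v6 default "other" was never added.
const CONTINUE_V134 = new Set<FinishReason>(["tool-calls"]);

function shouldContinue(reason: FinishReason, continueOn: Set<FinishReason>): boolean {
  return continueOn.has(reason);
}

// An unmapped finish reason kept the v5 loop alive but stops the v1.3.4 loop:
console.log(shouldContinue("unknown", CONTINUE_V5));  // true
console.log(shouldContinue("other", CONTINUE_V134));  // false — premature exit
```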
This introduced two interacting changes:
1. Finish-reason mapping changed
The default (unmapped) finish reason changed from unknown to other in the Responses adapter:
packages/opencode/src/provider/sdk/copilot/responses/map-openai-responses-finish-reason.ts
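A minimal sketch of the mapping change — the raw provider values in the case list are illustrative, not the adapter's actual set; only the default branch matters here:

```typescript
// Illustrative finish-reason mappers showing the changed default.
function mapFinishReasonV5(raw: string | undefined): string {
  switch (raw) {
    case "stop": return "stop";
    case "tool_calls": return "tool-calls";
    default: return "unknown"; // AI SDK v5 default for unmapped reasons
  }
}

function mapFinishReasonV6(raw: string | undefined): string {
  switch (raw) {
    case "stop": return "stop";
    case "tool_calls": return "tool-calls";
    default: return "other"; // AI SDK v6 default — the behavioral change
  }
}
```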
2. Prompt loop exit logic removed "unknown" without adding "other"
In packages/opencode/src/session/prompt.ts, the main prompt loop previously continued on ["tool-calls", "unknown"]. In v1.3.4, "unknown" was removed — only "tool-calls" keeps the loop alive. But the new default "other" was never added as a replacement.
This means: after a tool-enabled generation step, if the provider returns an unmapped finish reason (now other), the loop exits immediately — before the model generates final assistant text.
Secondary issue: request middleware param key mismatch
LLM.stream middleware in session/llm.ts still targeted args.params.prompt for applying ProviderTransform.message(...). In AI SDK 6, Responses requests use messages or input instead. This meant message history normalization (tool-call IDs, provider options, content parts) could be silently skipped.
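The mismatch can be illustrated with a simplified middleware transform — the types and helper names below are assumptions, not OpenCode's actual API:

```typescript
// Simplified request params covering the shapes mentioned above.
type RequestParams = {
  prompt?: object[];   // legacy AI SDK v5 key
  messages?: object[]; // AI SDK v6 key
  input?: object[];    // AI SDK v6 Responses key
};

// Stand-in for ProviderTransform.message(...).
function normalizeHistory(history: object[]): object[] {
  return history.map((m) => ({ ...m, normalized: true }));
}

// Buggy middleware: only rewrites the legacy key, so v6-shaped
// requests pass through with no history normalization at all.
function transformBuggy(params: RequestParams): RequestParams {
  if (params.prompt) params.prompt = normalizeHistory(params.prompt);
  return params;
}
```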
Fix
Two changes:
- src/session/prompt.ts: Add "other" to both loop-exit continue sets (main loop and structured-output loop), replacing the stale "unknown" reference.
- src/session/llm.ts: Transform all v6 request param shapes (messages, input, and legacy prompt) in the middleware.
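A sketch of the two fixes together (simplified; the real files differ):

```typescript
// Fix 1 (prompt.ts): "other" joins the continue set in both loops.
const CONTINUE_FINISH_REASONS = new Set(["tool-calls", "other"]);

function loopShouldContinue(finishReason: string): boolean {
  return CONTINUE_FINISH_REASONS.has(finishReason);
}

// Fix 2 (llm.ts): transform every request param shape, not just the legacy one.
type RequestParams = { prompt?: object[]; messages?: object[]; input?: object[] };

function transformFixed(
  params: RequestParams,
  normalize: (history: object[]) => object[],
): RequestParams {
  for (const key of ["prompt", "messages", "input"] as const) {
    const value = params[key];
    if (value) params[key] = normalize(value);
  }
  return params;
}
```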
Steps to Reproduce
- Configure a custom provider using npm: "@ai-sdk/openai" (e.g., OracleCode Assist)
- Send a prompt that triggers tool usage
- Observe blank assistant text in TUI
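For reference, a custom provider entry along these lines should exercise the affected code path — this is a hedged sketch, and field names other than npm are assumptions rather than the exact OpenCode config schema:

```json
{
  "provider": {
    "my-custom-provider": {
      "npm": "@ai-sdk/openai",
      "options": { "baseURL": "https://example.com/v1" },
      "models": { "my-model": {} }
    }
  }
}
```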
Environment
- Affected versions: v1.3.4 through current (v1.4.3)
- [email protected], @ai-sdk/[email protected]