
Blank assistant text with AI SDK v6 and MCP on: finish-reason regression when using responses api with @ai-sdk/openai #20465

@kkugot

Description


Bug Description

Starting from v1.3.4, assistant text sometimes does not appear in the TUI when MCP servers are enabled. Disabling any MCP server immediately restores the response. Upstream logs and older versions (v1.3.3) confirm the assistant response exists — this is a local regression, not an upstream issue.

Root Cause

v1.3.4 upgraded the ai package from AI SDK v5 to AI SDK v6.

When the model finishes a generation step, it sends back a "finish reason" — a short label explaining why it stopped. Common ones are stop (done talking), tool-calls (wants to use a tool), or length (hit token limit).
OpenCode has a loop that keeps running when the model needs to do more work. It checks the finish reason to decide: should I keep going, or stop?

Before v1.3.4 (AI SDK v5):

  • Unknown/unmapped finish reasons came back as "unknown"
  • The loop said: keep going on "tool-calls" or "unknown"
  • So if the model returned something unexpected, the loop kept running and eventually got the final text

After v1.3.4 (AI SDK v6):

  • The same unmapped finish reasons now come back as "other" instead of "unknown"
  • But nobody updated the loop — it still only continued on "tool-calls" and "unknown"
  • "other" was not in the list, so the loop stopped immediately
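The before/after behavior can be sketched in a few lines. These identifiers are illustrative, not OpenCode's actual code; only the "unknown" → "other" default change and the continue sets come from the description above.

```typescript
// Finish reasons relevant to the regression (illustrative union type).
type FinishReason = "stop" | "length" | "tool-calls" | "unknown" | "other";

// Pre-v1.3.4: AI SDK v5 surfaced unmapped finish reasons as "unknown",
// and the loop continued on either of these.
const shouldContinueV5 = (r: FinishReason): boolean =>
  ["tool-calls", "unknown"].includes(r);

// v1.3.4: AI SDK v6 surfaces unmapped finish reasons as "other",
// but the continue set was never updated to include it.
const shouldContinueV6 = (r: FinishReason): boolean =>
  ["tool-calls"].includes(r);

// The same unmapped provider response, before and after the upgrade:
console.log(shouldContinueV5("unknown")); // true  -> loop keeps running
console.log(shouldContinueV6("other"));   // false -> loop exits early
```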

This introduced two interacting changes:

1. Finish-reason mapping changed

The default (unmapped) finish reason changed from unknown to other in the Responses adapter:

  • packages/opencode/src/provider/sdk/copilot/responses/map-openai-responses-finish-reason.ts

2. Prompt loop exit logic removed "unknown" without adding "other"

In packages/opencode/src/session/prompt.ts, the main prompt loop previously continued on ["tool-calls", "unknown"]. In v1.3.4, "unknown" was removed — only "tool-calls" keeps the loop alive. But the new default "other" was never added as a replacement.

This means: after a tool-enabled generation step, if the provider returns an unmapped finish reason (now other), the loop exits immediately — before the model generates final assistant text.

Secondary issue: request middleware param key mismatch

LLM.stream middleware in session/llm.ts still targeted args.params.prompt for applying ProviderTransform.message(...). In AI SDK 6, Responses requests use messages or input instead. This meant message history normalization (tool-call IDs, provider options, content parts) could be silently skipped.
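A minimal sketch of a transformParams-style middleware that covers all three request shapes. All identifiers here are illustrative; normalizeHistory stands in for ProviderTransform.message(...).

```typescript
// Request params can carry message history under any of three keys.
type RequestParams = {
  prompt?: unknown[];   // legacy AI SDK v5 shape
  messages?: unknown[]; // AI SDK v6 chat shape
  input?: unknown[];    // AI SDK v6 Responses shape
  [key: string]: unknown;
};

function transformParams(
  params: RequestParams,
  normalizeHistory: (history: unknown[]) => unknown[],
): RequestParams {
  const out: RequestParams = { ...params };
  // Normalize whichever history key the request actually uses.
  for (const key of ["prompt", "messages", "input"] as const) {
    const history = out[key];
    if (Array.isArray(history)) out[key] = normalizeHistory(history);
  }
  return out;
}
```

Targeting only params.prompt, as the pre-fix middleware did, silently skips v6 Responses requests, which carry their history under input instead.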

Fix

Two changes:

  1. src/session/prompt.ts: Add "other" to both loop-exit continue sets (main loop and structured output loop), replacing the stale "unknown" reference.

  2. src/session/llm.ts: Transform all v6 request param shapes (messages, input, and legacy prompt) in the middleware.
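The loop-side change amounts to adding "other" to the continue set, sketched here with illustrative names:

```typescript
// Corrected loop-exit check: continue on tool use or unmapped reasons.
const CONTINUE_FINISH_REASONS = new Set(["tool-calls", "other"]);

const shouldContinue = (reason: string): boolean =>
  CONTINUE_FINISH_REASONS.has(reason);

console.log(shouldContinue("other")); // true  -> unmapped reasons no longer end the turn
console.log(shouldContinue("stop"));  // false -> normal completion still exits
```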

Steps to Reproduce

  1. Configure a custom provider using npm: "@ai-sdk/openai" (e.g., OracleCode Assist)
  2. Send a prompt that triggers tool usage
  3. Observe blank assistant text in TUI
