
AI SDK: Prevent LLM from Returning Tool Output as Markdown While Keeping It as a UI Component


The AI SDK's streamText() returns two kinds of parts inside a message:

  1. Tool result → a structured object containing the relevant data (this should be rendered as a dedicated UI component).
  2. Text result from the LLM → a response that repeats the tool output in Markdown, even though the data is already displayed as a component.

Ideally, the LLM's response should still reference the tool's output, but without repeating it in full as Markdown. Instead, the LLM should generate a short explanation or some context for the tool result.
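To make the problem concrete, here is roughly the shape of an assistant message as it arrives from useChat (field names follow AI SDK 4.x message parts; the product data and IDs are made up for illustration):

```typescript
// Illustrative assistant message: one tool-invocation part plus the
// unwanted text part that re-renders the same data as a Markdown table.
const assistantMessage = {
  id: 'msg_1',
  role: 'assistant' as const,
  parts: [
    {
      type: 'tool-invocation' as const,
      toolInvocation: {
        state: 'result' as const,
        toolCallId: 'call_1',
        toolName: 'getProducts',
        args: { query: 'laptops' },
        result: [{ id: 1, name: 'Laptop A', price: 999 }],
      },
    },
    {
      type: 'text' as const,
      text: '| Name | Price |\n| --- | --- |\n| Laptop A | 999 |',
    },
  ],
}
```

The first part is what the ProductTable component consumes; the second is the duplicated Markdown I want to suppress.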

Versions

AI SDK 4.1.42
@ai-sdk/openai 1.1.13
React 19.0.0

/compontents/ChatInterface.tsx

'use client'

import { Message as MessageType, useChat } from '@ai-sdk/react'
import { FC, useEffect, useRef } from 'react'
import Message from './Message'
import InputField from './InputField'
import WaitingMessage from '../WaitingMessage'
import ProductTable from '../ProductTable'

interface ComponentProps {
  chatId?: string
  initialMessages: MessageType[],
}

const ChatInterface: FC<ComponentProps> = ({ chatId, initialMessages }) => {
  const { messages, input, handleInputChange, handleSubmit, status, reload } = useChat({ 
    initialMessages, 
    id: chatId, 
    api: '/api/chat',
    sendExtraMessageFields: true,
  })

  const hasReloaded = useRef(false)

  useEffect(() => {
    console.log('initialMessages', initialMessages)
    if (initialMessages.length === 1 && !hasReloaded.current) {
      hasReloaded.current = true
      reload()
    }
  }, [reload, initialMessages])

  return (
    <div className='flex flex-row justify-center pb-2 pt-20 h-dvh'>
      <div className='flex flex-col justify-between gap-4'>
        <div className='flex flex-col gap-6 h-full w-dvw items-center overflow-y-scroll'>
          {messages.map(message =>
            message.parts.map((part, index) => {
              if (part.type === 'tool-invocation') {
                const callId = part.toolInvocation.toolCallId
                if (part.toolInvocation.toolName === 'getProducts') {
                  if (part.toolInvocation.state === 'call') return <WaitingMessage key={callId} message='Getting product information' />
                  if (part.toolInvocation.state === 'result') return <ProductTable key={callId} products={part.toolInvocation.result} />
                }
              }
              // Key includes message.id so text keys don't collide across messages
              if (part.type === 'text') return <Message key={`${message.id}-text-${index}`} role={message.role} text={part.text} />
              return null
            })
          )}
        </div>
        <div className='flex flex-col gap-3 w-dvw items-center'>
          <InputField 
            input={input} 
            status={status} 
            handleSubmit={handleSubmit} 
            handleInputChange={handleInputChange} 
          />
        </div>
      </div>
    </div>
  )
}

export default ChatInterface

/api/chat/route.ts

import { openai } from '@ai-sdk/openai'
import { streamText } from 'ai'
// systemPrompt and tools are defined elsewhere in the project

export async function POST(request: Request) {
  const { messages, id } = await request.json()

  const result = streamText({
    model: openai('gpt-4o-mini'),
    system: systemPrompt,
    messages,
    maxSteps: 10,
    tools,
  })

  return result.toDataStreamResponse()
}
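The tools definition is not shown above; a minimal sketch of what getProducts might look like (plain-object form for illustration only — the real project presumably wraps this with the AI SDK's tool() helper and a zod parameters schema, and the product fields here are made up):

```typescript
// Sketch of the tools object passed to streamText. In the real project the
// tool is defined with the AI SDK's tool() helper and a validated schema.
const tools = {
  getProducts: {
    description: 'Look up products matching a search query',
    // parameters: a zod / JSON-schema object in the real definition (omitted here)
    execute: async ({ query }: { query: string }) => {
      // Placeholder data; the real implementation queries a product database.
      return [{ id: 1, name: `Result for ${query}`, price: 9.99 }]
    },
  },
}
```

Whatever execute returns is both fed back to the model as the tool result and delivered to the client as part.toolInvocation.result, which is why the model can echo it as Markdown.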

Expected Behavior:

The Tool Result is rendered as a UI component. The Text Result should acknowledge or reference the tool result but not duplicate its content in Markdown. Instead, it should provide a brief textual summary.

I tried to solve this by adjusting the system prompt, explicitly instructing the LLM to provide only a short summary and not to repeat the tool output, but it did not work: the LLM still includes the full tool result in Markdown.
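For reference, the system-prompt attempt looked roughly like this (the wording below is illustrative, not the exact prompt used):

```typescript
// Illustrative system prompt (assumed wording); even explicit instructions
// like these did not stop the model from duplicating the tool output.
const systemPrompt = `You are a shopping assistant.
When the getProducts tool returns data, the UI already renders it as a table.
Do NOT repeat the product data in your reply. Respond with one short sentence
of context only, for example: "I found some matching products above."`
```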

Is there a way to configure the AI SDK so that the LLM only generates a short summary while still referring to the tool result, without formatting it as Markdown?
