
javascript - How to send OpenAI stream response from Nextjs API to client - Stack Overflow


I tried openai-streams + nextjs-openai, but they only work on Node 18+ and fail on Node 17 and lower. I'm restricted to Node 17 or lower because DigitalOcean's App Platform does not currently support Node 18.

I also tried this method, which works well on the client side, but it exposes the API key. I want to implement it within the Next.js API route; however, I'm unable to pass the streaming response to the client.

With the code below, I only get the first chunk of the response from the API route and cannot stream the rest to produce the ChatGPT typing effect. Please kindly help.

// /api/prompt.js

import { Configuration, OpenAIApi } from "openai";
import { Readable } from "readable-stream";

const configuration = new Configuration({
  // note: the NEXT_PUBLIC_ prefix is meant for browser-exposed variables;
  // a server-only secret is usually named without it (e.g. OPENAI_API_KEY)
  apiKey: process.env.NEXT_PUBLIC_OPENAI_API_KEY,
});
const openai = new OpenAIApi(configuration);

export default async function handler(req, res) {
  const completion = await openai.createCompletion(
    {
      model: "text-davinci-003",
      prompt: "tell me a story",
      max_tokens: 500,
      stream: true,
    },
    { responseType: "stream" }
  );

  completion.data.on("data", async (data) => {
    const lines = data
      .toString()
      .split("\n")
      .filter((line) => line.trim() !== "");

    for (const line of lines) {
      const message = line.replace(/^data: /, "");
      if (message === "[DONE]") {
        return;
      }
      try {
        const parsed = JSON.parse(message);
        const string = parsed.choices[0].text;
        Readable.from(string).pipe(res);
      } catch (error) {
        console.error("Could not JSON parse stream message", message, error);
      }
    }
  });
}
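For reference, the line-splitting and `[DONE]` handling above can be factored into a small pure helper, which also makes it easy to test in isolation. This is just a sketch of the same parsing logic; `parseSseChunk` is a name I've made up:

```javascript
// Hypothetical helper (not part of the original code): extracts completion
// tokens from one raw SSE chunk emitted by the OpenAI completions stream.
function parseSseChunk(chunk) {
  const tokens = [];
  const lines = chunk
    .toString()
    .split("\n")
    .filter((line) => line.trim() !== "");

  for (const line of lines) {
    const message = line.replace(/^data: /, "");
    if (message === "[DONE]") break; // end-of-stream sentinel
    try {
      tokens.push(JSON.parse(message).choices[0].text);
    } catch (error) {
      console.error("Could not JSON parse stream message", message, error);
    }
  }
  return tokens;
}
```

With a helper like this, the handler's `data` callback could `res.write()` each token as it arrives instead of piping a fresh `Readable` per chunk.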

// /components/Completion.js

import { useState } from "react";

export default function Completion() {
  const [text, setText] = useState("");

  const generate = async () => {
    const response = await fetch("/api/prompt");
    console.log("response: ", response);
    const text = await response.text();
    console.log("text: ", text);
    setText((state) => state + text);
  };
  
  // ... rest
}
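Even once the API route streams correctly, note that `response.text()` buffers the entire body before resolving, so the client also needs to read the body incrementally to get the typing effect. A sketch of that reading loop (the `onToken` callback is hypothetical, e.g. a React state setter):

```javascript
// Sketch: consume a fetch() response body chunk by chunk instead of
// waiting for the whole thing. `onToken` is a hypothetical callback.
async function readStream(response, onToken) {
  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    // decode with stream: true so multi-byte characters split across
    // chunks are handled correctly
    onToken(decoder.decode(value, { stream: true }));
  }
}
```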

asked May 17, 2023 at 14:15 by samchuang

4 Answers

Here's a solution that uses the latest Next.js (the App Router, not the older Pages Router) and no extra libraries.

It's the complete code, with comments to help you customize it to your needs.

import { NextResponse } from 'next/server'
import type OpenAI from 'openai';

export async function POST(request: Request) {
  const oaiResponse = await openai.chat.completions.create({
    // ...
    stream: true,
  })

  // ...
  const encoder = new TextEncoder()

  async function* makeIterator() {
    // first send the OAI chunks
    for await (const chunk of oaiResponse) {
      const delta = chunk.choices[0].delta.content as string
      // you can do any additional post-processing / transformation step here, like
      completeMessage += delta

      // you can yield any string by `yield encoder.encode(str)`, including JSON:
      yield encoder.encode(JSON.stringify({ assistant_response_chunk: delta }))
    }

    // optionally, some additional info can be sent here, like
    yield encoder.encode(JSON.stringify({ thread_id: thread._id }))
  }

  return new Response(iteratorToStream(makeIterator()))
}


function iteratorToStream(iterator: any) {
  return new ReadableStream({
    async pull(controller) {
      const { value, done } = await iterator.next()

      if (done) {
        controller.close()
      } else {
        controller.enqueue(value)
      }
    },
  })
}

This solution was mainly inspired by Next.js' streaming-response docs, using the web APIs directly.
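One caveat worth noting (my addition, not part of the answer): the iterator yields raw JSON strings back to back, so a single read on the client can contain several concatenated objects, or a partial one. Newline-delimited JSON is a common framing fix. A sketch of a client-side splitter, assuming the server appends `"\n"` after each yielded object:

```javascript
// Sketch: split a buffer of newline-delimited JSON into parsed objects
// plus any trailing partial line, which should be kept and prepended to
// the next read.
function splitNdjson(buffer) {
  const parts = buffer.split("\n");
  const rest = parts.pop(); // possibly incomplete last line
  const objects = parts
    .filter((p) => p.trim() !== "")
    .map((p) => JSON.parse(p));
  return { objects, rest };
}
```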

You can use the openai-streams/node entrypoint on Node <18, which will return a Node.js Readable instead of a WHATWG ReadableStream. I'll update the docs to be clearer soon.


Node: Consuming streams in Next.js API Route

If you cannot use an Edge runtime or want to consume Node.js streams for another reason, use openai-streams/node:

import type { NextApiRequest, NextApiResponse } from "next";
import { OpenAI } from "openai-streams/node";

export default async function test(_: NextApiRequest, res: NextApiResponse) {
  const stream = await OpenAI("completions", {
    model: "text-davinci-003",
    prompt: "Write a happy sentence.\n\n",
    max_tokens: 25,
  });

  stream.pipe(res);
}
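`stream.pipe(res)` as shown is enough for a plain chunked response. If the client should treat the route as server-sent events, you may also want to set streaming-friendly headers before piping. A sketch (my addition, not from the answer; the headers are standard HTTP/SSE headers, not library-specific):

```javascript
// Sketch: send a Node Readable (such as the one returned by
// openai-streams/node above) to the client with explicit
// streaming-friendly headers.
function pipeAsEventStream(stream, res) {
  res.writeHead(200, {
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache, no-transform",
    Connection: "keep-alive",
  });
  // pipe() ends the response automatically when the source ends
  stream.pipe(res);
}
```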

Well, you can use StreamingTextResponse from the Vercel AI SDK.

Attached is some example code from their docs:

import { Configuration, OpenAIApi } from 'openai-edge'
import { OpenAIStream, StreamingTextResponse } from 'ai'

const config = new Configuration({
  apiKey: process.env.OPENAI_API_KEY
})
const openai = new OpenAIApi(config)

export const runtime = 'edge'

export async function POST(req) {
  const { messages } = await req.json()
  const response = await openai.createChatCompletion({
    model: 'gpt-4',
    stream: true,
    messages
  })
  const stream = OpenAIStream(response)
  return new StreamingTextResponse(stream)
}

Notes:

  1. This requires the Edge Runtime
  2. The library openai-edge requires Node 18+. However, it is expected to be deprecated soon once the official openai library supports streaming.

Don't use openai-edge or other third-party wrappers; stick with the official packages/modules where possible.
