
is there any way to stream response word by word of chatgpt api directly in react native (with javascript) - Stack Overflow


I want to use the ChatGPT Turbo API directly in React Native (Expo) with a word-by-word stream. Here is a working example without streaming:

fetch(`https://api.openai.com/v1/chat/completions`, {
  body: JSON.stringify({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: 'hello' }],
    temperature: 0.3,
    max_tokens: 2000,
  }),
  method: 'POST',
  headers: {
    'content-type': 'application/json',
    Authorization: 'Bearer ' + API_KEY,
  },
}).then((response) => {
  console.log(response); //If you want to check the full response
  if (response.ok) {
    response.json().then((json) => {
      console.log(json); //If you want to check the response as JSON
      console.log(json.choices[0].message.content); //HERE'S THE CHATBOT'S RESPONSE
    });
  }
});

What can I change to stream the data word by word?


Asked Mar 23, 2023 at 17:37 by Ibad
  • What do you mean by stream data word by word? – devpolo Commented Mar 23, 2023 at 17:43
  • @devpolo Like on the ChatGPT website: it streams word by word. – Ibad Commented Mar 23, 2023 at 17:45
  • I tried passing stream: true in the body, but it does not work. I also asked GPT; it gives answers that work on the web but not in React Native. – Ibad Commented Mar 23, 2023 at 17:50
  • The snippet is a REST-based API call. – Anshuman Kumar Commented Mar 29, 2023 at 8:35
  • 2 We've used this React hook and it seems to work: github.com/XD2Sketch/react-chat-stream – Kevin Goedecke Commented Jul 13, 2023 at 11:31

4 Answers


OpenAI APIs rely on SSE (Server-Sent Events) to stream the response back to you. If you pass the stream parameter in your API request, you will receive chunks of data as they are generated by OpenAI. This creates the illusion of a real-time response that mimics someone typing.

The hardest part to figure out might be how to connect your frontend with your backend. Every time the backend receives a new chunk, you want to display it in the frontend.
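For reference, with stream: true the raw body of a chat completions response is a series of SSE lines that look roughly like this (fields abbreviated; the older completions endpoint used in the snippet below puts the text in choices[0].text instead of a delta):

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":"Hello"},"finish_reason":null}]}

data: {"id":"chatcmpl-abc123","object":"chat.completion.chunk","choices":[{"index":0,"delta":{"content":" world"},"finish_reason":null}]}

data: [DONE]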

I created a simple Next.js project on Replit that demonstrates just that. Live demo

You will need to install the better-sse package:

npm install better-sse

Server side, in an API route file:

import {createSession} from "better-sse";

const session = await createSession(req, res);
if (!session.isConnected) throw new Error('Not connected');

// assumes an `openai` client created with the older openai v3 SDK (new OpenAIApi(configuration))
const { data } = await openai.createCompletion({
  model: 'text-davinci-003',
  n: 1,
  max_tokens: 2048,
  temperature: 0.3,
  stream: true,
  prompt: `CHANGE TO YOUR OWN PROMPTS`
}, {
  timeout: 1000 * 60 * 2,
  responseType: 'stream'
});

//what to do when receiving data from the API
data.on('data', text => {
  const lines = text.toString().split('\n').filter(line => line.trim() !== '');
  for (const line of lines) {
    const message = line.replace(/^data: /, '');
    if (message === '[DONE]') { //OpenAI sends [DONE] to say it's over
      session.push('DONE', 'error');
      return;
    }
    try {
      const { choices } = JSON.parse(message);
      session.push({text:choices[0].text});
    } catch (err) {
      console.log(err);
    }
  }
});

// connection is closed
data.on('close', () => { 
  console.log("close")
  res.end();
});

data.on('error', (err) => {
  console.error(err);
});

On your front end you can now call this API route

let [result, setResult] = useState("");

//create the sse connection
const sse = new EventSource(`/api/completion?prompt=${inputPrompt}`);

// listen to incoming messages
sse.addEventListener("message", ({ data }) => {
  let msgObj = JSON.parse(data)
  setResult((r) => r + msgObj.text)
});
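The route above pushes a 'DONE' payload on an event named 'error' when OpenAI signals the end of the stream, so a minimal sketch for ending the connection on the client (following the event names used in that snippet) could be:

// close the SSE connection when the stream has finished or the connection drops
sse.addEventListener("error", (event) => {
  // a MessageEvent carrying data is the end-of-stream marker pushed by the server;
  // a plain error event without data means the connection itself failed
  if (!event.data || String(event.data).includes("DONE")) {
    sse.close();
  }
});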

Hope this makes sense and helps others with a similar issue.

Visit this GitHub repository: https://github.com/jonrhall/openai-streaming-hooks. I recommend exploring this library as it offers React hooks that function solely on the client side, requiring no server support.

React Native streaming is now possible with https://github.com/react-native-community/fetch.

This was actually a bug that was not addressed by the RN team for a while, and this repo emerged to provide a better fetch that complies with the WHATWG spec:

This is a fork of GitHub's fetch polyfill, the fetch implementation React Native currently provides. This project features an alternative fetch implementation directly built on top of React Native's Networking API instead of XMLHttpRequest for performance gains. At the same time, it aims to fill in some gaps of the WHATWG specification for fetch, namely the support for text streaming.

Install

$ npm install react-native-fetch-api --save

Usage

fetch('https://jsonplaceholder.typicode.com/todos/1', { reactNative: { textStreaming: true } })
  .then(response => response.body)
  .then(stream => ...)

You can use the stream object like the normal browser fetch.
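If you are unsure what to do with the stream object, a minimal sketch for reading it chunk by chunk could look like this (it assumes TextDecoder is available or polyfilled, which is covered further down in this thread):

fetch('https://jsonplaceholder.typicode.com/todos/1', { reactNative: { textStreaming: true } })
  .then(response => response.body)
  .then(async (stream) => {
    const reader = stream.getReader();
    const decoder = new TextDecoder('utf-8');
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      console.log(decoder.decode(value, { stream: true })); // each decoded text chunk
    }
  });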

Hope this helps!

If you want to use the ChatGPT API directly in React Native (Expo) with word-by-word (more precisely, chunk-by-chunk) streaming, then you may take a look at the examples from their documentation on streaming: https://platform.openai.com/docs/api-reference/streaming

Here is an example for TS/JS. Note that we use the openai library, which you need to install and configure in your project along with your API_KEY. More details here.

Also note that we are just passing the stream: true parameter to make the response stream.

import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{
      role: "user",
      content: "Say this is a test"
    }],
    stream: true,
  });
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || "");
  }
}

main();
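Keep in mind that process.stdout.write is a Node.js API and does not exist in React Native. A rough sketch of the same loop inside a component (the component name, state, and API_KEY are placeholders, and shipping an API key in a client app is not safe for production):

import { useState } from "react";
import { Text } from "react-native";
import OpenAI from "openai";

// dangerouslyAllowBrowser may be needed because the SDK treats React Native as a browser-like environment
const openai = new OpenAI({ apiKey: API_KEY, dangerouslyAllowBrowser: true });

export default function Answer() {
  const [answer, setAnswer] = useState("");

  async function ask() {
    const stream = await openai.chat.completions.create({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: "Say this is a test" }],
      stream: true,
    });
    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content || "";
      setAnswer((prev) => prev + delta); // append each streamed chunk to the UI
    }
  }

  return <Text onPress={ask}>{answer || "Tap to ask"}</Text>;
}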

If you want to use fetch directly, then take a look at this example.

const response = await fetch("https://api.openai./v1/chat/pletions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer YOUR_OPENAI_API_KEY`, // Replace with your API key
  },
  body: JSON.stringify({
    model: "gpt-4o-mini", // Replace with your desired model
    messages: [{
      role: "user",
      content: "Say this is a test"
    }],
    stream: true, // Enable streaming
  }),
});

const reader = response.body.getReader();
const decoder = new TextDecoder("utf-8");

while (true) {
  const { done, value } = await reader.read();
  if (done) break; // Exit the loop if stream is done

  const chunk = decoder.decode(value, { stream: true });
  // Process each chunk of the streamed response
  const parsedChunks = chunk.split("\n").filter(Boolean); // Split response into lines

  for (const line of parsedChunks) {
    if (line.startsWith("data: ")) {
      const json = line.slice("data: ".length);
      if (json === "[DONE]") return; // End of the stream

      try {
        const parsed = JSON.parse(json);
        const content = parsed.choices[0]?.delta?.content || "";
        process.stdout.write(content);
      } catch (err) {
        console.error("Error parsing JSON: ", err);
      }
    }
  }
}
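One caveat with the loop above: a single read() is not guaranteed to deliver complete data: lines, so an SSE message can be split across two chunks. A slightly more defensive sketch (reusing the reader and decoder from the snippet above) buffers the trailing partial line:

let buffer = "";
let finished = false;
while (!finished) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop() || ""; // keep the last, possibly incomplete, line for the next read
  for (const line of lines) {
    if (!line.startsWith("data: ")) continue;
    const json = line.slice("data: ".length);
    if (json === "[DONE]") { finished = true; break; }
    try {
      const parsed = JSON.parse(json);
      console.log(parsed.choices[0]?.delta?.content || "");
    } catch (err) {
      console.error("Error parsing JSON: ", err);
    }
  }
}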

!!!!IMPORTANT!!!!

Note: for streaming to work in React Native environments, you should polyfill the missing ReadableStream and TextEncoder.

For Expo you need to:

Create an index.js file and make your polyfill the first import:

import 'polyfill'
import 'expo-router/entry'

Then change the main field in package.json to point to "main": "./index", as stated here.
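For example, in package.json:

{
  "main": "./index"
}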

Here is how I polyfilled it in my Expo project:

// /index.js
import { polyfillGlobal } from "react-native/Libraries/Utilities/PolyfillFunctions";
import { ReadableStream } from "web-streams-polyfill";
import { fetch, Headers, Request, Response } from "react-native-fetch-api";

polyfillGlobal("ReadableStream", () => ReadableStream);
polyfillGlobal(
  "fetch",
  () =>
    (...args) =>
      fetch(args[0], {
        ...args[1],
        reactNative: {
          textStreaming: true,
        },
      }),
);
polyfillGlobal("Headers", () => Headers);
polyfillGlobal("Request", () => Request);
polyfillGlobal("Response", () => Response);

import "expo-router/entry";

If you receive a "TextEncoder doesn't exist" error

Try to also polyfill TextEncoder, or use the package directly. I used import encoding from "text-encoding" and then const decoder = new encoding.TextDecoder("utf-8") to decode values from my response.body.getReader().read().
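Put together, that looks roughly like this (assuming response comes from a fetch with textStreaming enabled, as above):

import encoding from "text-encoding";

const decoder = new encoding.TextDecoder("utf-8");

// decode each value read from the stream
const reader = response.body.getReader();
const { value } = await reader.read();
console.log(decoder.decode(value));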

Hope this helps; feel free to comment and ask additional questions if it doesn't work for you :)
