ChatGPT has recently gained significant attention due to its powerful natural language understanding and generation capabilities. While the official ChatGPT client offers a decent UX, I wanted to create a better client as a pet project. In this article, I’ll share insights on integrating OpenAI response streaming with Nuxt 3, resulting in a ChatGPT-like experience. Let’s dive in!
Prerequisites
Before we begin, ensure you have a basic understanding of the following concepts and tools:
- Nuxt and Vue.js: Familiarity with the Nuxt framework and Vue.js library is essential.
- A Nuxt 3 project: You should have a Nuxt 3 project already set up and running.
- OpenAI API: Some experience with the OpenAI API is helpful.
- An OpenAI API key: To access the ChatGPT model, you’ll need an OpenAI API key.
With these prerequisites in place, we can build our ChatGPT-like application using response streaming and the OpenAI API.
Please note that the examples provided in this guide do not include error handling to keep the code concise and focused on the core concepts. However, when building a real-world application, it’s crucial to handle errors gracefully when fetching data. Be sure to implement proper error handling techniques when adapting these examples for your own projects.
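For example, a minimal guard around a fetch call might look like this (a sketch only, not part of the tutorial code below):

// sketch: fail fast on non-2xx responses before touching the body
const response = await fetch("/api/chat", { method: "POST" });
if (!response.ok) {
  throw new Error(`Request failed with status ${response.status}`);
}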
Setting up
First, install the `openai` package by running `npm install openai`. Next, create a `.env` file in your project’s root directory and add the following line, replacing `YOUR_API_KEY` with your actual API key:
# .env
NUXT_OPEN_AI_SECRET_KEY=YOUR_API_KEY
Don’t forget to add the `.env` file to your `.gitignore` to prevent accidentally sharing your API key.
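For example:

# .gitignore
.env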
To access the environment setting within our Nuxt runtime, we need to add the following to our `nuxt.config.ts` file:
// nuxt.config.ts
export default defineNuxtConfig({
  runtimeConfig: {
    openAi: {
      secretKey: "",
    },
  },
});
Note that the empty `secretKey` is automatically replaced at runtime with the value of `NUXT_OPEN_AI_SECRET_KEY` from our `.env` file: Nuxt maps environment variables prefixed with `NUXT_` onto the matching camel-cased `runtimeConfig` keys.
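If you want to double-check that the key is picked up, you can read it from any server-side handler. For example (a hypothetical throwaway route, for illustration only):

// server/api/health.get.js — hypothetical, only to verify the config is loaded
export default defineEventHandler(() => {
  const config = useRuntimeConfig();
  // never return the key itself; just confirm it is present
  return { hasKey: Boolean(config.openAi.secretKey) };
});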
Integrating OpenAI API with Nuxt
Now let’s create a new file called `ai.js` in our Nuxt project’s `server/utils` directory:
// server/utils/ai.js
import { Configuration, OpenAIApi } from "openai";

const config = useRuntimeConfig();

const configuration = new Configuration({
  apiKey: config.openAi.secretKey,
});
const openai = new OpenAIApi(configuration);

export const getChatStream = async ({ messages }) => {
  const response = await openai.createChatCompletion(
    {
      max_tokens: 2048,
      model: "gpt-4", // or `gpt-3.5-turbo`
      temperature: 0.5,
      messages,
      // ask the API to stream tokens as server-sent events
      stream: true,
    },
    // tell axios (used by the v3 SDK) to expose the response as a readable stream
    { responseType: "stream" }
  );
  return response.data;
};
In `getChatStream()`, we wrap the OpenAI API’s `createChatCompletion()` method, which returns a readable stream because we set `stream: true` and `{ responseType: 'stream' }`.
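Under the hood, this stream carries server-sent events: each event is a `data:` line holding a JSON chunk with a partial message, and the stream ends with a `[DONE]` sentinel. The raw payload looks roughly like this (fields abbreviated):

data: {"choices":[{"delta":{"content":"Hello"},"index":0,"finish_reason":null}]}

data: {"choices":[{"delta":{"content":" world"},"index":0,"finish_reason":null}]}

data: [DONE]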
To fetch data from OpenAI in our frontend, we need to create a new API endpoint:
// server/api/chat.post.js
import { getChatStream } from "../utils/ai";

export default defineEventHandler(async (event) => {
  const { messages } = await readBody(event);
  const stream = await getChatStream({ messages });
  // pipe the OpenAI stream straight through to the client
  return sendStream(event, stream);
});
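To sanity-check the endpoint before wiring up the UI, you can consume the raw stream directly, for example from a Node 18+ script where `ReadableStream` is async-iterable (a sketch; it assumes the dev server runs on port 3000):

// sketch: print the endpoint’s raw stream as it arrives
const res = await fetch("http://localhost:3000/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ messages: [{ role: "user", content: "Hi" }] }),
});
for await (const chunk of res.body.pipeThrough(new TextDecoderStream())) {
  process.stdout.write(chunk); // raw `data: {...}` lines
}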
Client-side logic to handle streamed data
The `/api/chat` endpoint enables us to fetch data in a new client-side repository file:
// repositories/chat.js
export const getAnswer = async ({ messages }) => {
  const { body } = await fetch("/api/chat", {
    method: "POST",
    // declare the JSON body so the server parses it reliably
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages,
    }),
  });
  if (!body) throw new Error("Unknown error");
  return body;
};
Next, we create a new file called `chat-stream.js` in the `composables` directory. This file defines a composable function called `useChatStream` that handles the OpenAI API response stream we receive from `getAnswer()`.
// composables/chat-stream.js
const resolveStream = async ({
  data,
  onChunk = () => {},
  onReady = () => {},
  stream,
}) => {
  const reader = stream.pipeThrough(new TextDecoderStream()).getReader();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    // each read may contain several SSE lines of the form `data: {...}`
    const chunks = value
      .replaceAll(/^data: /gm, "")
      .split("\n")
      .filter((c) => Boolean(c.length) && c !== "[DONE]")
      .map((c) => JSON.parse(c));
    for (const chunk of chunks) {
      const content = chunk.choices[0].delta.content;
      if (!content) continue;
      data.value += content;
      onChunk({ data: content });
    }
  }
  onReady({ data: data.value });
};
export const useChatStream = ({
  onChunk = () => {},
  onReady = () => {},
  stream,
}) => {
  const data = ref("");

  resolveStream({
    data,
    onChunk,
    onReady,
    stream,
  });

  return {
    data: readonly(data),
  };
};
The `useChatStream()` function processes a given `stream` and calls the `onChunk` and `onReady` callbacks with the received data. It uses a reactive variable `data` to accumulate the content and returns a read-only version of it. The asynchronous `resolveStream()` function handles text decoding and JSON parsing for the streamed data. We move this functionality into a separate function because composables prefixed with `use` should not be asynchronous, but we need asynchronous code to handle the stream data.
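Here is how the composable is meant to be consumed (a minimal sketch; the full wiring into the chat interface follows in the next section):

// sketch: minimal consumption of useChatStream
const { data } = useChatStream({
  stream, // a ReadableStream, e.g. the return value of getAnswer()
  onChunk: ({ data }) => console.log("token:", data),
  onReady: ({ data }) => console.log("full answer:", data),
});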
Creating the chat interface
Now we’re ready to create a simple chat interface consisting of an input field for user questions and a message area to display the chat messages. Open the `app.vue` file to create our simple chat functionality. Here’s an example:
<script setup>
import { getAnswer } from "./repositories/chat";

const messages = ref([]);
const answer = ref(null);
const question = ref("");

const askQuestion = async () => {
  messages.value.push({
    role: "user",
    content: question.value,
  });
  question.value = "";
  const stream = await getAnswer({ messages: messages.value });

  // render the assistant’s answer while it is still streaming in
  answer.value = {
    role: "assistant",
    content: "",
  };

  useChatStream({
    stream,
    onChunk: ({ data }) => {
      answer.value.content += data;
    },
    onReady: () => {
      // the answer is complete, so move it into the message history
      messages.value.push(answer.value);
      answer.value = null;
    },
  });
};
</script>

<template>
  <form @submit.prevent="askQuestion">
    <ul>
      <li v-for="(message, index) in messages" :key="index">
        {{ message.role }}: {{ message.content }}
      </li>
      <li v-if="answer">{{ answer.role }}: {{ answer.content }}</li>
    </ul>
    <div>
      <label>
        Question:
        <input v-model="question" type="text" />
      </label>
      <button type="submit">Ask</button>
    </div>
  </form>
</template>
We have successfully integrated the OpenAI API with our Nuxt application and implemented streaming functionality for a ChatGPT-like experience.
Wrapping it up
In this article, we explored how to integrate response streaming with a Nuxt application to create a ChatGPT-like experience. By leveraging the power of the OpenAI API, Nuxt, and Vue.js, we built a simple yet functional chat interface that streams responses from the OpenAI API. This tutorial is an excellent starting point for anyone looking to develop a more sophisticated and interactive chatbot using Nuxt and the OpenAI API.
References and resources
For further reading and exploration, check out the following resources:
- Nuxt: https://nuxt.com
- Vue.js: https://vuejs.org
- OpenAI API: https://platform.openai.com/docs/introduction
- ChatGPT by OpenAI: https://chat.openai.com/