Markus Oberlehner

Building a ChatGPT Client with Remix: Leveraging Response Streaming for a Chat-like Experience


In a previous article, we built a ChatGPT client using Nuxt 3. This time, we’ll create a similar client using the Remix framework. Again, we’ll leverage response streaming for a chat-like experience. Let’s dive in!

Prerequisites

Before we begin, ensure you have a basic understanding of the following concepts and tools:

- The Remix framework and React
- JavaScript and Node.js (including npm)
- The OpenAI API, including an API key for your account

With these prerequisites in place, we can build our ChatGPT-like application using response streaming and the OpenAI API.

Please note that the examples provided in this guide do not include error handling to keep the code concise and focused on the core concepts. However, when building a real-world application, it’s crucial to handle errors gracefully when fetching data. So be sure to implement proper error-handling techniques when adapting these examples for your projects.
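To give you an idea, a minimal sketch of such a guard around fetch() could look like the following (fetchSafely is a hypothetical helper for illustration, not part of the code in this article):

// Hypothetical helper: a guarded fetch() with basic error handling.
const fetchSafely = async (url, options) => {
  try {
    const response = await fetch(url, options);
    // fetch() only rejects on network failures, so check the HTTP status too.
    if (!response.ok) {
      throw new Error(`Request failed with status ${response.status}`);
    }
    return response;
  } catch (error) {
    // Placeholder: surface the error to the user instead of only logging it.
    console.error(error);
    return null;
  }
};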

Setup

First, install the openai package by running npm install openai (the examples in this article use the v3 flavor of the SDK with its Configuration and OpenAIApi classes). Next, create a .env file in your project’s root directory and add the following line, replacing YOUR_API_KEY with your actual API key:

# .env
OPEN_AI_SECRET_KEY=YOUR_API_KEY

Don’t forget to add the .env file to your .gitignore to prevent accidentally sharing your API key.
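The relevant line looks like this:

# .gitignore
.env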

Integrating the OpenAI API with Remix

Now let’s create a new file called chat.server.js in our Remix project’s app/utils directory. The .server suffix ensures that Remix keeps this module, and with it our API key, out of the client bundle.

// app/utils/chat.server.js
import { Configuration, OpenAIApi } from "openai";

const configuration = new Configuration({
  apiKey: process.env.OPEN_AI_SECRET_KEY,
});
const openai = new OpenAIApi(configuration);

export const getChatStream = async ({ messages }) => {
  const response = await openai.createChatCompletion(
    {
      max_tokens: 2048,
      model: "gpt-4", // or `gpt-3.5-turbo`
      temperature: 0.5,
      messages,
      stream: true,
    },
    { responseType: "stream" },
  );

  return response.data;
};

In getChatStream(), we wrap the OpenAI API’s createChatCompletion() method. Because we set stream: true in the request body and { responseType: "stream" } in the request options, the resolved response.data is a readable stream of partial completions rather than a single parsed JSON object.
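Before we write the client-side parsing code, it helps to know what this stream actually contains. The API emits server-sent-events style messages; abridged and with illustrative values, the raw text looks roughly like this:

data: {"id":"chatcmpl-...","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hello"},"index":0,"finish_reason":null}]}

data: [DONE]

Each data: line carries a JSON chunk with a small delta of the answer, and a final [DONE] marker signals the end of the stream. The parsing code later in this article is built around exactly this format.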

To fetch data from OpenAI in our frontend, we need a new API endpoint. In Remix, that’s a resource route, a route module without a default component export:

// app/routes/api.chat.js
import { Response } from "@remix-run/node";

import { getChatStream } from "../utils/chat.server";

export const action = async ({ request }) => {
  return new Response(
    await getChatStream({
      messages: (await request.json()).messages,
    }),
  );
};
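The client-side reader we’ll write next doesn’t require any particular headers, but if you prefer to be explicit about what the endpoint emits, you could optionally declare a content type. A variation, not needed for the rest of this article:

// Optional variant: the same stream with an explicit content type.
export const action = async ({ request }) => {
  const stream = await getChatStream({
    messages: (await request.json()).messages,
  });
  return new Response(stream, {
    headers: { "Content-Type": "text/event-stream" },
  });
};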

Client-side logic to handle streamed data

With the /api/chat endpoint in place, we can fetch the streamed data from a new client-side repository file:

// app/repositories/chat.js
export const ask = ({ messages }) => {
  return fetch("/api/chat", {
    method: "POST",
    body: JSON.stringify({ messages }),
  });
};

export const processChatResponse = async ({ response, onChunk }) => {
  const reader = response.body.pipeThrough(new TextDecoderStream()).getReader();
  let string = "";
  while (true) {
const { done, value } = await reader.read();
    if (done) break;

    // A single read can contain one or more SSE messages of the
    // form `data: {...}`, followed by a final `data: [DONE]` marker.
    const chunks = value
      .replaceAll(/^data: /gm, "")
      .split("\n")
      .filter((c) => Boolean(c.length) && c !== "[DONE]")
      .map((c) => JSON.parse(c));

    for (const chunk of chunks) {
      const content = chunk.choices[0].delta.content;
      if (!content) continue;
      string += content;
      onChunk(string);
    }
  }
  return string;
};

The processChatResponse() function consumes the response stream returned by ask(). It pipes the raw bytes through a TextDecoderStream to get text, strips the data: prefixes, parses each chunk as JSON, and calls the onChunk callback with the accumulated answer so far. Once the stream signals that it’s done, the function returns the complete answer string.
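In isolation, the two helpers work together roughly like this (the question is only a placeholder):

import { ask, processChatResponse } from "../repositories/chat";

const response = await ask({
  messages: [{ role: "user", content: "What is Remix?" }],
});
const answer = await processChatResponse({
  response,
  // Called with the accumulated answer every time a new chunk arrives.
  onChunk: (partial) => console.log(partial),
});

In the route component below, onChunk updates React state instead of logging, which is what makes the answer appear word by word.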

Now we create two custom hooks for managing chat messages and user input:

// app/hooks/chat.js
import { useState } from "react";

export const useChat = () => {
  const [messages, setMessages] = useState([]);

  const addMessage = (message) =>
    setMessages((prevMessages) => [...prevMessages, message]);

  return { messages, addMessage };
};

// app/hooks/input.js
import { useState } from "react";

export const useInput = () => {
  const [input, setInput] = useState("");

  const handleInputChange = (event) => setInput(event.target.value);
  const resetInput = () => setInput("");

  return { input, handleInputChange, resetInput };
};

Creating the chat interface

Now we’re ready to create a simple chat interface: an input field for user questions and a message area that displays the chat history. Let’s create an app/routes/_index.jsx file for it:

// app/routes/_index.jsx
import { useState } from "react";
import { useChat } from "../hooks/chat";
import { useInput } from "../hooks/input";
import { ask, processChatResponse } from "../repositories/chat";

const ChatMessages = ({ messages }) => {
  return (
    <ul>
      {messages.map((message, index) => (
        <li key={message.content + index}>
          {message.role}: {message.content}
        </li>
      ))}
    </ul>
  );
};

export default function Index() {
  const { messages, addMessage } = useChat();
  const { input: question, handleInputChange, resetInput } = useInput();

  const [answer, setAnswer] = useState("");

  const askQuestion = async (event) => {
    event.preventDefault();

    const messageNew = { role: "user", content: question };
    addMessage(messageNew);
    resetInput();

    const response = await ask({ messages: [...messages, messageNew] });
    if (!response.ok) return;

    const assistantMessageContent = await processChatResponse({
      response,
      onChunk: setAnswer,
    });
    setAnswer("");
    addMessage({ role: "assistant", content: assistantMessageContent });
  };

  return (
    <form onSubmit={askQuestion}>
      <ChatMessages
        messages={[...messages, { role: "assistant", content: answer }].filter(
          (m) => Boolean(m.content),
        )}
      />
      <div>
        <label>
          Question:
          <input value={question} onChange={handleInputChange} />
        </label>
        <button type="submit">Ask</button>
      </div>
    </form>
  );
}

We have successfully integrated the OpenAI API with our Remix application and implemented streaming functionality for a ChatGPT-like experience.

Wrapping it up

This article explored how to integrate response streaming with a Remix application to create a ChatGPT-like experience. By leveraging the power of the OpenAI API, Remix, and React, we built a simple yet functional chat interface that streams responses from the OpenAI API. This tutorial is an excellent starting point for anyone looking to develop a more sophisticated and interactive chatbot using Remix and the OpenAI API.
