Building a Chatbot with Next.js Running LLaMA 2 Locally


LLaMA 2, a recently released open-source language model by Meta, is a powerful tool for natural language processing tasks. In this guide, we’ll build a chatbot using LLaMA 2 and Next.js, the popular React framework.

Disclaimer: This is a rough proof-of-concept style implementation you probably don’t want to use in production. However, this is a solid starting point if you want to play around with LLaMA running locally in your Next.js application.

Setting Up Next.js

First, let’s set up a new Next.js project by running the following command in your terminal:

npx create-next-app@latest llama-chatbot

Navigate to your new Next.js application:

cd llama-chatbot

Building the LLaMA 2 Model

Before building our chatbot, we must set up the LLaMA 2 model locally. Running LLaMA 2 locally on your Mac involves cloning the llama.cpp repository, building it, and downloading the model.

For easy access within our Next.js application, we’ll clone the LLaMA project within the root directory of our Next.js project. This setup will help keep our project organized.

In your terminal, navigate to the root directory of your Next.js project and clone the LLaMA repository:

git clone https://github.com/ggerganov/llama.cpp llama

Navigate into the cloned directory:

cd llama

Build the LLaMA model using the LLAMA_METAL=1 flag to enable the Metal backend:

LLAMA_METAL=1 make
Then, download the LLaMA 2 model (insert the download URL of the quantized model file you want to use between the quotes):

wget ""

Or, if wget isn’t installed on your machine, you can use curl instead:

curl -LJO ""

Finally, to ensure the LLaMA project doesn’t interfere with our Next.js project, add it to the .gitignore file:

echo "/llama" >> .gitignore

With these steps, we have successfully set up the LLaMA 2 model locally in our Next.js project.

Building the Chatbot

Next, we’ll integrate the LLaMA 2 model into our Next.js application and build a simple chat interface.

We start by creating a new API route file, src/app/api/chat/route.js. This API endpoint will handle the communication with the LLaMA 2 model.

// src/app/api/chat/route.js
import path from "path";
import { spawn } from "child_process";

const getAnswer = ({ messages }) => {
  const messageString = messages
    .map((m) => {
      if (m.role === "system") {
        return `<s>[INST] <<SYS>>\n${m.content}\n<</SYS>>\n\n`;
      }
      if (m.role === "assistant") {
        return `${m.content}</s><s>[INST] `;
      }
      return `${m.content} [/INST] `;
    })
    .join("");

  // Run the llama.cpp main binary with the downloaded model.
  // Adjust the model path (-m) to the file you downloaded earlier.
  return spawn(
    "./main",
    ["-m", "./llama-2-model.bin", "-t", "8", "-p", messageString],
    {
      cwd: path.join(process.cwd(), "llama"),
    }
  );
};

const getAnswerStream = ({ messages }) => {
  const encoder = new TextEncoder();
  return new ReadableStream({
    start(controller) {
      const llama = getAnswer({ messages });

      // The model echoes the prompt first; only start streaming
      // once the final [/INST] marker has been seen.
      let start = false;
      llama.stdout.on("data", (data) => {
        if (data.includes("[/INST]")) {
          start = true;
          return;
        }
        if (!start) return;

        const chunk = encoder.encode(String(data));
        controller.enqueue(chunk);
      });

      llama.stderr.on("data", (data) => {
        // TODO error handling
      });

      llama.on("close", () => {
        controller.close();
      });
    },
  });
};

export async function POST(request) {
  const { messages } = await request.json();

  if (!messages) {
    return new Response("No message in the request", { status: 400 });
  }

  return new Response(getAnswerStream({ messages }));
}

The getAnswer() function spawns a child process to run the LLaMA 2 model with a set of arguments. The arguments include the path to the model, the number of threads to use, and the text to process.
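The prompt string deserves a closer look. The mapping in getAnswer() can be sketched as a standalone function (a simplified version; the template tokens follow Meta’s LLaMA 2 chat format):

```javascript
// Build a LLaMA 2 chat prompt string from an array of chat messages.
// System messages are wrapped in <<SYS>> tags, assistant turns close the
// previous [INST] block, and user turns end with [/INST] so the model
// knows it should start answering.
const toLlamaPrompt = (messages) =>
  messages
    .map((m) => {
      if (m.role === "system") {
        return `<s>[INST] <<SYS>>\n${m.content}\n<</SYS>>\n\n`;
      }
      if (m.role === "assistant") {
        return `${m.content}</s><s>[INST] `;
      }
      return `${m.content} [/INST] `;
    })
    .join("");

console.log(
  toLlamaPrompt([
    { role: "system", content: "You are a philosopher." },
    { role: "user", content: "What is truth?" },
  ])
);
```

For a system message followed by a user message, this produces a single string ending in `[/INST] `, which is exactly the point at which the model starts generating its answer.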

The getAnswerStream() function creates a ReadableStream that processes the data returned by the LLaMA 2 model. It uses the TextEncoder API to convert the string data into a stream of chunks.
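To see what the encoder is doing, here is a minimal sketch with no model involved: each string becomes a Uint8Array chunk, and a TextDecoder in streaming mode reassembles the pieces on the receiving end.

```javascript
// TextEncoder turns strings into Uint8Array byte chunks, which is what
// ReadableStream controllers (and Response bodies) expect.
const encoder = new TextEncoder();
const chunks = ["Hello, ", "world!"].map((s) => encoder.encode(s));

// chunks[0] is binary data, not a string.
console.log(chunks[0] instanceof Uint8Array); // true

// A TextDecoder with { stream: true } reassembles the chunks correctly,
// even if a multi-byte character were split across two chunks.
const decoder = new TextDecoder();
let text = "";
for (const chunk of chunks) {
  text += decoder.decode(chunk, { stream: true });
}
text += decoder.decode(); // flush any buffered bytes
console.log(text); // "Hello, world!"
```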

The POST() function is an asynchronous function that handles POST requests to the /api/chat route. It extracts the messages from the request body and passes them to the getAnswerStream() function.
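A client (such as the ai library we’ll use below) consumes this streaming response chunk by chunk. Done by hand, it looks roughly like the following sketch; here an in-memory stream stands in for the response body, but with a real request you would pass `res.body` from `await fetch("/api/chat", …)` instead.

```javascript
// Read a ReadableStream of Uint8Array chunks into a single string,
// decoding incrementally as chunks arrive.
async function readStreamToText(stream) {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let text = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    text += decoder.decode(value, { stream: true });
  }
  return text + decoder.decode();
}

// Stand-in for a streaming /api/chat response body.
const encoder = new TextEncoder();
const fakeBody = new ReadableStream({
  start(controller) {
    controller.enqueue(encoder.encode("Truth is "));
    controller.enqueue(encoder.encode("a mirror."));
    controller.close();
  },
});

readStreamToText(fakeBody).then((text) => console.log(text)); // "Truth is a mirror."
```

In the real application you never call readStreamToText() yourself; the useChat() hook from the ai library performs this incremental read and updates the messages state as chunks arrive.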

Building the LLaMA Chat UI with React

First, install the ai library, which we’ll use to build the chat interface:

npm install ai

This library provides a set of React hooks for building chat interfaces.

Finally, create a new file in the src/app directory called Home.tsx, add the following code, and render the Home component from src/app/page.tsx:

// src/app/Home.tsx
"use client";
import { useChat } from "ai/react";

export default function Home() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    api: "/api/chat",
    initialMessages: [
      { id: "system", role: "system", content: "You are a philosopher." },
    ],
  });

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((message) => (
        <p key={message.id}>
          {message.role}: {message.content}
        </p>
      ))}
      <input value={input} onChange={handleInputChange} />
      <button type="submit">Send Message</button>
    </form>
  );
}
The useChat() hook from the ai library provides a set of functions and variables for managing the chat interface. The initialMessages prop is used to set the initial state of the chat.

The handleSubmit() function sends a POST request to the /api/chat route whenever the user submits the form. The handleInputChange() function updates the state of the input field whenever the user types a message.

Wrapping Up

That’s it! We’ve successfully built a chatbot with the LLaMA 2 model and Next.js. This chatbot can serve as a starting point for more complex applications, such as a customer service bot or a language learning assistant. Feel free to experiment and enhance your chatbot with the capabilities of the LLaMA 2 model. Happy coding!
