Running Nuxt 3 in a Docker Container
In this article, we’ll explore how to dockerize a Nuxt 3 application and how we can use Docker to streamline our development and deployment processes. We’ll set up our Nuxt Docker environment for both production and development. Using Docker to deploy our application, we can ensure consistency across environments and mimic production-like conditions even on our laptops. Creating an additional Docker image for development as well makes it possible to start working on a Nuxt project without ever worrying about installing Node.js or nvm on your local machine, because everything is neatly packaged within a Docker container. Let’s get started!
Building a Nuxt Docker Production Container
To build a Docker production container for our Nuxt 3 application, we first need to create a new Dockerfile. In this Dockerfile, we describe the steps to package our application into a Docker image. Let’s take a closer look:
# syntax = docker/dockerfile:1
ARG NODE_VERSION=20.18.0
FROM node:${NODE_VERSION}-slim as base
ARG PORT=3000
WORKDIR /src
First, we specify the particular NODE_VERSION we want to use. Doing so is the first substantial benefit of using Docker: we can ensure that we run the same Node.js version locally as in production. For most projects, it’s highly recommended to use the latest LTS version of Node.js here.
Specifying the PORT as an ARG allows us to override the port at build time. If we don’t provide the PORT build argument, we set 3000 as the default value. The WORKDIR instruction sets the working directory within the container for any following instructions.
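For example, we can override both build arguments when building the image. The version and port values below are only illustrations; pick whatever fits your project:

```shell
# Build the production image with a custom Node.js version and default port.
# Both values are examples, not recommendations.
docker build \
  --build-arg NODE_VERSION=22.11.0 \
  --build-arg PORT=8080 \
  -t my-app .
```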
Next comes the build stage:
# Build
FROM base as build
COPY --link package.json package-lock.json ./
RUN npm install
COPY --link . .
RUN npm run build
In this stage, we copy over our package.json and package-lock.json files and install our dependencies. We then copy the rest of our application code and run the build command. We split those two COPY steps because this allows Docker to do its caching magic: the npm install command will only run if we change one of the two previously copied files (package.json and package-lock.json). Otherwise, Docker will use the cached node_modules layer, saving us precious time.
Finally, we create the run stage:
# Run
FROM base
# ARG values are stage-scoped, so redeclare PORT for use in ENV below
ARG PORT=3000
ENV PORT=$PORT
ENV NODE_ENV=production
COPY --from=build /src/.output /src/.output
# Optional, only needed if you rely on unbundled dependencies
# COPY --from=build /src/node_modules /src/node_modules
CMD [ "node", ".output/server/index.mjs" ]
We’re starting from our base image again and setting the port environment variable. We then copy over the built application from the build stage. If our application relies on unbundled dependencies, we must also copy over the node_modules directory. But by default, we don’t need to do this! Finally, we define the command to start our application.
As you can see, in this Dockerfile, we’re using Docker’s multi-stage builds. Multi-stage builds allow us to keep our final image small, as we only copy over the built application, leaving behind the dependencies and intermediate files that were only needed to build it.
Moreover, notice the ARG and ENV instructions. ARG allows us to define variables that are available during the build process, while ENV specifies environment variables that will be set in the running container. In this Dockerfile, we’re using ARG to make the Node.js version and the port configurable and ENV to set the Node environment and the port.
The complete Dockerfile looks like this:
# syntax = docker/dockerfile:1
ARG NODE_VERSION=20.18.0
FROM node:${NODE_VERSION}-slim as base
ARG PORT=3000
WORKDIR /src
# Build
FROM base as build
COPY --link package.json package-lock.json ./
RUN npm install
COPY --link . .
RUN npm run build
# Run
FROM base
# ARG values are stage-scoped, so redeclare PORT for use in ENV below
ARG PORT=3000
ENV PORT=$PORT
ENV NODE_ENV=production
COPY --from=build /src/.output /src/.output
# Optional, only needed if you rely on unbundled dependencies
# COPY --from=build /src/node_modules /src/node_modules
CMD [ "node", ".output/server/index.mjs" ]
Before building our container, we should also create a .dockerignore file. This file tells Docker to ignore specific files and directories when building the image. In our case, we’re ignoring the .nuxt, .output, and node_modules directories, as well as the .gitignore and README.md files. Excluding unnecessary files helps improve the cache hit rate and keeps the final image size small.
# .dockerignore
/.nuxt
/.output
/node_modules
.gitignore
README.md
We can now build and run the container using the following commands:
docker build -t my-app .
docker run --rm -it -p 3000:3000 --env-file .env.local --name my-app my-app
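Because Nuxt’s Nitro server reads the PORT environment variable at startup, we can also change the port when running the already-built image. A sketch, assuming the image was built as my-app above:

```shell
# Run the production image on port 8080 instead of the default 3000.
docker run --rm -it \
  -e PORT=8080 \
  -p 8080:8080 \
  --name my-app my-app
```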
Yet we can also use docker compose to simplify running our containers locally.
Running Your Nuxt Dev Environment in Docker
Running Docker containers in production environments has become the norm. However, using Docker to run Nuxt 3 in development mode offers several benefits, too. For example, it ensures that all developers are working in the same environment, reducing the chances of bugs caused by differences in local setups, such as different Node.js versions. It also eliminates the need to manually set up the correct Node.js version on each developer’s machine.
To run our Nuxt 3 dev environment in Docker, we’ll need a separate Dockerfile named Dockerfile.dev. Here’s an example:
# syntax = docker/dockerfile:1
ARG NODE_VERSION=20.18.0
FROM node:${NODE_VERSION}-slim as base
ENV NODE_ENV=development
WORKDIR /src
This part of the Dockerfile is similar to the production version. But this time, we set the Node environment to development.
Next, we have the build stage:
# Build
FROM base as build
COPY --link package.json package-lock.json ./
RUN npm install
Finally, we have the run stage:
# Run
FROM base
COPY --from=build /src/node_modules /src/node_modules
CMD [ "npm", "run", "dev" ]
In the run stage, we’re copying over only the node_modules directory from the build stage. When starting the container, we will mount all the other data we need in dev mode. Last but not least, we define the command to start our application in development mode.
Here is the full version of the final Dockerfile.dev again:
# syntax = docker/dockerfile:1
ARG NODE_VERSION=20.18.0
FROM node:${NODE_VERSION}-slim as base
ENV NODE_ENV=development
WORKDIR /src
# Build
FROM base as build
COPY --link package.json package-lock.json ./
RUN npm install
# Run
FROM base
COPY --from=build /src/node_modules /src/node_modules
CMD [ "npm", "run", "dev" ]
This development Dockerfile allows us to run our Nuxt 3 application in a Docker container in development mode. Using Docker for our development environment ensures that all developers work with the same dependencies and environment, reducing potential issues and simplifying the setup process.
Running Docker Containers
Once we’ve set up our Dockerfiles for production and development, the next step is to build and run our Docker images. However, instead of using the docker build and docker run commands directly, we’ll leverage docker compose to simplify this process.
For our Nuxt 3 application, we’ll create two docker-compose files: one for production (docker-compose.yml) and an override file for development (docker-compose.dev.yml).
Production docker-compose.yml File
Here is an example of a docker-compose.yml file for our production environment:
version: "3"
services:
  my-app:
    build:
      context: .
    ports:
      - "3000:3000"
In this file, we define a service called my-app. The build directive tells Docker Compose to build an image using the Dockerfile in the current directory (specified by context: .). The ports directive maps port 3000 in the container to port 3000 on the host machine.
To build and run our production Docker image, we can use the docker compose up command:
docker compose up --build
With that, we start the services defined in the docker-compose.yml file, and --build tells Docker Compose to build the images before starting the containers.
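If the application needs environment variables (like the .env.local file we passed to docker run earlier), we can declare them in the compose file instead of on the command line. A sketch, assuming a .env.local file exists in the project root:

```yaml
# docker-compose.yml (excerpt)
services:
  my-app:
    build:
      context: .
    env_file:
      - .env.local
    ports:
      - "3000:3000"
```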
Development docker-compose.dev.yml File
Next, let’s create a separate docker-compose.dev.yml file for our development environment. This file will override some directives in the production docker-compose.yml file, tailoring the environment to our development needs. Here is an example docker-compose.dev.yml file:
version: "3"
volumes:
  node_modules:
services:
  my-app-dev:
    build:
      context: .
      dockerfile: ./Dockerfile.dev
    ports:
      - "3000:3000"
      - "24678:24678"
    volumes:
      - .:/src
      - node_modules:/src/node_modules
In this file, we’re defining a new service called my-app-dev. We’re using a different Dockerfile (Dockerfile.dev) and mapping an additional port (24678) to allow Vite to perform hot module reloading.
We’re also mounting the entire project directory into the /src directory in the container, ensuring that changes made in our local development environment are reflected in the running container. However, we’re creating a separate named volume for the node_modules directory to, somewhat counterintuitively, prevent the Docker container from using any locally installed dependencies. Mounting node_modules separately ensures that the host’s node_modules directory doesn’t overwrite the one inside the container, so the dependencies installed during the image build are preserved. This matters because the container runs Linux: if we’re using a different host operating system (e.g., macOS), native dependencies built for the host might not work correctly inside the container.
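Because the container’s node_modules lives in that named volume, new dependencies should be installed from inside the running dev container rather than on the host. A sketch (the package name is just an example):

```shell
# Install a new dependency inside the dev container so it ends up in the
# named node_modules volume instead of the host's node_modules directory.
docker compose -f docker-compose.yml -f docker-compose.dev.yml \
  exec my-app-dev npm install date-fns
```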
To build and run our development Docker image, we can use the docker compose command with the -f option to specify the override file:
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build
In this command, -f docker-compose.yml -f docker-compose.dev.yml tells docker compose to use both the production and development compose files, with the settings in the development file overriding those in the production file, and up --build builds the images and starts the containers.
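Typing both -f flags for every command gets tedious. Docker Compose also honors the COMPOSE_FILE environment variable, which accepts multiple files separated by a path separator (: on Linux/macOS, ; on Windows), so we can set it once per shell session:

```shell
# Set once per shell session; equivalent to passing both -f flags each time.
export COMPOSE_FILE=docker-compose.yml:docker-compose.dev.yml
docker compose up --build
```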
Wrapping It Up
In conclusion, running Nuxt 3 in Docker offers several key advantages. It ensures a consistent environment across different stages of development, from local testing to production deployment. Docker’s layering and caching mechanisms can significantly speed up build times, while its ability to create small, optimized images makes deployment quicker and more efficient.
Locally running a production image allows for realistic testing. Last but not least, a development image ensures that all developers are working in the same environment, eliminating potential bugs caused by differences in local setups.