In this article, we’ll dive into the world of Nuxt 3 and Docker, exploring how they can work together to streamline our development and deployment processes. We’ll walk through setting up a Nuxt 3 application in a Docker environment for production and development. We’ll also touch on the benefits of this approach, from keeping developer environments consistent to running our tests in an environment that mirrors production. Let’s get started!
Running Nuxt 3 in Docker
Docker is a platform that allows us to package our application and its dependencies into a standardized unit known as a container. We can run such a container on any system that has Docker installed, ensuring consistent behavior across different environments.
For Nuxt 3 specifically, Docker allows us to pin the exact versions of Node.js and other dependencies that our application needs, eliminating the “it works on my machine” problem.
Docker’s ability to cache build steps and to create small, optimized images for deployment can significantly reduce the footprint of our Nuxt 3 application, making it quicker to deploy to production.
Building a Docker Production Container
Building a Docker production container for our Nuxt 3 application involves creating a Dockerfile that describes the steps to package our application into a Docker image. Let’s take a closer look:
# syntax = docker/dockerfile:1
ARG NODE_VERSION=18.14.2
FROM node:${NODE_VERSION}-slim as base
ARG PORT=3000
ENV NODE_ENV=production
WORKDIR /src
First, we specify the particular NODE_VERSION we want to use. Doing so is the first substantial benefit of using Docker: we can ensure that we run the same Node.js version locally as on production. Declaring the port as an ARG allows us to override it when building the image; if we don’t provide the PORT build argument, it defaults to 3000. Because this is the Dockerfile for our production system, we set NODE_ENV to production. The WORKDIR instruction sets the working directory for any instructions that follow it in the Dockerfile.
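Both build arguments can be overridden when building the image. A quick sketch of how that might look, assuming we build from the project root (the image tag is just an example):

```shell
# Build with a different Node.js version and an explicit port
docker build --build-arg NODE_VERSION=18.19.0 --build-arg PORT=3000 -t my-nuxt-app .
```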
Next, we have the build stage:
# Build
FROM base as build
COPY --link package.json package-lock.json ./
RUN npm install --production=false
COPY --link . .
RUN npm run build
RUN npm prune
In this stage, we copy over our package.json and package-lock.json files and install our dependencies. We then copy the rest of our application code and run the build command. We split those two COPY steps because this allows Docker to do its caching magic: the npm install command only reruns if one of the two previously copied files changes. Otherwise, Docker uses the cached node_modules layer, saving us some time. Finally, we use the npm prune command to remove unnecessary dependencies, reducing the image size.
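If cold-cache installs are still slow, BuildKit also supports cache mounts, which persist npm’s download cache across builds even when the lockfile changes. A possible variant of the install step, assuming BuildKit is enabled (the # syntax line at the top of the Dockerfile takes care of that):

```dockerfile
# Persist npm's download cache (/root/.npm) between builds via a BuildKit cache mount
RUN --mount=type=cache,target=/root/.npm \
    npm install --production=false
```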
Finally, we have the run stage:
# Run
FROM base
ARG PORT=3000
ENV PORT=$PORT
COPY --from=build /src/.output /src/.output
# Optional, only needed if you rely on unbundled dependencies
# COPY --from=build /src/node_modules /src/node_modules
CMD [ "node", ".output/server/index.mjs" ]
We’re starting from our base image again and setting the PORT environment variable. We then copy over the built application from the build stage. If our application relies on unbundled dependencies, we must also copy over the node_modules directory; by default, we don’t need to. Finally, we define the command to start our application.
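To sanity-check the image with a quick one-off run before introducing any further tooling, we can build and start it directly; the image tag here is just an example:

```shell
# Build the production image and run it, publishing port 3000 on the host
docker build -t my-nuxt-app .
docker run --rm -p 3000:3000 my-nuxt-app
```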
In this Dockerfile, we’re making use of Docker’s multi-stage builds. Multi-stage builds allow us to keep our final image small, as we only copy over the built application and any necessary dependencies.
Moreover, notice the ARG and ENV instructions. ARG defines variables that are available during the build process, while ENV sets environment variables in the running container. In this Dockerfile, we’re using ARG to make the Node.js version and the port configurable and ENV to set the Node environment and the port.
The complete Dockerfile looks like this:
# syntax = docker/dockerfile:1
ARG NODE_VERSION=18.14.2
FROM node:${NODE_VERSION}-slim as base
ARG PORT=3000
ENV NODE_ENV=production
WORKDIR /src
# Build
FROM base as build
COPY --link package.json package-lock.json ./
RUN npm install --production=false
COPY --link . .
RUN npm run build
RUN npm prune
# Run
FROM base
ARG PORT=3000
ENV PORT=$PORT
COPY --from=build /src/.output /src/.output
# Optional, only needed if you rely on unbundled dependencies
# COPY --from=build /src/node_modules /src/node_modules
CMD [ "node", ".output/server/index.mjs" ]
It’s also worth noting the use of a .dockerignore file. This file tells Docker to ignore specific files and directories when building the image. In this case, we’re ignoring the .nuxt, .output, and node_modules directories, as well as the .gitignore and README.md files. Excluding unnecessary files improves the cache hit rate and keeps the final image small.
# .dockerignore
/.nuxt
/.output
/node_modules
.gitignore
README.md
Running Your Nuxt Dev Environment in Docker
Running our Nuxt 3 development environment in Docker offers several benefits. It ensures that all developers are working in the same environment, reducing the chances of bugs caused by differences in local setups. It also eliminates the need to manually set up the correct Node.js version on each developer’s machine.
To run our Nuxt 3 dev environment in Docker, we’ll need a separate Dockerfile named Dockerfile.dev. Here’s an example:
# syntax = docker/dockerfile:1
ARG NODE_VERSION=18.14.2
FROM node:${NODE_VERSION}-slim as base
ENV NODE_ENV=development
WORKDIR /src
This part of the Dockerfile is similar to the production version. We’re defining the Node.js version to use and setting up a base image with it. But this time, we set the Node environment to development.
Next, we have the build stage:
# Build
FROM base as build
COPY --link package.json package-lock.json ./
RUN npm install
Note that we’re not running npm install --production=false as we did in the production Dockerfile. We don’t have to, because we’ve set NODE_ENV to development anyway.
Finally, we have the run stage:
# Run
FROM base
COPY --from=build /src/node_modules /src/node_modules
CMD [ "npm", "run", "dev" ]
In the run stage, we copy over only the node_modules directory from the build stage. All the other data we need in dev mode will be mounted into the container when we start it. Last but not least, we define the command to start our application in development mode.
Here is the full version of the final Dockerfile.dev again:
# syntax = docker/dockerfile:1
ARG NODE_VERSION=18.14.2
FROM node:${NODE_VERSION}-slim as base
ENV NODE_ENV=development
WORKDIR /src
# Build
FROM base as build
COPY --link package.json package-lock.json ./
RUN npm install
# Run
FROM base
COPY --from=build /src/node_modules /src/node_modules
CMD [ "npm", "run", "dev" ]
This development Dockerfile allows us to run our Nuxt 3 application in a Docker container in development mode. Using Docker for our development environment ensures that all developers work with the same dependencies and environment, reducing potential issues and simplifying the setup process.
Running Docker Containers
Once we’ve set up our Dockerfiles for production and development, the next step is to build and run our Docker images. However, instead of using the docker build and docker run commands directly, we’ll leverage Docker Compose to simplify this process.
For our Nuxt 3 application, we’ll create two Compose files: one for production (docker-compose.yml) and an override file for development (docker-compose.dev.yml).
Production docker-compose.yml File
Here is an example of a docker-compose.yml file for our production environment:
version: "3"

services:
  my-app:
    build:
      context: .
    ports:
      - "3000:3000"
In this file, we define a service called my-app. The build directive tells Docker Compose to build an image using the Dockerfile in the current directory (specified by context: .). The ports directive maps port 3000 in the container to port 3000 on the host machine.
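If we want Compose to forward the PORT build argument defined in the production Dockerfile, we can add an args section under build. A sketch, assuming we stick with the default port:

```yaml
version: "3"

services:
  my-app:
    build:
      context: .
      args:
        PORT: 3000
    ports:
      - "3000:3000"
```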
To build and run our production Docker image, we can use the docker compose up command:
docker compose up --build
In this command, up starts the services defined in the docker-compose.yml file, and --build tells Docker Compose to build the images before starting the containers.
Development docker-compose.dev.yml File
We’ll create a separate docker-compose.dev.yml file for our development environment. This file overrides some directives in the production docker-compose.yml file, tailoring the environment to our development needs. Here is an example docker-compose.dev.yml file:
version: "3"

volumes:
  node_modules:

services:
  my-app-dev:
    build:
      context: .
      dockerfile: ./Dockerfile.dev
    ports:
      - "3000:3000"
      - "24678:24678"
    volumes:
      - .:/src
      - node_modules:/src/node_modules
In this file, we’re defining a new service called my-app-dev. We’re using a different Dockerfile (Dockerfile.dev) and mapping an additional port (24678) to allow Vite to perform hot module reloading. We’re also mounting the project directory into the /src directory in the container, ensuring that changes made in our local development environment are reflected in the running container. However, we’re creating a separate named volume for the node_modules directory. The container runs Linux, and if we’re developing on a different operating system, natively compiled dependencies might not work correctly otherwise.
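One caveat of the named node_modules volume: it persists across runs, so after changing package.json the container may keep using stale dependencies. Two possible remedies, sketched below; note that the actual volume name is prefixed with your Compose project name, so the placeholder must be adjusted:

```shell
# Option 1: reinstall dependencies inside the running dev container
docker compose -f docker-compose.yml -f docker-compose.dev.yml exec my-app-dev npm install

# Option 2: remove the named volume so it is repopulated on the next start
docker compose -f docker-compose.yml -f docker-compose.dev.yml down
docker volume rm <project>_node_modules
```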
To build and run our development Docker image, we can use the docker compose command with the -f option to specify the override file:
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build
In this command, -f docker-compose.yml -f docker-compose.dev.yml tells Docker Compose to use both the production and development Compose files, with the settings in the development file overriding those in the production file. up --build builds the images and starts the containers.
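Since that command is long to type, it can be convenient to wrap it in a small helper script; a minimal sketch, assuming both Compose files live in the repository root:

```shell
#!/usr/bin/env sh
# dev.sh - start the dev environment, passing any extra flags through to "up"
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build "$@"
```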
Wrapping It Up
In conclusion, running Nuxt 3 in Docker offers several key advantages. It ensures a consistent environment across different stages of development, from local testing to production deployment. Docker’s layering and caching mechanisms can significantly speed up build times, while its ability to create small, optimized images makes deployment quicker and more efficient.
Locally running a production image allows for realistic testing. And a development image ensures that all developers are working in the same environment, eliminating potential bugs caused by differences in local setups.