Markus Oberlehner

Parallelizing Specmatic Contract Tests with Playwright


Specmatic quickly became my favorite tool for contract testing microservices and as a stub server for testing frontend applications. Together with Playwright, this makes for a great combo!

Yet there is a problem with this approach: out of the box, we can’t run tests in parallel with this setup because Specmatic’s stub server isn’t designed for parallel execution. It serves the last configured expectation for a given request, making concurrent tests with different stubbed responses impossible. Or so I thought.

After some tinkering, I found a solution that allows us to run truly parallel tests with Specmatic and Playwright, using a combination of HTTP headers and a Specmatic hook that allows us to manipulate OpenAPI specifications on the fly. This approach avoids parallel tests interfering with each other due to shared stub state.

The final result of our work

The Problem: Shared Stub State

With Specmatic, we define our API contract as an OpenAPI specification, and it automatically spins up a stub server that behaves according to that contract. In our frontend tests, we can dynamically configure specific responses using the /_specmatic/expectations endpoint.

A typical Playwright test might look like this:

// test/specs/example.spec.ts
test('it should render "Foo!"', async ({ page }, testInfo) => {
  await fetch("http://localhost:9000/_specmatic/expectations", {
    method: "POST",
    body: JSON.stringify({
      "http-request": {
        method: "GET",
        path: "/api/messages",
      },
      "http-response": {
        status: 200,
        body: {
          text: "Foo!",
        },
      },
    }),
  });

  await page.goto("/");
  await expect(page.getByText("Foo!")).toBeVisible();
});

This works great for sequential tests. But when we run tests in parallel, each test tries to configure the same stub server. The last test to set an expectation “wins,” and all other tests receive that response, leading to unpredictable and often failing tests.

The Solution: Worker-Specific Expectations

Luckily, I found a workaround that makes the stub server aware of which test worker is making the request (each parallel test run gets assigned a worker). We achieve this with HTTP headers: each Playwright worker sends a unique identifier in a custom header (X-WORKER-ID), and Specmatic uses this header to serve the correct stubbed response.

Step 1: Modifying the OpenAPI Specification

First, we need to tell Specmatic to expect this X-WORKER-ID header. We’ll modify our OpenAPI specification to include it as a required header parameter for our API endpoint. Instead of manually editing the schema/spec.yaml file, we’ll use a Specmatic hook and a Node.js script to dynamically add the header.

Here’s our original schema/spec.yaml:

openapi: 3.0.3
info:
  title: Messages API
  version: 1.0.0
  description: API for retrieving message data.
servers:
  - url: http://localhost:9000
paths:
  /api/messages:
    get:
      summary: Get message data
      description: Retrieves a message object with a text property.
      responses:
        "200":
          description: Successful response with message data.
          content:
            application/json:
              schema:
                type: object
                properties:
                  text:
                    type: string
                    description: The message text.
                    example: "Hello world!"
                required:
                  - text
        "500":
          description: Internal Server Error
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Error"
components:
  schemas:
    Error:
      type: object
      properties:
        message:
          type: string
          description: Error message description.
      required:
        - message

To allow the additional X-WORKER-ID header, we’ll use the stub_load_contract hook in specmatic.json to run a script that adds the header to the contract on the fly:

{
  "sources": [
    {
      "provider": "filesystem",
      "consumes": ["schema/spec.yaml"]
    }
  ],
  "hooks": {
    "stub_load_contract": "node bin/parallel-test-id.mjs"
  }
}

And here’s the bin/parallel-test-id.mjs script:

import fs from "node:fs";
import yaml from "js-yaml";

const addGlobalWorkerIdHeader = () => {
  const contractFile = process.env.CONTRACT_FILE;

  if (!contractFile) {
    console.error("Error: CONTRACT_FILE environment variable is not set");
    process.exit(1);
  }

  try {
    const doc = yaml.load(fs.readFileSync(contractFile, "utf8"));

    if (doc.paths) {
      for (const path of Object.keys(doc.paths)) {
        const pathItem = doc.paths[path];

        // Initialize parameters array if it doesn't exist
        pathItem.parameters = pathItem.parameters || [];

        // Check if header already exists at path level
        const hasHeader = pathItem.parameters.some(
          (param) => param.in === "header" && param.name === "X-WORKER-ID"
        );

        if (!hasHeader) {
          pathItem.parameters.push({
            in: "header",
            name: "X-WORKER-ID",
            description: "Worker ID for parallel test isolation",
            schema: {
              type: "string",
              example: "0",
            },
            required: false,
          });
        }
      }
    }

    console.log(
      yaml.dump(doc, {
        lineWidth: -1,
        noRefs: true,
        sortKeys: false,
      })
    );
  } catch (e) {
    console.error("Error processing contract file:", e.message);
    process.exit(1);
  }
};

addGlobalWorkerIdHeader();

This script dynamically adds the X-WORKER-ID header to the OpenAPI specification before Specmatic loads it. This is crucial because it ensures Specmatic is aware of the header and can use it to differentiate requests.

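For reference, after the hook has run, the path item that Specmatic actually loads should look roughly like this (a sketch of the transformed output, not a file you need to create yourself):

paths:
  /api/messages:
    get:
      # ...unchanged operation definition...
    parameters:
      - in: header
        name: X-WORKER-ID
        description: Worker ID for parallel test isolation
        schema:
          type: string
          example: "0"
        required: false

Because the header is marked as optional (required: false), requests without an X-WORKER-ID header still match the contract, so normal development traffic keeps working.
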
Bonus: if you use Docker to run Specmatic, you must ensure that the Docker container can run Node.js:

FROM eclipse-temurin:21

WORKDIR /specmatic

ADD https://github.com/znsio/specmatic/releases/download/2.4.0/specmatic.jar /specmatic/specmatic.jar

# Here we install Node.js so we can run the
# `bin/parallel-test-id.mjs` script in the container.
RUN apt-get update && apt-get install -y nodejs npm && rm -rf /var/lib/apt/lists/*

WORKDIR /app

RUN npm install js-yaml

ENTRYPOINT ["java", "-jar", "/specmatic/specmatic.jar"]

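If you go the Docker route, building and running the stub might look roughly like this (a sketch; the image tag and the mounted paths are assumptions based on the files above):

# Build the custom Specmatic image defined above.
docker build -t specmatic-node .

# Run the stub server (Specmatic's stub listens on port 9000 by default)
# and mount the contract, the hook script, and specmatic.json into /app.
docker run --rm -p 9000:9000 \
  -v "$(pwd)/specmatic.json:/app/specmatic.json" \
  -v "$(pwd)/schema:/app/schema" \
  -v "$(pwd)/bin:/app/bin" \
  specmatic-node stub

Mounting only these three paths keeps the js-yaml dependency, which was installed into /app during the image build, available to the hook script.
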
Step 2: Configuring Playwright Workers

Next, we need to configure Playwright to send the X-WORKER-ID header with each request. To do this, we’ll create a custom test utility in test/utils.ts that extends the base Playwright test object:

// test/utils.ts
import { test as base } from "@playwright/test";
export { expect } from "@playwright/test";

export const test = base.extend({
  page: async ({ browser }, use, testInfo) => {
    // Create a new browser context with custom headers
    const context = await browser.newContext({
      extraHTTPHeaders: {
        "X-WORKER-ID": testInfo.workerIndex.toString(),
      },
    });
    const page = await context.newPage();
    await use(page);
    await context.close();
  },
});

This custom page fixture utilizes testInfo.workerIndex, a built-in Playwright variable that provides a unique index for each worker. We use this index as the value for our X-WORKER-ID header.

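For completeness, parallel execution itself is configured in Playwright. A minimal playwright.config.ts might look roughly like this (the worker count and baseURL are assumptions; adjust them to your setup):

// playwright.config.ts
import { defineConfig } from "@playwright/test";

export default defineConfig({
  testDir: "./test/specs",
  // Also run tests within a single file in parallel.
  fullyParallel: true,
  // Each worker gets its own `testInfo.workerIndex`.
  workers: 4,
  use: {
    // Allows `page.goto("/")` in the tests below.
    baseURL: "http://localhost:3000",
  },
});

The exact values don’t matter for the technique; what matters is that more than one worker is active, so the workerIndex-based isolation actually comes into play.
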
Step 3: Setting Expectations Per Worker

Now, within our Playwright tests, we can set expectations specific to each worker. We’ll use the testInfo.workerIndex again when setting expectations via Specmatic’s /_specmatic/expectations API:

// test/specs/example.spec.ts
import { test, expect } from "../utils";

test('it should render "Foo!"', async ({ page }, testInfo) => {
  await fetch("http://localhost:9000/_specmatic/expectations", {
    method: "POST",
    body: JSON.stringify({
      "http-request": {
        method: "GET",
        path: "/api/messages",
        headers: {
          "X-WORKER-ID": testInfo.workerIndex,
        },
      },
      "http-response": {
        status: 200,
        body: {
          text: "Foo!",
        },
      },
    }),
  });

  await page.goto("/");
  await expect(page.getByText("Foo!")).toBeVisible();
});

test('it should render "Bar!"', async ({ page }, testInfo) => {
  await fetch("http://localhost:9000/_specmatic/expectations", {
    method: "POST",
    body: JSON.stringify({
      "http-request": {
        method: "GET",
        path: "/api/messages",
        headers: {
          "X-WORKER-ID": testInfo.workerIndex,
        },
      },
      "http-response": {
        status: 200,
        body: {
          text: "Bar!",
        },
      },
    }),
  });

  await page.goto("/");
  await expect(page.getByText("Bar!")).toBeVisible();
});

test('it should render "Baz!"', async ({ page }, testInfo) => {
  await fetch("http://localhost:9000/_specmatic/expectations", {
    method: "POST",
    body: JSON.stringify({
      "http-request": {
        method: "GET",
        path: "/api/messages",
        headers: {
          "X-WORKER-ID": testInfo.workerIndex,
        },
      },
      "http-response": {
        status: 200,
        body: {
          text: "Baz!",
        },
      },
    }),
  });

  await page.goto("/");
  await expect(page.getByText("Baz!")).toBeVisible();
});

Each test now sets its own expectation, including the X-WORKER-ID header. Because Specmatic now sees these as distinct requests (due to the different header values), it can serve the correct stubbed response to each parallel test.

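Because every test repeats the same fetch boilerplate, it’s worth extracting a small helper, for example into test/utils.ts. The setExpectation helper below is a sketch of my own making (its name, its options shape, and the hard-coded stub URL are assumptions), not a Specmatic API:

// test/utils.ts
import type { TestInfo } from "@playwright/test";

export const setExpectation = async (
  testInfo: TestInfo,
  expectation: {
    path: string;
    method?: string;
    status?: number;
    body: unknown;
  }
) => {
  await fetch("http://localhost:9000/_specmatic/expectations", {
    method: "POST",
    body: JSON.stringify({
      "http-request": {
        method: expectation.method ?? "GET",
        path: expectation.path,
        headers: {
          // Scope the expectation to the current Playwright worker.
          "X-WORKER-ID": testInfo.workerIndex,
        },
      },
      "http-response": {
        status: expectation.status ?? 200,
        body: expectation.body,
      },
    }),
  });
};

With that in place, a test shrinks to a single setExpectation call followed by the usual assertions:

// test/specs/example.spec.ts
test('it should render "Foo!"', async ({ page }, testInfo) => {
  await setExpectation(testInfo, {
    path: "/api/messages",
    body: { text: "Foo!" },
  });

  await page.goto("/");
  await expect(page.getByText("Foo!")).toBeVisible();
});
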
Step 4: Updating the Application Code

The final step is to ensure our application code also forwards the X-WORKER-ID header to the API.

Next.js 15 App Router example:

// middleware.ts
import { NextRequest, NextResponse } from "next/server";

export const config = {
  matcher: [
    "/((?!_next/static|_next/image|favicon.ico|sitemap.xml|robots.txt).*)",
  ],
};

export function middleware(request: NextRequest) {
  const workerId =
    ["development", "test"].includes(process.env.NODE_ENV) &&
    request.headers.get("X-WORKER-ID");

  if (workerId) {
    const response = NextResponse.next();
    response.cookies.set("X-WORKER-ID", workerId);
    return response;
  }
}

// utils/api.ts
import { cookies } from "next/headers";

const appendWorkerId = ["development", "test"].includes(process.env.NODE_ENV);

export const api = async (endpoint: string, options?: RequestInit) => {
  const defaultHeaders: Record<string, string> = {};

  if (appendWorkerId) {
    defaultHeaders["X-WORKER-ID"] =
      (await cookies()).get("X-WORKER-ID")?.value ?? "0";
  }

  const response = await fetch(`http://localhost:9000${endpoint}`, {
    // Spread `options` first so the merged headers below aren't overwritten.
    ...options,
    headers: {
      ...defaultHeaders,
      ...options?.headers,
    },
  }).then((r) => r.json());
  return response;
};

// app/page.tsx
import { api } from "@/utils/api";

export default async function Home() {
  const getData = async () => {
    "use server";
    return await api("/api/messages");
  };

  return <div>{(await getData()).text}</div>;
}

React Router v7 example:

// app/utils/api.ts
export const api = async (
  endpoint: string,
  options: RequestInit & { request: Request }
) => {
  const response = await fetch(`http://localhost:9000/api/${endpoint}`, {
    // Spread `options` first so the headers below aren't overwritten.
    ...options,
    headers: {
      "X-WORKER-ID": options.request.headers.get("X-WORKER-ID") ?? "0",
      ...options.headers,
    },
  }).then((r) => r.json());
  return response;
};

// app/routes/home.tsx
import type { Route } from "./+types/home";
import { api } from "../utils/api";

export async function loader({ request }: Route.LoaderArgs) {
  // We pass the `request` here to enable `api()`
  // to extract the `X-WORKER-ID` header from it.
  const message = await api("messages", { request });
  return message;
}

export default function Home({ loaderData }: Route.ComponentProps) {
  const { text } = loaderData;
  return <div>{text}</div>;
}

Wrapping It Up

By combining Playwright’s parallel worker capabilities with Specmatic’s stub server and a bit of header magic, we’ve created a robust solution for running parallel tests with distinct stubbed responses. This approach ensures test isolation, prevents flaky tests, and allows us to leverage the full power of parallel execution for faster feedback cycles. The method feels a tiny bit hacky, but in my experience it’s worth it, thanks to the significant speedup of our Playwright application tests.