Sandboxes on Northflank

Northflank sandboxes are microVM-backed containers that provide VM-level isolation with container performance. They boot in under 1 second and prevent container escape, making them ideal for running untrusted workloads such as LLM-generated code, user submissions, AI agents, and CI/CD pipelines.

Sandboxes use microVM-based virtualization and user-space kernel isolation to prevent breakout attacks while maintaining near-native performance. Each container runs in its own isolated environment with a separate kernel instance.

Create sandboxes with the SDK

This guide shows how to create and manage sandboxes programmatically using the Northflank JavaScript SDK.

Generate an API token

Create an API token from your account settings in the Northflank dashboard. You will use it to authenticate the SDK.

Create a project

Create a project in the Northflank dashboard, either on Northflank Cloud or on a BYOC cluster. Note the project ID; you will pass it as a parameter in every API call that follows.
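The project can also be created from code. A minimal sketch, assuming the SDK exposes a create.project method mirroring the REST API; the name and region below are placeholder values:

```javascript
// Sketch: create a project programmatically. The name and region are
// placeholders; pick values appropriate for your account.
async function createSandboxProject(apiClient, name, region) {
  const project = await apiClient.create.project({
    data: { name, region },
  });
  return project.data.id; // the project ID used by every later call
}

// const projectId = await createSandboxProject(apiClient, "sandboxes", "europe-west");
```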

Install and initialize the SDK

Install the Northflank SDK and authenticate with your API token. The client has typed methods for every API endpoint and helpers for exec and streaming. See the SDK documentation for more.

$ npm install @northflank/js-client

Create an ApiClient instance with your API token. The context provider manages authentication, and throwErrorOnHttpErrorCode ensures failed requests throw rather than failing silently.

import {
  ApiClient,
  ApiClientInMemoryContextProvider,
} from "@northflank/js-client";

const contextProvider = new ApiClientInMemoryContextProvider();
await contextProvider.addContext({
  name: "context",
  token: process.env.NORTHFLANK_TOKEN,
});

const apiClient = new ApiClient(contextProvider, {
  throwErrorOnHttpErrorCode: true,
});

Create the sandbox service

Deploy a container image as a deployment service. Each sandbox maps to a single Northflank service, and the sandboxId is used as both the service name and the identifier in subsequent calls.

The service is created with instances: 0 so it doesn't start immediately. This gives you a chance to attach a volume or configure ports before booting it. If you don't need a volume, you can skip straight to the start step.

import { randomUUID } from 'node:crypto';

// Use the last UUID segment as a short unique suffix
const sandboxId = `sandbox-${randomUUID().split('-')[4]}`;

await apiClient.create.service.deployment({
  parameters: {
    projectId: 'your-project-id',
  },
  data: {
    name: sandboxId,
    billing: {
      deploymentPlan: 'nf-compute-200', // 2 vCPU, 4GB RAM
    },
    deployment: {
      instances: 0, // start manually after setup, set to 1 to start instantly
      external: {
        imagePath: 'ubuntu:22.04',
      },
      storage: {
        ephemeralStorage: {
          storageSize: 2048,
        },
      },
    },
    runtimeEnvironment: {
      MY_VAR: 'hello-world',
    },
  },
});

The deploymentPlan controls CPU and memory allocation. Use external.imagePath to pull any public or private container image. Environment variables passed via runtimeEnvironment are injected into the container at runtime.

Attach a persistent volume (optional)

Skip this step if your sandbox does not need persistent storage. Volumes preserve data across restarts and pauses, so anything written to the mount path survives a scale to zero.

Create the service with instances: 0, attach a volume, then scale up. The mounts array defines where the volume appears inside the container, and attachedObjects binds it to an existing service.

// Create the service, paused
await apiClient.create.service.deployment({
  parameters: {
    projectId: "your-project-id",
  },
  data: {
    name: sandboxId,
    billing: {
      deploymentPlan: "nf-compute-200",
    },
    deployment: {
      instances: 0,
      external: {
        imagePath: "ubuntu:22.04",
      },
      storage: {
        ephemeralStorage: {
          storageSize: 2048,
        },
      },
    },
    runtimeEnvironment: {
      MY_VAR: "hello-world",
    },
  },
});

// Create and attach the volume
await apiClient.create.volume({
  parameters: {
    projectId: "your-project-id",
  },
  data: {
    name: `data-${sandboxId}`,
    mounts: [
      {
        containerMountPath: "/workspace",
      },
    ],
    spec: {
      accessMode: "ReadWriteMany",
      storageClassName: "nf-multi-rw",
      storageSize: 10240, // 10 GiB
    },
    attachedObjects: [
      {
        id: sandboxId,
        type: "service",
      },
    ],
  },
});

// Scale up
await apiClient.scale.service({
  parameters: {
    projectId: "your-project-id",
    serviceId: sandboxId,
  },
  data: {
    instances: 1,
  },
});

The recommended approach is to create the volume beforehand and attach it at service creation time using createOptions.volumesToAttach. This mounts the volume directly when the service starts:

const volumeName = `data-${sandboxId}`;

// Create the volume first
await apiClient.create.volume({
  parameters: {
    projectId: "your-project-id",
  },
  data: {
    name: volumeName,
    mounts: [
      {
        containerMountPath: "/workspace",
      },
    ],
    spec: {
      accessMode: "ReadWriteMany",
      storageClassName: "nf-multi-rw",
      storageSize: 10240, // 10 GiB
    },
  },
});

// Create the service and attach the volume
await apiClient.create.service.deployment({
  parameters: {
    projectId: "your-project-id",
  },
  data: {
    name: sandboxId,
    billing: {
      deploymentPlan: "nf-compute-200",
    },
    deployment: {
      instances: 1,
      external: {
        imagePath: "ubuntu:22.04",
      },
      storage: {
        ephemeralStorage: {
          storageSize: 2048,
        },
      },
    },
    createOptions: {
      volumesToAttach: [volumeName],
    },
    runtimeEnvironment: {
      MY_VAR: "hello-world",
    },
  },
});

Start the sandbox

Scale the service to 1 instance to boot the sandbox. Since deployments are asynchronous, poll the service status until it reaches COMPLETED (running) or FAILED.

await apiClient.scale.service({
  parameters: {
    projectId: "your-project-id",
    serviceId: sandboxId,
  },
  data: {
    instances: 1,
  },
});

// Poll until the service is running
async function waitForReady() {
  while (true) {
    const svc = await apiClient.get.service({
      parameters: {
        projectId: "your-project-id",
        serviceId: sandboxId,
      },
    });

    const status = svc.data?.status?.deployment?.status;
    if (status === "COMPLETED") return;
    if (status === "FAILED") throw new Error("Sandbox deployment failed");

    await new Promise((r) => setTimeout(r, 1000));
  }
}

await waitForReady();

The waitForReady helper is a simple polling loop. In production you may want to add a timeout or use exponential backoff.
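For example, a variant with a hard deadline and exponential backoff; the two-minute timeout and ten-second cap here are arbitrary values, not SDK defaults:

```javascript
// Pure helper: 1s, 2s, 4s, ... doubling per attempt, capped at capMs
function backoffDelay(attempt, baseMs = 1000, capMs = 10000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

async function waitForReadyWithTimeout(apiClient, projectId, serviceId, timeoutMs = 120000) {
  const deadline = Date.now() + timeoutMs;
  for (let attempt = 0; ; attempt++) {
    const svc = await apiClient.get.service({
      parameters: { projectId, serviceId },
    });

    const status = svc.data?.status?.deployment?.status;
    if (status === "COMPLETED") return;
    if (status === "FAILED") throw new Error("Sandbox deployment failed");
    if (Date.now() >= deadline) throw new Error("Timed out waiting for sandbox");

    await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
  }
}
```

The backoff keeps early polls fast while reducing API traffic on slow image pulls.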

Execute commands inside the sandbox

Run commands inside the sandbox using the exec API. This opens a session into the running container and returns stdout and stderr as Node.js readable streams.

const handle = await apiClient.exec.execServiceSession(
  {
    projectId: 'your-project-id',
    serviceId: sandboxId,
  },
  {
    shell: 'bash -c',
    command: "echo 'Hello from the sandbox!' && ls /workspace",
  }
);

const stdoutChunks = [];
const stderrChunks = [];

handle.stdOut.on('data', (data) => stdoutChunks.push(data.toString()));
handle.stdErr.on('data', (data) => stderrChunks.push(data.toString()));

const result = await handle.waitForCommandResult();

console.log('Exit code:', result.exitCode);
console.log('Stdout:', stdoutChunks.join(''));
console.log('Stderr:', stderrChunks.join(''));

The shell option controls which shell interprets the command. Use bash -c for shell expressions with pipes, redirects, or chained commands. The waitForCommandResult promise resolves once the command exits, returning the exit code.
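If you run many commands, the stream-collection boilerplate can be folded into a small helper. A sketch using only the calls shown above; runCommand is a hypothetical name, not part of the SDK:

```javascript
// Hypothetical convenience wrapper around execServiceSession: runs one
// command and resolves with the exit code plus collected output.
async function runCommand(apiClient, projectId, serviceId, command) {
  const handle = await apiClient.exec.execServiceSession(
    { projectId, serviceId },
    { shell: "bash -c", command }
  );

  const stdout = [];
  const stderr = [];
  handle.stdOut.on("data", (d) => stdout.push(d.toString()));
  handle.stdErr.on("data", (d) => stderr.push(d.toString()));

  const result = await handle.waitForCommandResult();
  return {
    exitCode: result.exitCode,
    stdout: stdout.join(""),
    stderr: stderr.join(""),
  };
}

// const { exitCode, stdout } = await runCommand(apiClient, "your-project-id", sandboxId, "uname -a");
```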

Expose a port (optional)

If your sandbox runs a web server or any network service, expose a port to make it reachable over the internet. Northflank provisions a public DNS name automatically.

await apiClient.update.service.ports({
  parameters: {
    projectId: 'your-project-id',
    serviceId: sandboxId,
  },
  data: {
    ports: [
      {
        name: 'http',
        internalPort: 8080,
        public: true,
        protocol: 'HTTP',
      },
    ],
  },
});

// Retrieve the public DNS
const ports = await apiClient.get.service.ports({
  parameters: {
    projectId: 'your-project-id',
    serviceId: sandboxId,
  },
});

const publicUrl = ports.data.ports.find((p) => p.internalPort === 8080)?.dns;
console.log('Public URL:', publicUrl);

Set public: false if the port should only be reachable by other services within the same project. The protocol field supports HTTP, HTTP2, TCP, and UDP.
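Freshly provisioned DNS can take a moment to resolve. One way to wait until the sandbox actually answers, sketched with Node's global fetch (available in Node 18+; the timeout values are arbitrary):

```javascript
// Retry until the URL responds with a 2xx, or give up after timeoutMs
async function waitForUrl(url, timeoutMs = 60000) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(url, { signal: AbortSignal.timeout(5000) });
      if (res.ok) return res;
    } catch {
      // DNS not propagated yet, or the server isn't listening; retry below
    }
    await new Promise((r) => setTimeout(r, 2000));
  }
  throw new Error(`Timed out waiting for ${url}`);
}

// const res = await waitForUrl(`https://${publicUrl}`);
```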

Pause or destroy the sandbox

Pause a sandbox by scaling to zero to stop compute billing while keeping the volume and service configuration intact. Resume later by scaling back to 1.

To permanently remove a sandbox, delete the service and its volume separately.

// Pause: scales to 0, volume data persists
await apiClient.scale.service({
  parameters: {
    projectId: 'your-project-id',
    serviceId: sandboxId,
  },
  data: {
    instances: 0,
  },
});

// Destroy: removes the service entirely
await apiClient.delete.service({
  parameters: {
    projectId: 'your-project-id',
    serviceId: sandboxId,
  },
});

// Delete the volume (if one was created), using the volume ID
// returned when the volume was created
await apiClient.delete.volume({
  parameters: {
    projectId: 'your-project-id',
    volumeId: volumeId,
  },
});

When paused, you are only billed for volume storage. Deleting the volume is irreversible and all persisted data will be lost.

© 2026 Northflank Ltd. All rights reserved.