

What is KVM?
KVM, or Kernel-based Virtual Machine, is a virtualisation module built into the Linux kernel that lets a Linux host run multiple isolated virtual machines. It uses CPU hardware virtualisation extensions, Intel VT-x or AMD-V, to enforce isolation between VMs at the hardware level, making it the foundation on which technologies like Firecracker, QEMU, and Cloud Hypervisor are built.
This article covers how KVM works, what it is and is not, how it relates to microVMs and container sandboxing, and where it fits in the broader virtualisation stack. If you are looking for KVM switches (Keyboard, Video, Mouse hardware), that is a different technology entirely.
- KVM (Kernel-based Virtual Machine) is a Linux kernel module that exposes CPU hardware virtualisation extensions to user-space processes, enabling a Linux host to run isolated virtual machines
- It has been part of the mainline Linux kernel since version 2.6.20, merged in 2007, and requires Intel VT-x or AMD-V hardware support on the host CPU
- KVM is the virtualisation layer that Firecracker, QEMU, Cloud Hypervisor, and gVisor's KVM mode all build on
- Understanding KVM matters if you are running microVMs, building sandboxes, or evaluating isolation technologies for untrusted workloads
Northflank is a full-stack cloud platform that runs microVM-backed sandboxes using KVM-based technologies including Kata Containers, Firecracker, and Cloud Hypervisor, alongside gVisor for syscall-interception isolation. The platform has been in production since 2021 across startups, public companies, and government deployments. Get started (self-serve) or book a session with an engineer for specific infrastructure or compliance requirements.
KVM is a full virtualisation solution built into the Linux kernel. It exposes hardware virtualisation capabilities (Intel VT-x on Intel processors, AMD-V on AMD processors) to processes running in user space. Any software that needs to create and manage virtual machines on Linux uses KVM as the underlying mechanism.
When KVM is loaded, the Linux host effectively becomes a hypervisor. Each virtual machine runs as a regular Linux process, but with its own virtualised CPU, memory, network interface, and storage. The hardware enforces isolation between VMs, so one VM cannot access the memory or resources of another.
KVM consists of two kernel modules: kvm.ko, which provides the core virtualisation infrastructure, and a processor-specific module, either kvm-intel.ko or kvm-amd.ko depending on the host CPU. Both are included in the mainline Linux kernel.
Hypervisors are commonly categorised as Type 1 (bare-metal) or Type 2 (hosted).
A Type 2 hypervisor runs on top of an existing operating system as an application. VirtualBox and VMware Workstation are Type 2. They are easy to install but add an extra software layer between the VM and the hardware, which increases overhead.
A Type 1 hypervisor runs directly on hardware without a general-purpose OS underneath. VMware ESXi and Xen are Type 1. They have lower overhead and are the standard for production virtualisation.
KVM blurs this distinction. It runs inside the Linux kernel, which means the host OS and the hypervisor are the same thing. When KVM is loaded, Linux itself becomes a Type 1 hypervisor. This is why KVM is sometimes described as a Type 1.5 hypervisor: it has the performance characteristics of bare-metal virtualisation while still running on a general-purpose OS.
KVM works by exposing CPU hardware virtualisation extensions as file descriptors that user-space programs can interact with. The primary interface is /dev/kvm, a character device that a VMM (Virtual Machine Monitor) opens to create and manage VMs.
Here is the sequence at a high level:
1. The VMM opens /dev/kvm: A user-space program like QEMU, Firecracker, or Cloud Hypervisor opens this device to access KVM.
2. The VMM creates a VM: An ioctl call to /dev/kvm creates a new VM file descriptor.
3. Virtual CPUs are created: The VMM creates one or more vCPUs for the VM, each represented as a file descriptor.
4. Memory is mapped: The VMM maps guest physical memory into the process's address space.
5. The vCPU enters guest mode: The VMM issues a run ioctl and the CPU switches from host mode to guest mode, executing the guest code directly on the hardware.
6. VM exits: When the guest needs something it cannot handle alone (a device access, a privileged instruction), the CPU exits back to host mode, and the VMM handles the request before re-entering guest mode.
The guest code runs directly on the CPU hardware during step 5, which is why KVM delivers near-native performance. The VM exit mechanism is how isolation is enforced: the guest cannot access host resources directly.
| | KVM | VMware ESXi | VirtualBox | Xen |
|---|---|---|---|---|
| Type | Type 1 (in-kernel) | Type 1 (bare-metal) | Type 2 (hosted) | Type 1 (bare-metal) |
| License | Open source (GPL) | Commercial | Open source / Commercial | Open source (GPL) |
| Host OS required | Linux | No | Yes | No (Dom0 Linux) |
| Performance | Near-native | Near-native | Higher overhead | Near-native |
| MicroVM support | Yes (via Firecracker, Cloud Hypervisor) | Limited | No | Limited |
| Primary use case | Cloud, servers, microVMs | Enterprise virtualisation | Desktop development | Cloud, servers |
KVM's open-source licence and inclusion in the Linux kernel make it the dominant virtualisation layer in cloud infrastructure. Most major cloud providers run their virtualisation stack on top of KVM or KVM-derived technology.
This is where KVM becomes directly relevant to modern container security and AI workload isolation.
Standard containers share the host kernel. If a workload exploits a kernel vulnerability, it can affect the host and every other container on it. MicroVMs solve this by giving each workload its own dedicated kernel, and KVM is the enforcement layer that makes that boundary hardware-enforced rather than software-enforced.
Every major microVM technology uses KVM:
- Firecracker: uses KVM to create microVMs with approximately 125ms boot time and less than 5 MiB of memory overhead per instance. AWS Lambda and Fargate run on Firecracker. See What is AWS Firecracker?
- Cloud Hypervisor: uses KVM to run cloud-optimised VMs with support for GPU passthrough and live migration
- QEMU: uses KVM for accelerated virtualisation when hardware extensions are available
- gVisor's KVM mode: uses KVM to intercept syscalls with better performance than its Systrap mode, without booting a full guest OS per workload. See What is gVisor?
- Kata Containers: orchestrates Firecracker, Cloud Hypervisor, or QEMU on top of KVM to bring microVM isolation to Kubernetes workloads. See What is a microVM?
Without KVM on the host, none of these technologies can run. KVM is the hardware isolation primitive everything else builds on.
KVM requires the following to run:
- A Linux host: KVM is a Linux kernel module. It does not run on Windows or macOS hosts natively.
- Hardware virtualisation support: The host CPU must support Intel VT-x or AMD-V. Most server and desktop CPUs manufactured in the past decade include these extensions, but they may need to be enabled in the BIOS/UEFI.
- Kernel version 2.6.20 or later: KVM has been in the mainline Linux kernel since 2007, so this is satisfied by any modern Linux distribution.
- Loaded kernel modules: The kvm.ko and processor-specific modules must be loaded. Most distributions load them automatically if the CPU supports virtualisation.
In cloud environments where you run VMs inside VMs, for example running Firecracker microVMs on a cloud instance, the underlying cloud provider must expose hardware virtualisation to the guest instance. This is called nested virtualisation, and not all cloud providers or instance types support it.
- Linux only: KVM is a Linux kernel feature. Running it on non-Linux hosts requires additional layers that negate most of its advantages.
- Requires hardware virtualisation: Hosts without Intel VT-x or AMD-V cannot use KVM. In environments where nested virtualisation is unavailable, technologies like gVisor's Systrap mode provide an alternative isolation approach without requiring KVM.
- User-space tooling required: KVM itself is just a kernel module. A VMM like QEMU, Firecracker, or Cloud Hypervisor is needed to actually create and manage VMs. KVM alone does not give you a usable virtualisation environment.
- Operational complexity at scale: Managing many KVM-based VMs in production requires orchestration tooling. Most teams use Kata Containers to abstract this complexity in Kubernetes environments.
Northflank's sandbox infrastructure uses Kata Containers with Cloud Hypervisor as the primary VMM, with Firecracker applied for workloads that benefit from its minimal device model. gVisor is applied where syscall-interception isolation is sufficient or where nested virtualisation is unavailable.
The platform has been in production since 2021 across startups, public companies, and government deployments. Sandboxes spin up in approximately 1 to 2 seconds, with compute pricing starting at $0.01667 per vCPU per hour and $0.00833 per GB of memory per hour. See the pricing page for full details.
Northflank supports both ephemeral and persistent sandbox environments on managed cloud or inside your own VPC, self-serve into AWS, GCP, Azure, Oracle, CoreWeave, Civo, on-premises, or bare-metal via bring your own cloud.
Get started with Northflank sandboxes
- Sandboxes on Northflank: overview and concepts: architecture overview and core sandbox concepts
- Deploy sandboxes on Northflank: step-by-step deployment guide
- Deploy sandboxes in your cloud: BYOC deployment guide: run sandboxes inside your own VPC
- Create a sandbox with the SDK: programmatic sandbox creation via the Northflank JS client
Get started (self-serve), or book a session with an engineer if you have specific infrastructure or compliance requirements.
KVM stands for Kernel-based Virtual Machine. It is a virtualisation module built into the Linux kernel that uses CPU hardware extensions to run isolated virtual machines on a Linux host.
KVM runs inside the Linux kernel, so when it is loaded, the Linux host effectively becomes a bare-metal hypervisor. It is commonly described as Type 1 because it has the performance characteristics of bare-metal virtualisation, though technically it runs within a general-purpose OS.
KVM is the kernel module that provides hardware-accelerated virtualisation. QEMU is a user-space emulator and VMM that uses KVM to run VMs. QEMU handles device emulation, VM lifecycle, and user interaction. KVM handles the hardware isolation. The two are complementary: QEMU without KVM runs in software emulation mode, which is significantly slower.
KVM is the virtualisation layer. A microVM is a lightweight virtual machine that runs on top of KVM via a minimal VMM like Firecracker or Cloud Hypervisor. KVM enforces the hardware isolation boundary. The microVM is the workload running inside that boundary with a dedicated guest kernel and minimal device model.
No. KVM requires Intel VT-x or AMD-V CPU extensions. Without them, the kernel modules will not load. QEMU can run without KVM in software emulation mode, but this is orders of magnitude slower and not suitable for production workloads.
KVM is an open-source Linux kernel module. VMware ESXi is a commercially licensed bare-metal hypervisor that runs independently of any general-purpose OS. Both provide hardware-level VM isolation, but KVM is free, included in Linux, and is the foundation of most open-source virtualisation and cloud infrastructure. VMware is common in enterprise environments with existing VMware tooling and support contracts.
A KVM switch is a hardware device that lets you control multiple computers from a single keyboard, monitor, and mouse. It is an entirely different technology from Kernel-based Virtual Machine. This article covers KVM, the hypervisor. KVM switches are used in data centre operations and multi-machine desktop setups and are unrelated to virtualisation.
- What is a microVM?: how microVMs use KVM for hardware-enforced isolation and which technologies implement them
- What is AWS Firecracker?: a technical breakdown of Firecracker's architecture and how it uses KVM to create microVMs
- What is gVisor?: how gVisor uses KVM in its KVM execution mode for syscall interception without booting a full VM
- Kata Containers vs Firecracker vs gVisor: how the three leading KVM-based and syscall-interception isolation technologies compare
- Firecracker vs gVisor: a focused comparison of hardware-level and syscall-level isolation approaches
- Firecracker vs Docker: how microVM isolation compares to standard container isolation and when each is the right choice
- Containers vs virtual machines: the broader comparison covering containers, VMs, and where KVM-based virtualisation fits in the stack
- How to spin up a secure code sandbox and microVM in seconds with Northflank: a step-by-step guide to deploying KVM-backed microVM workloads on Northflank

