GPUs on Northflank
You can deploy services and run jobs on Northflank with GPU access, using Northflank's managed cloud or in your own cloud account.
This lets you run GPU workloads for AI, ML, data analysis, simulations, graphics rendering, and other tasks which require high-performance computing on Northflank.
Deploy GPUs on Northflank's managed cloud
You can create GPU-enabled projects on Northflank's managed cloud and pay only for the resources you consume, so you can deploy workloads with access to powerful GPUs without having to configure your own Kubernetes clusters.
Deploy GPUs in your own cloud
You can deploy and manage GPU-enabled nodes on other cloud providers using Northflank. This allows you to retain control of your infrastructure and existing billing relationships, while using Northflank to deploy GPU workloads.
Any Northflank projects deployed to your cluster will be able to make use of the GPU-enabled nodes.
Deploy GPU node pools
Configure workloads to deploy on GPU nodes
Allow multiple workloads to use a GPU with timeslicing
Schedule GPU workloads to specific nodes
Configure and deploy GPU workloads
Generally, your GPU workloads will not require any extra configuration to run on Northflank: you can select a GPU model for your service or job when creating or updating it.
You will need to build or deploy images compatible with the GPU model you wish to use, and correctly configure your deployed applications to make use of the available GPUs.
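As a minimal sketch of the runtime check described above, an application can verify at startup that a GPU is actually visible to its container. This assumes the image includes the NVIDIA driver utilities (`nvidia-smi`), which is typical of CUDA base images but is not guaranteed for every image; the function degrades gracefully when they are absent.

```python
# Sketch: check whether a GPU is visible inside the container at runtime.
# Assumes nvidia-smi is present in the image (common for CUDA base images);
# returns False rather than raising when it is not.
import shutil
import subprocess


def gpu_available() -> bool:
    """Return True if nvidia-smi is present and reports at least one GPU."""
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        result = subprocess.run(
            ["nvidia-smi", "--list-gpus"],
            capture_output=True,
            text=True,
            timeout=10,
        )
    except (subprocess.SubprocessError, OSError):
        return False
    return result.returncode == 0 and bool(result.stdout.strip())


if __name__ == "__main__":
    print(f"GPU available: {gpu_available()}")
```

A check like this at application startup makes misconfiguration (for example, deploying a GPU image to a node without a GPU attached) fail fast with a clear message instead of surfacing later as an obscure library error.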
Configure applications to use GPUs
Configure your applications to detect and use the GPUs available to your deployments.
Build with GPU-optimised images
You can directly deploy or build your applications with Docker images that are optimised for your desired GPU model and AI/ML libraries.
Right-size resources for GPU workloads
Scale CPU, memory, and ephemeral storage to handle GPU workloads.
Persist models and data
Store models and datasets in volumes so they persist between restarts and deployments, rather than being downloaded every time.
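The persistence pattern above can be sketched as a simple load-or-fetch helper. The cache directory and `fetch` callable here are illustrative placeholders (they are not Northflank APIs); in practice the cache directory would be a volume mount path attached to your service.

```python
# Sketch: cache model weights on a persistent volume so they survive restarts.
# The cache directory stands in for a volume mount path; fetch() stands in for
# whatever download logic your application uses.
from pathlib import Path
from typing import Callable


def load_model_bytes(cache_dir: str, name: str, fetch: Callable[[], bytes]) -> bytes:
    """Return cached model bytes, fetching and caching on first use."""
    path = Path(cache_dir) / name
    if path.exists():
        return path.read_bytes()  # warm start: read from the volume
    data = fetch()  # cold start: download once
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(data)  # persist for subsequent restarts
    return data


if __name__ == "__main__":
    import tempfile

    with tempfile.TemporaryDirectory() as d:
        calls = []

        def fake_fetch() -> bytes:
            calls.append(1)
            return b"weights"

        load_model_bytes(d, "model.bin", fake_fetch)  # cold start: fetches
        load_model_bytes(d, "model.bin", fake_fetch)  # warm start: cache hit
        assert len(calls) == 1  # fetched only once
```

With large model weights, avoiding a re-download on every restart both shortens startup time and reduces egress from your model host.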