Manage and use your Kubernetes cluster with Northflank
You can monitor and manage your cluster from the clusters page in your account settings.
To begin using your chosen cloud provider on Northflank, create a new project and select your cluster as the provider. Every resource created and used in the project will be built, deployed, and served from your cluster.
Workloads will be automatically scheduled to nodes and node pools based on load and capacity.
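The placement decision can be pictured as a simple capacity check. This is an illustrative sketch, not Northflank's actual scheduler; the function and field names are hypothetical:

```python
# Illustrative sketch of capacity-based scheduling (not Northflank's
# real scheduler): a workload is placed on the first node with enough
# free CPU and memory, similar to how pods are bin-packed onto nodes.

def schedule(workload, nodes):
    """Return the name of the first node that fits the workload, or None."""
    for node in nodes:
        if (node["free_cpu"] >= workload["cpu"]
                and node["free_memory"] >= workload["memory"]):
            node["free_cpu"] -= workload["cpu"]
            node["free_memory"] -= workload["memory"]
            return node["name"]
    return None  # no capacity: the workload stays pending


nodes = [
    {"name": "node-a", "free_cpu": 0.5, "free_memory": 1024},
    {"name": "node-b", "free_cpu": 2.0, "free_memory": 4096},
]
print(schedule({"cpu": 1.0, "memory": 2048}, nodes))  # node-b
```

When no node can fit a workload, it remains unscheduled until capacity is added, which is the situation the scaling guidance below addresses.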
You can monitor your cluster by opening the overview from the clusters page in account settings:
- Details: see details of the cluster, including the state of the cluster and nodes, cloud provider, region, and Kubernetes version
- Nodes: contains the status of individual nodes, their ID and associated pool
- Node pools: view and edit node pool configurations
- Components: see the status of the cluster and Northflank platform components
- Cluster history: review the history of the cluster state, for example to check when an update took place and how long it took
- Projects: a list of projects created on the cluster
You can add, scale, and delete node pools on the node pools page in your cluster overview.
Click add node pool to add another pool, or delete an existing pool from the same page. Each cluster requires at least one node pool (Azure requires one pool to be the system node pool).
You should increase your nodes or add a node pool if your services are failing to progress from the staging state, which indicates your cluster is at capacity. You can either increase the number of nodes in an existing pool, or add another node pool if that pool's capacity is exceeded. As each node in a pool is identical, adding another pool allows you to add nodes of a different type, with autoscaling, spot instances, or larger disk sizes.
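A rough way to decide how many nodes to add is to compare requested resources against the pool's total capacity. This is a simplified, hypothetical estimate (real scheduling also accounts for memory, disk, and per-node fragmentation):

```python
import math

# Hypothetical helper: estimate how many extra nodes a pool needs to
# satisfy the total requested CPU. Names and numbers are illustrative,
# not part of any Northflank API.

def extra_nodes_needed(pool_nodes, node_cpu, requested_cpu):
    """Naive CPU-only estimate of additional nodes required."""
    capacity = pool_nodes * node_cpu
    if requested_cpu <= capacity:
        return 0
    return math.ceil((requested_cpu - capacity) / node_cpu)


# 3 nodes x 2 vCPU = 6 vCPU capacity; 9.5 vCPU requested needs 2 more nodes.
print(extra_nodes_needed(pool_nodes=3, node_cpu=2.0, requested_cpu=9.5))  # 2
```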
The types of node available depend on your selected cloud provider. Refer to the documentation for your cloud provider to select a type of node with sufficient resources for your workloads.
| Provider | Available node types |
| --- | --- |
| Amazon (EKS) | m5.large, m5.xlarge, m5.2xlarge, m5d.2xlarge, m5a.2xlarge, t3.large, t3.xlarge, t3.2xlarge |
| Azure (AKS) | Standard_D2ds_v5, Standard_D4ds_v5, Standard_D8ds_v5, Standard_D16ds_v5 |
| Google (GKE) | n2-standard-4, n2-standard-8, n2-standard-16, n2-custom-30-131072, c2-standard-8, c2-standard-16, c2d-standard-8, c2d-standard-16, n2d-standard-8, n2d-standard-16, n2d-standard-32 |
Select the number of nodes to create in the pool. Each node will be created with the same specifications defined by the node type.
You can enable autoscaling to allow the cluster to automatically increase and decrease the number of nodes in the pool based on workload demand. Define a minimum and maximum number of nodes to ensure consistent availability, and to cap usage. Autoscaling can help prevent issues from attempting to run too many workloads for a set number of nodes, but will also mean your billing from your cloud provider will vary if more nodes are deployed.
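The effect of the minimum and maximum bounds can be sketched in a few lines. This is an illustrative model, not the real cluster autoscaler:

```python
# Sketch of how an autoscaler clamps the node count between the
# configured minimum and maximum (illustrative, not the actual
# cluster autoscaler logic).

def target_node_count(demanded_nodes, min_nodes, max_nodes):
    """Scale toward demand, but never below min or above max."""
    return max(min_nodes, min(demanded_nodes, max_nodes))


print(target_node_count(7, min_nodes=2, max_nodes=5))  # 5 (capped at max)
print(target_node_count(0, min_nodes=2, max_nodes=5))  # 2 (held at min)
```

The minimum keeps capacity available for baseline workloads, while the maximum caps how far your cloud provider bill can grow under load.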
You can select the size of disk to assign to each node in the pool. Each node will use a disk of the specified size as ephemeral storage for workloads, cached image layers, and container logs.
You can enable spot instances for a node pool to run workloads at a reduced cost on your chosen cloud provider, but these instances may be restarted at short notice. You may encounter issues provisioning node pools if the number of available spot instances in a region or availability zone on a cloud provider is reduced.
You should not use spot instances for uninterruptible workloads, such as a production web server.
Shutdown warning for spot instances
| Amazon (EKS) | Azure (AKS) | Google (GKE) |
| --- | --- | --- |
| 120 seconds | up to 30 seconds | 30 seconds |