Configure your cluster

You can configure how Northflank will manage volume deletion, builds, and resource requests for your cluster.

Select build infrastructure

You can configure the build infrastructure for the clusters you create. This defines where all builds in projects on that cluster will take place, and what compute resources (build plans) will be granted to those builds.

This option can be found under advanced when creating a cluster, or in settings on the details page for existing clusters.

Select a build cluster

You can select another cluster that you have provisioned specifically for builds, and the cluster you are currently configuring will then deploy all build workloads to the build cluster. All builds will use the build plan configured on the target build cluster.

This gives you the flexibility to:

  • ensure build workloads have capacity to be scheduled and completed by using a pooled build cluster, separate from your deployment clusters
  • ensure your deployments and jobs are not delayed by build workloads
  • select spot instances to reduce costs
  • use another cloud provider and different node types, if required

Build using the Northflank platform as a service

You can select this option to build using Northflank's platform as a service, paying only for the usage of compute resources. Specific build plans can be selected in each service or job.

Build on the same cluster

You can choose to build and deploy on the same cluster, and select the specific build plan for all builds to use. The selected build plan will override build plans selected by users in their services and jobs created on that cluster. You should ensure that your node pools have sufficient resources and nodes to provision both deployments and build workloads.

Configure resources

You can configure the minimum resources requested by containers on a cluster by changing the request modifiers under advanced when creating a cluster, or in settings on the details page for existing clusters.

Containers request resources from a node according to the selected compute or build plan for a service, job, or addon. The request modifiers reduce the resources specified in the plan by a percentage, so a request modifier of 0.7 will request 70% of the resources defined in the plan. This defines the minimum resources a container will request from a node; the container can still use the full amount specified in the plan if there is spare capacity on the node. If other containers deployed to the node reduce the available capacity, the resources available to the container will be throttled down towards the minimum request calculated from its plan and the request modifiers.
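As a rough illustration, the sketch below (not Northflank code; the plan sizes and helper name are hypothetical) shows how a request modifier scales the plan values into a guaranteed request while the full plan amount remains the upper limit, following the 0.7 example above.

```python
# Minimal sketch (not Northflank code): how a request modifier scales the
# resources a container requests from a node. Assumes the plan values act as
# the upper limit and plan * modifier acts as the guaranteed request.

def effective_request(plan_cpu_vcpu: float, plan_memory_mb: int,
                      cpu_modifier: float, memory_modifier: float) -> dict:
    """Return the (request, limit) pairs for CPU and memory."""
    cpu_request = plan_cpu_vcpu * cpu_modifier          # guaranteed minimum
    memory_request = plan_memory_mb * memory_modifier   # guaranteed minimum
    return {
        "cpu": {"request": cpu_request, "limit": plan_cpu_vcpu},
        "memory": {"request": memory_request, "limit": plan_memory_mb},
    }

# A hypothetical 2 vCPU / 4096 MB plan with a 0.7 modifier requests
# 1.4 vCPU / ~2867 MB, while the container may still burst up to the full
# plan if the node has headroom.
print(effective_request(2.0, 4096, 0.7, 0.7))
```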

Reducing the request modifiers allows you to over-provision a node with containers, which can optimise your costs if your workloads' average requirements are expected to be smaller than the resources of the selected plan. For example, you could provision a node with 10 vCPUs and use compute plans that request 2 vCPUs for deployment services, which would mean up to 5 containers could be deployed on the node (ignoring overheads). By setting the service CPU request modifier to 0.5, the same node could deploy up to 10 containers for the deployment service, with each container throttled below 2 vCPU as more workloads are deployed on the node, down to a minimum of 1 vCPU per container.
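The sketch below (illustrative only, ignoring node overheads such as system reservations) works through this example, showing how lowering the CPU request modifier from 1.0 to 0.5 doubles the number of containers that fit on the 10 vCPU node.

```python
# Minimal sketch (not Northflank code): the over-provisioning example above,
# ignoring node overheads such as system reservations.

NODE_VCPU = 10   # example node size
PLAN_VCPU = 2    # example compute plan

def max_containers(node_vcpu: float, plan_vcpu: float, cpu_modifier: float) -> int:
    """Containers that fit when each one requests plan_vcpu * cpu_modifier."""
    return int(node_vcpu // (plan_vcpu * cpu_modifier))

print(max_containers(NODE_VCPU, PLAN_VCPU, 1.0))  # 5 containers, 2 vCPU guaranteed each
print(max_containers(NODE_VCPU, PLAN_VCPU, 0.5))  # 10 containers, throttled towards 1 vCPU each under contention
```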

  • Workloads will be increasingly throttled as the node reaches full CPU utilisation and as additional containers are deployed. Consider how much CPU and memory your workloads will require when configuring request modifiers to avoid situations where nodes become oversaturated.
  • An excessively aggressive memory request modifier can lead to memory usage exceeding the node's total memory. This will cause the node to experience out of memory (OOM) errors and workloads will become unavailable, potentially causing cascading failures across nodes. See the sketch after this list for an illustration.
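The following sketch (illustrative only, with hypothetical node and plan sizes) shows why an aggressive memory request modifier is riskier than a CPU modifier: if every container actually uses its full plan memory, total usage can exceed the node's memory even though every request was satisfied.

```python
# Minimal sketch (not Northflank code): why an aggressive memory modifier is risky.
# Unlike CPU, memory usage cannot be throttled down, so if containers actually use
# their full plan memory the node can run out even though all requests fitted.

NODE_MEMORY_MB = 16384   # example node
PLAN_MEMORY_MB = 4096    # example plan
MEMORY_MODIFIER = 0.5

containers = int(NODE_MEMORY_MB // (PLAN_MEMORY_MB * MEMORY_MODIFIER))  # 8 fit by request
worst_case_usage = containers * PLAN_MEMORY_MB                          # 32768 MB if all hit their limit

if worst_case_usage > NODE_MEMORY_MB:
    print(f"{containers} containers can over-commit memory by "
          f"{worst_case_usage - NODE_MEMORY_MB} MB -> risk of OOM kills")
```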

Request modifiers affect all container deployments on the cluster, and will reduce the resources requested by all compute and build plans by the same percentage. Changing the request modifiers for addons on an existing cluster will restart any addons on the cluster.

Custom resource plans

You can create resource plans for your team to deploy workloads with custom vCPU and memory requests and limits.

Set volume deletion preferences

By default, Northflank will delete all volumes and volume snapshots (from addon backups) when you delete a cluster.

If you want to retain these volumes, expand the advanced menu when creating a cluster and uncheck the deletion options, or uncheck them in the settings on the cluster details page for an existing cluster.

Please note that you will still be billed by your cloud provider for any volumes left after cluster deletion. You should always delete your Northflank-created clusters through the Northflank interface to avoid orphaned resources.
