Containers
Run and schedule containers
Containers on Northflank
Why use containerized code
- Once code is designed to run inside a container, deployment is done the same way on-premises and in the cloud.
- Provides a consistent deployment environment, reducing differences between environments.
- Keeps code neatly separated, enabling a microservices architecture.
- Isolates services and applications, making them more secure.
- Maximises scalability and availability.
Container image registries
- Amazon Elastic Container Registry
- Google Cloud Container Registry
- GitHub
- GitLab
- Northflank
- Quay
Container image builders
- Docker
- BuildKit
- Kaniko
- Buildpacks
- Paketo Buildpacks
- CNBs - Cloud Native Buildpacks
Platform Teams
Northflank makes it easier for infrastructure and platform teams to focus on levelling up the developer experience for their teams building great apps.
Common questions about containers from developers and platform teams:
Health checks can be configured so that incoming traffic is only routed to available and healthy containers. Under health checks, you can configure the type, path, protocol and frequency with which the checks will run.
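As an illustrative sketch (the field names are indicative, not necessarily the exact Northflank schema), a health check combining these options might look like:

```json
{
  "healthChecks": [
    {
      "type": "livenessProbe",
      "protocol": "HTTP",
      "path": "/health",
      "initialDelaySeconds": 10,
      "periodSeconds": 30,
      "failureThreshold": 3
    }
  ]
}
```

Here the check calls `/health` over HTTP every 30 seconds, and the container is considered unhealthy after three consecutive failures.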
When creating a service, under Resources you can choose the compute plan (memory and virtual CPU), storage and replicas. Once the service is created, you can change these at any time under Resources to adapt to your needs.
When creating a service, under Networking you can choose the ports at which the service will be accessible. If you choose “Publicly expose this port to the internet”, your service will be given a public URL and will be externally accessible.
You can add build arguments and environment variables to your services, injected at build time and runtime respectively. The editor for both lets you view arguments and variables and edit them as key-value pairs, or in JSON or ENV format.
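For example, the same pair of variables can be entered in ENV format:

```
DATABASE_HOST=db.example.com
DATABASE_PORT=5432
```

or equivalently in JSON format (the variable names here are illustrative):

```json
{
  "DATABASE_HOST": "db.example.com",
  "DATABASE_PORT": "5432"
}
```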
Containerized Applications on Kubernetes with Northflank
Northflank makes it very simple to build container images from your VCS account or deploy existing images from public container registries.
Kubernetes container orchestration, normally complex and challenging, is immediately usable via the Northflank UI and API.
Immediately set up CI/CD flows, scale the RAM and CPU resources to meet your requirements, and customise the exposed ports.
Persistent and stateful containers on Northflank
Volumes are great for persisting data generated by and used by containers. On Northflank, volumes can be attached to services to persist data across restarts.
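As an illustrative sketch (the names and fields below are hypothetical, not the exact Northflank schema), a volume attached to a service might be described as:

```json
{
  "name": "data-volume",
  "spec": { "storageSize": 4096 },
  "mounts": [{ "containerMountPath": "/data" }],
  "attachedObjects": [{ "id": "example-service", "type": "service" }]
}
```

Anything the service writes under `/data` would then survive container restarts.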
Run a container with an image on Docker Hub via API
// Assumes an initialized, authenticated Northflank API client
// (apiClient) from @northflank/js-client, scoped to the target project.
await apiClient.create.service.deployment({
  data: {
    "name": "Example Service",
    "description": "A service description",
    "deployment": {
      // Number of running container instances
      "instances": 1,
      "external": {
        // Public image pulled from Docker Hub
        "imagePath": "nginx:latest"
      }
    },
    "ports": [
      {
        "name": "port-1",
        "internalPort": 8080,
        // Expose this port publicly on the given domains
        "public": true,
        "domains": ["app.example.com"],
        "protocol": "HTTP"
      }
    ],
    // Environment variables injected at runtime
    "runtimeEnvironment": {
      "VARIABLE_1": "abcdef"
    }
  }
});
Scale containers via API
// Assumes the same authenticated client; scales an existing service
// to the given instance count and compute plan.
await apiClient.scale.service({
  data: {
    "instances": 1,
    "deploymentPlan": "nf-compute-20"
  }
});