

Deploy Elastic Stack on Northflank - Elasticsearch, Kibana, & Logstash
The Elastic stack, often referred to as the ELK stack, is a powerful combination of tools to collect, analyze, and visualize data in real time. The stack includes Elasticsearch for storing and indexing data, Logstash for data processing, and Kibana for visualization. Together, these tools provide actionable insights across diverse use cases like application monitoring, log analytics, and search functionality.
Deploying the Elastic stack on Northflank introduces significant advantages, as the platform combines scalability, containerization, and seamless DevOps integration, making it an ideal environment to host the stack efficiently.
This article will guide you through the step-by-step process of deploying the ELK stack on Northflank. We'll cover how to set up persistent storage for Elasticsearch to ensure your data is secure and accessible, making the deployment both robust and production-ready.
We'll then look at more advanced Northflank features that you can use to scale and manage your project, using pipelines and release flows to take your stack from development through to production environments, and using secret groups for configuration and storing sensitive values.
Before you deploy the ELK stack on Northflank, make sure you have done the following:
- Sign up or log in to your Northflank account and create or select a team
- Create a project for your ELK stack, or choose an existing one to deploy into
- Connect your Git account to enable building and deploying directly from your repositories
In this demo, we will use a single repo with separate folders for the ELK components (Elasticsearch, Logstash, and Kibana) to build the Docker images. This is the structure of the Elasticsearch directory; the same structure is used for the other components:
```
.
├── elasticsearch/
│   ├── bin/
│   │   ├── docker-entrypoint.sh
│   │   └── docker-openjdk
│   ├── config/
│   │   ├── elasticsearch.yml
│   │   └── log4j2.properties
│   └── Dockerfile
├── kibana/
├── logstash/
└── README.md
```
You can follow this guide to set up your existing repository, or fork our example repo.
First we'll set up Elasticsearch in Northflank using separate build and deploy services. These services can be added to a pipeline later, and we can manage deployments using release flows. Then, we'll add Logstash and Kibana with the same method.
We have configured Elasticsearch in `elasticsearch/config/elasticsearch.yml` to run with the configuration below: the discovery type is `single-node`, and the security features are disabled for the purposes of this demo.
```yaml
discovery.type: "single-node"
cluster.name: "docker-cluster"
network.host: 0.0.0.0
xpack.security.enabled: false
```
We can build the Elasticsearch image to deploy by adding a Dockerfile to the Git repository and then linking this repository to a Northflank build service. This allows you to tailor the image to your specific needs, such as including plugins or pre-configuring settings.
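A minimal sketch of what such a Dockerfile could look like, assuming the official Elasticsearch base image (the version tag is illustrative, and the example repo may differ):

```dockerfile
# Build on the official Elasticsearch image (pin whichever release you use)
FROM docker.elastic.co/elasticsearch/elasticsearch:8.12.0

# Overlay the repository's configuration files onto the image defaults
COPY config/elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
COPY config/log4j2.properties /usr/share/elasticsearch/config/log4j2.properties
```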
- Navigate to your project, then choose create new > service from the project dashboard and select build service.
- Give it a name (`build-elasticsearch`), then go to the repository section and pick your Elasticsearch repository. You can set the build rules for pull requests and branches here, and the build service will automatically build any commits pushed to the repo that match your rules.
- Select Dockerfile as the build type. You can change the location of your Dockerfile if it's not in your repository root, or change your build context if your Elasticsearch configuration files are in a subdirectory of the repo. Based on the demo's directory structure, we specify `/elasticsearch` as the build context and `/elasticsearch/Dockerfile` as the Dockerfile location.
- We'll use the recommended default resources, which provide 4 vCPU and 16GB memory for the build process.
- Next, create the build service and click the start build button. Select the branch and then the commit you want and trigger a build. You can then click on the build in progress to view logs and metrics; when the build has completed successfully, the status will be updated in the service.
Now, you need to create a deployment service, which will deploy the image created in the previous step. Choose create new > service from the project dashboard. Make sure deployment service is selected, and give it a name (`elasticsearch`). In the deployment section, select Northflank as the source, and choose the Elasticsearch build service created in the previous step from the list.
In the networking section, you need to configure certain ports to allow connectivity for Elasticsearch as below:
Port | Protocol | Accessibility |
---|---|---|
9200 | TCP | Private |
9300 | TCP | Private |

You should select a compute plan with sufficient resources; the Northflank 100-1 plan has 1 vCPU and 1GB memory, which is enough to run Elasticsearch for this demo. You can scale your resource plan at any time. After selecting a plan, create the service.
After creation, you can go to the volumes section of the deployment service and add a persistent volume for the Elasticsearch data. Give it a name and select the type and size of disk to use (you can increase the disk size later if required). Set the mount path to `/usr/share/elasticsearch/data`, where Elasticsearch stores and indexes its data, then save & redeploy.
- You can click on a container to check the logs and metrics of the deployment when it has successfully deployed. Open ports & DNS, change port 9200 to HTTP, and expose it to the internet. This is a temporary change to check the endpoint; alternatively, you can use the Northflank CLI to securely forward resources in your project without exposing them to the public internet.
- You should be able to see that Elasticsearch is accessible through the public endpoint, as below; you can then switch the port type back to TCP.
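A quick way to verify the endpoint from your terminal (a hedged example; replace the subdomain with the one generated for your service):

```shell
# Query the Elasticsearch root endpoint over the temporarily exposed port
curl https://elasticsearch--example.code.run
```

A healthy node responds with a JSON document containing the node name, cluster name, and version, ending with the tagline "You Know, for Search".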
Repeat the steps above to create build and deployment services for Logstash, with the following changes:
- The build context for the Logstash build service will be `/logstash` and the Dockerfile path will be `/logstash/Dockerfile`
- The ports to be configured for Logstash are as below:
Port | Protocol | Accessibility |
---|---|---|
9600 | TCP | Private |
5044 | TCP | Private |
Logstash is configured to generate a "Hello world" message, send it to the private Elasticsearch endpoint (obtained from an environment variable), and index it as below.
```
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "hello-world-logs-%{+YYYY.MM.dd}"
  }
  stdout {
    codec => rubydebug
  }
}
```
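For context, a minimal input section that would produce such messages might use the `generator` plugin (a sketch; the input in the example repo may differ):

```
input {
  # The generator plugin emits synthetic events, which is handy for
  # testing a pipeline end-to-end
  generator {
    message => "Hello world"
    count   => 10
  }
}
```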
After starting the Logstash deployment service, you should be able to see logs containing “successfully started Logstash API endpoint” and that the Elasticsearch service has been detected.
```
[INFO][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[INFO][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
```
As the example pipeline only runs a finite loop, this process will end and the service will restart; in normal usage, Logstash would run continuously.
Next we’ll create build and deployment services for Kibana, with the below changes:
- The build context for the Kibana build service will be `/kibana` and the Dockerfile path will be `/kibana/Dockerfile`
- The network port to be configured for the Kibana deployment is as below:
Port | Protocol | Accessibility |
---|---|---|
5601 | HTTP | Public |
The Elasticsearch private endpoint needs to be configured in the `/kibana/config/kibana.yml` file:
```yaml
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
```
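A fuller `kibana.yml` for this setup might look like the following (a hedged sketch; the Kibana Docker image ships with similar defaults):

```yaml
server.host: "0.0.0.0"    # listen on all interfaces so the platform can route traffic
server.port: 5601
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
```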
After starting the Kibana deployment service, you should be able to see logs mentioning Kibana is running on port 5601, and that it is successfully connected to the Elasticsearch service.
```
[INFO][http.server.Kibana] http server running at http://0.0.0.0:5601
[INFO][elasticsearch-service] Successfully connected to Elasticsearch after waiting for 678 milliseconds
```
Check the public Kibana subdomain, found in the header of the service, and you should be able to see the Kibana welcome page.
Now we have successfully deployed the ELK stack on Northflank. In the next section, we will navigate through Kibana to make sure it is receiving and indexing data, and we will also test data persistence by restarting the Elasticsearch service.
Now let's check our data in Kibana by navigating to management > data > index management. You should see that the data has been indexed successfully. You can then create an index pattern to explore the data through a single view.
To make sure data is persistent, you can try restarting the Elasticsearch deployment service through the rollout restart button found in the service header, or on your project service overview. After restarting, you can check that the data in Kibana is still present.
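You can also verify this from the command line by listing the demo indices before and after the restart (a hedged example, assuming you have forwarded the private port 9200 to your local machine, e.g. with the Northflank CLI forwarding mentioned earlier):

```shell
# List the hello-world indices with a header row; the doc counts should be
# unchanged after the restart
curl "http://localhost:9200/_cat/indices/hello-world-logs-*?v"
```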
To ensure a seamless deployment of the Elastic stack components without any downtime, we will utilize the release flow feature of the Northflank platform. This approach allows us to deploy each component in a controlled sequence, maintaining system stability throughout the process. We’ll begin by deploying Elasticsearch, as it serves as the core data store and must be operational before the other components. Once Elasticsearch is up and running, we’ll deploy Logstash to handle data ingestion and transformation, ensuring it can immediately connect to Elasticsearch. Finally, we’ll deploy Kibana, which depends on both Elasticsearch and Logstash, to provide real-time visualization and analytics. By following this stepwise release flow, we can guarantee a smooth deployment with minimal risk of service interruptions.
To create a release flow, create a pipeline in your project first and then add your deployment services to a certain stage (e.g., development).
Then you can click add release flow on the pipeline stage that contains your resources. You will be directed to the release flow editor, where you configure it using the visual editor:
- Drag and drop a parallel workflow into the sequential workflow. This will allow the nodes inside it to run simultaneously, without waiting for other nodes to complete.
- Drag and drop three start build nodes into the parallel workflow, and select one of the project's build services in each node, one for each part of the ELK stack. Make sure `wait for completion` is enabled.
- Drag a deploy build node and drop it after the parallel workflow. Edit the node, select the reference to the start build node for Elasticsearch, and use the same reference for the branch and build fields.
- Drag and drop an await condition node after the deploy build node. Set the kind to `service`, choose the `elasticsearch` service as the resource, and wait until the resource is running.
- Repeat the previous two steps to deploy Logstash and then Kibana, each waiting until the previous deployment has completed.
If you find that services are being deployed before other services are ready to serve traffic, you can add health checks to make sure your applications have initialised after the container has started. Try adding a readiness probe to your Elasticsearch deployment: select `TCP` and port `9200`, and set the initial delay to `30` seconds, with `15`-second intervals and `6` max failures. This should give the application time to start up before Logstash is started.
You can run the release flow by returning to the pipeline and clicking run on the relevant stage. It will proceed according to the template we defined, building Elasticsearch, Logstash, and Kibana, and then deploying each service in order.
To run releases automatically, you can add a Git trigger to run the release flow when a commit is pushed to a branch that matches the trigger's rules. For example, you could watch your repository for any changes to the `main` branch, and trigger a release when it's updated. For this demo, we have configured the release flow to be triggered whenever a new commit is pushed to the `main` branch.
You can make any change in your repository and commit it to the `main` branch, and you will notice that the release flow is triggered automatically. You can click through to watch the run in progress as it executes each node.
In this demo we used the default connection details and settings for our services. You can configure your Elastic services using environment variables, which allows you to use the same repository for multiple environments, or test new settings without having to commit the configuration to your repository.
In our example, we could set the hosts for our Kibana and Logstash services using environment variables, either set directly on the service or in a secret group.
For example, the host set in `kibana.yml` by `elasticsearch.hosts` could be set by the environment variable `ELASTICSEARCH_HOSTS` with the value `http://elasticsearch:9200`, the private endpoint of our Elasticsearch service.
Environment variables in a secret group will be inherited by all services in your project, unless you restrict them.
You can configure each service in the ELK stack using environment variables:
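For example (a hedged sketch; exact variable support depends on the image versions you build from):

```shell
# Kibana's Docker image maps ELASTICSEARCH_HOSTS onto elasticsearch.hosts
ELASTICSEARCH_HOSTS=http://elasticsearch:9200

# Elasticsearch reads JVM options, such as heap size, from ES_JAVA_OPTS
ES_JAVA_OPTS=-Xms512m -Xmx512m
```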
You can also upload secret files to provide configuration files to your services.
This article demonstrated how Northflank offers a powerful platform for deploying, managing, and scaling ELK stack components. With features like release flows, secret groups, and different types of services that manage the application lifecycle, Northflank simplifies CI/CD workflows while maintaining flexibility and efficiency for deploying the ELK stack.
Whether you’re a solo developer or part of a large team, Northflank empowers you to focus on delivering high-quality applications while it handles the headache of infrastructure management.
You can continue reading other guides on how to deploy different applications on Northflank, or explore the benefits of the Bring Your Own Cloud feature on Northflank.
Northflank allows you to deploy your code and databases within minutes. Sign up for a Northflank account and create a free project to get started.
- Deployment of Docker containers
- Create your own stateful workloads
- Persistent volumes
- Observe & monitor with real-time metrics & logs
- Low latency and high performance
- Backup, restore and fork databases