
By Nele Uhlemann

Published 6th December 2023

Application Performance Monitoring on Northflank with Autometrics

Adapted from the original post, Application Performance Monitoring on Northflank, on the Autometrics blog.

Autometrics is an open-source micro framework for application monitoring. Built on Prometheus, it offers language libraries in C#, Go, Java, JS, Python, and Rust that simplify instrumentation at the level of your code's functions. Autometrics automates the creation of PromQL queries, providing insights into latency, errors, and request rates for each function, making it easy to glean valuable insights from your application metrics.
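For a sense of what those generated queries look like, here is the rough shape of a request-rate query for a single instrumented function (metric and label names are illustrative and vary slightly between Autometrics library versions):

```promql
# Requests per second for one function, averaged over 5 minutes
sum by (function) (rate(function_calls_total{function="create_user"}[5m]))
```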

In this guide, we’ll detail how to set up Autometrics for your Northflank-hosted applications.

1. Instrument application & SLOs

The Autometrics language libraries use the concept of wrappers, or meta programming, for instrumenting functions in the application code. Check out the Autometrics docs to learn how the instrumentation works for different languages.
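To illustrate the wrapper concept, here is a minimal stdlib-only sketch of what such instrumentation does under the hood: a decorator that records call counts, errors, and latencies per function. This is not the Autometrics API itself (in the Python library you would simply apply its `@autometrics` decorator); it only demonstrates the metaprogramming pattern the libraries use.

```python
import time
from collections import defaultdict
from functools import wraps

# Per-function counters, mimicking the kind of data Autometrics records.
CALLS = defaultdict(int)
ERRORS = defaultdict(int)
LATENCIES = defaultdict(list)

def autometrics_like(fn):
    """Record call count, error count, and latency for the wrapped function."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        except Exception:
            ERRORS[fn.__name__] += 1
            raise
        finally:
            CALLS[fn.__name__] += 1
            LATENCIES[fn.__name__].append(time.perf_counter() - start)
    return wrapper

@autometrics_like
def create_user(name):
    # Hypothetical handler used only for this illustration
    if not name:
        raise ValueError("name required")
    return {"name": name}
```

Because the wrapper keys everything by function name, dashboards and queries can later slice latency and error rates per function with no per-call bookkeeping in your business logic.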

In addition to instrumentation, the libraries expose helpers for defining Service Level Objectives (SLOs). You can reference this guide to learn more about SLOs, and how to configure them in different languages.
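At its core, a success-rate SLO reduces to a simple check that alerting is built on. The sketch below shows that arithmetic in plain Python (the function name and default objective are illustrative, not part of the Autometrics API):

```python
def meets_slo(successes: int, total: int, objective: float = 0.99) -> bool:
    """Return True if the observed success ratio meets the objective.

    Mirrors what a success-rate SLO alerts on:
    successful calls / total calls >= target ratio.
    """
    if total == 0:
        return True  # no traffic, no violation
    return successes / total >= objective

# e.g. 995 of 1000 calls succeeded against a 99% objective -> met
```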

2. Set application port

Autometrics exposes metrics on a designated API endpoint, typically /metrics, which Prometheus polls at regular intervals to collect and store data. To enable this connection within Northflank, ensure that the application exposes a port and has private network access activated.

Adding a private port to a deployment in the Northflank application

3. Deploy and set up Prometheus

To deploy Prometheus on your Northflank Cloud, begin by setting up a repository that includes a Dockerfile and a prometheus.yml file. The prometheus.yml will configure the specific endpoint that Prometheus will use to retrieve data. In this context, the file should specify the /metrics endpoint of your application.
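A minimal Dockerfile for this repository might look like the following (paths assume the official Prometheus image's default config location):

```dockerfile
# Sketch: package Prometheus together with your scrape configuration
FROM prom/prometheus:latest
COPY prometheus.yml /etc/prometheus/prometheus.yml
```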

scrape_configs:
  - job_name: internal-endpoint2
    metrics_path: /metrics
    static_configs:
      # Replace the port with the port your /metrics endpoint is running on
      - targets: ['to-do-backend:8000']
    # For a real deployment you would want a longer scrape interval,
    # but for testing a short interval makes the data show up quickly
    scrape_interval: 5s

4. Connect Autometrics Explorer to Prometheus

To connect Autometrics Explorer to your running Prometheus instance, add a private port to the Prometheus deployment and enable private network access. Be sure to select TCP as the protocol.

Adding a private port to a deployment in the Northflank application

5. Deploy and set up Autometrics Explorer

You can deploy the Autometrics Explorer directly from Docker Hub. To connect it to your running Prometheus instance, configure an environment variable within the Northflank setup of the Explorer.

Adding an environment variable to a deployment in the Northflank application
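The variable points the Explorer at Prometheus's private endpoint. The variable name and values below are illustrative assumptions; match them to your own Explorer image's documentation and your Prometheus service's private hostname and port:

```shell
# Hypothetical values: replace host and port with your Prometheus service's private endpoint
PROMETHEUS_URL=http://prometheus:9090
```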

Lastly, make sure to configure a public endpoint to ensure that the Autometrics Explorer UI can be accessed.

Adding a public port to a deployment in the Northflank application

The Autometrics Explorer is a specialized user interface that offers an in-depth view of function call graphs and robust alerting capabilities. You can also visualize Autometrics data via the Autometrics VS Code extension or preconfigured Grafana dashboards. With these tools in place, you can monitor your application's performance through function-based metrics, Service Level Objectives (SLOs), and function call graphs, gaining valuable insights for debugging and improving your application code.

The Autometrics dashboard view after integrating an application running on Northflank

Originally published at https://autometrics.dev/blog/application-performance-monitoring-on-northflank.
