
If you're working with GPU-intensive tasks—such as training deep learning models with TensorFlow, processing massive datasets, or performing real-time simulations—a Jupyter Notebook with TensorFlow pre-installed can dramatically streamline your workflow. This setup accelerates computation, shortens development cycles, and enables rapid experimentation directly within a familiar, browser-based interface.
When deployed into your own cloud account using Northflank's BYOC, you gain the full flexibility of Jupyter's interactive environment along with the raw performance of NVIDIA GPU acceleration. It’s an ideal solution for AI development, scientific computing, and any high-throughput workload that demands efficient, scalable compute resources, all in your own cloud infrastructure.
This guide covers integrating your cloud provider account with Northflank, creating a new cluster in your cloud with Northflank, and finally deploying and using a Jupyter Notebook optimised for GPU acceleration.
You can use our stack templates to deploy a new cluster with a Jupyter TensorFlow Notebook project.
You can open the visual editor to view and edit the configuration of any node, and make changes to the cluster, custom plan, and deployments.
To follow this guide, you'll need:
- A Northflank account
- An account with a supported cloud provider
First, you'll need to integrate your cloud provider account with Northflank. This lets you deploy Northflank-managed clusters into your chosen cloud and fine-tune your infrastructure and networking to suit your requirements. To do this:
- Create a new integration in your team
- Give it a name and select your provider
- Follow the instructions and create the integration
Next, you'll need to provision a new cluster in your cloud account using Northflank. To do this:
- Give your new cluster a name, select the provider, and choose your credentials
- Choose the region to deploy your cluster into. You'll need sufficient quota to deploy your desired node types in that region.
- Add your node pools. Each cluster requires at least one node pool, with a combined minimum of 4 vCPU and 8GB memory across all node pools for system components. To enable GPU workloads, add a node pool with a GPU-enabled node type.
- Create your new cluster and wait for it to provision (this can take 15-30 minutes)
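Once the cluster is ready, you can optionally confirm that the GPU node pool is advertising its devices. This is a generic Kubernetes check rather than a Northflank-specific step, and it assumes your cloud provider tooling gives you kubectl access to the new cluster; it isn't required for the rest of this guide.

```bash
# List each node together with the number of NVIDIA GPUs it reports as allocatable.
# GPU nodes should show a non-zero value once the device plugin is running.
kubectl get nodes -o custom-columns='NAME:.metadata.name,GPUS:.status.allocatable.nvidia\.com/gpu'
```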
Learn more about configuring clusters on Northflank.
- In your Northflank account, create a new project
- Select bring your own cloud and enter a name
- Choose your cluster and click create
- Create a new deployment service in your project
- Name your service, and select external image in the deployment section
- Enter the image name `quay.io/jupyter/tensorflow-notebook:cuda-latest` (an optional local sanity check for this image is sketched after the list below)
- Add port `8888` in the networking section, choose HTTP, and publicly expose the port to the internet
- Choose a deployment plan with sufficient resources for your requirements, as well as a GPU from the dropdown. You can define a custom resource plan to make full use of your cluster's nodes.
- Click create service to deploy into your cluster
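If you want to try the same image outside Northflank first, a minimal local run might look like the sketch below. It assumes a machine with an NVIDIA GPU, Docker, and the NVIDIA Container Toolkit installed, and it isn't required for the Northflank deployment itself.

```bash
# Run the Jupyter TensorFlow image locally, exposing Jupyter's default port 8888
# and passing all available GPUs through to the container.
docker run --rm -it --gpus all -p 8888:8888 \
  quay.io/jupyter/tensorflow-notebook:cuda-latest
```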
You will be redirected to your new service dashboard, which will look something like this:
Since containers on Northflank are ephemeral, any data will be lost when the container restarts. To make use of persistent storage, you will need to add a volume to your service.
To do this:
- Open volumes in the left-hand menu in your service dashboard
- Add a volume and name it something sensible
- Adjust the storage size to meet your requirements
- For the container mount path, enter `/home/jovyan`, which is where Jupyter stores data (a quick persistence check is sketched after this list)
- Click create and attach volume. Your deployment will restart when the volume is attached and will be limited to 1 instance.
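Once the volume is attached, you can confirm persistence from a notebook cell with a minimal sketch like the one below; the file name is arbitrary. Anything written under `/home/jovyan` should survive a container restart, while files written elsewhere will not.

```python
from pathlib import Path

# /home/jovyan is backed by the attached volume, so this file should
# still exist after the container restarts.
marker = Path("/home/jovyan/persistence-check.txt")
marker.write_text("written before restart\n")
print(marker.read_text())
```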
Once your container has started up, you will be able to access your new deployment via the URL in the top right corner of your service dashboard. This will take you to the web interface for your Jupyter server.
To log in, you need to grab your authentication token. Return to your service dashboard and click the shell button on your running container. Once you're in the shell, run the command `jupyter server list`. This will return a local URL with the token appended.
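The output will look something like the following; the URL and token shown here are placeholders, and the hostname may differ in your container.

```
Currently running servers:
http://localhost:8888/?token=<your-token> :: /home/jovyan
```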
Copy just the token (for example, 34e…144), and head back to your Jupyter web interface. Paste the token into the ‘password or token’ field, set a new password, and log in. You can now use your Jupyter Notebook with TensorFlow and GPU acceleration. Try running the following commands, and you should see a GPU device listed:
```python
import tensorflow as tf
tf.config.list_physical_devices('GPU')
```
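To go a step further than listing devices, the minimal sketch below runs a small matrix multiplication explicitly on the first GPU and logs where each operation executed. It only uses standard TensorFlow APIs, but the exact device names in the log output will depend on your node type.

```python
import tensorflow as tf

# Confirm the build has CUDA support and at least one GPU is visible.
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))

# Log device placement, then run a small matmul pinned to the first GPU.
tf.debugging.set_log_device_placement(True)
with tf.device('/GPU:0'):
    a = tf.random.normal([1024, 1024])
    b = tf.random.normal([1024, 1024])
    c = tf.matmul(a, b)

print("Result shape:", c.shape)
```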
Northflank allows you to deploy your code and databases within minutes. Sign up for a Northflank account and create a free project to get started.
- Build, deploy, scale, and release from development to production
- Deploy clusters in your own cloud accounts
- Run GPU workloads
- Observe & monitor with real-time metrics & logs
- Deploy managed databases and storage
- Manage infrastructure as code