
Migrate from RDS to Northflank

For RDS instances that are configured as private (not publicly accessible) and can only be reached from within the same VPC, there are two approaches to migrating your data to Northflank:

  1. Stream MySQL data from private RDS to public Northflank addon via EC2 — Provision an EC2 instance in the RDS VPC, connect to the internal (non-public) RDS instance, and stream the data directly to a publicly exposed Northflank addon.

  2. Dump and restore via an S3 bucket — Provision an EC2 instance with an S3 role, connect to the internal RDS instance, and run a script that dumps the database to S3. Then create credentials for the S3 bucket, deploy a private Northflank addon, and deploy the provided Northflank service together with the load script; the service restores the dump into the Northflank database using the S3 credentials.

1. Stream MySQL data from private RDS to public Northflank addon via EC2

Provision EC2 Instance

Take into account how your database is set up, including its security groups and subnets. The following commands are suggestions to simplify the process; adapt them to your specific setup.

Find your RDS instance details:

aws rds describe-db-instances \
  --db-instance-identifier your-db-name \
  --query 'DBInstances[0].[DBSubnetGroup.VpcId,DBSubnetGroup.Subnets[0].SubnetIdentifier,VpcSecurityGroups[0].VpcSecurityGroupId]' \
  --output table

Note: Replace "your-db-name" with your actual RDS instance identifier.

Get your RDS subnet ID and export it as an env var:

export RDS_SUBNET=$(aws rds describe-db-instances \
  --db-instance-identifier your-db-name \
  --query 'DBInstances[0].DBSubnetGroup.Subnets[0].SubnetIdentifier' \
  --output text)

Get your RDS security group ID and export it as an env var:

export RDS_SG_ID="sg-xxxxx"
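
Alternatively, derive the security group ID with the same describe-db-instances query used above (assuming the first listed security group is the relevant one):

export RDS_SG_ID=$(aws rds describe-db-instances \
  --db-instance-identifier your-db-name \
  --query 'DBInstances[0].VpcSecurityGroups[0].VpcSecurityGroupId' \
  --output text)

Note that the launch command below attaches this security group to the EC2 instance. For the instance to reach RDS, the group may need a self-referencing ingress rule; a minimal sketch, assuming the default MySQL port 3306:

aws ec2 authorize-security-group-ingress \
  --group-id $RDS_SG_ID \
  --protocol tcp \
  --port 3306 \
  --source-group $RDS_SG_ID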

Get the latest Amazon Linux 2023 AMI:

AMI_ID=$(aws ec2 describe-images \
  --owners amazon \
  --filters "Name=name,Values=al2023-ami-2023.*-x86_64" \
  --query 'sort_by(Images, &CreationDate)[-1].ImageId' \
  --output text)

Launch the instance. At this stage, consider how you'll connect to it: if via SSH, provide --key-name=<key>. It's also recommended to place the instance in a subnet with automatic public IPv4 assignment (or pass --associate-public-ip-address), as the public IP will be used later to whitelist the instance on the Northflank addon.

INSTANCE_ID=$(aws ec2 run-instances \
  --image-id $AMI_ID \
  --instance-type c5.2xlarge \
  --security-group-ids $RDS_SG_ID \
  --subnet-id $RDS_SUBNET \
  --query 'Instances[0].InstanceId' \
  --output text)

echo "Instance ID: $INSTANCE_ID"

Wait for instance to be ready (takes 1-2 minutes):

aws ec2 wait instance-running --instance-ids $INSTANCE_ID
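
If you launched with --key-name and the security group allows SSH from your IP, you can connect like this (ec2-user is the default user on Amazon Linux 2023):

ssh -i <key>.pem ec2-user@$(aws ec2 describe-instances \
  --instance-ids $INSTANCE_ID \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text)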

Deploy Northflank addon

The following is the spec for the Northflank addon to deploy; it can be created via the UI, a template, or the CLI.

{
  "kind": "Addon",
  "spec": {
    "name": <ADDON_DATABASE_NAME>,
    "infrastructure": {
      "architecture": "x86"
    },
    "type": "mysql",
    "version": "8.4.7",
    "billing": {
      "deploymentPlan": "nf-compute-800-16",
      "storageClass": "nvme",
      "storage": 409600,
      "replicas": 1
    },
    "tlsEnabled": true,
    "externalAccessEnabled": false
  }
}

Once both the EC2 instance and the addon are ready, extract the public IPv4 address assigned to the EC2 instance (if available) and whitelist it on the Northflank addon:

EC2_IP=$(aws ec2 describe-instances --instance-ids $INSTANCE_ID --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)

northflank patch addons --projectId <PROJECT_ID> --addonId <ADDON_ID> -i "{\"externalAccessEnabled\": true, \"ipPolicies\": [{\"addresses\": [\"$EC2_IP\"], \"action\": \"ALLOW\"}]}"

Install dependencies

Connect to the EC2 instance and run the following commands to install MySQL Shell (mysqlsh), jq, and the Northflank CLI.

sudo dnf update -y
curl -fsSL https://rpm.nodesource.com/setup_lts.x | sudo bash -
sudo dnf install -y https://dev.mysql.com/get/mysql84-community-release-el9-1.noarch.rpm nodejs
sudo rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023
sudo dnf install -y mysql-shell jq
sudo npm i -g @northflank/cli

Generate an API token from: https://app.northflank.com/t/<team>/settings/api/tokens

Login to northflank:

northflank login --token-login

Populate the relevant RDS environment variables so the EC2 instance can connect to the database:

export SOURCE_HOST=<RDS_HOST>
export SOURCE_PORT=<RDS_PORT>
export SOURCE_USER=<RDS_USER>
export SOURCE_PASSWORD=<RDS_PASSWORD>

Populate the following environment variables via the northflank CLI:

# northflank cli installed and authenticated
CREDS=$(northflank get addon credentials --projectId <PROJECT_ID> --addonId <ADDON_ID> -o json)

export TARGET_HOST=$(echo "$CREDS" | jq -r '.envs.HOST')
export TARGET_PORT=$(echo "$CREDS" | jq -r '.envs.EXTERNAL_PORT_PRIMARY')
export TARGET_USER=$(echo "$CREDS" | jq -r '.secrets.ADMIN_USERNAME')
export TARGET_PASSWORD=$(echo "$CREDS" | jq -r '.secrets.ADMIN_PASSWORD')
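
Optionally, verify the EC2 instance can reach the Northflank addon before starting the copy (a quick sanity check; this assumes the IP whitelist from the earlier step has taken effect):

mysqlsh --uri "$TARGET_USER@$TARGET_HOST:$TARGET_PORT" --password="$TARGET_PASSWORD" --sql -e "SELECT VERSION();"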

Set the number of copy threads based on the EC2 instance used (for example, 8 on a c5.2xlarge, one per vCPU):

export COPY_THREADS=<THREADS>

Then save the following script as mysqlsh_copy.js:

var sourceHost = os.getenv('SOURCE_HOST') || 'localhost';
var sourcePort = parseInt(os.getenv('SOURCE_PORT') || '3306');
var sourceUser = os.getenv('SOURCE_USER') || 'root';
var sourcePassword = os.getenv('SOURCE_PASSWORD') || '';

var targetHost = os.getenv('TARGET_HOST') || 'localhost';
var targetPort = parseInt(os.getenv('TARGET_PORT') || '3306');
var targetUser = os.getenv('TARGET_USER') || 'root';
var targetPassword = os.getenv('TARGET_PASSWORD') || '';

var threads = parseInt(os.getenv('COPY_THREADS') || '4');

if (!os.getenv('SOURCE_HOST') || !os.getenv('TARGET_HOST')) {
    print("Error: SOURCE_HOST and TARGET_HOST environment variables are required.\n\n");
    print("Required:\n");
    print("  SOURCE_HOST         - Source host\n");
    print("  TARGET_HOST         - Target host\n\n");
    print("Optional (with defaults):\n");
    print("  SOURCE_PORT         - Source port (default: 3306)\n");
    print("  SOURCE_USER         - Source user (default: root)\n");
    print("  SOURCE_PASSWORD     - Source password (default: empty)\n");
    print("  TARGET_PORT         - Target port (default: 3306)\n");
    print("  TARGET_USER         - Target user (default: root)\n");
    print("  TARGET_PASSWORD     - Target password (default: empty)\n");
    print("  COPY_THREADS        - Parallel threads (default: 4)\n");
    os.exit(1);
}

print("Connecting to source " + sourceHost + ":" + sourcePort + "...\n");
shell.connect({
    host: sourceHost,
    port: sourcePort,
    user: sourceUser,
    password: sourcePassword
});

print("Copying all schemas to " + targetHost + ":" + targetPort + "...\n");
util.copyInstance(
    {
        host: targetHost,
        port: targetPort,
        user: targetUser,
        password: targetPassword
    },
    {
        threads: threads,
        showProgress: true,
        ignoreExistingObjects: true,
        users: false,
    }
);

print("Copy completed successfully.\n");
session.close();

Run it with: mysqlsh --file mysqlsh_copy.js

This should start streaming data from the RDS database into the Northflank addon.
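
To spot-check the result once the copy finishes, you can compare per-schema table counts on both ends, reusing the environment variables exported above:

QUERY="SELECT table_schema, COUNT(*) AS tables FROM information_schema.tables WHERE table_schema NOT IN ('mysql','sys','information_schema','performance_schema') GROUP BY table_schema;"
mysqlsh --uri "$SOURCE_USER@$SOURCE_HOST:$SOURCE_PORT" --password="$SOURCE_PASSWORD" --sql -e "$QUERY"
mysqlsh --uri "$TARGET_USER@$TARGET_HOST:$TARGET_PORT" --password="$TARGET_PASSWORD" --sql -e "$QUERY"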

2. Dump and restore via an S3 bucket

Provision EC2 Instance

Follow the same instructions as in the first approach. The extra step here is to create and attach an IAM role so the instance can access and write to the S3 bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:ListBucket",
                "s3:CreateBucket",
                "s3:ListBucketMultipartUploads"
            ],
            "Resource": [
                "arn:aws:s3:::<S3-BUCKET>",
                "arn:aws:s3:::<S3-BUCKET>/*"
            ]
        }
    ]
}
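
The role can be created and attached with the AWS CLI; a minimal sketch, assuming the policy above is saved as s3-dump-policy.json (the role and profile names here are placeholders):

aws iam create-role --role-name rds-dump-s3-role \
  --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam put-role-policy --role-name rds-dump-s3-role \
  --policy-name s3-dump-access --policy-document file://s3-dump-policy.json
aws iam create-instance-profile --instance-profile-name rds-dump-s3-profile
aws iam add-role-to-instance-profile --instance-profile-name rds-dump-s3-profile \
  --role-name rds-dump-s3-role
aws ec2 associate-iam-instance-profile --instance-id $INSTANCE_ID \
  --iam-instance-profile Name=rds-dump-s3-profile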

EC2 Instance configuration

Install dependencies

sudo dnf update -y
sudo dnf install -y https://dev.mysql.com/get/mysql84-community-release-el9-1.noarch.rpm
sudo rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2023
sudo dnf install -y mysql-shell

Configure the following env vars:

export DUMP_HOST=<RDS_HOST>
export DUMP_PORT=<RDS_PORT>
export DUMP_USER=<RDS_USER>
export DUMP_PASSWORD=<RDS_PASSWORD>
export DUMP_THREADS=<THREADS>
export DUMP_SCHEMA=<SCHEMA>
export DUMP_S3_BUCKET=<S3-BUCKET>
export DUMP_S3_PREFIX=<S3-PREFIX>
export DUMP_S3_REGION=<S3-REGION>

Copy the following scripts to the EC2 instance.

dump_s3.sh:

#!/bin/bash
set -e

if [ -z "$DUMP_SCHEMA" ] || [ -z "$DUMP_S3_BUCKET" ]; then
    echo "Error: DUMP_SCHEMA and DUMP_S3_BUCKET environment variables are required."
    exit 1
fi

# Fetch IAM role credentials from EC2 instance metadata (IMDSv2)
echo "Fetching IAM role credentials from instance metadata..."
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
    -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")

ROLE_NAME=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/iam/security-credentials/)

if [ -z "$ROLE_NAME" ]; then
    echo "Error: No IAM role found on this instance."
    exit 1
fi

CREDS=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
    http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE_NAME)

export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | python3 -c "import sys,json; print(json.load(sys.stdin)['AccessKeyId'])")
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | python3 -c "import sys,json; print(json.load(sys.stdin)['SecretAccessKey'])")
export AWS_SESSION_TOKEN=$(echo "$CREDS" | python3 -c "import sys,json; print(json.load(sys.stdin)['Token'])")

echo "Credentials fetched for role: ${ROLE_NAME}"

mysqlsh --file ./mysqlsh_s3_dump.js

mysqlsh_s3_dump.js:

var host = os.getenv('DUMP_HOST') || 'localhost';
var port = parseInt(os.getenv('DUMP_PORT') || '3306');
var user = os.getenv('DUMP_USER') || 'root';
var password = os.getenv('DUMP_PASSWORD') || '';
var threads = parseInt(os.getenv('DUMP_THREADS') || '4');
var schema = os.getenv('DUMP_SCHEMA');

var s3Bucket = os.getenv('DUMP_S3_BUCKET');
var s3Prefix = os.getenv('DUMP_S3_PREFIX') || '';
var s3Region = os.getenv('DUMP_S3_REGION') || '';
var s3Profile = os.getenv('DUMP_S3_PROFILE') || '';
var s3Endpoint = os.getenv('DUMP_S3_ENDPOINT') || '';

if (!schema || !s3Bucket) {
    print("Error: DUMP_SCHEMA and DUMP_S3_BUCKET environment variables are required.\n\n");
    print("Required:\n");
    print("  DUMP_SCHEMA         - Schema to dump\n");
    print("  DUMP_S3_BUCKET      - S3 bucket name (must already exist)\n\n");
    print("Optional (with defaults):\n");
    print("  DUMP_HOST           - Source host (default: localhost)\n");
    print("  DUMP_PORT           - Source port (default: 3306)\n");
    print("  DUMP_USER           - Source user (default: root)\n");
    print("  DUMP_PASSWORD       - Source password (default: empty)\n");
    print("  DUMP_THREADS        - Parallel threads (default: 4)\n");
    print("  DUMP_S3_PREFIX      - Prefix/folder inside bucket (default: schema name)\n");
    print("  DUMP_S3_REGION      - AWS region (default: from AWS config)\n");
    print("  DUMP_S3_PROFILE     - AWS credentials profile (default: from AWS config)\n");
    print("  DUMP_S3_ENDPOINT    - Custom S3 endpoint for S3-compatible storage\n");
    os.exit(1);
}

print("Connecting to " + host + ":" + port + "...\n");
shell.connect({
    host: host,
    port: port,
    user: user,
    password: password
});

var dumpOptions = {
    threads: threads,
    compression: "zstd",
    showProgress: true,
    s3BucketName: s3Bucket
};

if (s3Region !== '') dumpOptions.s3Region = s3Region;
if (s3Profile !== '') dumpOptions.s3Profile = s3Profile;
if (s3Endpoint !== '') dumpOptions.s3EndpointOverride = s3Endpoint;

var outputPath = s3Prefix || schema;

print("Dumping schema '" + schema + "' to s3://" + s3Bucket + "/" + outputPath + "...\n");
util.dumpSchemas([schema], outputPath, dumpOptions);

print("Dump completed successfully.\n");
session.close();

Run it: chmod +x dump_s3.sh && ./dump_s3.sh

The dump should start uploading to the provided S3 destination. Credentials are fetched inside dump_s3.sh via the instance metadata service (IMDSv2), using the S3 role attached to the EC2 instance.
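
To confirm the upload, list the dump files in the bucket (assuming DUMP_S3_PREFIX was set; the dump path defaults to the schema name otherwise):

aws s3 ls "s3://$DUMP_S3_BUCKET/$DUMP_S3_PREFIX/" --recursive --human-readable | head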

Northflank S3 bucket import

Create an IAM user with the following permissions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<S3-BUCKET>",
                "arn:aws:s3:::<S3-BUCKET>/*"
            ]
        }
    ]
}

Generate an access key and secret for this user.
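
This can also be done via the AWS CLI; a minimal sketch (the user name is a placeholder, and the policy above is assumed saved as s3-restore-policy.json):

aws iam create-user --user-name northflank-s3-restore
aws iam put-user-policy --user-name northflank-s3-restore \
  --policy-name s3-restore-access --policy-document file://s3-restore-policy.json
aws iam create-access-key --user-name northflank-s3-restore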

Import the following Northflank template into your project; it provisions the resources needed to execute the restore. Make sure to update the relevant environment variables and AWS S3 credentials.

{
  "apiVersion": "v1.2",
  "spec": {
    "kind": "Workflow",
    "spec": {
      "type": "sequential",
      "steps": [
        {
          "kind": "Project",
          "ref": "project",
          "spec": {
            "name": "<PROJECT_ID>",
            "region": "europe-west"
          }
        },
        {
          "kind": "Workflow",
          "spec": {
            "type": "sequential",
            "steps": [
              {
                "kind": "Addon",
                "spec": {
                  "name": "database",
                  "infrastructure": {
                    "architecture": "x86"
                  },
                  "type": "mysql",
                  "version": "8.4.7",
                  "billing": {
                    "deploymentPlan": "nf-compute-800-16",
                    "storageClass": "nvme",
                    "storage": 409600,
                    "replicas": 1
                  },
                  "tlsEnabled": true,
                  "externalAccessEnabled": false,
                  "typeSpecificSettings": {
                    "mysqlRouterReplicas": 2
                  },
                  "projectId": "${refs.project.id}"
                },
                "ref": "database"
              },
              {
                "kind": "SecretGroup",
                "spec": {
                  "type": "secret",
                  "secretType": "environment",
                  "priority": 10,
                  "secrets": {
                    "variables": {
                      "LOAD_THREADS": "8",
                      "LOAD_S3_BUCKET": "<S3-BUCKET>",
                      "LOAD_S3_PREFIX": "<S3-PREFIX>",
                      "LOAD_S3_REGION": "<S3-REGION>",
                      "AWS_ACCESS_KEY_ID": "<ACCESS_KEY>",
                      "AWS_SECRET_ACCESS_KEY": "<SECRET_ACCESS_KEY>"
                    },
                    "files": {
                      "/mysqlsh_s3_load.js": {
                        "data": "IyEvdXNyL2Jpbi9lbnYKdmFyIGhvc3QgPSBvcy5nZXRlbnYoJ0xPQURfSE9TVCcpIHx8ICdsb2NhbGhvc3QnOwp2YXIgcG9ydCA9IHBhcnNlSW50KG9zLmdldGVudignTE9BRF9QT1JUJykgfHwgJzMzMDYnKTsKdmFyIHVzZXIgPSBvcy5nZXRlbnYoJ0xPQURfVVNFUicpIHx8ICdyb290JzsKdmFyIHBhc3N3b3JkID0gb3MuZ2V0ZW52KCdMT0FEX1BBU1NXT1JEJykgfHwgJyc7CnZhciB0aHJlYWRzID0gcGFyc2VJbnQob3MuZ2V0ZW52KCdMT0FEX1RIUkVBRFMnKSB8fCAnNCcpOwp2YXIgZHJvcFNjaGVtYSA9IG9zLmdldGVudignTE9BRF9EUk9QX1NDSEVNQScpIHx8ICd0cnVlJzsKdmFyIHNjaGVtYSA9IG9zLmdldGVudignTE9BRF9TQ0hFTUEnKTsKCnZhciBzM0J1Y2tldCA9IG9zLmdldGVudignTE9BRF9TM19CVUNLRVQnKTsKdmFyIHMzUHJlZml4ID0gb3MuZ2V0ZW52KCdMT0FEX1MzX1BSRUZJWCcpIHx8ICcnOwp2YXIgczNSZWdpb24gPSBvcy5nZXRlbnYoJ0xPQURfUzNfUkVHSU9OJykgfHwgJyc7CnZhciBzM1Byb2ZpbGUgPSBvcy5nZXRlbnYoJ0xPQURfUzNfUFJPRklMRScpIHx8ICcnOwp2YXIgczNFbmRwb2ludCA9IG9zLmdldGVudignTE9BRF9TM19FTkRQT0lOVCcpIHx8ICcnOwoKaWYgKCFzY2hlbWEgfHwgIXMzQnVja2V0KSB7CiAgICBwcmludCgiRXJyb3I6IExPQURfU0NIRU1BIGFuZCBMT0FEX1MzX0JVQ0tFVCBlbnZpcm9ubWVudCB2YXJpYWJsZXMgYXJlIHJlcXVpcmVkLlxuXG4iKTsKICAgIHByaW50KCJSZXF1aXJlZDpcbiIpOwogICAgcHJpbnQoIiAgTE9BRF9TQ0hFTUEgICAgICAgICAgIC0gVGFyZ2V0IHNjaGVtYSB0byBsb2FkIGludG9cbiIpOwogICAgcHJpbnQoIiAgTE9BRF9TM19CVUNLRVQgICAgICAgIC0gUzMgYnVja2V0IGNvbnRhaW5pbmcgdGhlIGR1bXBcblxuIik7CiAgICBwcmludCgiT3B0aW9uYWwgKHdpdGggZGVmYXVsdHMpOlxuIik7CiAgICBwcmludCgiICBMT0FEX0hPU1QgICAgICAgICAgICAgLSBUYXJnZXQgaG9zdCAoZGVmYXVsdDogbG9jYWxob3N0KVxuIik7CiAgICBwcmludCgiICBMT0FEX1BPUlQgICAgICAgICAgICAgLSBUYXJnZXQgcG9ydCAoZGVmYXVsdDogMzMwNilcbiIpOwogICAgcHJpbnQoIiAgTE9BRF9VU0VSICAgICAgICAgICAgIC0gVGFyZ2V0IHVzZXIgKGRlZmF1bHQ6IHJvb3QpXG4iKTsKICAgIHByaW50KCIgIExPQURfUEFTU1dPUkQgICAgICAgICAtIFRhcmdldCBwYXNzd29yZCAoZGVmYXVsdDogZW1wdHkpXG4iKTsKICAgIHByaW50KCIgIExPQURfVEhSRUFEUyAgICAgICAgICAtIFBhcmFsbGVsIHRocmVhZHMgKGRlZmF1bHQ6IDQpXG4iKTsKICAgIHByaW50KCIgIExPQURfRFJPUF9TQ0hFTUEgICAgICAtIERyb3AgYW5kIHJlY3JlYXRlIHNjaGVtYSBiZWZvcmUgbG9hZCAoZGVmYXVsdDogdHJ1ZSlcbiIpOwogICAgcHJpbnQoIiAgTE9BRF9TM19QUkVGSVggICAgICAgIC0gUHJlZml4L2ZvbGRlciBpbnNpZGUgYnVja2V0IChkZWZhdWx0OiBlbXB0eSlcbiIpOwogICAgcHJpbnQoIiAgTE9BRF9TM19SRUdJT04gICAgICAgIC0gQVdTIHJlZ2lvbiAoZGVmYXVsdDogZnJvbSBBV1MgY29uZmlnKVxuIik7CiAgICBwcmludCgiICBMT0FEX1MzX1BST0ZJTEUgICAgICAgLSBBV1MgY3JlZGVudGlhbHMgcHJvZmlsZSAoZGVmYXVsdDogZnJvbSBBV1MgY29uZmlnKVxuIik7CiAgICBwcmludCgiICBMT0FEX1MzX0VORFBPSU5UICAgICAgLSBDdXN0b20gUzMgZW5kcG9pbnQgZm9yIFMzLWNvbXBhdGlibGUgc3RvcmFnZVxuIik7CiAgICBvcy5leGl0KDEpOwp9CgpwcmludCgiQ29ubmVjdGluZyB0byAiICsgaG9zdCArICI6IiArIHBvcnQgKyAiLi4uXG4iKTsKc2hlbGwuY29ubmVjdCh7CiAgICBob3N0OiBob3N0LAogICAgcG9ydDogcG9ydCwKICAgIHVzZXI6IHVzZXIsCiAgICBwYXNzd29yZDogcGFzc3dvcmQKfSk7CgppZiAoZHJvcFNjaGVtYSA9PT0gJ3RydWUnKSB7CiAgICBwcmludCgiRHJvcHBpbmcgYW5kIHJlY3JlYXRpbmcgc2NoZW1hICciICsgc2NoZW1hICsgIicuLi5cbiIpOwogICAgc2Vzc2lvbi5ydW5TcWwoIkRST1AgREFUQUJBU0UgSUYgRVhJU1RTIGAiICsgc2NoZW1hICsgImAiKTsKICAgIHNlc3Npb24ucnVuU3FsKCJDUkVBVEUgREFUQUJBU0UgYCIgKyBzY2hlbWEgKyAiYCIpOwp9IGVsc2UgewogICAgcHJpbnQoIlNraXBwaW5nIGRyb3Ag4oCUIHJlc3VtaW5nIGxvYWQgaW50byBleGlzdGluZyBzY2hlbWEgJyIgKyBzY2hlbWEgKyAiJy4uLlxuIik7Cn0KCnZhciBsb2FkT3B0aW9ucyA9IHsKICAgIHNjaGVtYTogc2NoZW1hLAogICAgdGhyZWFkczogdGhyZWFkcywKICAgIHNob3dQcm9ncmVzczogdHJ1ZSwKICAgIHJlc2V0UHJvZ3Jlc3M6IGZhbHNlLAogICAgaWdub3JlRXhpc3RpbmdPYmplY3RzOiBkcm9wU2NoZW1hICE9PSAndHJ1ZScsCiAgICBzM0J1Y2tldE5hbWU6IHMzQnVja2V0Cn07CgppZiAoczNSZWdpb24gIT09ICcnKSBsb2FkT3B0aW9ucy5zM1JlZ2lvbiA9IHMzUmVnaW9uOwppZiAoczNQcm9maWxlICE9PSAnJykgbG9hZE9wdGlvbnMuczNQcm9maWxlID0gczNQcm9maWxlOwppZiAoczNFbmRwb2ludCAhPT0gJycpIGxvYWRPcHRpb25zLnMzRW5kcG9pbnRPdmVycmlkZSA9IHMzRW5kcG9pbnQ7Cgp2YXIgaW5wdXRQYXRoID0gczNQcmVmaXggfHwgc2
NoZW1hOwoKcHJpbnQoIkxvYWRpbmcgZHVtcCBmcm9tIHMzOi8vIiArIHMzQnVja2V0ICsgIi8iICsgaW5wdXRQYXRoICsgIiBpbnRvIHNjaGVtYSAnIiArIHNjaGVtYSArICInLi4uXG4iKTsKdXRpbC5sb2FkRHVtcChpbnB1dFBhdGgsIGxvYWRPcHRpb25zKTsKCnByaW50KCJMb2FkIGNvbXBsZXRlZCBzdWNjZXNzZnVsbHkuXG4iKTsKc2Vzc2lvbi5jbG9zZSgpOwo=",
                        "encoding": "utf-8"
                      }
                    },
                    "dockerSecretMounts": {}
                  },
                  "addonDependencies": [
                    {
                      "addonId": "${refs.database.id}",
                      "keys": [
                        {
                          "keyName": "ADMIN_USERNAME",
                          "aliases": [
                            "LOAD_USER"
                          ]
                        },
                        {
                          "keyName": "ADMIN_PASSWORD",
                          "aliases": [
                            "LOAD_PASSWORD"
                          ]
                        },
                        {
                          "keyName": "PORT",
                          "aliases": [
                            "LOAD_PORT"
                          ]
                        },
                        {
                          "keyName": "HOST",
                          "aliases": [
                            "LOAD_HOST"
                          ]
                        },
                        {
                          "keyName": "DATABASE",
                          "aliases": [
                            "LOAD_SCHEMA"
                          ]
                        }
                      ]
                    }
                  ],
                  "externalAddonDependencies": [],
                  "name": "secrets",
                  "restrictions": {
                    "restricted": false,
                    "nfObjects": [],
                    "tags": []
                  },
                  "projectId": "${refs.project.id}"
                },
                "ref": "secrets"
              },
              {
                "kind": "DeploymentService",
                "spec": {
                  "deployment": {
                    "instances": 1,
                    "storage": {
                      "ephemeralStorage": {
                        "storageSize": 1024
                      },
                      "shmSize": 64
                    },
                    "docker": {
                      "configType": "customEntrypointCustomCommand",
                      "customEntrypoint": "/bin/bash -c ",
                      "customCommand": "'./setup.sh && echo \"Restorer ready\" && sleep infinity'"
                    },
                    "type": "deployment",
                    "external": {
                      "imagePath": "ubuntu:22.04"
                    }
                  },
                  "loadBalancing": {
                    "mode": "leastConnection"
                  },
                  "name": "s3-restorer",
                  "infrastructure": {
                    "architecture": "x86"
                  },
                  "billing": {
                    "deploymentPlan": "nf-compute-800-16"
                  },
                  "runtimeFiles": {
                    "/setup.sh": {
                      "data": "IyEvYmluL2Jhc2gKYXB0LWdldCB1cGRhdGUKYXB0LWdldCBpbnN0YWxsIC15IHdnZXQgbHNiLXJlbGVhc2UgZ251cGcgdG11eCB2aW0Kd2dldCBodHRwczovL2Rldi5teXNxbC5jb20vZ2V0L215c3FsLWFwdC1jb25maWdfMC44LjM2LTFfYWxsLmRlYgplY2hvICIzIiB8IGRwa2cgLWkgbXlzcWwtYXB0LWNvbmZpZ18wLjguMzYtMV9hbGwuZGViCmFwdC1nZXQgdXBkYXRlCmFwdC1nZXQgaW5zdGFsbCAteSBteXNxbC1zaGVsbCBteXNxbC1jbGllbnQ=",
                      "encoding": "utf-8"
                    }
                  },
                  "ports": [],
                  "runtimeEnvironment": {},
                  "projectId": "${refs.project.id}"
                },
                "ref": "s3-restorer"
              }
            ]
          }
        }
      ]
    }
  }
}

Run the template and it will provision the database addon, the secret group, and the s3-restorer service. Once it's running, exec into s3-restorer and run mysqlsh --file /mysqlsh_s3_load.js; this should start the restore from the S3 bucket into the Northflank database.
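
If the load is interrupted, the bundled script supports resuming: it runs the load with progress tracking kept (resetProgress: false), so set LOAD_DROP_SCHEMA=false before re-running to keep the existing schema and skip objects that were already created:

export LOAD_DROP_SCHEMA=false
mysqlsh --file /mysqlsh_s3_load.js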
