
By Humberto Leal

Published 10th March 2022

Connect Node.js to MinIO with TLS using AWS S3

MinIO is a highly available object storage solution compatible with the S3 API. You can deploy a managed MinIO cluster on Northflank. From Node.js, you can work with it through either the MinIO SDK or the AWS SDK. This guide provides basic scripts for communicating with your MinIO instance over TLS using the official AWS S3 SDK.

Goals

By the end of this guide, you will be able to:

  • Connect to your MinIO instance.
  • Create and list buckets.
  • Create objects, upload files and list object keys.
  • Read objects and store them as files.

Prerequisites

  • Node.js
  • npm
  • MinIO Server Running with TLS

Project structure

node-with-minio-and-s3/
├─ S3-create-buckets.js
├─ S3-create-objects.js
├─ S3-read-objects.js
├─ file.txt
└─ package.json

The full source code can be found in this git repo.

Installing dependencies

npm install @aws-sdk/client-s3
npm install dotenv

Connection

Environment variables .env file

Create a file named .env which will contain all the credentials and details for connecting to your MinIO instance, such as HOST, ENDPOINT, ACCESS_KEY and SECRET_KEY.

We’ll use the dotenv dependency for loading these values as environment variables when the project starts.

HOST=<host>
ENDPOINT=https://<host>:<port>
ACCESS_KEY=<value>
SECRET_KEY=<value>
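A missing variable would otherwise only surface later as a confusing connection error, so it can help to fail fast after loading the file. The following is a sketch; `assertEnv` is a hypothetical helper, not part of dotenv:

```javascript
// Sketch: verify required variables exist after dotenv has loaded .env.
// `assertEnv` is a hypothetical helper, not part of dotenv.
function assertEnv(names, env = process.env) {
    const missing = names.filter((name) => !env[name]);
    if (missing.length > 0) {
        throw new Error(`Missing environment variables: ${missing.join(", ")}`);
    }
}

// After require("dotenv").config():
// assertEnv(["HOST", "ENDPOINT", "ACCESS_KEY", "SECRET_KEY"]);
```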

The `https` prefix in the `ENDPOINT` environment variable ensures your connection is established over TLS.

Connection with AWS SDK

In order to connect to your MinIO instance, the first step is to load the environment variables.

  1. Import the dotenv dependency and call its config() method, which loads the .env file.
  2. Next, import the S3Client class from the @aws-sdk/client-s3 dependency.
  3. Then, create the S3Client object with the proper credentials and endpoint, as shown in the code snippet below.

require("dotenv").config();
const { S3Client } = require("@aws-sdk/client-s3");
const s3 = new S3Client({
   credentials: {
       accessKeyId: process.env.ACCESS_KEY,
       secretAccessKey: process.env.SECRET_KEY,
   },
   endpoint: process.env.ENDPOINT,
   forcePathStyle: true,
});

The forcePathStyle option is important in case your MinIO deployment doesn’t support the S3 `virtual-host` style for accessing objects.

Write & Read Examples

The next step after instantiating the S3 client is to create and list buckets.

Buckets are how the MinIO server arranges its data. To create one, it is enough to provide the string that will be used to access the bucket. Buckets contain a list of objects, which represent the data stored on the server. Objects are identified by a key and also carry system and user metadata.

Create & List Buckets - AWS SDK

The following snippet shows the CreateBucketCommand class, which represents the request for creating a bucket.

  1. Instantiate the CreateBucketCommand with the bucket name, in this case, “first-bucket”. Then the request is sent using the s3 client send method.
  2. The next command is the ListBucketsCommand which follows the same pattern and will send the request for listing buckets.
  3. The result is an object which contains the Buckets attribute which is printed to the console.

In order to run this sample, create a script called S3-create-buckets.js and run it with node S3-create-buckets.js.

const {
    CreateBucketCommand,
    ListBucketsCommand,
} = require("@aws-sdk/client-s3");

// `s3` is the S3Client instantiated in the Connection section

const bucketName = "first-bucket";

(async () => {
    try {
        const createBucketResult = await s3.send(
            new CreateBucketCommand({ Bucket: bucketName })
        );

        console.log("CreateBucketResult: ", createBucketResult.Location);

        // an empty object must be passed, otherwise the request fails
        const listBucketsResult = await s3.send(new ListBucketsCommand({}));

        console.log("ListBucketsResult: ", listBucketsResult.Buckets);
    } catch (err) {
        console.log("Error", err);
    }
})();
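Note that CreateBucketCommand rejects if the bucket already exists, so running the script a second time fails. One way to make creation idempotent is to treat the SDK’s “already exists” errors as success. This is a sketch; `ensureBucket` and `isBucketExistsError` are names chosen here, not SDK APIs:

```javascript
// The error names below are the ones the AWS SDK raises when a bucket
// already exists.
function isBucketExistsError(err) {
    return Boolean(
        err &&
            (err.name === "BucketAlreadyOwnedByYou" ||
                err.name === "BucketAlreadyExists")
    );
}

// `ensureBucket` is a hypothetical helper; `client` is the S3Client from above.
async function ensureBucket(client, bucketName) {
    const { CreateBucketCommand } = require("@aws-sdk/client-s3");
    try {
        await client.send(new CreateBucketCommand({ Bucket: bucketName }));
    } catch (err) {
        if (!isBucketExistsError(err)) throw err;
    }
}
```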

Upload Objects - AWS SDK

This first example puts an object identified by the key “first-entry.txt” into the bucket created previously. The PutObjectCommand represents the request for manipulating objects and is then sent to the MinIO instance. In this case, the string “Hello there again” is the Body data of the object called first-entry.txt.

Create or update the script S3-create-objects.js and run it with node S3-create-objects.js.

const { PutObjectCommand } = require("@aws-sdk/client-s3");

// `s3` is the S3Client instantiated in the Connection section

const bucketName = "first-bucket";

(async () => {
    try {
        // uploading object with string data on Body
        const objectKey = "first-entry.txt";
        await s3.send(
            new PutObjectCommand({
                Bucket: bucketName,
                Key: objectKey,
                Body: "Hello there again",
            })
        );

        console.log(`Successfully uploaded ${bucketName}/${objectKey}`);
    } catch (err) {
        console.log("Error", err);
    }
})();

Upload Objects from a File - AWS SDK

Now we are going to upload a file and store it in MinIO. In this case the object’s data will be read from a file.

  1. The file “file.txt”, located in the same directory, is read with the fs package’s readFileSync() method, which returns the file contents as a buffer.
  2. The read data is stored in the fileObjectData variable.
  3. The S3 client’s send method is then used to upload the “uploaded-file.txt” object into “first-bucket”.
  4. The upload uses the same PutObjectCommand, but instead of a string the Body attribute is a Buffer, which is also supported.

Create or update the script S3-create-objects.js and run it with node S3-create-objects.js.

const fs = require("fs");
const { PutObjectCommand } = require("@aws-sdk/client-s3");

// `s3` is the S3Client instantiated in the Connection section

const bucketName = "first-bucket";

(async () => {
    try {
        const fileObjectKey = "uploaded-file.txt";
        const fileObjectData = fs.readFileSync("./file.txt");

        await s3.send(
            new PutObjectCommand({
                Bucket: bucketName,
                Key: fileObjectKey,
                Body: fileObjectData,
            })
        );

        console.log(`Successfully uploaded ${bucketName}/${fileObjectKey}`);
    } catch (err) {
        console.log("Error", err);
    }
})();

Read Object And Write It Into A File - AWS SDK

The GetObjectCommand class represents the request for fetching an object's data.

  1. The command needs the bucket name and object key; these locate the object’s data on the MinIO server.

  2. The data is read in chunks, which is helpful when dealing with very large files: whenever a part of the data arrives from the server, the chunk is written to a stream. In this case a WriteStream object is used to write the data to the ./read-in-chunks.txt file.

In order to run this sample, create a script called S3-read-objects.js and run it with node S3-read-objects.js.

const fs = require("fs");
const { GetObjectCommand } = require("@aws-sdk/client-s3");

// `s3` is the S3Client instantiated in the Connection section

const bucketName = "first-bucket";
const fileObjectKey = "first-entry.txt";

(async () => {
    try {
        const readObjectResult = await s3.send(
            new GetObjectCommand({ Bucket: bucketName, Key: fileObjectKey })
        );

        const writeStream = fs.createWriteStream(
            "./read-in-chunks.txt"
        );
        readObjectResult.Body.on("data", (chunk) => writeStream.write(chunk));
        readObjectResult.Body.on("end", () => writeStream.end());
    } catch (err) {
        console.log("Error: ", err);
    }
})();

MinIO with Northflank

The MinIO add-on can be deployed on Northflank with TLS and external access, making your storage servers publicly reachable. Currently it only supports the path-style model for accessing objects. High availability is also supported, starting from 4 replicas, and the add-on can be deployed in your project’s region.


For more details on managed MinIO, this page covers all the features and benefits.

Using Northflank to deploy MinIO with TLS using AWS S3

Northflank allows you to deploy your code and databases within minutes. Sign up for a Northflank account and create a free project to get started.

  • Multiple read and write replicas
  • Observe & monitor with real-time metrics & logs
  • Low latency and high performance
  • Backup, restore and fork databases
  • Private and optional public load balancing as well as Northflank local proxy
