Published 10th March 2022
MinIO is a highly available object storage solution compatible with the S3 API, and you can deploy a managed MinIO cluster on Northflank. From Node.js you can work with it through either the MinIO SDK or the AWS SDK. The following guide focuses on providing basic scripts for communicating with your MinIO instance over TLS using the official AWS S3 SDK.
By the end of this guide, you will be able to:
- Connect to your MinIO instance.
- Create and list buckets.
- Create objects, upload files and list object keys.
- Read objects and store them as files.
- Node.js
- npm
- A MinIO server running with TLS
node-with-minio-and-s3/
├─ S3-create-buckets.js
├─ S3-create-objects.js
├─ S3-read-objects.js
├─ file.txt
└─ package.json
The full source code can be found in this git repo.
npm install @aws-sdk/client-s3
npm install dotenv
Create a file named `.env` which will contain all the credentials and details for connecting to your MinIO instance, such as `HOST`, `ENDPOINT`, `ACCESS_KEY` and `SECRET_KEY`. We'll use the `dotenv` dependency to load these values as environment variables when the project starts.
HOST=<host>
ENDPOINT=https://<host>:<port>
ACCESS_KEY=<value>
SECRET_KEY=<value>
The `https` prefix used on the `ENDPOINT` environment variable will make sure your connection is established with TLS.
In order to connect to your MinIO instance, the first step is to load the environment variables.
- Import the `dotenv` dependency and call the `config()` method, which will do the job.
- Next, import the `@aws-sdk/client-s3` dependency, specifying the `S3Client` class.
- Then, create the `S3Client` object with the proper credentials and endpoint as shown in the code snippet below.
require("dotenv").config();
const { S3Client } = require("@aws-sdk/client-s3");
const s3 = new S3Client({
credentials: {
accessKeyId: process.env.ACCESS_KEY,
secretAccessKey: process.env.SECRET_KEY,
},
endpoint: process.env.ENDPOINT,
forcePathStyle: true,
});
The `forcePathStyle` option is important in case your MinIO deployment doesn't support the S3 `virtual-host` style for accessing objects.
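For example, with path-style access an object is addressed as `https://<host>:<port>/first-bucket/first-entry.txt`, while virtual-host style would use `https://first-bucket.<host>:<port>/first-entry.txt`; setting `forcePathStyle: true` makes the SDK always use the first form.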
The next step after instantiating the S3 client is to create and list buckets. Buckets represent the way the MinIO server arranges data: a bucket contains a list of objects, and these objects represent the data stored on the server. To create a bucket, it is enough to provide the string that will be used to access it. Objects are identified by a key and also contain system and user metadata.
The following snippet shows the `CreateBucketCommand` class, which represents the request for creating a bucket.
- Instantiate the `CreateBucketCommand` with the bucket name, in this case "first-bucket". Then the request is sent using the S3 client's `send` method.
- The next command is the `ListBucketsCommand`, which follows the same pattern and will send the request for listing buckets.
- The result is an object which contains the `Buckets` attribute, which is printed to the console.
In order to run this sample, create a script called `S3-create-buckets.js` and run it with `node S3-create-buckets.js`.
const {
CreateBucketCommand,
ListBucketsCommand,
} = require("@aws-sdk/client-s3");
const bucketName = "first-bucket";
(async () => {
try {
const createBucketResult = await s3.send(
new CreateBucketCommand({ Bucket: bucketName })
);
console.log("CreateBucketResult: ", createBucketResult.Location);
// an empty object has to be passed as input, otherwise the command won't work
const listBucketsResult = await s3.send(new ListBucketsCommand({}));
console.log("ListBucketsResult: ", listBucketsResult.Buckets);
} catch (err) {
console.log("Error", err);
}
})();
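Note that re-running this script will make `CreateBucketCommand` fail once the bucket already exists. If you want the script to be safely re-runnable, one option is to check for the bucket first with `HeadBucketCommand`; a minimal sketch (treating an HTTP 404 as "bucket missing" is an assumption that may vary between server versions):
const { HeadBucketCommand } = require("@aws-sdk/client-s3");
// returns true if the bucket exists and is accessible with these credentials
const bucketExists = async (name) => {
  try {
    await s3.send(new HeadBucketCommand({ Bucket: name }));
    return true;
  } catch (err) {
    // assumption: a missing bucket surfaces as an HTTP 404
    if (err.$metadata && err.$metadata.httpStatusCode === 404) return false;
    throw err;
  }
};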
This first example puts an object identified by the key "first-entry.txt" in the bucket that was previously created. The `PutObjectCommand` represents the request for manipulating objects and is then sent to the MinIO instance. In this case, the string "Hello there again" is the `Body` data of the object called `first-entry.txt`.
Create or update the script `S3-create-objects.js` and run it with `node S3-create-objects.js`.
const { PutObjectCommand } = require("@aws-sdk/client-s3");
const bucketName = "first-bucket";
(async () => {
try {
// uploading object with string data on Body
const objectKey = "first-entry.txt";
await s3.send(
new PutObjectCommand({
Bucket: bucketName,
Key: objectKey,
Body: "Hello there again",
})
);
console.log(`Successfully uploaded ${bucketName}/${objectKey}`);
} catch (err) {
console.log("Error", err);
}
})();
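Objects can also carry system and user metadata at upload time, as mentioned earlier. Below is a minimal sketch of the same command with metadata attached; the `ContentType` and `Metadata` values here are purely illustrative:
const { PutObjectCommand } = require("@aws-sdk/client-s3");
(async () => {
  try {
    await s3.send(
      new PutObjectCommand({
        Bucket: "first-bucket",
        Key: "first-entry.txt",
        Body: "Hello there again",
        // system metadata, served back as the Content-Type header
        ContentType: "text/plain",
        // user metadata, served back with an x-amz-meta- prefix
        Metadata: { author: "example-user" },
      })
    );
  } catch (err) {
    console.log("Error", err);
  }
})();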
Now we are going to upload a file and store it in MinIO. In this case the object’s data will be read from a file.
- The file `file.txt`, located in the same directory, is read with the `fs` package's `readFileSync()` method, which returns the file contents as a buffer.
- The read data is stored in the `fileObjectData` variable.
- Once the contents are stored in the `fileObjectData` variable, the S3 client's `send` method is used to upload the `uploaded-file.txt` object inside the `first-bucket` bucket.
- This is uploaded using the same `PutObjectCommand`, but instead of the `Body` attribute being only a string, it's using a `Buffer`, which is also supported.
Create or update the script `S3-create-objects.js` and run it with `node S3-create-objects.js`.
const fs = require("fs");
const { PutObjectCommand } = require("@aws-sdk/client-s3");
const bucketName = "first-bucket";
(async () => {
try {
const fileObjectKey = "uploaded-file.txt";
const fileObjectData = fs.readFileSync("./file.txt");
await s3.send(
new PutObjectCommand({
Bucket: bucketName,
Key: fileObjectKey,
Body: fileObjectData,
})
);
console.log(`Successfully uploaded ${bucketName}/${fileObjectKey}`);
} catch (err) {
console.log("Error", err);
}
})();
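To cover the last of the goals above, listing object keys follows the same pattern with the `ListObjectsV2Command` class; a minimal sketch:
const { ListObjectsV2Command } = require("@aws-sdk/client-s3");
(async () => {
  try {
    const listObjectsResult = await s3.send(
      new ListObjectsV2Command({ Bucket: "first-bucket" })
    );
    // Contents holds one entry per object, each identified by its Key
    (listObjectsResult.Contents || []).forEach((object) =>
      console.log(object.Key)
    );
  } catch (err) {
    console.log("Error", err);
  }
})();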
The `GetObjectCommand` class represents the request for fetching an object's data.
The data needed for the command to work as expected is the bucket name and object key; this will locate the object’s data on the MinIO server.
The process for reading data is done in chunks. This is helpful in scenarios where we're dealing with very big files: whenever a part of the data arrives from the server, the chunk is written to the stream. In this case a `WriteStream` object is used to write data to the `./read-in-chunks.txt` file.
In order to run this sample, create a script called `S3-read-objects.js` and run it with `node S3-read-objects.js`.
const fs = require("fs");
const { GetObjectCommand } = require("@aws-sdk/client-s3");
const bucketName = "first-bucket";
const fileObjectKey = "first-entry.txt";
(async () => {
try {
const readObjectResult = await s3.send(
new GetObjectCommand({ Bucket: bucketName, Key: fileObjectKey })
);
const writeStream = fs.createWriteStream("./read-in-chunks.txt");
readObjectResult.Body.on("data", (chunk) => writeStream.write(chunk));
} catch (err) {
console.log("Error: ", err);
}
})();
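If the object is small enough to hold in memory, you can instead collect the stream into a buffer and convert it to a string. A minimal sketch, relying on the `Body` stream being async-iterable in Node.js:
const { GetObjectCommand } = require("@aws-sdk/client-s3");
(async () => {
  try {
    const result = await s3.send(
      new GetObjectCommand({ Bucket: "first-bucket", Key: "first-entry.txt" })
    );
    // gather every chunk before decoding the complete object
    const chunks = [];
    for await (const chunk of result.Body) {
      chunks.push(chunk);
    }
    console.log(Buffer.concat(chunks).toString("utf-8"));
  } catch (err) {
    console.log("Error: ", err);
  }
})();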
The MinIO addon can be deployed on Northflank with TLS and external access, giving you public access to your storage servers. Currently it only supports the `path-style` model for accessing objects. High availability is also supported starting from 4 replicas, and the addon can be deployed in your project's region.
For more details on managed MinIO, this page covers all the features and benefits.
Northflank allows you to deploy your code and databases within minutes. Sign up for a Northflank account and create a free project to get started.
- Multiple read and write replicas
- Observe & monitor with real-time metrics & logs
- Low latency and high performance
- Backup, restore and fork databases
- Private and optional public load balancing as well as Northflank local proxy