

How to deploy pgvector in 1 minute (using Northflank)
If you’re building anything with embeddings, you’ve probably looked into pgvector. It adds vector search capabilities to Postgres, and it’s become the go-to extension for semantic search and AI-related workloads.
But what most guides don’t mention is how much setup it takes just to get started. Tutorials often ask you to install pgvector manually, build custom Docker images, or tweak your local Postgres configuration just to enable the extension.
We wanted to skip all of that.
When you deploy a Postgres database on Northflank, pgvector is already available. You don’t have to install anything. You don’t need a special image. Just run:
CREATE EXTENSION vector;
And you’re good to go. Here’s the full flow:
- Go to Northflank
- Create a new project
- Create a new PostgreSQL addon
- Click on “Create Addon”
- Connect to your database
- Run:
CREATE EXTENSION vector;
That’s it. You’re now running pgvector on a managed Postgres instance, without touching Docker.
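If you want to double-check that the extension is active, a quick look at the Postgres catalog confirms it (shown here as a plain SQL query you can run from psql or any other client):

-- Should return one row with the installed pgvector version
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'vector';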
From there, create a table with a vector column, insert a few rows, and run a nearest-neighbour query:

-- A tiny table with 3-dimensional embeddings
CREATE TABLE items (
  id serial PRIMARY KEY,
  name text,
  embedding vector(3)
);

-- A few toy vectors
INSERT INTO items (name, embedding) VALUES
  ('item one', '[1,1,1]'),
  ('item two', '[2,2,2]'),
  ('item three', '[1,1,2]');

-- Return the row closest to [1,1,1] by Euclidean (L2) distance
SELECT * FROM items
ORDER BY embedding <-> '[1,1,1]'::vector
LIMIT 1;
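A sequential scan is fine at this scale, but once the table grows you’ll usually want an approximate-nearest-neighbour index. As a sketch (HNSW requires pgvector 0.5.0 or newer, and the lists value is a tuning knob rather than a universal default):

-- HNSW index on the L2 distance operator
CREATE INDEX ON items USING hnsw (embedding vector_l2_ops);

-- Or IVFFlat; a common starting point for lists is roughly rows / 1000
CREATE INDEX ON items USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);

Both indexes trade a little recall for much faster queries, so the right choice depends on your data size and latency needs.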
This is how you build fast, SQL-native vector search, and it works with embeddings from any source: OpenAI, Cohere, custom models, and so on.
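In practice you just size the column to your model’s embedding dimension and use the distance operator that fits it. A minimal sketch, assuming OpenAI’s text-embedding-3-small (1536 dimensions), cosine distance, and a query embedding passed in as a parameter from your application:

-- Column width matches the embedding model's output size
CREATE TABLE documents (
  id bigserial PRIMARY KEY,
  content text,
  embedding vector(1536)
);

-- $1 is the query embedding computed by your application
-- <=> is cosine distance; <-> is L2 and <#> is negative inner product
SELECT id, content
FROM documents
ORDER BY embedding <=> $1
LIMIT 5;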
This post is just about getting pgvector running quickly. If you want a full breakdown of what pgvector is, how it works, and how to use it with real-world AI tools, check out our longer guide:
PostgreSQL Vector Search Guide with pgvector
If you're ready to stop messing around with setup and start building, head here:
No custom builds. No manual install. Just Postgres with vector support, ready when you are.