Deborah Emeni
Published 20th June 2025

Can you use Vercel for backend? What works and when to use something else

Yes, Vercel can handle backend functions, but with limits. If you need background jobs, long-lived services, or more control over your backend setup, a platform like Northflank gives you more flexibility without giving up developer experience.

What do people mean when they search for “Vercel for backend”?

I’ve seen this question pop up over and over again on Reddit threads, in dev Slack channels, and even during team planning sessions. When someone searches for “Vercel for backend,” they’re not asking if Vercel can render a homepage. What they want to know is:

Can I rely on Vercel to run the backend logic of my application?

Take this Reddit post, for example:

Screenshot of Reddit post asking for a "Vercel but for backend" alternative

This post nails the sentiment. Developers aren’t only looking for backend capability; they’re looking for the same ease, speed, and workflow that Vercel brings to the frontend, but applied to the backend.

So, the short answer is “YES”, Vercel can run backend functions, but with limitations, which I’ll get to shortly.

I’ll not only answer your question, but I’ll break down:

  1. What backend functionality Vercel supports, including serverless and edge functions
  2. The platform’s architectural limitations for backend services
  3. When Vercel is a suitable choice for lightweight backend tasks
  4. And when you need a more capable backend platform like Northflank for long-running services, custom runtimes, background jobs, or preview environments

TL;DR: Can you use Vercel for backend?

Yes, but only up to a point.

Vercel works fine if you're building lightweight backend functions, that is, APIs tied to your frontend, form handlers, and simple serverless logic. If that’s all you need, then you’re good.

However, if you’re working with background jobs, long-running processes, persistent connections, or custom runtimes, Vercel becomes limiting because you don’t get containers, you don’t get control over your runtime, and you can’t run stateful services.

If your backend has infrastructure needs that go beyond serverless functions, you’re better off on a platform that doesn’t treat the backend as an afterthought. Northflank gives you full support for running services, jobs, APIs, and more, without giving up the Git-based deploys and fast feedback loop Vercel nails on the frontend.

-> See how it works in action

What backend support does Vercel offer?

I’ll start by explaining how Vercel handles backend workloads, including what it supports, how it works, and where it fits your needs, before we discuss its limitations and the alternative tools you can use.

1. API routes in Next.js

If you’re using Next.js, any file under pages/api automatically becomes a serverless function on Vercel. You can write handlers using the familiar req/res API, and Vercel takes care of deployment, scaling, regional routing, timeouts, and CORS. So, it’s great for small REST endpoints, form submissions, or any backend logic tightly coupled to the frontend.

For example: Let’s say you want to create a contact form. You’d build a POST endpoint at pages/api/contact.js, and write your handler like this:

export default async function handler(req, res) {
  // Only accept POST submissions from the contact form
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }
  const { name, message } = req.body;
  // Save to a database or send email
  res.status(200).json({ success: true });
}

As soon as you deploy, Vercel makes this live at https://your-app.vercel.app/api/contact.

2. Serverless functions (Fluid Compute)

For projects outside Next.js, Vercel supports standalone serverless endpoints in JavaScript, TypeScript, Python, Go, or Ruby placed in an api/ directory. Vercel introduced its Fluid Compute model in early 2025, which allows functions to handle multiple concurrent requests in the same instance. It’s beneficial for I/O-intensive tasks, such as webhooks or database access, and it reduces cold starts through bytecode caching and idle-task reuse.

So, for example:

You’re building a webhook receiver for Stripe. You create a file like api/stripe-webhook.js, and Vercel deploys it as a serverless endpoint:

export default async function handler(req, res) {
  const signature = req.headers['stripe-signature'];
  // verifySignature stands in for real verification, e.g. Stripe's
  // stripe.webhooks.constructEvent(rawBody, signature, webhookSecret)
  const event = verifySignature(req.body, signature);
  // Process the event
  res.status(200).send('Webhook received');
}

That’s all you need; you don’t need to manage a server or container.

3. Edge functions

If you're using Edge Functions, Vercel runs them in lightweight environments close to the user, on its edge network. These functions are useful for things like geolocation-based logic, header manipulation, streaming responses, or modifying requests before they hit your app.

They run in a stripped-down runtime powered by V8 isolates. That means you don’t get access to typical Node.js modules like fs, net, or tls, and many npm packages won’t work unless they’re edge-compatible.

For example:

Let’s say you want to show a personalized banner to users based on their country. You could write an edge function like this:

export const config = { runtime: 'edge' }

export default async function handler(req) {
  const country = req.geo?.country || 'US';
  return new Response(`Welcome, visitor from ${country}`);
}

This runs at the edge, so the response is sent faster and closer to the user, before the request reaches the rest of your app.

4. Backend templates and community guides

Vercel provides starter guides for deploying frameworks such as Express and FastAPI, as well as boilerplates for Go and Ruby. These streamline the setup of serverless endpoints but still deploy as discrete functions, not as long-running services or containers.

For example:

Let’s say you already have a small Express app:

const express = require('express');
const app = express();

app.get('/api/ping', (req, res) => {
  res.send('pong');
});

You wrap it using serverless-http, and export the handler like this:

const serverless = require('serverless-http');
module.exports = serverless(app);

Then define it in vercel.json, and Vercel runs your Express logic as a serverless function.
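For reference, here’s one common shape that vercel.json can take for this setup. Treat it as a sketch: it assumes the wrapped Express app lives in index.js, and Vercel also supports a simpler rewrites-based configuration, so check the current docs for your case.

{
  "version": 2,
  "builds": [{ "src": "index.js", "use": "@vercel/node" }],
  "routes": [{ "src": "/api/(.*)", "dest": "index.js" }]
}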

5. Supported runtimes

Vercel natively supports these runtimes for serverless functions:

Runtime     Status
Node.js     Full support, streaming enabled, bytecode caching, version selection (LTS: Node 18/20/22)
Python      Beta support (v3.12 default, 3.9 via legacy), streaming supported
Go          Fully supported
Ruby        Fully supported (handler via Handler proc/class)

Community runtimes like PHP or Rust can also be used by specifying a custom runtime in vercel.json.
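As a rough sketch, pointing a set of files at a community runtime in vercel.json looks something like this. vercel-php is a community-maintained runtime, and the version pin here is purely illustrative, so check that runtime’s own docs for the current one:

{
  "functions": {
    "api/*.php": {
      "runtime": "vercel-php@0.6.0"
    }
  }
}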

For example:

If you're writing a Python handler, you might create api/hello.py like this:

def handler(request, response):
    return response.send("Hello from Python")

Then Vercel deploys it just like any other function. Same deal for Go or Ruby, as long as it follows the expected function signature.

What Vercel isn’t built for (common backend limitations), and how tools like Northflank can help

So far, we’ve looked at how Vercel handles backend workloads through serverless and edge functions. That model works well for stateless, short-lived tasks. But what happens once you need more control, more runtime flexibility, or anything that has to run beyond a single HTTP request? That’s where it becomes limiting.

I'll walk you through some of the common limitations and how platforms like Northflank are designed to handle those cases from the start.

1. Cold starts and function timeouts

Vercel runs functions only when they’re needed. So if a function hasn’t been used in a while, the next request waits while it starts up. That’s what people call a “cold start”. On top of that, you’re dealing with time limits: 10 seconds on the free tier and 60 seconds on Pro. That might work for quick responses, but not for tasks that take longer, such as generating reports, resizing images, or handling file uploads.

So, how does Northflank handle it instead?

Illustration comparing Vercel’s cold starts and function timeouts with Northflank’s always-on containers for backend workloads

Now, with Northflank, your code runs in a container that stays up. It’s already running when a request comes in, so there’s nothing to spin up. You’re not limited by a fixed timeout. And even if your handler takes a few minutes, that’s fine. It just keeps going.

Let’s say you're handling image uploads in the background, and each one takes around 90 seconds. With Vercel, you’d have to offload that to another platform or service that stays running in the background. With Northflank, you’d just run it in a background job or long-running container. You’re not rewriting your architecture to make the backend work.
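Here’s a minimal sketch of that kind of endpoint, with a stand-in processImage step that simulates roughly 90 seconds of work. That duration wouldn’t fit inside Vercel’s function limits, but in an always-on container it simply runs to completion:

const express = require('express');
const app = express();

// Stand-in for a slow task (a real app might call sharp, ffmpeg, or an
// external API here); it just waits ~90 seconds to simulate the work.
async function processImage(imageUrl) {
  await new Promise((resolve) => setTimeout(resolve, 90_000));
  return { imageUrl, status: 'processed' };
}

app.post('/process', express.json(), async (req, res) => {
  // Takes ~90s end to end, which is fine for a long-running container
  // but well past typical serverless execution limits.
  const result = await processImage(req.body.imageUrl);
  res.json({ success: true, result });
});

app.listen(process.env.PORT || 3000);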

Take a look at what you can do:

This is what your container looks like once it’s up and running:

Northflank dashboard showing a running container with live deployment logs, exposed ports, and container configuration

2. No background workers or long-lived processes

Vercel’s architecture is optimized for short-lived, stateless functions. If your app needs to run something that stays online in the background, like a queue worker, continuously running consumer, or long-running batch job, you’ll eventually run into limitations.

Vercel supports scheduled (cron) jobs, but only as isolated serverless or edge function invocations. You can’t run persistent background processes directly on Vercel. There’s no way to keep a process alive, manage retries, or handle state between runs.

So if you need more than a timed function, like jobs triggered by an event, running longer than 60 seconds, or interacting with persistent storage, you end up managing orchestration across third-party tools like CI runners, external schedulers, or hosted queues to fill in the gaps.
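To make the distinction concrete, here’s a minimal sketch of the kind of persistent worker this refers to, assuming a Redis list named jobs as the queue (the queue name and payload shape are made up for illustration). The point is that the process blocks and stays alive between jobs, which doesn’t fit a per-request function model:

const { createClient } = require('redis');

async function main() {
  const redis = createClient({ url: process.env.REDIS_URL });
  await redis.connect();

  // Loop forever: block until a job arrives, process it, repeat.
  while (true) {
    const job = await redis.blPop('jobs', 0); // 0 = wait indefinitely
    if (!job) continue;

    const payload = JSON.parse(job.element);
    console.log('Processing job', payload.id);
    // ...do the actual work here, with retries and state as needed...
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});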

Diagram showing Northflank running scheduled jobs natively versus Vercel requiring external services like CI runners or cron schedulers

However, with Northflank, you can run jobs on your own terms: schedule them every 5 minutes, or keep a process running indefinitely.

So, in place of trying to coordinate multiple tools or services, you schedule your job and let it run in a dedicated container.

You get full visibility into job runs, retry attempts, time limits, and resource usage, all without extra setup.

See how it looks when scheduling a cron job in Northflank:

Northflank dashboard showing a scheduled job with retry limit, time limit, and concurrency policy all configured in the UI

With Vercel, you’d have to connect this kind of task to an external runner or CI job; with Northflank, you just configure the job and it runs on schedule. No workarounds.

3. No persistent file system or database layer

Vercel’s serverless model doesn’t support persistent storage. Each function runs in a stateless environment; anything written to disk is gone after the request finishes. So if you need to cache, store session data, or save files between runs, you’re on your own.

It also doesn’t come with built-in database support. While you can connect to an external database, it lives outside the Vercel network. That means added latency and no way to run your data layer and services together in a private, low-latency environment.

How does Northflank approach this differently?

Illustration comparing how Northflank supports persistent storage and built-in databases, while Vercel relies on external services without native volume support.

With Northflank, you can attach persistent volumes to your deployments. These volumes use SSDs and let your containers keep data across restarts, ideal for caching layers, intermediate file processing, or any stateful backend.

See how easy it is to attach persistent storage to a service on Northflank with no extra setup or external provider needed:

Northflank dashboard showing the process of configuring and attaching a persistent volume with mount path /app/data
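Once a volume like that is mounted (at /app/data in the screenshot above), your code can treat it as an ordinary directory. Here’s a minimal sketch of a file cache that survives container restarts:

const fs = require('fs/promises');
const path = require('path');

// Matches the mount path from the example above; override with DATA_DIR if
// your volume is mounted elsewhere.
const CACHE_DIR = process.env.DATA_DIR || '/app/data';

async function cacheReport(name, contents) {
  await fs.mkdir(CACHE_DIR, { recursive: true });
  await fs.writeFile(path.join(CACHE_DIR, name), contents);
}

async function readReport(name) {
  try {
    return await fs.readFile(path.join(CACHE_DIR, name), 'utf8');
  } catch {
    return null; // not cached yet
  }
}

module.exports = { cacheReport, readReport };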

You can also spin up fully managed databases like PostgreSQL or Redis, right inside the platform. These live on the same internal network as your services, so you get secure, low-latency communication and don’t have to jump between providers.
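Connecting to one of those managed databases is just a normal client over the internal network. Here’s a sketch using the pg client, assuming the addon’s connection string is exposed to your service as DATABASE_URL (the env var name and the users table are illustrative):

const { Pool } = require('pg');

// Connection string for the managed Postgres addon, injected as an env var.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function recentSignups() {
  const { rows } = await pool.query(
    'SELECT email, created_at FROM users ORDER BY created_at DESC LIMIT 10'
  );
  return rows;
}

module.exports = { recentSignups };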

So this way, you’re not piecing together a backend stack. You’re running services and stateful data together like it was meant to be.

4. WebSocket limitations

Vercel doesn’t support native WebSocket connections, which makes it difficult to run apps that rely on real-time communication or persistent state. If you're building a live dashboard, multiplayer game server, or collaborative editor, you'll have to delegate the WebSocket server to another platform or external service.

Since Vercel’s architecture is based on stateless functions and ephemeral execution, there’s no way to maintain a long-lived connection between a client and server.

So, how does Northflank help?

A visual comparison of WebSocket support: Vercel requires external workarounds, while Northflank supports persistent connections natively.

On Northflank, you’re not limited to short-lived requests. Your services run in always-on containers with support for multiple protocols and persistent connections, so you can expose WebSocket endpoints directly.

You can:

  • Expose any port using HTTP, HTTP/2, gRPC, TCP, or UDP.
  • Set whether a port is public or private to control traffic scope.
  • Route requests based on subdomain paths (e.g. /rpc or /socket).
  • Serve WebSocket traffic and standard HTTP traffic through the same deployment.

This flexibility makes it easy to host WebSocket-powered apps alongside your web server, without needing extra infrastructure or third-party brokers.
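As a rough sketch, this is the kind of WebSocket server you could run as one of those always-on services, using the ws package (the port and echo behaviour are just for illustration):

const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: Number(process.env.PORT) || 8080 });

wss.on('connection', (socket) => {
  socket.send('connected');

  socket.on('message', (data) => {
    // Echo back to the client; a real app would broadcast, update shared
    // state, push live updates, and so on.
    socket.send(`echo: ${data}`);
  });
});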

See how Northflank handles protocol-specific routing below. The interface lets you map routes like /api or /rpc to specific services and ports. This setup supports running a WebSocket server alongside other backend services under one subdomain.

Northflank UI showing subdomain path routing to separate services on different ports, including visual path-to-service mapping.

5. Previews are limited to the frontend only

Vercel’s preview environments are great for frontend changes. You push a new branch or open a pull request, and you’ll get a unique URL showing how the UI looks in production. But that preview stops there. It doesn’t include your backend logic, jobs, or database changes.

So if your pull request touches anything beyond the UI, like an API endpoint, background worker, or config value, those changes won’t be part of the preview. You’re testing the frontend in isolation, not the full experience.

In practice, this means extra setup just to simulate what the whole app would look like. Some teams spin up staging backends, use mock services, or manually update shared environments. It’s time-consuming and easy to get out of sync.

Northflank approaches this differently.

Comparison of preview environments: Vercel previews cover frontend changes only, while Northflank generates a full-stack environment per branch, including backend services, databases, and jobs.

Preview environments on Northflank are full-stack. Every branch gets its own environment, including:

  • Frontend and backend services
  • Databases and persistent volumes
  • Scheduled jobs and background workers
  • Secrets and shared resources

All connected in a single template, automatically spun up on every pull request.

See how that looks in action:

Visual workflow builder showing Git triggers, build steps, deployments, secrets, and database addons configured as part of a single preview environment.

You can define everything in one place, from Git triggers and build flows to environment variables and access rules. Your QA, PMs, and reviewers can visit a single link and interact with the entire working app, rather than a static UI.

And because Northflank gives you control over how long these environments run, you’re not wasting resources on environments nobody is using.

Go with Northflank if your team needs to preview everything: frontend, backend, and all the services in between.

Common questions about using Vercel for backend

I’ll provide some answers to the most frequently asked questions about using Vercel for backend work:

  • Can Vercel be used to host a backend?

    Yes, but only with serverless functions. They’re stateless and suited for short-lived backend logic.

  • Can I deploy Node.js backend on Vercel?

    You can deploy serverless Node.js functions, but not a traditional long-running Node.js server.

  • Is Vercel just for frontend?

    It’s frontend-first and optimized for frameworks like Next.js. Backend support is limited to serverless functions.

  • Is Vercel backend free?

    The free plan includes backend functions, but limits apply to execution time, memory, and request volume.

  • What are the disadvantages of Vercel?

    No WebSocket support, no persistent storage, cold starts, and backend functions can’t maintain state between requests.

  • Do WebSockets work on Vercel?

    No, Vercel does not support WebSockets. You’ll need to use a separate service or platform.

  • Can I deploy a full-stack app on Vercel?

    Yes, as long as your backend fits within the constraints of serverless functions and external services for state or persistence.

Before you choose Vercel, read these comparisons

If you’re looking for more Vercel comparisons and alternatives, take a look at the articles below. They can help you make the best decision for your project:
