How to Upload Files to Cloudflare R2 from Node.js 2026
Cloudflare R2 is S3-compatible object storage with zero egress fees. You use the same AWS SDK you already know, but pay nothing for data transfer out. This guide covers uploads, downloads, presigned URLs, and the migration path from S3. It also covers security hardening for file upload endpoints — one of the most common attack surfaces in web applications.
The zero egress fee is R2's defining feature. AWS S3 charges $0.09/GB to transfer data out to the internet — for a SaaS app serving 1TB of images or files per month, that's $90 in transfer costs alone, on top of storage fees. R2 eliminates that entirely. The storage pricing itself is also lower than S3 standard tier ($0.015/GB vs $0.023/GB). For storage-heavy workloads (images, videos, documents, backups), R2 is typically 50-90% cheaper than S3 at equivalent usage.
The trade-off is maturity. S3 has 18+ years of features, third-party integrations, and tooling. R2 launched in 2022 and doesn't support every S3 feature (notably: Object Lock, SSE-KMS, Transfer Acceleration). For new projects without S3-specific feature requirements, R2 is the obvious choice. For existing S3 workloads, the migration is straightforward thanks to API compatibility.
What You'll Build
- File upload and download (images, PDFs, any file type)
- Presigned URLs for direct browser uploads
- Public bucket with custom domain
- File listing and deletion
- Multipart uploads for large files
Prerequisites: Node.js 18+, Cloudflare account (R2 free tier: 10GB storage, 10M reads/month).
1. Setup
Create R2 Bucket
- Go to Cloudflare Dashboard → R2 Object Storage
- Click "Create bucket"
- Name it (e.g., `my-app-uploads`)
- Choose a location hint (Auto or a specific region)
Generate API Token
- R2 → Manage R2 API Tokens → Create API Token
- Permissions: Object Read & Write
- Specify bucket (or all buckets)
- Copy the Access Key ID, Secret Access Key, and Account ID
Install SDK
```bash
npm install @aws-sdk/client-s3 @aws-sdk/s3-request-presigner
```
Initialize Client
```ts
// lib/r2.ts
import { S3Client } from '@aws-sdk/client-s3';

export const r2 = new S3Client({
  region: 'auto',
  endpoint: `https://${process.env.CLOUDFLARE_ACCOUNT_ID}.r2.cloudflarestorage.com`,
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

export const BUCKET_NAME = process.env.R2_BUCKET_NAME!;
```
Environment Variables
```bash
# .env.local
CLOUDFLARE_ACCOUNT_ID=your_account_id
R2_ACCESS_KEY_ID=your_access_key
R2_SECRET_ACCESS_KEY=your_secret_key
R2_BUCKET_NAME=my-app-uploads
R2_PUBLIC_URL=https://files.yourdomain.com # If using custom domain
```
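A missing variable surfaces later as a confusing signing error, so it's worth failing fast at startup. A minimal sketch (`requireEnv` is a hypothetical helper, not part of any SDK):

```ts
// Sketch: throw at startup if any required environment variable is missing,
// instead of getting a SignatureDoesNotMatch error on the first upload.
export function requireEnv(
  env: Record<string, string | undefined>,
  keys: string[]
): Record<string, string> {
  const missing = keys.filter((k) => !env[k]);
  if (missing.length > 0) {
    throw new Error(`Missing environment variables: ${missing.join(', ')}`);
  }
  return env as Record<string, string>;
}
```

Call it once before constructing the client, e.g. `requireEnv(process.env, ['CLOUDFLARE_ACCOUNT_ID', 'R2_ACCESS_KEY_ID', 'R2_SECRET_ACCESS_KEY', 'R2_BUCKET_NAME'])`.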
2. Upload Files
Server-Side Upload
```ts
// lib/upload.ts
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { randomUUID } from 'crypto';
import { r2, BUCKET_NAME } from './r2';

export async function uploadFile(
  file: Buffer,
  contentType: string,
  folder: string = 'uploads'
) {
  const key = `${folder}/${randomUUID()}-${Date.now()}`;

  await r2.send(new PutObjectCommand({
    Bucket: BUCKET_NAME,
    Key: key,
    Body: file,
    ContentType: contentType,
  }));

  return {
    key,
    url: `${process.env.R2_PUBLIC_URL}/${key}`,
  };
}
```
Upload API Route (Next.js)
```ts
// app/api/upload/route.ts
import { NextResponse } from 'next/server';
import { uploadFile } from '@/lib/upload';

export async function POST(req: Request) {
  const formData = await req.formData();
  const file = formData.get('file') as File;

  if (!file) {
    return NextResponse.json({ error: 'No file provided' }, { status: 400 });
  }

  // Validate file type
  const allowedTypes = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'];
  if (!allowedTypes.includes(file.type)) {
    return NextResponse.json({ error: 'File type not allowed' }, { status: 400 });
  }

  // Validate file size (10MB max)
  if (file.size > 10 * 1024 * 1024) {
    return NextResponse.json({ error: 'File too large' }, { status: 400 });
  }

  const buffer = Buffer.from(await file.arrayBuffer());
  const result = await uploadFile(buffer, file.type, 'images');

  return NextResponse.json(result);
}
```
Upload from Client
```tsx
// components/FileUpload.tsx
'use client';

import { useState } from 'react';

export function FileUpload() {
  const [uploading, setUploading] = useState(false);
  const [url, setUrl] = useState<string | null>(null);

  const handleUpload = async (e: React.ChangeEvent<HTMLInputElement>) => {
    const file = e.target.files?.[0];
    if (!file) return;

    setUploading(true);
    try {
      const formData = new FormData();
      formData.append('file', file);

      const res = await fetch('/api/upload', {
        method: 'POST',
        body: formData,
      });
      if (!res.ok) throw new Error('Upload failed');

      const data = await res.json();
      setUrl(data.url);
    } finally {
      // Reset the loading state even if the upload fails
      setUploading(false);
    }
  };

  return (
    <div>
      <input type="file" onChange={handleUpload} disabled={uploading} />
      {uploading && <p>Uploading...</p>}
      {url && <p>Uploaded: <a href={url}>{url}</a></p>}
    </div>
  );
}
```
3. Presigned URLs (Direct Browser Upload)
Skip your server — let the browser upload directly to R2:
```ts
// app/api/upload-url/route.ts
import { NextResponse } from 'next/server';
import { PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { r2, BUCKET_NAME } from '@/lib/r2';
import { randomUUID } from 'crypto';

export async function POST(req: Request) {
  const { contentType, filename } = await req.json();
  const key = `uploads/${randomUUID()}-${filename}`;

  const signedUrl = await getSignedUrl(
    r2,
    new PutObjectCommand({
      Bucket: BUCKET_NAME,
      Key: key,
      ContentType: contentType,
    }),
    { expiresIn: 3600 } // 1 hour
  );

  return NextResponse.json({ uploadUrl: signedUrl, key });
}
```
Client-Side Direct Upload
```ts
async function uploadDirect(file: File) {
  // 1. Get a presigned URL from your server
  const res = await fetch('/api/upload-url', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contentType: file.type,
      filename: file.name,
    }),
  });
  const { uploadUrl, key } = await res.json();

  // 2. Upload directly to R2 (no server processing)
  const uploadRes = await fetch(uploadUrl, {
    method: 'PUT',
    body: file,
    headers: { 'Content-Type': file.type },
  });
  if (!uploadRes.ok) throw new Error('Direct upload failed');

  return key;
}
```
4. Download Files
Get Object
```ts
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { r2, BUCKET_NAME } from './r2';

export async function downloadFile(key: string) {
  const response = await r2.send(new GetObjectCommand({
    Bucket: BUCKET_NAME,
    Key: key,
  }));

  return {
    body: response.Body,
    contentType: response.ContentType,
    contentLength: response.ContentLength,
  };
}
```
Presigned Download URL
```ts
import { GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';
import { r2, BUCKET_NAME } from './r2';

export async function getDownloadUrl(key: string) {
  return getSignedUrl(
    r2,
    new GetObjectCommand({ Bucket: BUCKET_NAME, Key: key }),
    { expiresIn: 3600 }
  );
}
```
5. List and Delete Files
```ts
import { ListObjectsV2Command, DeleteObjectCommand } from '@aws-sdk/client-s3';
import { r2, BUCKET_NAME } from './r2';

// List files in a folder
export async function listFiles(prefix: string = '') {
  const response = await r2.send(new ListObjectsV2Command({
    Bucket: BUCKET_NAME,
    Prefix: prefix,
    MaxKeys: 100,
  }));

  return response.Contents?.map(obj => ({
    key: obj.Key!,
    size: obj.Size!,
    lastModified: obj.LastModified!,
  })) ?? [];
}

// Delete a file
export async function deleteFile(key: string) {
  await r2.send(new DeleteObjectCommand({
    Bucket: BUCKET_NAME,
    Key: key,
  }));
}
```
6. Public Bucket with Custom Domain
Enable Public Access
- R2 → Your bucket → Settings → Public Access
- Enable "Allow Access" → adds an `r2.dev` subdomain
- Or connect a custom domain (recommended)
Custom Domain Setup
- Add a CNAME record: `files.yourdomain.com` → your R2 bucket's public URL
- Cloudflare automatically handles SSL
Files are now accessible at: https://files.yourdomain.com/uploads/image.jpg
Pricing Comparison
| Feature | Cloudflare R2 | AWS S3 |
|---|---|---|
| Storage | $0.015/GB/month | $0.023/GB/month |
| Reads (GET) | $0.36/million | $0.40/million |
| Writes (PUT) | $4.50/million | $5.00/million |
| Egress | $0 (free) | $0.09/GB |
| Free tier | 10GB + 10M reads | 5GB + 20K reads |
Example: 100GB storage + 1TB egress/month:
- R2: $1.50 (storage only)
- S3: $2.30 + $92.16 (egress) = $94.46
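The arithmetic above generalizes into a quick estimator using the per-GB rates from the table. A sketch for back-of-envelope comparisons only (request costs and free tiers are omitted; `monthlyCost` is a hypothetical helper):

```ts
// Sketch: monthly storage + egress cost from the per-GB rates in the table above.
const RATES = {
  r2: { storagePerGb: 0.015, egressPerGb: 0 },
  s3: { storagePerGb: 0.023, egressPerGb: 0.09 },
};

export function monthlyCost(
  provider: keyof typeof RATES,
  storageGb: number,
  egressGb: number
): number {
  const r = RATES[provider];
  return storageGb * r.storagePerGb + egressGb * r.egressPerGb;
}
```

`monthlyCost('s3', 100, 1024)` works out to about $94.46 and `monthlyCost('r2', 100, 1024)` to $1.50, matching the example above.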
Common Mistakes
| Mistake | Impact | Fix |
|---|---|---|
| Exposing R2 credentials to client | Account compromise | Use presigned URLs for direct uploads |
| No file type validation | Malicious file uploads | Validate MIME type server-side |
| No file size limits | Storage abuse | Enforce max size (presigned URL + server) |
| Using r2.dev domain in production | Rate limited, no caching | Use custom domain |
| Not setting Content-Type on upload | Files download instead of display | Always set ContentType |
Multipart Uploads for Large Files
The PutObjectCommand approach works for files up to ~5GB, but for large files (video, datasets, backups), multipart upload gives you parallel chunk uploading and resumability. The AWS SDK v3 has a high-level Upload utility that handles the multipart logic automatically:
Large file uploads should always use multipart with a progress callback so users see upload progress rather than a stalled UI. For video upload workflows, combine R2 multipart upload with Cloudflare Stream for post-upload video transcoding and adaptive bitrate streaming — R2 handles raw storage, Stream handles video delivery.
Security: File Upload Validation
The upload API route in this guide validates MIME type and file size, but server-side validation needs to go further in production.
Content-type spoofing: MIME type validation based on file.type is client-reported and easily spoofed. A user can rename malicious.php to image.jpg and set Content-Type: image/jpeg. Always validate file content by reading the file's magic bytes (the first few bytes that identify file type). The file-type npm package does this: const type = await fileTypeFromBuffer(buffer); if (type?.mime !== 'image/jpeg') — this checks the actual file header, not the client-reported type.
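If you'd rather not add a dependency, the same check for the formats this guide's upload route allows can be hand-rolled. A sketch (`sniffMime` is a hypothetical helper; the byte sequences are the standard file signatures):

```ts
// Sketch: identify a file by its magic bytes instead of trusting the
// client-reported MIME type. Covers the formats the upload route allows.
type Sig = { mime: string; offset: number; bytes: number[] };

const SIGNATURES: Sig[] = [
  { mime: 'image/jpeg', offset: 0, bytes: [0xff, 0xd8, 0xff] },
  { mime: 'image/png', offset: 0, bytes: [0x89, 0x50, 0x4e, 0x47] },
  { mime: 'application/pdf', offset: 0, bytes: [0x25, 0x50, 0x44, 0x46] }, // "%PDF"
  { mime: 'image/webp', offset: 8, bytes: [0x57, 0x45, 0x42, 0x50] },     // "WEBP" after the RIFF header
];

export function sniffMime(buf: Uint8Array): string | null {
  const match = SIGNATURES.find((s) =>
    s.bytes.every((b, i) => buf[s.offset + i] === b)
  );
  return match?.mime ?? null;
}
```

In the upload route, reject the request when `sniffMime(buffer)` disagrees with the allow-list, regardless of what `file.type` claims.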
File size enforcement at upload time: The file.size > 10 * 1024 * 1024 check happens after the file is fully buffered into memory. For a 500MB file, that's 500MB of memory consumed before you reject it. Enforce size limits at the HTTP layer instead: in Next.js App Router, you can't configure bodyParser.sizeLimit like you could in Pages Router, so use a streaming approach that counts bytes as they arrive and aborts when the limit is hit.
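A sketch of that streaming approach (`readWithLimit` is a hypothetical helper). In the App Router, `req.body` is a web `ReadableStream`, which is async-iterable in recent Node versions; if yours isn't, wrap it with `Readable.fromWeb` first:

```ts
// Sketch: buffer an upload while counting bytes, aborting as soon as the
// limit is exceeded instead of after the whole body is in memory.
export async function readWithLimit(
  stream: AsyncIterable<Uint8Array>,
  maxBytes: number
): Promise<Buffer> {
  const chunks: Uint8Array[] = [];
  let total = 0;

  for await (const chunk of stream) {
    total += chunk.byteLength;
    if (total > maxBytes) {
      throw new Error('Payload too large'); // map this to a 413 in the route
    }
    chunks.push(chunk);
  }

  return Buffer.concat(chunks);
}
```

Note that rejecting mid-stream still costs the bytes received so far; for hard limits with zero server involvement, direct-to-R2 presigned uploads (next point) are the better tool.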
Presigned URL size limits: When using presigned URLs for direct browser uploads, have the client declare the file size, validate it server-side, and include a ContentLength constraint in the PutObjectCommand so the declared size becomes part of the signature (content-length-range conditions exist only for POST policy uploads, not presigned PUTs). R2 honors signed headers such as x-amz-content-sha256 and Content-Length when they are included in the presigned request. Without a size check at URL-issuing time, a user could upload arbitrarily large files directly to R2.
File name sanitization: Never use the original filename as the R2 object key — it could contain path traversal sequences (../../../etc/passwd), unicode tricks, or extremely long strings. Generate a UUID-based key as shown in the upload helper, and store the original filename as a metadata field or in your database separately.
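Putting that together, a sketch of a key generator that keeps only an allow-listed extension and returns the original name separately for your database (`makeObjectKey` is a hypothetical helper):

```ts
import { randomUUID } from 'node:crypto';

// Sketch: derive a safe R2 object key from an untrusted filename.
// No user input ends up in the key except an allow-listed extension.
const ALLOWED_EXTENSIONS = new Set(['jpg', 'jpeg', 'png', 'webp', 'pdf']);

export function makeObjectKey(originalName: string, folder = 'uploads') {
  const ext = originalName.split('.').pop()?.toLowerCase() ?? '';
  const safeExt = ALLOWED_EXTENSIONS.has(ext) ? `.${ext}` : '';

  return {
    key: `${folder}/${randomUUID()}${safeExt}`,
    originalName: originalName.slice(0, 255), // cap length before storing in your DB
  };
}
```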
Migrating from AWS S3
The switch to R2 is intentionally straightforward — R2 implements the S3 API. The main changes:
- Endpoint URL: change from `https://s3.amazonaws.com` to `https://{ACCOUNT_ID}.r2.cloudflarestorage.com`. Set `region: 'auto'` (R2 ignores region but the SDK requires it).
- Credentials: use R2 API tokens instead of AWS IAM keys. Generate them in the Cloudflare dashboard. The format is the same (`accessKeyId` + `secretAccessKey`).
- Public access: S3 has bucket policies and ACLs; R2 has a simpler public access toggle. If you used `ACL: 'public-read'` in S3, enable public access on the R2 bucket and remove the ACL parameter from your `PutObjectCommand` calls (R2 ignores ACLs).
- Lifecycle rules: R2 supports lifecycle rules (object expiration) but through the Cloudflare dashboard, not via S3 API calls. If you're managing lifecycle programmatically, you'll need to adjust.
- Cross-origin (CORS): configure CORS in the Cloudflare dashboard, not via `PutBucketCorsCommand`. The CORS rules use the same structure as S3 CORS.
Data migration for existing S3 buckets: Cloudflare offers a Super Slurper migration tool in the R2 dashboard that copies objects from S3 to R2 (Cloudflare doesn't charge for the ingest, though AWS still bills its own S3 egress for the data leaving during the copy). For programmatic migration, use a tool that can talk to both endpoints, such as rclone, or a two-step `aws s3 sync` (S3 → local disk, then local disk → R2 with `--endpoint-url https://ACCOUNT_ID.r2.cloudflarestorage.com`) — a single `aws s3 sync` between the two buckets won't work, because the CLI applies one endpoint to both sides.
DNS cutover strategy: When migrating, run R2 and S3 in parallel. New writes go to R2; old objects remain on S3. Serve from both using a CDN or smart routing based on object existence. Once all S3 objects have been copied to R2 (verify with object counts), switch the CDN origin to R2-only and decommission the S3 bucket. This dual-write phase typically lasts 1-4 weeks depending on your dataset size and write volume. For read-heavy workloads with infrequent writes, the migration window is shorter; for write-heavy workloads (continuous video uploads, high-frequency event logging), plan more carefully around the dual-write phase.
Bandwidth Alliance: Cloudflare has agreements with major cloud providers and CDN networks to eliminate egress charges when traffic flows within the Bandwidth Alliance. If your compute is on AWS EC2 in the same region, egress from S3 to EC2 is technically free within that region — but egress to end users (internet) is not. R2's zero egress to the internet is still the more impactful benefit for user-serving applications.
Methodology
This guide uses @aws-sdk/client-s3 v3.x and @aws-sdk/s3-request-presigner v3.x — the modular AWS SDK (not aws-sdk v2, which is in maintenance mode). R2's S3 API compatibility covers all operations used in this guide (PutObject, GetObject, ListObjectsV2, DeleteObject, presigned URLs). Features not yet supported by R2 as of 2026: S3 Object Lock, S3 Transfer Acceleration, and SSE-KMS (only SSE-S3 is supported). Pricing data is from Cloudflare's pricing page and AWS S3 standard storage pricing as of early 2026. Free tier: R2 provides 10GB storage + 10M GET requests + 1M PUT/DELETE requests per month at no charge.
Choosing object storage? Compare Cloudflare R2 vs AWS S3 vs Backblaze B2 on APIScout — pricing, egress fees, and developer experience.
Related: AWS S3 vs Cloudflare R2: Object Storage Compared, Cloudflare R2 vs Backblaze B2, Uploadthing vs Cloudflare R2 vs S3 for Next.js 2026