files-sdk

A unified storage SDK for object and blob backends — S3, Cloudflare R2, Vercel Blob, MinIO. One small, honest API. Web-standards I/O. An escape hatch when you need the native client.

import { Files } from "files-sdk";
import { s3 } from "files-sdk/s3";

const files = new Files({
  adapter: s3({ bucket: "uploads", region: "us-east-1" }),
});

await files.upload("hello.txt", "world");
const url = await files.url("hello.txt");

Why

Object storage SDKs are all subtly different. files-sdk exposes the slice that's the same everywhere — upload, download, list, delete — behind a single class, and gets out of the way for anything provider-specific.

  • One small API across providers. Swap S3 for R2 for Vercel Blob for MinIO without rewriting calls.
  • Web-standards I/O. Accepts File, Blob, ReadableStream, ArrayBuffer, string. Runs on Node, Bun, Workers, Vercel — anywhere fetch runs.
  • Escape hatch via files.raw. The native client is always one property away, typed per adapter — versioning, lifecycle, ACLs, multipart, all of it.
  • Predictable errors. A single FilesError with a normalized code across providers, and the original error attached as cause.

Installation

npm install files-sdk

Quick start

Construct a Files instance with the adapter for your provider, then call methods on it. The adapter is fixed at construction; there is no functional put({ provider, ... }) form, which keeps call sites flat.

import { Files } from "files-sdk";
import { s3 } from "files-sdk/s3";

const files = new Files({
  adapter: s3({ bucket: "uploads", region: "us-east-1" }),
});

await files.upload("hello.txt", "world");
const file = await files.download("hello.txt");
const { items } = await files.list({ prefix: "hello" });
await files.delete("hello.txt");

Adapters

Each adapter is a subpath import. Bring only what you use; the others tree-shake away. Adapters auto-load credentials from the standard environment variables for that provider — pass options explicitly to override. If an adapter is constructed without enough info to authenticate, it throws at construction time naming the missing variable.

AWS S3 (and any S3-compatible bucket). Uses the standard AWS credential chain — environment, IAM role, shared profile.

import { Files } from "files-sdk";
import { s3 } from "files-sdk/s3";

const files = new Files({
  adapter: s3({
    bucket: "uploads",
    region: "us-east-1",
    // credentials auto-loaded from the AWS chain
    // (env vars, IAM role, shared profile, ...)
  }),
});
  • bucket — required.
  • region — optional. Falls back to AWS_REGION.
  • credentials — optional. { accessKeyId, secretAccessKey, sessionToken? }.
  • endpoint — optional. Override for S3-compatible services (see the sketch after this list).
  • publicBaseUrl — optional. When set, url() returns `${publicBaseUrl}/${key}` and skips signing — use this if your bucket is fronted by CloudFront or has a public-read policy. When unset, url() returns a presigned GetObject (1-hour default).
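
The same adapter reaches any S3-compatible service through endpoint plus explicit credentials. A minimal sketch, assuming a local MinIO server; the endpoint and environment variable names are illustrative:

import { Files } from "files-sdk";
import { s3 } from "files-sdk/s3";

// Hypothetical values: point `endpoint` at your S3-compatible service.
const files = new Files({
  adapter: s3({
    bucket: "uploads",
    region: "us-east-1",
    endpoint: "http://localhost:9000", // e.g. a local MinIO server
    credentials: {
      accessKeyId: process.env.MINIO_ACCESS_KEY!,
      secretAccessKey: process.env.MINIO_SECRET_KEY!,
    },
  }),
});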

API reference

Every method is available on the Files instance. The unified surface only covers what every adapter can do cleanly — anything provider-specific lives on files.raw.

files.upload(key, body, options?)

Writes a body to key. Accepts native File, Blob, ReadableStream, ArrayBuffer, or string. Content type is inferred from the input when possible.

await files.upload("avatars/abc.png", file, {
  contentType: "image/png",
  cacheControl: "public, max-age=31536000",
  metadata: { userId: "123" },
});
// → { key, size, contentType, etag, lastModified }
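
Because upload accepts a ReadableStream, you can pipe bytes from another source straight into the bucket without buffering. A sketch; the source URL and key are illustrative:

// Mirror a remote file into the bucket without holding it in memory.
const res = await fetch("https://example.com/logo.png");
if (!res.ok || !res.body) throw new Error(`fetch failed: ${res.status}`);

await files.upload("mirror/logo.png", res.body, {
  contentType: res.headers.get("content-type") ?? "application/octet-stream",
});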

Options

  • contentType — string, optional. Inferred from File/Blob type when not set.
  • cacheControl — string, optional. Sent verbatim to the provider.
  • metadata — Record<string, string>, optional. Provider user metadata, returned by head and list where the provider supports it. Vercel Blob does not expose user metadata, so it round-trips as undefined.

files.download(key, options?)

Reads an object. Returns a StoredFile by default (Blob-backed). Pass { as: "stream" } to opt into a ReadableStream for large objects.

const file = await files.download("avatars/abc.png");
// → StoredFile (Blob-backed)

const stream = await files.download("avatars/abc.png", { as: "stream" });
// → ReadableStream
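
On Node, the stream form can be piped straight to disk instead of materializing a Blob. A minimal sketch; the key and output path are illustrative:

import { createWriteStream } from "node:fs";
import { Readable } from "node:stream";
import { pipeline } from "node:stream/promises";

// Bridge the web ReadableStream into Node's stream world and write it to disk.
const stream = await files.download("backups/dump.sqlite", { as: "stream" });
await pipeline(Readable.fromWeb(stream), createWriteStream("./dump.sqlite"));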

files.head(key)

Returns the same StoredFile shape as download, without materializing the body. Calling a body accessor on the result lazy-fetches.

const info = await files.head("avatars/abc.png");
// → StoredFile with no body materialized
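
The lazy accessors make metadata checks cheap; the body is only fetched when you ask for it:

const info = await files.head("avatars/abc.png");
console.log(info.size, info.etag); // no body fetched yet
const bytes = await info.arrayBuffer(); // this call fetches the body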

files.delete(key)

Removes an object. No-op friendly: deleting a missing key resolves successfully on providers that treat delete as idempotent; on providers that don't, it throws a FilesError with code: "NotFound".

await files.delete("avatars/abc.png");
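
If you want delete to be idempotent everywhere, a small wrapper can absorb the provider difference. A sketch built on the documented FilesError code:

import { FilesError } from "files-sdk";

// Resolves successfully whether or not the key existed, on every provider.
async function deleteIfExists(key: string): Promise<void> {
  try {
    await files.delete(key);
  } catch (err) {
    if (err instanceof FilesError && err.code === "NotFound") return;
    throw err;
  }
}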

files.copy(from, to)

Server-side copy where the provider supports it; falls back to read + write otherwise.

await files.copy("avatars/abc.png", "avatars/abc.bak.png");

files.list(options?)

Cursor-paginated listing with prefix filter. Each item is a StoredFile with a lazy body accessor.

const { items, cursor } = await files.list({
  prefix: "avatars/",
  limit: 100,
});

if (cursor) {
  const next = await files.list({ prefix: "avatars/", cursor });
}

Options

  • prefix — string, optional.
  • limit — number, optional. Provider-specific cap; defaults to 1000.
  • cursor — string, optional. Pass cursor from the previous result to continue (see the loop sketch below).
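
To walk an entire prefix, loop until the cursor comes back undefined:

// Collect every key under a prefix, one page at a time.
const keys: string[] = [];
let cursor: string | undefined;
do {
  const page = await files.list({ prefix: "avatars/", cursor, limit: 1000 });
  keys.push(...page.items.map((item) => item.key));
  cursor = page.cursor;
} while (cursor);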

files.url(key, options?)

Returns a URL the caller can use to fetch key. Every adapter returns the most direct URL it can produce. Signing adapters (S3, R2 over HTTP, MinIO, and the R2 binding when HTTP credentials are also configured) presign a GetObject with a 1-hour default expiry; override it per call via { expiresIn } or per adapter via defaultUrlExpiresIn. If the adapter is constructed with a publicBaseUrl (CDN, custom domain, r2.dev), that wins and the URL is built without signing.

Two configurations have no URL primitive and throw: Vercel Blob in access: "private" mode, and an R2 Workers binding without either publicBaseUrl or HTTP credentials.

// One call, every adapter. S3 / R2 / MinIO sign a GetObject (1h default,
// override with { expiresIn }); Vercel Blob (public) returns its CDN URL.
// If you configured `publicBaseUrl` on the adapter, that wins and signing
// is skipped.
const url = await files.url("avatars/abc.png");
const short = await files.url("avatars/abc.png", { expiresIn: 60 });

// Force download (defeat stored XSS from user-uploaded HTML/SVG).
// Forces signing even if `publicBaseUrl` is configured — a permanent
// CDN URL has no signature to bind the override into, and silently
// dropping a security ask would be a regression.
const safe = await files.url("avatars/abc.png", {
  responseContentDisposition: "attachment",
});

Options

  • expiresIn — number of seconds, optional. Honored on signing adapters; ignored on Vercel Blob (no signing primitive). Defaults to the adapter's defaultUrlExpiresIn (1 hour).
  • responseContentDisposition — string, optional. Strongly recommended for buckets with user-uploaded content. Without it, the browser uses the stored Content-Type to decide whether to render or download — a user-uploaded .html (or SVG with embedded scripts) will execute inline at your bucket's origin. Pass "attachment" to force a download. Forces the signing path on adapters that can sign (overrides publicBaseUrl, because permanent CDN URLs can't carry the override). Throws on Vercel Blob (no Content-Disposition primitive) and on the R2 binding without HTTP credentials.

files.signedUploadUrl(key, options)

Returns a discriminated PUT-or-POST contract so a client (typically a browser) can upload directly to the bucket without proxying bytes through your server. The flow is: your server calls signedUploadUrl(), returns the result to the browser, the browser uploads straight to S3/R2/MinIO. Bandwidth and CPU stay off your server.

Without maxSize, the adapter returns a presigned PUT URL — simpler, but with no server-side size cap. With maxSize, the adapter switches to a presigned POST form whose policy enforces the size at the bucket via content-length-range. In practice you should always pass maxSize — without it, anyone with the URL can DoS your storage costs until expiresIn elapses.

Vercel Blob throws here — its upload model goes through handleUpload() from @vercel/blob/client instead of presigned URLs. The R2 Workers binding throws unless you've configured hybrid mode (binding + HTTP credentials).

// On your server: hand back an upload contract that lets the browser
// PUT/POST the file directly to the bucket. Bytes never touch your server.
const upload = await files.signedUploadUrl("avatars/abc.png", {
  expiresIn: 60,
  contentType: "image/png",
  maxSize: 5_000_000,
});
// → { method: "PUT", url, headers? }
//   | { method: "POST", url, fields }

// In the browser: PUT path (no maxSize) is a plain fetch.
await fetch(upload.url, {
  method: "PUT",
  body: file,
  headers: upload.headers,
});

// POST path (with maxSize) is multipart with the signed policy fields.
const form = new FormData();
for (const [k, v] of Object.entries(upload.fields)) form.append(k, v);
form.append("file", file);
await fetch(upload.url, { method: "POST", body: form });

Options

  • expiresIn — number of seconds. Required.
  • contentType — string, optional. Bound into the signature so the upload's Content-Type must match.
  • maxSize — number of bytes, optional. Strongly recommended. Without it, the signed URL has no server-side size cap — anyone with the URL can upload an arbitrarily large file until expiresIn elapses. With it, the adapter switches to a presigned POST form that enforces the size via content-length-range.
  • minSize — number of bytes, optional. Defaults to 1 when maxSize is set, so empty uploads are rejected by the POST policy. Pass 0 to allow them.

The StoredFile type

Native File covers name, size, type, and lastModified, but storage adds three things it doesn't carry: a full key, an etag for cache validation, and user-defined metadata. StoredFile mirrors File's shape and adds those.

interface StoredFile {
  // File-shaped:
  name: string;        // = key
  size: number;
  type: string;        // = contentType
  lastModified?: number;
  arrayBuffer(): Promise<ArrayBuffer>;
  text(): Promise<string>;
  stream(): ReadableStream;
  blob(): Promise<Blob>;

  // Storage-specific:
  key: string;
  etag?: string;
  metadata?: Record<string, string>;
}

upload accepts a native File as input. download, head, and list all return StoredFile. The body accessors on results from head and list lazy-fetch on call.
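
Because the shape mirrors File, converting a StoredFile into a native File for APIs that require one is mechanical. A sketch:

// Materialize a StoredFile as a real File (e.g. for FormData).
const stored = await files.download("avatars/abc.png");
const native = new File([await stored.blob()], stored.name, {
  type: stored.type,
  lastModified: stored.lastModified,
});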

Errors

All methods throw a single FilesError with a normalized code. The original provider error is attached as cause.

import { FilesError } from "files-sdk";

try {
  await files.download("missing.png");
} catch (err) {
  if (err instanceof FilesError && err.code === "NotFound") {
    // handle gracefully
  } else {
    throw err;
  }
}

Codes

  • "NotFound" — key does not exist.
  • "Unauthorized" — credentials missing, expired, or insufficient.
  • "Conflict" — precondition failed, e.g. conditional write lost a race.
  • "Provider" — anything else; inspect cause for the underlying error.

Escape hatch

When you need a feature outside the unified surface — S3 versioning, lifecycle rules, ACLs, multipart, anything — drop down to the native client. The raw property is typed per adapter, so you keep autocomplete.

import { PutObjectAclCommand } from "@aws-sdk/client-s3";

// Typed per adapter — S3Client, R2Bucket, VercelBlobClient, ...
const s3 = files.raw;

await s3.send(
  new PutObjectAclCommand({ Bucket: "uploads", Key: "a.png", ACL: "public-read" }),
);

Compatibility matrix

Every adapter implements the same core surface, but the URL methods and a couple of edge cases vary by provider. The matrix below marks each combination as supported (✓), supported with a caveat (⚠), or throwing (✗); the numbered notes explain each caveat.

Method            S3   R2 HTTP   R2 binding   R2 hybrid   Blob public   Blob private   MinIO
upload            ✓    ✓         ✓            ✓           ⚠¹            ⚠¹             ✓
download          ✓    ✓         ✓            ✓           ✓             ✓              ✓
delete            ✓    ✓         ✓            ✓           ✓             ✓              ✓
list              ✓    ✓         ✓            ✓           ✓             ✓              ✓
head              ✓    ✓         ✓            ✓           ⚠¹            ⚠¹             ✓
copy              ✓    ✓         ✓            ✓           ✓             ✓              ✓
url               ✓    ✓         ⚠²           ✓           ⚠³            ✗              ✓
signedUploadUrl   ✓    ✓         ✗⁴           ✓           ✗⁵            ✗⁵             ✓

¹ User metadata is not exposed by Vercel Blob; it round-trips as undefined.
² Requires publicBaseUrl or HTTP credentials; throws without either.
³ Permanent CDN URL: expiresIn is ignored and responseContentDisposition throws.
⁴ Throws unless hybrid mode (binding + HTTP credentials) is configured.
⁵ Vercel Blob uploads go through handleUpload() from @vercel/blob/client instead.

✓ Supported   ⚠ Supported with caveat   ✗ Throws