files-sdk is a unified storage SDK that puts the same nine-method API on top of 30+ object storage and blob backends — S3, GCS, Azure, Cloudflare R2, Vercel Blob, MinIO, Supabase, Dropbox, Google Drive, and more — with web-standard Blob/ReadableStream I/O and no provider types leaking into your application code.
Why I starred it
Storage backends are a classic "solved problem" that still burns teams constantly. You start on Vercel Blob because it's fast to set up, then a client wants GCS, then your AI pipeline needs presigned upload URLs from S3. Every switch rewrites the same upload/download/delete logic with slightly different SDK shapes.
Two things caught my attention: the adapters are tree-shakeable by design (each provider is a separate entry point: files-sdk/s3, files-sdk/r2, etc.), and there's a fully-baked AI tooling layer. Not a demo, not an afterthought — a complete files-sdk/claude subpath that integrates with the Claude Agent SDK, plus parallel packages for the Vercel AI SDK and OpenAI Agents. That's an unusual combination of boring-infrastructure thinking and forward-looking ergonomics.
How it works
The Adapter interface in src/index.ts is the entire contract. Nine methods: upload, download, head, exists, delete, copy, list, url, signedUploadUrl. Each adapter implements this, exposes its native client at raw, and that's it. The Files class is literally a thin wrapper that validates keys and delegates:
// src/index.ts
const assertValidKey = (key: string, label = "key"): void => {
if (typeof key !== "string" || key.length === 0) {
throw new FilesError("Provider", `${label} must be a non-empty string`);
}
if (key.includes("\0")) {
throw new FilesError("Provider", `${label} must not contain null bytes`);
}
};
const run = async <T>(fn: () => Promise<T>): Promise<T> => {
try {
return await fn();
} catch (error) {
throw FilesError.wrap(error);
}
};
That's the whole boundary layer. assertValidKey deliberately doesn't try to be exhaustive — it catches the obviously broken cases (empty string, null bytes) and lets the provider surface its own errors for the rest. The key validation comment in the source is honest about this: "those rules differ across S3/R2/Vercel and we'd rather surface real provider errors than enforce the strictest superset."
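For reference, here is roughly what that nine-method contract implies. This is a sketch reconstructed from the method list above, not a copy of the interface in src/index.ts, so the option and return shapes are assumptions:
// Sketch of the Adapter contract inferred from the prose; exact types in src/index.ts may differ.
// StoredFileLike stands in for the real StoredFile from src/internal/stored-file.ts.
type StoredFileLike = {
  text(): Promise<string>;
  arrayBuffer(): Promise<ArrayBuffer>;
  blob(): Promise<Blob>;
  stream(): ReadableStream<Uint8Array>;
};

interface AdapterSketch {
  upload(
    key: string,
    body: Blob | ReadableStream<Uint8Array> | ArrayBuffer,
    opts?: { contentType?: string }
  ): Promise<{ key: string }>;
  download(key: string): Promise<StoredFileLike>;
  head(key: string): Promise<{ size: number; contentType?: string }>;
  exists(key: string): Promise<boolean>;
  delete(key: string): Promise<void>;
  copy(sourceKey: string, destinationKey: string): Promise<void>;
  list(opts?: { prefix?: string }): Promise<{ keys: string[] }>;
  url(key: string, opts?: { expiresIn?: number }): Promise<string>;
  signedUploadUrl(key: string, opts?: { maxSize?: number }): Promise<{ url: string }>;
  raw: unknown; // the native provider client (S3Client, GCS Storage, ...)
}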
The real work happens in src/internal/core.ts, which is where the shared adapter infrastructure lives. The makeErrorMapper factory is the most interesting piece — each adapter registers its provider-specific error codes as ReadonlySet<string> and the factory builds a classifier that maps them to four canonical codes: NotFound, Unauthorized, Conflict, Provider. Auth failures never get silently swallowed as NotFound.
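A minimal sketch of that pattern (the code sets below are hypothetical; the real factory and its sets live in src/internal/core.ts):
// Illustrative sketch of the error-mapper pattern; not the real makeErrorMapper from core.ts.
type CanonicalCode = "NotFound" | "Unauthorized" | "Conflict" | "Provider";

const makeErrorMapperSketch =
  (sets: {
    notFound: ReadonlySet<string>;
    unauthorized: ReadonlySet<string>;
    conflict: ReadonlySet<string>;
  }) =>
  (providerCode: string): CanonicalCode => {
    // Auth errors are checked first so they can never be misreported as NotFound.
    if (sets.unauthorized.has(providerCode)) return "Unauthorized";
    if (sets.notFound.has(providerCode)) return "NotFound";
    if (sets.conflict.has(providerCode)) return "Conflict";
    return "Provider";
  };

// Roughly what an S3-style adapter might register (codes are examples, not the library's actual sets):
const mapS3Error = makeErrorMapperSketch({
  notFound: new Set(["NoSuchKey", "NotFound"]),
  unauthorized: new Set(["AccessDenied", "InvalidAccessKeyId"]),
  conflict: new Set(["BucketAlreadyOwnedByYou"]),
});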
The resolveUrlStrategy function in core.ts codifies a security invariant I hadn't seen explicitly written before: if you pass responseContentDisposition, it forces signing even when you've configured a publicBaseUrl. The reasoning is documented inline — a permanent CDN URL can't bind a Content-Disposition header, and silently dropping that override is stored-XSS on user-uploaded HTML or SVG. The override wins.
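The invariant itself is easy to express. Here is a sketch of the decision, not the real function body:
// Sketch of the invariant described above; not the actual resolveUrlStrategy from core.ts.
type UrlStrategySketch = { kind: "public"; baseUrl: string } | { kind: "signed" };

const resolveUrlStrategySketch = (opts: {
  publicBaseUrl?: string;
  responseContentDisposition?: string;
}): UrlStrategySketch => {
  if (opts.responseContentDisposition !== undefined) {
    // A permanent CDN URL can't bind a Content-Disposition header, and silently dropping
    // the override would serve user-uploaded HTML/SVG inline: stored XSS. Signing wins.
    return { kind: "signed" };
  }
  return opts.publicBaseUrl !== undefined
    ? { kind: "public", baseUrl: opts.publicBaseUrl }
    : { kind: "signed" };
};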
The StoredFile implementation in src/internal/stored-file.ts handles the stream-vs-buffer tension carefully. It distinguishes three body source kinds — buffer, stream, and lazy — and manages a cache so that calling text() then arrayBuffer() doesn't re-fetch. Call stream() first and the source is consumed; try calling a buffering accessor after and you get a meaningful error rather than an empty read:
// src/internal/stored-file.ts
const consumedError = (): FilesError =>
  new FilesError(
    "Provider",
    "StoredFile body was already consumed via stream(). For multi-format access, call text()/arrayBuffer()/blob() before stream() — those drain into a cache."
  );
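In practice the ordering rule looks like this, assuming a configured files instance as in the Using it section below (whether stream() returns the stream synchronously, Blob-style, is an assumption here):
// Buffering accessors first: text() drains the body into the cache, so arrayBuffer() doesn't re-fetch.
const doc = await files.download("reports/q3.html");
const text = await doc.text();
const bytes = await doc.arrayBuffer();

// stream() first: the source is consumed directly, so a later buffering call throws consumedError.
const doc2 = await files.download("reports/q3.html");
const body = doc2.stream();
// await doc2.text(); // -> FilesError: "StoredFile body was already consumed via stream()..."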
The Claude integration in src/claude/index.ts wraps the Files instance as an in-process MCP server using createSdkMcpServer from @anthropic-ai/claude-agent-sdk. It returns three things: mcpServers, allowedTools, and a ready-made canUseTool callback. Write operations are approval-gated by default, individually configurable, or omitted entirely with readOnly: true. The naming scheme (mcp__<serverName>__<toolName>) follows the SDK's MCP prefix convention so the agent can address tools without custom routing.
// files-sdk/claude usage
const tools = createClaudeFileTools({
  files,
  requireApproval: {
    deleteFile: true,
    uploadFile: false,
    copyFile: false,
    signUploadUrl: true,
  },
});

for await (const message of query({
  prompt: "List my files and download the most recent one.",
  options: {
    mcpServers: tools.mcpServers,
    allowedTools: tools.allowedTools,
    canUseTool: tools.canUseTool,
  },
})) { /* handle */ }
Test coverage is thorough: a fake-adapter.ts backs the unit tests, each adapter gets its own test file, and the main files.test.ts covers the full round trip, including the stream consumption semantics. The test suite runs under Bun.
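A rough shape of what a round-trip test against the fake adapter might look like (the fake-adapter export name, import path, and assertions are guesses, not copied from the suite):
// Hypothetical round-trip test under Bun's test runner; fakeAdapter and its path are illustrative.
import { describe, expect, it } from "bun:test";
import { Files } from "files-sdk";
import { fakeAdapter } from "./fake-adapter";

describe("round trip", () => {
  it("returns the same bytes it stored", async () => {
    const files = new Files({ adapter: fakeAdapter() });
    await files.upload("notes/a.txt", new Blob(["hello"]), { contentType: "text/plain" });
    const got = await files.download("notes/a.txt");
    expect(await got.text()).toBe("hello");
    expect(await files.exists("notes/a.txt")).toBe(true);
  });
});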
Using it
Swapping backends requires changing one import:
import { Files } from "files-sdk";
import { r2 } from "files-sdk/r2"; // was: import { s3 } from "files-sdk/s3"
const files = new Files({
adapter: r2({ bucket: "uploads", accountId: process.env.CF_ACCOUNT_ID }),
});
// All of these work identically regardless of adapter:
const result = await files.upload("avatars/abc.png", file, { contentType: "image/png" });
const got = await files.download("avatars/abc.png");
const url = await files.url("avatars/abc.png", { expiresIn: 3600 });
const exists = await files.exists("avatars/abc.png");
When you need something provider-specific, the escape hatch is files.raw — the native S3Client, GCS Storage, whatever you configured — one property away.
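For example, with the S3 adapter configured, something the unified API doesn't model (bucket tagging, say) is one cast away. A sketch; how raw is typed is an assumption:
// Assumes the S3 adapter; per the docs above, files.raw is then the native S3Client.
import { GetBucketTaggingCommand, type S3Client } from "@aws-sdk/client-s3";

const s3Client = files.raw as S3Client; // cast is illustrative; the actual typing of raw may differ
const tagging = await s3Client.send(new GetBucketTaggingCommand({ Bucket: "uploads" }));
console.log(tagging.TagSet);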
The presigned upload path is worth noting. When you pass maxSize to signedUploadUrl, adapters that support it switch from a presigned PUT to a presigned POST with a content-length-range policy. Without maxSize, the PUT has no size limit. The docs call this out explicitly.
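A sketch of the size-capped path; maxSize comes straight from the docs, while the other option names and the return shape are assumptions:
// Size-capped presigned upload sketch; exact return fields may differ from the real signedUploadUrl.
const signed = await files.signedUploadUrl("uploads/user-42/original.png", {
  maxSize: 5 * 1024 * 1024, // 5 MiB cap: capable adapters switch to a presigned POST with content-length-range
});
// Hand `signed` to the browser; without maxSize you'd get a presigned PUT with no enforced size limit.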
Rough edges
The dependency list is heavy. package.json lists all provider SDKs as direct dependencies, not peer deps — so npm install files-sdk currently pulls in @aws-sdk/client-s3, @azure/storage-blob, @google-cloud/storage, box-typescript-sdk-gen, dropbox, and seven others even if you only use files-sdk/vercel-blob. The package is marked sideEffects: false and the exports map is set up for tree-shaking at the module level, but the declared deps don't follow suit. If bundle size matters you'll want to watch this.
On a private Vercel Blob store, url() simply throws: there's no URL primitive, so you have to use download() instead. That's documented, but it's the kind of thing that will catch you if you're migrating from S3, where url() always works.
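If you're coming from S3, the safe pattern on that adapter is to serve the bytes yourself. A sketch, assuming a web-standard route handler and that stream() hands back a ReadableStream:
// Illustrative handler only; the route shape and stream() signature are assumptions.
export async function GET(): Promise<Response> {
  // const url = await files.url("private/report.pdf"); // would throw on a private vercel-blob store
  const report = await files.download("private/report.pdf");
  return new Response(report.stream(), { headers: { "Content-Type": "application/pdf" } });
}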
Version 1.2.0, active commits, 30+ adapters already — the pace is fast. The changeset tooling is in place, so semver discipline looks intentional.
Bottom line
If you're building anything where the storage backend might change — multi-tenant SaaS, AI pipelines that need to run against different buckets per environment, or just a project that started on Vercel Blob and outgrew it — files-sdk removes the rewrite cost. The AI tooling integration is a genuine differentiator for agent workflows.
