How to Manage Multiple API Keys Securely 2026
A typical production app uses 5-15 different API keys. Stripe, Resend, Clerk, OpenAI, Cloudflare, analytics, monitoring — each with its own key that grants access to sensitive operations. One leaked key can compromise your users, your data, and your bank account.
The API Key Landscape
How Many Keys Do You Actually Have?
Typical SaaS app API keys:
Payment: Stripe secret + publishable key
Auth: Clerk secret + publishable key
Email: Resend API key
AI: OpenAI API key + Anthropic API key
Analytics: PostHog project key
Storage: AWS access key + secret key
Database: Connection string (with credentials)
Monitoring: Sentry DSN
CDN: Cloudflare API token
Deployment: Vercel token
Total: 14 secrets to manage
Key Types
| Type | Example | Exposure Risk | If Leaked |
|---|---|---|---|
| Secret key | sk_live_... | Server only | Full API access, financial damage |
| Publishable key | pk_live_... | Client-safe | Limited operations, low risk |
| API token | ghp_... | Server only | Account access |
| Connection string | postgres://user:pass@... | Server only | Database access |
| Webhook secret | whsec_... | Server only | Webhook forgery |
| JWT secret | Random string | Server only | Token forgery |
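As a sketch, the exposure rules in the table above can be encoded as a simple prefix check, useful as a guard before a key ever reaches client code. The prefix list mirrors the table, is not exhaustive, and the helper name is illustrative.

```typescript
// Sketch: classify a key by its prefix to decide whether it may ship to the
// client. The prefixes mirror the table above; anything unrecognized is
// treated as server-only, the safe assumption.
type Exposure = 'client-safe' | 'server-only';

const PREFIX_RULES: Array<[string, Exposure]> = [
  ['pk_', 'client-safe'],         // publishable keys (e.g. Stripe, Clerk)
  ['sk_', 'server-only'],         // secret keys
  ['whsec_', 'server-only'],      // webhook secrets
  ['ghp_', 'server-only'],        // GitHub tokens
  ['postgres://', 'server-only'], // connection strings
];

function classifyKey(key: string): Exposure {
  for (const [prefix, exposure] of PREFIX_RULES) {
    if (key.startsWith(prefix)) return exposure;
  }
  return 'server-only'; // default to the safe assumption
}
```

A bundler plugin or lint rule could call a check like this to fail the build when a server-only key appears in client code.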
Rule 1: Never Commit Keys to Git
The #1 source of key leaks: committing to version control.
# .gitignore — MUST include
.env
.env.local
.env.production
.env.*.local
*.pem
*.key
credentials.json
# Pre-commit hook to catch secrets
# Install: npx husky add .husky/pre-commit "npx secretlint '**/*'"
# Or use git-secrets
git secrets --install
git secrets --register-aws # Catches AWS keys
# Check for accidentally committed secrets
git log --all --diff-filter=A -- '*.env' '.env*'
git log --all -p -- . | grep -i "sk_live\|secret_key\|password"
# If you find a committed secret:
# 1. Rotate the key IMMEDIATELY (old key is compromised)
# 2. Remove from history (git filter-repo)
# 3. Force push (coordinate with team)
Rule 2: Environment Variables Only
// ❌ Never hard-code keys
const stripe = new Stripe('sk_live_abc123...'); // NO!
// ❌ Never put keys in client-side code
const openai = new OpenAI({
apiKey: 'sk-abc123', // Visible in browser devtools
});
// ✅ Environment variables
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
// ✅ With validation
function requireEnv(name: string): string {
const value = process.env[name];
if (!value) {
throw new Error(`Missing required environment variable: ${name}`);
}
return value;
}
const config = {
stripe: {
secretKey: requireEnv('STRIPE_SECRET_KEY'),
webhookSecret: requireEnv('STRIPE_WEBHOOK_SECRET'),
},
resend: {
apiKey: requireEnv('RESEND_API_KEY'),
},
openai: {
apiKey: requireEnv('OPENAI_API_KEY'),
},
};
Structured Environment Validation
import { z } from 'zod';
const EnvSchema = z.object({
// Required
STRIPE_SECRET_KEY: z.string().startsWith('sk_'),
STRIPE_WEBHOOK_SECRET: z.string().startsWith('whsec_'),
RESEND_API_KEY: z.string().startsWith('re_'),
CLERK_SECRET_KEY: z.string().startsWith('sk_'),
DATABASE_URL: z.string().startsWith('postgres://'),
// Optional with defaults
NODE_ENV: z.enum(['development', 'production', 'test']).default('development'),
LOG_LEVEL: z.enum(['debug', 'info', 'warn', 'error']).default('info'),
});
// Validate at startup — fail fast if keys are missing
export const env = EnvSchema.parse(process.env);
Rule 3: Use Secrets Managers
For production, don't rely on .env files:
| Solution | Best For | How It Works |
|---|---|---|
| Vercel Environment Variables | Vercel deployments | Encrypted, per-environment, auto-injected |
| AWS Secrets Manager | AWS infrastructure | Encrypted, rotation support, IAM-controlled |
| Google Secret Manager | GCP infrastructure | Encrypted, versioned, IAM-controlled |
| Doppler | Multi-platform | Universal secrets manager, syncs everywhere |
| 1Password CLI | Small teams | Load secrets from vault into env |
| HashiCorp Vault | Enterprise | Full secrets management, dynamic secrets |
| Infisical | Open-source alternative | Self-hostable, team sync |
# Doppler — sync secrets across environments
doppler setup
doppler run -- npm start
# All env vars injected from Doppler, never stored locally
# 1Password CLI
op run --env-file=.env.1password -- npm start
# Secrets pulled from 1Password vault at runtime
# AWS Secrets Manager
aws secretsmanager get-secret-value --secret-id prod/api-keys
Rule 4: Scope Keys Minimally
Never use admin keys when read-only will do:
// ❌ Using full-access key everywhere
const cloudflare = new Cloudflare({
apiToken: process.env.CF_API_TOKEN, // Has permission to delete zones!
});
// ✅ Scoped API tokens
const CF_TOKENS = {
dns: process.env.CF_DNS_TOKEN, // Edit DNS only
cache: process.env.CF_CACHE_TOKEN, // Purge cache only
analytics: process.env.CF_ANALYTICS_TOKEN, // Read analytics only
};
Key Scoping by Provider
| Provider | Scoping Options |
|---|---|
| Stripe | Restricted keys with specific permissions |
| Cloudflare | API tokens with per-resource permissions |
| GitHub | Fine-grained PATs with repo/scope selection |
| AWS | IAM policies with resource-level permissions |
| OpenAI | Project-level keys, usage limits |
# Stripe — create restricted key
# Dashboard → API Keys → Create restricted key
# Set permissions:
# Charges: Read
# Customers: Write
# Refunds: None
# Result: Key can create customers and read charges, but can't issue refunds
Rule 5: Rotate Keys Regularly
// Key rotation strategy
interface KeyRotation {
schedule: string;
how: string;
}
const rotationPolicies: Record<string, KeyRotation> = {
'stripe': {
schedule: 'Quarterly',
how: 'Generate new key in dashboard → update env → verify → revoke old key',
},
'database': {
schedule: 'Monthly',
how: 'Create new credentials → update connection string → verify → drop old user',
},
'jwt_secret': {
schedule: 'Annually',
how: 'Generate new secret → support both old+new during transition → remove old',
},
'webhook_secrets': {
schedule: 'Annually',
how: 'Add new secret → verify webhook signatures with both → remove old',
},
};
Zero-Downtime Key Rotation
// Support multiple valid keys during rotation
import { createHmac, timingSafeEqual } from 'node:crypto';

function verifyWebhookSignature(
  payload: string,
  signature: string,
  secrets: string[]
): boolean {
  // Try each secret — old and new
  return secrets.some(secret => {
    const expected = createHmac('sha256', secret)
      .update(payload)
      .digest('hex');
    const sigBuf = Buffer.from(signature);
    const expectedBuf = Buffer.from(expected);
    // timingSafeEqual throws if lengths differ: treat that as a mismatch
    if (sigBuf.length !== expectedBuf.length) return false;
    return timingSafeEqual(sigBuf, expectedBuf);
  });
}
// During rotation:
const WEBHOOK_SECRETS = [
process.env.WEBHOOK_SECRET_NEW!, // New key (primary)
process.env.WEBHOOK_SECRET_OLD!, // Old key (still valid during transition)
];
const isValid = verifyWebhookSignature(payload, sig, WEBHOOK_SECRETS);
Rule 6: Monitor Key Usage
// Track which keys are used and how
// Most providers offer usage dashboards
// Stripe: Dashboard → API Keys → shows last used date
// OpenAI: Dashboard → Usage → per-key breakdown
// AWS: CloudTrail logs all API key usage
// Custom monitoring (assumes an analytics client and an alert helper are defined elsewhere):
class APIKeyMonitor {
async logUsage(keyName: string, endpoint: string, status: number) {
await analytics.track('api_key_usage', {
key: keyName,
endpoint,
status,
timestamp: new Date().toISOString(),
});
// Alert on suspicious activity
if (status === 401) {
await alert(`API key ${keyName} returned 401 — may be expired or compromised`);
}
}
}
Rule 7: Client-Side Key Safety
// ONLY publishable/public keys on the client side
// ✅ Safe for client
const NEXT_PUBLIC_STRIPE_KEY = process.env.NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY;
const NEXT_PUBLIC_CLERK_KEY = process.env.NEXT_PUBLIC_CLERK_PUBLISHABLE_KEY;
const NEXT_PUBLIC_POSTHOG_KEY = process.env.NEXT_PUBLIC_POSTHOG_KEY;
// ❌ NEVER on client
// STRIPE_SECRET_KEY → server only
// OPENAI_API_KEY → server only
// DATABASE_URL → server only
// Next.js enforces this:
// NEXT_PUBLIC_ prefix → available on client
// No prefix → server only (won't be in browser bundle)
// For APIs without publishable keys, proxy through your server
// ❌ Don't do this
const response = await fetch('https://api.openai.com/v1/chat/completions', {
headers: { 'Authorization': `Bearer ${OPENAI_KEY}` }, // Key in browser!
});
// ✅ Proxy through your API route
const response = await fetch('/api/ai/chat', {
method: 'POST',
body: JSON.stringify({ message: 'Hello' }),
});
// Your API route handles the key securely
// app/api/ai/chat/route.ts
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function POST(req: Request) {
const { message } = await req.json();
const result = await openai.chat.completions.create({
model: 'gpt-4o',
messages: [{ role: 'user', content: message }],
});
return Response.json(result);
}
Emergency Response: Key Leak
⚠️ Key leaked (committed to GitHub, in logs, shared publicly):
1. IMMEDIATELY rotate the key (new key from provider dashboard)
2. Update environment variables in production
3. Deploy with new key
4. Revoke the old key
5. Check audit logs for unauthorized usage
6. If financial API (Stripe, etc.): check for fraudulent transactions
7. Post-mortem: how did it leak? Prevent recurrence
Timeline: Steps 1-4 should take < 15 minutes
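Step 5 can be partly automated as a filter over your provider's audit-log export. The AuditEvent shape below is an assumption, not any provider's actual schema; adapt it to the export format you actually have.

```typescript
// Sketch of step 5: flag audit-log entries made with the leaked key after the
// suspected leak time, from IP addresses you don't recognize. The AuditEvent
// shape is an assumption; adapt it to your provider's audit-log export.
interface AuditEvent {
  keyId: string;
  timestamp: string; // ISO 8601
  action: string;
  sourceIp: string;
}

function suspiciousEvents(
  events: AuditEvent[],
  leakedKeyId: string,
  leakedAt: Date,
  knownIps: Set<string>
): AuditEvent[] {
  return events.filter(
    (e) =>
      e.keyId === leakedKeyId &&
      new Date(e.timestamp) > leakedAt && // only activity after the leak
      !knownIps.has(e.sourceIp)           // unfamiliar IPs are the red flags
  );
}
```

Anything this flags goes straight into your post-mortem and, for financial APIs, your fraud review.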
Detecting Leaked Keys Automatically
Manual review catches some leaks but misses others. Automated secret scanning is the safety net that catches what humans miss — especially in large teams, fast-moving codebases, or when engineers are in a hurry.
GitHub Secret Scanning: GitHub's built-in secret scanning checks every push for known secret formats and alerts you (and, for many providers like Stripe, the provider itself) when it detects one. Enable it in repository Settings → Security → Secret scanning. For Stripe, GitHub notifies Stripe directly, which can auto-revoke the leaked key before you've even seen the alert. It is free and requires zero configuration for public repositories; for private repositories, it's included in GitHub Advanced Security (available on Enterprise and some Organization plans).
Gitleaks and Trufflehog: For pre-commit scanning and CI/CD pipelines, Gitleaks (gitleaks detect) and Trufflehog (trufflehog git) both scan commit history for secrets with high signal-to-noise ratio. Run Gitleaks as a pre-commit hook (gitleaks protect --staged) to catch secrets before they ever enter the repository. Run it in CI as a full history scan (gitleaks detect --source .) on PRs from external contributors. Trufflehog has a particularly strong model for detecting high-entropy strings that look like secrets even when they don't match a known pattern.
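The high-entropy heuristic mentioned above can be sketched in a few lines: compute Shannon entropy per character and flag long tokens that score high. The length cutoff and the 4.0 bits-per-character threshold here are illustrative assumptions, not Trufflehog's actual tuning.

```typescript
// Minimal sketch of the high-entropy heuristic used by scanners like
// Trufflehog: compute Shannon entropy per character and flag long tokens
// above a threshold. The cutoffs are illustrative assumptions.
function shannonEntropy(s: string): number {
  const counts = new Map<string, number>();
  for (const ch of s) counts.set(ch, (counts.get(ch) ?? 0) + 1);
  let entropy = 0;
  for (const n of counts.values()) {
    const p = n / s.length;
    entropy -= p * Math.log2(p); // sum of -p * log2(p) over characters
  }
  return entropy;
}

function looksLikeSecret(token: string): boolean {
  // Short tokens and natural-language words score low; random key material
  // tends to be long and close to uniformly distributed.
  return token.length >= 20 && shannonEntropy(token) > 4.0;
}
```

Real scanners combine this with known-prefix patterns (sk_live_, ghp_, whsec_) to cut false positives.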
GitGuardian: For teams that want a managed solution, GitGuardian monitors your GitHub, GitLab, or Bitbucket organization continuously and sends real-time alerts with remediation steps. It covers 400+ secret types including API keys from every major provider. The free tier covers public repositories; paid plans cover private repositories and add remediation workflows, developer training, and incident response features.
What to do when you find a leaked key: The order of operations is critical. First, rotate immediately — generate a new key from the provider dashboard. Second, update your production environment variables. Third, deploy the change so the new key is live. Only then remove the secret from git history (using git filter-repo --path-glob '*.env' --invert-paths or BFG Repo Cleaner). Attempting to clean history before rotating is backwards — the key is already compromised, and history cleanup without rotation is security theater.
Dynamic Secrets vs Static API Keys
Static API keys — the kind you create once, copy into an environment variable, and use for months or years — are the default but not the most secure option. Dynamic secrets are short-lived credentials that expire automatically, reducing the blast radius of any single credential leak.
AWS IAM roles over access keys: Instead of creating an IAM user with an access key and secret key, assign an IAM role to your EC2 instance, ECS task, or Lambda function. The AWS SDK automatically fetches temporary credentials from the instance metadata service, and those credentials rotate every few hours. No key to store, no key to leak. This is the strongly recommended approach for all AWS workloads. If you're still using long-lived IAM access keys in 2026, migrating to IAM roles is the highest-ROI security improvement you can make.
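The provider-chain idea behind this can be sketched as pure logic: try each credential source in order and take the first one that answers. This is an illustration of the concept, not the AWS SDK's actual implementation; the provider names are hypothetical.

```typescript
// Sketch of a credential-provider chain like the one cloud SDKs use
// internally: env vars first, then the instance role, etc. The provider
// list and Credentials shape are illustrative, not the AWS SDK's API.
type Credentials = {
  accessKeyId: string;
  secretAccessKey: string;
  sessionToken?: string; // present for temporary (role-based) credentials
};
type Provider = () => Promise<Credentials | null>;

async function resolveCredentials(chain: Provider[]): Promise<Credentials> {
  for (const provider of chain) {
    const creds = await provider();
    if (creds) return creds; // first source that answers wins
  }
  throw new Error('No credential source available');
}
```

In practice you never write this yourself: constructing an AWS SDK client with no explicit credentials triggers the SDK's own chain, which is exactly why role-based workloads need no stored keys.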
HashiCorp Vault dynamic secrets: For non-AWS services, Vault's dynamic secret engines generate unique, time-limited credentials for databases, cloud providers, and other systems. A developer checks out database credentials that expire in 1 hour; if they leave their laptop at a coffee shop, the credentials are already useless. Vault also provides audit logging of every credential issuance. For teams managing many secrets across multiple environments, Vault provides a unified secrets plane.
Service accounts with short-lived tokens: Providers like Google Cloud, Anthropic, and some newer API platforms support service account patterns where you authenticate your server's identity and exchange it for a short-lived API token. Rather than storing a permanent API key, your application authenticates via a certificate or service account file and receives a token that expires in an hour. The credential-rotation problem becomes the platform's responsibility.
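The caching pattern behind these short-lived tokens can be sketched as follows. Here fetchToken stands in for whatever identity exchange your platform uses (service-account file, workload identity) and is an assumption, not any specific provider's API.

```typescript
// Sketch of a short-lived token cache: reuse the token until it nears
// expiry, then fetch a fresh one. fetchToken is an assumed stand-in for
// your platform's token-exchange call.
interface Token {
  value: string;
  expiresAt: number; // epoch millis
}

function makeTokenCache(
  fetchToken: () => Promise<Token>,
  skewMs = 60_000,                 // refresh this long before actual expiry
  now: () => number = Date.now     // injectable clock, useful for testing
) {
  let cached: Token | null = null;
  return async function getToken(): Promise<string> {
    if (!cached || now() >= cached.expiresAt - skewMs) {
      cached = await fetchToken(); // expired or near expiry: fetch a fresh one
    }
    return cached.value;
  };
}
```

The skew matters: refreshing slightly early avoids sending a token that expires mid-request.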
Team and CI/CD Key Management
Individual developer access and CI/CD access have different security requirements that are often conflated. Managing them separately reduces risk.
Never share API keys between developers and CI: Your development Stripe key should be different from the CI/CD Stripe key, which should be different from the production Stripe key. Separate keys mean that when a developer's machine is compromised, or a CI secret is exposed in a log file, the blast radius is contained to that environment. Most providers support multiple API keys; create one per environment and per actor type (human vs CI).
Separate per-developer vs shared team secrets: For local development, each developer should have their own test API keys registered in their name. This creates an audit trail: if someone's test key starts making unexpected API calls, you know which developer to contact. It also prevents the situation where one developer accidentally exhausts a shared test quota. Tools like Doppler support developer-level secret overrides where each developer has their own values for the same secret names.
CI/CD secrets hygiene: Store CI secrets in your CI platform's encrypted secrets store (GitHub Actions Secrets, GitLab CI Variables, or Doppler CI integration). Never print secret values in CI logs — use echo "::add-mask::$VALUE" in GitHub Actions to mask a value from all subsequent log output. Rotate CI secrets more frequently than production secrets (quarterly vs annually) since CI logs are often more accessible to the broader team. Audit which workflows have access to which secrets and revoke access for workflows that no longer need it.
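As defense in depth, you can also scrub known secret values from anything your scripts print themselves. A minimal sketch, which complements rather than replaces the platform's native masking:

```typescript
// Sketch: defense-in-depth masking for CI logs. Before printing a string,
// replace every known secret value with a redaction marker. Complements,
// not replaces, native masking like GitHub Actions' ::add-mask::.
function maskSecrets(text: string, secrets: string[]): string {
  let masked = text;
  for (const secret of secrets) {
    if (secret.length === 0) continue;         // never "mask" the empty string
    masked = masked.split(secret).join('***'); // split/join avoids regex-escaping issues
  }
  return masked;
}
```

Wrap your logger so every message passes through a function like this with the values of all injected secrets.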
Methodology
The git filter-repo tool (not git filter-branch, which is deprecated and slow) is the recommended method for removing committed secrets from git history; it's available via pip install git-filter-repo. The Zod-based environment validation pattern requires Zod v3.x and executes at module initialization time, which means invalid configuration fails the Node.js process at startup rather than when the first API call fails. AWS's instance metadata service for IAM role credentials is available at http://169.254.169.254/latest/meta-data/iam/security-credentials/ and is accessible only from within AWS infrastructure (not from the public internet). GitHub Secret Scanning coverage for specific providers (including Stripe, Anthropic, and OpenAI) is documented at docs.github.com/code-security/secret-scanning/supported-secret-scanning-patterns.
Common Mistakes
| Mistake | Consequence | Fix |
|---|---|---|
| Using secret keys in client code | Keys visible to anyone | Publishable keys only on client, proxy for secret APIs |
| Same key for all environments | Test actions affect production | Separate keys per environment |
| Admin/full-access keys everywhere | Blast radius if compromised | Scoped keys with minimal permissions |
| Never rotating keys | Longer exposure if leaked | Rotate quarterly minimum |
| No secret validation at startup | Runtime errors from missing keys | Validate with Zod on startup |
| Sharing keys via Slack/email | Keys in chat history forever | Use secrets manager, share via 1Password |
Find APIs with the best key management features on APIScout — key scoping, rotation support, and security best practices for every provider.