12 min · By Nicolas Ritouet

Your AI Coding Assistant Is Reading Your Secrets


TL;DR: Claude Code, Cursor, Copilot, and other AI coding tools automatically load your .env files into their context. Your API keys, database passwords, and tokens are sent to LLM providers. The fix isn't .gitignore — it's keeping secrets out of your filesystem entirely.


The Problem Nobody Talks About

In December 2024, security researchers at Knostic discovered that Claude Code automatically loads .env files without explicit user consent. The secrets end up in the AI's context window, processed by Anthropic's servers.

This isn't a bug. It's a feature. AI coding tools need context to be useful. They read your codebase, your configs, your environment — including your secrets.

The numbers are brutal:

Stat                                                          Source
39M secrets leaked on GitHub in 2024                          GitHub Security Report
65% of Forbes AI 50 companies leaked secrets on GitHub        Wiz Research
30+ vulnerabilities discovered in AI coding tools (Dec 2024)  The Hacker News

And that's just what's public. Your .env file sitting on disk is a ticking time bomb.


How AI Tools Access Your Secrets

Claude Code

Claude Code uses dotenv to automatically load environment variables at startup. From Knostic's research:

// Claude Code loads .env files automatically
// Your OPENAI_API_KEY, DATABASE_URL, STRIPE_SECRET_KEY
// are now in Claude's context

You didn't ask for it. There's no prompt. It just happens.
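
You can reproduce the exposure yourself. A minimal sketch, assuming Node.js and the dotenv package are installed (the key name is just an example):

# Any process in the project directory can do what Claude Code does at startup
npm install dotenv
node -e "require('dotenv').config(); console.log(process.env.OPENAI_API_KEY)"
# Prints the real value: the same string that lands in the model's context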

Cursor

Cursor's agent mode can bypass .cursorignore restrictions. From a Cursor forum post:

"The AI agent used cat to read files I explicitly added to .cursorignore"

The .cursorignore file only prevents the IDE from indexing. It doesn't prevent the AI agent from executing shell commands that read those files.

GitHub Copilot

Copilot indexes your workspace to provide better suggestions. That workspace includes .env files unless you explicitly configure exclusions — and even then, the behavior is inconsistent across IDE versions.

The Common Pattern

All these tools share the same fundamental flaw:

.env file on disk → AI tool reads filesystem → Secrets in context → Sent to LLM provider

The only way to break this chain is to remove the first link: no secrets on disk.


What Doesn't Work

.gitignore

.gitignore prevents commits. It doesn't prevent filesystem reads.

# .gitignore
.env
.env.local
.env.production

# Cursor/Claude Code can still read these files
# They're on disk. That's enough.
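
To see the gap for yourself, assuming a git repo with the .gitignore above:

git check-ignore -v .env   # confirms git will never commit the file
cat .env                   # ...but any local process can still read it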

.cursorignore / .aiignore

These files are advisory, not enforced. AI agents with shell access bypass them trivially:

# Agent executes this despite .cursorignore
cat .env

Encrypted .env files (dotenvx, SOPS)

Encrypted files need to be decrypted to be used. At some point, the plaintext exists:

# dotenvx decrypts to environment variables
dotenvx run -- node app.js

# The decrypted values are now in process.env
# The decryption happens at runtime, but the encrypted file is still on disk
# and some tools may still parse it

Better than plaintext .env, but the encrypted file is still on your filesystem and can be copied/exfiltrated.
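
A quick sketch of why encryption at rest doesn't shrink this attack surface much (assuming dotenvx, which stores the encrypted values in the .env file itself):

ls -la .env            # the encrypted file is an ordinary file on disk
cp .env /tmp/leaked    # any process, including an AI agent, can copy it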

"Just don't use AI tools"

Not realistic. AI coding tools boost productivity significantly for many developers. The answer isn't abstinence — it's safe usage.


What Actually Works: Zero-Disk Secrets

The only reliable way to prevent AI tools from reading your secrets is to never write them to disk.

The Principle

Secrets stored remotely → Fetched at runtime → Injected into process memory → No file on disk

This requires:

  1. Secrets stored in a remote vault (not your filesystem)
  2. Injected directly into process memory at runtime
  3. Never written to .env, never written to disk
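
Stripped to its essentials, every run-style wrapper in the next section does something like this. A sketch in plain shell; the vault URL is made up:

# Fetch the secret over HTTPS and hand it straight to the child process.
# Nothing touches disk; the variable exists only in the child's environment.
env DATABASE_URL="$(curl -fsS https://vault.example.com/v1/DATABASE_URL)" npm run dev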

Tools That Implement This

Several secrets managers support this pattern with a run command that injects secrets directly into your process:

Doppler:

doppler run -- npm run dev

Infisical:

infisical run -- npm run dev

Keyway:

keyway run -- npm run dev

1Password CLI:

op run -- npm run dev

All four follow the same principle:

  1. Authenticate with the secrets manager
  2. Fetch secrets from the remote vault
  3. Spawn your command as a child process
  4. Inject secrets into the child's environment (memory only)
  5. No .env file created, no disk write

Your AI coding tool sees your codebase, but there's no .env to read.


Comparison: Secrets Managers for Zero-Disk Secrets

Tool        Zero-disk run      Auth Method           Self-hosted     Pricing                          Best For
Doppler     doppler run        Email, SSO, SAML      No              Free 5 users, then $8/user/mo    Teams needing SSO/SAML
Infisical   infisical run      Email, SSO, SAML      Yes             Free 5 users, then $18/user/mo   Self-hosted requirements
1Password   op run             1Password account     No              $7.99/user/mo                    Teams already on 1Password
Keyway      keyway run         GitHub OAuth          No              Free public repos, $9/mo Pro     Solo devs, GitHub-native workflows
dotenvx     Encrypted on disk  Git + encryption key  Yes (it's git)  Free                             Git purists, no external dependency
SOPS        Encrypted on disk  KMS/PGP               Yes             Free                             Infrastructure teams, GitOps

Key Differences

Doppler & Infisical: Enterprise-focused, feature-rich (audit logs, rotation, RBAC). Higher price point, more setup.

1Password: Great if your team already uses it for passwords. Requires 1Password subscription.

Keyway: GitHub-native auth (no new accounts). Simple for solo devs and small teams. Fewer features than the enterprise tools.

dotenvx & SOPS: Encrypted files in git. No external service dependency, but the file is still on disk (encrypted).
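
Worth noting: SOPS can also inject decrypted values straight into a child process without writing plaintext back to disk. A sketch, assuming a key is already configured in .sops.yaml and the file name is illustrative:

sops --encrypt --in-place secrets.env     # encrypt once; plaintext never returns to disk
sops exec-env secrets.env 'npm run dev'   # decrypts into the child's environment only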


Code Examples

Before: Vulnerable Setup

# .env file on disk
DATABASE_URL=postgres://user:password@host:5432/db
OPENAI_API_KEY=sk-1234567890abcdef
STRIPE_SECRET_KEY=sk_live_xxxxx

# Start your app
npm run dev

# Claude Code/Cursor can read .env
# Your secrets are in the AI's context

After: Zero-Disk Setup

Pick your tool:

# Doppler
doppler run -- npm run dev

# Infisical
infisical run -- npm run dev

# Keyway
keyway run -- npm run dev

# 1Password
op run -- npm run dev

No .env file on disk. Secrets exist only in the running process's memory.

Verify It Works

# Check: no .env file
ls -la .env*
# ls: cannot access '.env*': No such file or directory

# Check: secrets are available in your app
keyway run -- node -e "console.log(process.env.DATABASE_URL ? 'Found' : 'Missing')"
# Found

# Check: AI tool sees nothing
# Open Cursor, ask "what's in my .env file?"
# Response: "I don't see a .env file in your project"
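
One more check worth running: the secret should exist only in the child process, never in your interactive shell:

# Check: the parent shell never sees the secret
printenv DATABASE_URL || echo "not in parent shell"
# not in parent shell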

How to Choose

If you...                                   Consider
Need SSO/SAML/enterprise compliance         Doppler, Infisical
Must self-host (air-gapped, regulatory)     Infisical, dotenvx, SOPS
Already use 1Password for team passwords    1Password CLI
Want GitHub-native auth, minimal setup      Keyway
Prefer git-native, no external service      dotenvx, SOPS
Solo dev, want simplest option              Keyway or dotenvx
Large team (20+ devs)                       Doppler, Infisical

When to Keep Using .env Files

Be honest: .env files are fine when:

  • Solo hobby project with no sensitive data
  • Air-gapped environment with no network access
  • You explicitly need file-based config (some legacy tools require it)
  • You're prototyping and will fix it before prod

The goal isn't purity. It's reducing risk where it matters.


Migration Example (Keyway)

Here's a concrete example using Keyway. The workflow is similar for Doppler/Infisical.

Initial Setup

# Install
brew install keywaysh/tap/keyway

# Initialize and authenticate via GitHub
keyway init

# Import existing secrets from .env
keyway push
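
Once the import succeeds, delete the plaintext file. This is the step that actually makes the setup zero-disk:

# Verify the secrets landed in the vault, then remove the local file
keyway run -- printenv DATABASE_URL
rm .env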

Daily Usage

# Run your app with secrets injected (always fetches latest)
keyway run -- npm run dev

# Deploy to Vercel/Netlify/Railway
keyway sync vercel

Team Onboarding

# New developer joins
git clone your-repo
npm install
keyway login    # GitHub OAuth - if they have repo access, they get secrets
keyway run -- npm run dev

# No "ask John for the .env file"
# No "check the Slack channel for DATABASE_URL"

CI/CD Integration

All these tools support CI/CD. Example with GitHub Actions:

Doppler:

- uses: dopplerhq/cli-action@v3
- run: doppler run -- npm run build

Infisical:

- uses: infisical/secrets-action@v1
- run: infisical run -- npm run build

Keyway:

- uses: keywaysh/action@v1
- run: keyway run -- npm run build
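
These snippets are individual steps, not complete workflows. Here is one fleshed out for Doppler; the DOPPLER_TOKEN secret follows Doppler's documented service-token convention, but check your own tool's docs:

name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: dopplerhq/cli-action@v3
      - run: doppler run -- npm run build
        env:
          DOPPLER_TOKEN: ${{ secrets.DOPPLER_TOKEN }}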

Or sync secrets directly to your deployment platform and skip the run command entirely.


FAQ

"Doesn't this just move the trust to a third party?"

Yes. You're trading "secrets on disk readable by anyone with filesystem access" for "secrets in a managed vault with access controls."

All these tools use encryption at rest and in transit. The threat model changes from "any process on my machine can read my secrets" to "only authenticated users can fetch secrets via API."

Pick a provider you trust, or self-host (Infisical, SOPS, dotenvx).

"Can't AI tools read environment variables from the running process?"

In theory, yes: on Linux, any process can read the environment of another process owned by the same user via /proc/<pid>/environ, so an AI agent with shell access could target your running dev server. In practice, current AI coding tools don't do this; they read files from disk. That's the attack vector we're closing.
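
For the curious, the theoretical attack is a one-liner on Linux (a sketch; the pgrep pattern depends on how your dev server was started):

# Read a same-user process's environment (Linux only; entries are NUL-separated)
tr '\0' '\n' < /proc/"$(pgrep -f 'npm run dev' | head -1)"/environ | grep DATABASE_URL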

If AI tools start reading process memory, we'll need a new solution. For now, zero-disk secrets are effective.

"What about local development speed?"

The run commands add ~200-500ms startup latency (one API call to fetch secrets). For most development workflows, this is imperceptible. For hot-reload, secrets are cached in the parent process.
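
You can measure the overhead on your own machine; every wrapper behaves the same way:

# Compare the cold-start cost of the wrapper against running the command bare
time keyway run -- true
time true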

"I'm on a plane with no internet"

Valid concern. Options:

  • doppler run --fallback=.env.local (encrypted local cache)
  • Keyway: run keyway pull before going offline to cache secrets locally
  • Accept that you need network for secrets (security tradeoff)

Summary

  1. AI coding tools automatically read your .env files
  2. .gitignore and .cursorignore don't prevent this — the file is on disk
  3. The fix is zero-disk secrets: never write them to your filesystem
  4. Tools like Doppler, Infisical, Keyway, and 1Password CLI implement this with a run command
  5. Choose based on your needs: enterprise features, self-hosting, pricing, auth method

Your AI assistant should help you write code, not exfiltrate your secrets.

