Vibecoding Is Fun — Until Someone Reads Your Entire Database

I've been watching a pattern emerge over the last year or so, and it's one of those things that seems fine on the surface until it really, really isn't.

Someone has an idea. They've never written code before, or maybe they wrote a bit of HTML in school. They open up ChatGPT or Claude, describe what they want, and start copy-pasting. Within a few hours, they have a working app. Vercel takes care of the deployment. Firebase is the database. The whole thing is live. It feels incredible — and honestly, it kind of is.

But then someone like me comes along and reads their entire database in about four minutes.

This isn't a hypothetical. I've seen it happen. I've seen friends build internal tools with sensitive data, protected by a password that exists nowhere on the server. I've seen small businesses put customer orders behind an authentication system that I could bypass without touching the backend at all.

This is the dark side of vibecoding — not the code quality, not the maintainability — just the raw, straightforward danger of building things you don't fully understand.


The Setup That Keeps Appearing

Here's the stack I see most often, and I'm not picking on it because it's bad — it's actually a great stack for someone who knows what they're doing:

  • A Nuxt or React frontend, deployed to Vercel
  • Firebase / Firestore as the database
  • Some kind of "protected" area behind a password

The AI generates the whole thing. The user tests it, it works, they ship it. The problem is that AI is really good at making things that appear to work without making things that are actually secure.


Attack #1: The Fake Password

Let's say the app has a login page. You enter a password, and if it's right, you see the dashboard. Sounds reasonable.

Here's what the AI probably generated:

// The AI wrote this. It looks fine at a glance.
function checkPassword(input) {
  if (input === 'supersecret123') {
    localStorage.setItem('authenticated', 'true');
    router.push('/dashboard');
  }
}

And on the dashboard page:

onMounted(() => {
  if (localStorage.getItem('authenticated') !== 'true') {
    router.push('/login');
  }
});

The password check happens entirely in the browser. The "authentication" is just a value in localStorage. There's no server involved at all.

To bypass this, open the browser console and type:

localStorage.setItem('authenticated', 'true');

Then navigate to /dashboard. You're in. No password needed. The entire "security" of this system lives in a piece of text that any user can write themselves in ten seconds.

This isn't a subtle vulnerability. It's not even really a vulnerability — it's just... no security. The appearance of security without any of the actual protection.
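The whole "auth" flow above can be reduced to a few lines you can run in Node to see the point. This is a simulation: the fakeStorage object stands in for the browser's localStorage, and isAuthenticated is the dashboard's route-guard logic boiled down to one function.

```javascript
// A stand-in for the browser's localStorage (plain key/value store).
const fakeStorage = {
  store: {},
  setItem(key, value) { this.store[key] = value; },
  getItem(key) { return this.store[key] ?? null; },
};

// The dashboard's "route guard", reduced to one function.
function isAuthenticated(storage) {
  return storage.getItem('authenticated') === 'true';
}

console.log(isAuthenticated(fakeStorage)); // false — we never entered a password
fakeStorage.setItem('authenticated', 'true'); // the one line an attacker types in devtools
console.log(isAuthenticated(fakeStorage)); // true — full access, no password involved
```

Nothing in that flow ever consults a server, which is the entire problem: the gate and the key are both in the visitor's hands.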


Attack #2: The Firebase Config Is Right There

Here's where it gets more interesting. Once you're looking at the source of one of these apps, you'll find something like this — usually in a firebase.js or firebaseConfig.js file, sometimes just inline in the main entry point:

const firebaseConfig = {
  apiKey: 'AIzaSyBxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
  authDomain: 'my-cool-app.firebaseapp.com',
  projectId: 'my-cool-app',
  storageBucket: 'my-cool-app.appspot.com',
  messagingSenderId: '123456789012',
  appId: '1:123456789012:web:abcdef1234567890',
};

This is publicly visible. It's in the JavaScript bundle that gets sent to every user's browser. This is actually fine in principle — Firebase is designed to be used this way, with the config public. The security is supposed to come from Firebase Security Rules, which control who can read and write what.

But here's the thing: the AI didn't write those rules. Or if it did, it wrote the most permissive ones possible to make things "just work":

// Firestore rules — the AI's default
rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read, write: if true;
    }
  }
}

allow read, write: if true means: anyone, anywhere, with no authentication, can read every document and write anything they want.

Now combine this with the public config. I can take that firebaseConfig block, paste it into any JavaScript file, initialize the Firebase SDK, and then:

import { initializeApp } from 'firebase/app';
import { getFirestore, collection, getDocs } from 'firebase/firestore';

const app = initializeApp({
  // config from their source code
  apiKey: '...',
  projectId: 'my-cool-app',
  // ...
});

const db = getFirestore(app);

// Read everything
const snapshot = await getDocs(collection(db, 'orders'));
snapshot.forEach((doc) => console.log(doc.data()));

That's it. I now have every order, every customer record, every piece of data in their Firestore database. And I can write to it too. Delete documents. Insert fake records. The whole thing is wide open.


Why This Happens

It's worth being clear that this isn't the user's fault, exactly. And it's not really the AI's fault either — AI tools generate code that works for the stated goal. "Build me a login page" produces a login page. "Connect this to Firebase" produces Firebase integration. Neither of those prompts has "make it secure" in it.

The fundamental issue is that security isn't visible. A login page that uses localStorage looks exactly the same as a login page backed by a real auth system. Open Firestore rules don't announce themselves. Everything appears to work correctly because from the user's perspective, it does work — until it doesn't.

Engineers know to think about the threat model. They know that client-side code is always visible to the user. They know that a database with no access rules is effectively public. These aren't advanced concepts, but they're invisible to someone who has never thought about them before.

Vibecoding lowers the barrier to shipping dramatically, which is genuinely great. But the barrier it lowers the most is the one that was keeping a lot of bad decisions off the internet.


How to Actually Fix This

If you've built something like this, here's how to get it to a reasonable baseline. None of this requires becoming a security expert.

1. Use Firebase Authentication for real

Stop storing "authenticated" in localStorage. Firebase has a proper auth system built in, and it's not complicated:

import { getAuth, signInWithEmailAndPassword } from 'firebase/auth';

const auth = getAuth();

async function login(email, password) {
  try {
    await signInWithEmailAndPassword(auth, email, password);
    // Firebase manages the session — you don't store anything in localStorage
    router.push('/dashboard');
  } catch (error) {
    console.error('Login failed:', error);
  }
}

Firebase handles the session token, stores it securely, and automatically refreshes it. Your route guards should check Firebase's actual auth state, not a localStorage value:

import { getAuth, onAuthStateChanged } from 'firebase/auth';

onMounted(() => {
  const auth = getAuth();
  onAuthStateChanged(auth, (user) => {
    if (!user) {
      router.push('/login');
    }
  });
});

Now the server is involved in authentication. A user can't just set a variable and get in.

2. Lock down your Firestore rules

This is the most important one. Go to the Firebase console, find Firestore Security Rules, and replace the allow read, write: if true with something that actually checks whether the user is authenticated:

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    // Only authenticated users can read their own data
    match /users/{userId} {
      allow read, write: if request.auth != null && request.auth.uid == userId;
    }

    // Only authenticated users can read orders, only admins can write
    match /orders/{orderId} {
      allow read: if request.auth != null;
      allow write: if request.auth != null && request.auth.token.admin == true;
    }

    // Deny everything else by default
    match /{document=**} {
      allow read, write: if false;
    }
  }
}

The key idea: request.auth != null checks that there's a real, Firebase-authenticated user making the request. request.auth.uid == userId ensures they can only touch their own data. These rules are enforced on the server side — no amount of browser manipulation gets around them.

3. Restrict your API key to your domain

Even though the Firebase apiKey is designed to be public, you can — and should — tell Google to only accept requests that use it from your specific domains. Anyone who copies your config and runs it from their own script or server will get their requests rejected before they even reach Firebase.

Go to the Google Cloud Console → APIs & Services → Credentials, find the API key linked to your Firebase project, and add an HTTP referrer restriction:

https://my-cool-app.vercel.app/*
https://my-cool-app.com/*

This tells Google: only honor this key when the request comes from these domains. A script running on someone's laptop, or from a different origin, will get a 403 back instead of data.

One level up from this is Firebase App Check, which goes further. Instead of checking the request's origin header (which is easy to spoof from a server-side script), App Check issues a cryptographic attestation token — generated by reCAPTCHA Enterprise on web, or the device integrity APIs on mobile — and Firebase validates that token before processing any request. No valid token, no access:

import { initializeApp } from 'firebase/app'
import { initializeAppCheck, ReCaptchaV3Provider } from 'firebase/app-check'

const app = initializeApp(firebaseConfig)

initializeAppCheck(app, {
  provider: new ReCaptchaV3Provider('YOUR_RECAPTCHA_SITE_KEY'),
  isTokenAutoRefreshEnabled: true,
})

Once App Check is enforced in the Firebase console, your Firestore rules can require a valid attestation on top of authentication. Even if someone extracts your config, and even if your Security Rules would allow an authenticated read, they can't obtain a valid App Check token for a domain you haven't registered. The two layers together make it genuinely hard to abuse your backend from outside your app.

4. Never put real secrets in frontend code

Firebase's apiKey is fine to expose (it's just an identifier, not a password). But if you're using any other API keys — OpenAI, Stripe, anything else — they must never be in your frontend code. Put them in environment variables on the server side:

# .env (server-side only, never committed to git)
OPENAI_API_KEY=sk-...
STRIPE_SECRET_KEY=sk_live_...

If you're using Nuxt, use server routes (server/api/) to call these services and only return the data you actually need to the frontend. The key never leaves the server.
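The response-shaping half of that is worth spelling out. In a real Nuxt server route (e.g. a hypothetical server/api/summary.js) you'd read the key with useRuntimeConfig() and call the provider with fetch; the sketch below shows just the trimming step, with an illustrative OpenAI-style response, so the raw provider payload (ids, usage stats, and anything else) never reaches the browser.

```javascript
// Hypothetical helper for a Nuxt server route. Return only the fields the
// frontend actually needs — never the provider's raw response.
function pickPublicFields(providerResponse) {
  return { text: providerResponse?.choices?.[0]?.message?.content ?? '' };
}

// An OpenAI-style response carries ids and usage stats; the client gets just the text.
const raw = {
  id: 'chatcmpl-abc123',
  usage: { total_tokens: 42 },
  choices: [{ message: { role: 'assistant', content: 'Hello!' } }],
};
console.log(pickPublicFields(raw)); // { text: 'Hello!' }
```

The design point is the same as with the API key itself: decide on the server what the client is allowed to see, and send only that.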

5. Use Vercel's environment variables

If you're deploying to Vercel, use their environment variable system for anything sensitive. Variables prefixed with NUXT_PUBLIC_ in Nuxt (or NEXT_PUBLIC_ in Next.js) are intentionally public — everything else stays server-side. Understand which of your variables are which.
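In Nuxt, that public/private split maps onto runtimeConfig in your config file. A minimal sketch — the key names here are illustrative, not something your project already has:

```javascript
// nuxt.config.js — sketch; key names are illustrative
export default defineNuxtConfig({
  runtimeConfig: {
    // Server-only: overridden at runtime by NUXT_OPENAI_API_KEY
    openaiApiKey: '',
    public: {
      // Shipped to the browser: overridden by NUXT_PUBLIC_FIREBASE_PROJECT_ID
      firebaseProjectId: '',
    },
  },
});
```

Anything outside the public block is only readable in server code; if you find yourself moving a key into public to make something work, that's a sign the call belongs in a server route instead.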


A Quick Checklist

Before you ship something that handles real user data, run through this:

  • Is authentication handled by a real backend (Firebase Auth, Auth.js, Supabase Auth) — not by checking a value in localStorage?
  • Have you checked your Firestore / database access rules? Are they actually restrictive?
  • Is your Firebase API key restricted to your domain in the Google Cloud Console?
  • Is there any API key in your frontend code that shouldn't be public?
  • Can you open the browser devtools, look at the source or network tab, and find anything that would give a stranger access to things they shouldn't have?

That last one is the most useful. Just sit with your app for five minutes and ask: "What would happen if someone technically curious looked at everything the browser is sending and receiving?" If the answer makes you uncomfortable, there's probably something to fix.


The Point Isn't to Stop Vibecoding

I genuinely think it's amazing that people who aren't engineers can build and ship real products. That democratization is a net good for the world. I don't want to scare anyone into paralysis.

But there's a gap right now between "the AI helped me build this" and "I understand what I built well enough to know what could go wrong." That gap is exactly where security vulnerabilities live.

The fix isn't to stop using AI. The fix is to ask it different questions. Don't just ask "how do I add a login page." Ask "how do I add a login page that's actually secure." Don't just ask "how do I connect to Firebase." Ask "what Firebase security rules should I use so that users can only access their own data."

AI tools are quite good at answering security questions when you ask them. The problem is most people don't know they need to ask.

So ask. Your users' data will thank you.