IDOR Protection: The Vulnerability AI Code Creates by Default
I found 23 security holes across 12 projects in one weekend. Change a number in the URL, see someone else's data. AI writes this flaw by default.
By Mike Hodgen
I found 23 security holes across 12 projects in one weekend. Not theoretical risks from a textbook. Real, exploitable problems in live systems where any logged-in user could see another user's private data by changing a number in the web address.
That's not a hypothetical. I watched it happen on my own screen. I logged in, saw my order page had a "847" in the URL, changed it to "846," and I was looking at someone else's order. Their name, their address, their purchase history. No hacking tools required. A curious twelve-year-old could do it.
And the majority of the code that created these holes was written by AI.
The Security Flaw AI Creates Without Telling You
Here's the simplest way I can explain what happened.
Imagine a hotel where every room has a numbered door. You get a key card that proves you're a guest. But that key card opens every room, not just yours. The hotel verified who you are. It just never checked which room is actually yours.
That's exactly what was happening in my code. The AI built systems that correctly verified someone was a logged-in user. But it never added the step that checks whether that user should be seeing that specific piece of data.
This kind of flaw has a name in the security world: an insecure direct object reference, or IDOR — the flaw in this article's title. I'll just call it what it is: a missing ownership check. The system knows who you are but doesn't verify what belongs to you.
Out of the 23 problems I found, 17 had this exact pattern. The AI was smart enough to add a "prove you're logged in" step. It just skipped the "prove this data is yours" step. And when you glance at the code, it looks complete. It looks secure. It passes every test you throw at it — because when you test your own app, you're always looking at your own data.
OWASP, the nonprofit foundation that tracks web security, ranks this category of flaw (broken access control) as the number one vulnerability in modern web applications in its Top 10 list. It's not a niche concern. It's the most common security problem on the internet today.
Why AI Keeps Making This Mistake
When you ask an AI assistant to build something that pulls up an order by its ID number, it does exactly that. Faithfully. Correctly. And with a gaping security hole.
The AI writes code that works. If you ask for your own order, you get your own order. The problem is it also hands over anyone else's order if you know (or guess) the right number.
This happens because AI learns from millions of code examples — tutorials, how-to guides, open-source projects. The vast majority of those examples demonstrate how to make things work. They don't demonstrate how to make things secure, because that wasn't the point of the tutorial.
The AI isn't being dumb. It's being literal. You asked for function. It delivered function. Security is a separate concern, and one the AI almost never adds on its own.
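The pattern looks something like this. A minimal sketch with hypothetical names and an in-memory dict standing in for a real database — not the author's actual code:

```python
# The "database": two orders belonging to two different users.
ORDERS = {
    846: {"user_id": "alice", "item": "jacket"},
    847: {"user_id": "bob", "item": "boots"},
}

def get_order(order_id, current_user):
    """The shape AI assistants typically generate: current_user is already
    authenticated (the login check exists), but ownership is never verified."""
    return ORDERS.get(order_id)  # any logged-in user can fetch any order

# Bob asks for order 846 -- Alice's order -- and gets it.
leaked = get_order(846, current_user="bob")
```

Run a request for your own order and it works perfectly, which is why this passes every casual test.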
I've built 15+ AI systems. I've written over 22,000 lines of AI-assisted code. My DTC fashion brand runs on AI — we've seen a 38% increase in revenue per employee and cut 42% of our manual work. I'm not anti-AI. I'm pro-reviewing what AI produces before it goes live.
The Fix Is Simpler Than You Think
The actual code fix is absurdly small. In one case, the difference between "anyone can see your data" and "only you can see your data" was adding seven words to a single line of code. Seven words. That's the gap between secure and exposed.
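The author doesn't show the actual line, but a typical fix at roughly that scale is adding an ownership condition to the query. A runnable sketch using SQLite, with made-up table and column names:

```python
import sqlite3

# Toy database: one order each for two users.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id TEXT, item TEXT)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(846, "alice", "jacket"), (847, "bob", "boots")])

def get_order_vulnerable(order_id):
    # Exposed: returns the row no matter who is asking.
    return db.execute("SELECT * FROM orders WHERE id = ?", (order_id,)).fetchone()

def get_order_secure(order_id, current_user_id):
    # Fixed: one extra condition scopes every lookup to the requester.
    return db.execute(
        "SELECT * FROM orders WHERE id = ? AND user_id = ?",
        (order_id, current_user_id),
    ).fetchone()

leak = get_order_vulnerable(846)          # Bob can read Alice's order
blocked = get_order_secure(846, "bob")    # None: ownership condition blocks it
mine = get_order_secure(846, "alice")     # Alice still sees her own order
```

The entire difference between the two functions is the `AND user_id = ?` condition.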
But rather than fixing things one spot at a time, I use what I think of as a belt-and-suspenders approach. Two layers of protection so that if one fails, the other catches it.
Layer one: the app checks ownership before showing anything. Every time someone requests a piece of data, the system asks "does this data belong to the person asking?" before it responds. If the answer is no, the person gets a "not found" message — not even a hint that the data exists.
Layer two: the database itself enforces the rules. Even if someone deploys new code that forgets the ownership check, the database won't hand over data that doesn't belong to the requesting user. I've set this up across all 50+ database tables in my production systems. It's a safety net for human error.
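The two layers can be sketched as follows. In production, layer two is typically enforced in the database itself (for example, Postgres row-level security); here a scoped query helper stands in for that idea, and all names are hypothetical:

```python
ORDERS = {846: {"user_id": "alice", "item": "jacket"}}

class NotFound(Exception):
    """Maps to an HTTP 404 -- the caller learns nothing about whether
    the requested row exists at all."""

# Layer two: every read goes through a helper that always applies the
# ownership filter, so a handler that forgets its own check still can't leak.
def scoped_get(order_id, current_user_id):
    row = ORDERS.get(order_id)
    if row is None or row["user_id"] != current_user_id:
        return None
    return row

# Layer one: the handler checks ownership and answers "not found" otherwise.
def order_handler(order_id, current_user_id):
    row = scoped_get(order_id, current_user_id)
    if row is None:
        raise NotFound
    return row
```

If either layer is removed or broken, the other still refuses to serve data that doesn't belong to the requester.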
Each fix takes about 15 minutes per endpoint. For a project with 30 endpoints, that's under 8 hours of work. Compare that to the cost of a data breach — the fines, the lawsuits, the lost trust.
You Can Check Your Own Systems in an Afternoon
My 12-project audit took about 6 hours. A single project takes 1-2 hours. Here's the non-technical version of how I did it.
I logged in as one user and wrote down every ID number I could see in URLs and responses. Then I opened a different browser, logged in as a different user, and tried every one of those IDs. If User B could see User A's data, I had a problem.
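The same test can be scripted. A runnable sketch with a deliberately vulnerable fake API in place of a real app — the author did this by hand with two browser sessions, and `find_idor_leaks` is a hypothetical helper, not a real tool:

```python
# Fake backend: maps order IDs to their owners' data.
ORDERS = {846: {"owner": "alice"}, 847: {"owner": "bob"}}

def fake_api_fetch(order_id, session_user):
    # Vulnerable on purpose: ignores session_user entirely.
    return ORDERS.get(order_id)

def find_idor_leaks(fetch, ids_seen_as_user_a, user_b):
    """Replay every ID collected in user A's session, but authenticated
    as user B. Any ID that still returns data is a leak."""
    return [i for i in ids_seen_as_user_a if fetch(i, user_b) is not None]

# Order 846 was collected while browsing as Alice; Bob shouldn't see it.
leaks = find_idor_leaks(fake_api_fetch, [846], user_b="bob")
```

Against a patched backend, `find_idor_leaks` would return an empty list; any non-empty result is an endpoint to fix.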
Of the 23 flaws I found, 19 were exploitable with this approach in under 30 seconds each. You can do this on a Saturday morning. Given what's at stake, you should.
This is the kind of operational discipline that separates systems that scale from systems that end up in a breach notification. The code AI writes is good. The code AI writes without a human security review is a liability.
I've solved these problems across 12+ production systems. The patterns are repeatable. The process is documented. The architecture works at scale.
Thinking About AI for Your Business?
If this resonated — whether you're building with AI and not sure what's hiding in your systems, or you're scaling fast and want to make sure things are solid — I'd like to talk. I do free 30-minute discovery calls where we look at your operations and figure out where AI could actually move the needle, and where it might be creating risk you haven't seen yet.