Why I Made Risk Management Deterministic (No AI Allowed)
At 2:47 AM, my AI wanted to double down on a losing trade. A simple rule stopped it. AI makes suggestions. Hard math enforces safety. No exceptions.
By Mike Hodgen
It was 2:47 AM on a Tuesday when my trading system almost made a very expensive mistake.
I built a smart assistant to help manage my own investment portfolio. That night, it wanted to buy more of a stock that was already losing money. The assistant's reasoning looked solid on paper — it had spotted a pattern that historically worked 87% of the time. But it was missing something critical: that stock was minutes away from falling off a cliff.
If nothing had stopped that recommendation, the system would have doubled down on a losing bet right before the loss got worse.
But something did stop it. A simple rule I'd written: if a position is down more than 3%, freeze it. No exceptions. The rule checked one number, saw the loss had crossed the line, and blocked the trade. No human needed. No drama. Just a hard boundary doing its job at 2:47 AM while I slept.
That moment confirmed something I now build into every AI system I create: AI makes suggestions. Simple rules enforce safety. No exceptions.
Why I Don't Let AI Police Itself
Here's the thing about AI: it's a prediction machine. It estimates. It guesses — really well, most of the time, but it guesses. That's exactly what makes it valuable for spotting patterns, writing content, or finding opportunities humans would miss.
But guessing is not what you want from a safety system. You want certainty.
Think of it like a restaurant kitchen. Your head chef is creative, experimental, brilliant at combining flavors nobody else would try. That's your AI. But you don't let the head chef also decide whether the kitchen passes its health inspection. You have a checklist. Is the fridge at the right temperature? Yes or no. Are the cutting boards sanitized? Yes or no. No creativity. No interpretation. Just hard rules.
That's what I build between the AI and any action that involves real money, real consequences, or real risk. Simple math. If this number is bigger than that number, stop. The AI never gets to argue.
Here's the math that should worry anyone relying on AI for safety: a system that's right 98% of the time sounds great. But run a thousand decisions through it, and that 2% failure rate gives you 20 moments where nobody's watching the door. You only need one bad one to be devastating.
In my own ecommerce business, I use AI that checks its own work — flagging product descriptions that aren't good enough or images that don't match our brand. But the standards themselves are hard rules. The AI evaluates quality. It never gets to lower the bar on what "good enough" means.
How This Works in Practice
My trading system has three levels of protection, like emergency brakes that get progressively more aggressive.
Level 1: Single position down 3%. That one investment gets frozen. The AI can scream about a once-in-a-lifetime opportunity on that stock. Doesn't matter. It's locked until I personally review it.
Level 2: Total portfolio down 7%. Everything freezes. No new money goes anywhere. The AI keeps running, keeps logging what it wants to do. None of it happens.
Level 3: Portfolio down 10%. Full stop. Everything lines up for an orderly exit. The system sends me a notification and waits for me to personally restart it.
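The three levels above fit in a few lines of code. Here's a minimal sketch in Python; the function name and return labels are illustrative, not pulled from my actual system, but the thresholds are the real ones:

```python
# Minimal sketch of the three-level circuit breaker.
# Percentages are returns: -3.5 means the position is down 3.5%.

def circuit_breaker(position_pnl_pct: float, portfolio_pnl_pct: float) -> str:
    """Return the most severe action triggered by the hard loss limits."""
    if portfolio_pnl_pct <= -10.0:   # Level 3: full stop, human restart required
        return "FULL_STOP"
    if portfolio_pnl_pct <= -7.0:    # Level 2: freeze everything, no new money
        return "FREEZE_ALL"
    if position_pnl_pct <= -3.0:     # Level 1: freeze this one position
        return "FREEZE_POSITION"
    return "ALLOW"
```

Notice what's not in there: no model scores, no pattern confidence, no "but this opportunity is special." Two numbers in, one verdict out.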
I've only hit Level 3 once in testing. Never in real operation. That's the point — it exists so I never have to wonder what happens if things go truly sideways.
The setup is simple: the AI generates recommendations, those recommendations pass through the safety rules, and only then do they reach the part that actually executes trades. The safety rules sit in the middle with absolute veto power.
The funny thing is, the AI side of my trading system involves thousands of lines of sophisticated code. The safety layer? Maybe 200 lines. Mostly just "is this number bigger than that number?" That simplicity is the whole point. Every extra piece of complexity in a safety system is another thing that could break.
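That middle layer can be sketched in a dozen lines. This is a hypothetical illustration of the shape, not my production code; the names are made up, and the hard rules live entirely inside the `is_blocked` predicate:

```python
# The AI proposes, the gate vetoes, and only approved orders reach execution.

def run_cycle(ai_recommendations, is_blocked, execute):
    """Pass every AI recommendation through the hard-rule gate.

    ai_recommendations: proposed orders from the AI (any shape).
    is_blocked: hard-rule predicate; True means absolute veto.
    execute: the function that actually places an order.
    """
    executed, vetoed = [], []
    for order in ai_recommendations:
        if is_blocked(order):
            vetoed.append(order)   # logged for review, never argued with
        else:
            execute(order)
            executed.append(order)
    return executed, vetoed
```

The AI never calls `execute` directly. That's the whole architecture in one sentence.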
This Applies Way Beyond Trading
Most people reading this aren't building trading bots. That's fine. This principle works everywhere.
In healthcare, AI can recommend treatments and spot anomalies in patient data. But a dangerous lab result is dangerous every time. A potassium level of 6.5 is an emergency. The AI doesn't get to decide "this one's probably fine." Hard rule. Automatic alert.
In manufacturing, AI can predict when machines need maintenance and optimize production schedules. But a temperature reading that exceeds the equipment's safety rating triggers a shutdown. The AI doesn't get to factor in that stopping the line will cost $40,000. Physics doesn't negotiate.
In finance, AI can score credit risk and detect fraud. But regulatory limits on how much money can be concentrated in one area are absolute. The SEC doesn't care that your AI thought 25.3% was close enough to the 25% cap.
Same pattern every time: AI makes the recommendation, hard rules enforce the boundary.
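The pattern is the same few lines in every domain. Here's an illustrative sketch (hypothetical names, and the limit is whatever your domain says it is, not something an engineer picks): the rule accepts the AI's risk score as input and then deliberately ignores it.

```python
def must_alert(value: float, hard_limit: float, ai_risk_score: float) -> bool:
    """Hard rule: fire when the value crosses the limit.

    The AI's score is accepted and deliberately discarded, so the
    boundary stays deterministic no matter what the model thinks.
    """
    del ai_risk_score  # the model never gets a vote on the threshold
    return value >= hard_limit
```

Whether `value` is a potassium level, a bearing temperature, or a concentration percentage, the model's opinion never touches the comparison.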
Three Questions to Ask About Any AI System
When I'm deciding what the AI controls versus what controls the AI, I use three tests:
Can you undo it? If the AI writes a bad product description, you just rewrite it. Low stakes. But if money gets transferred, a trade gets executed, or a patient gets the wrong dosage — you can't take that back. Anything irreversible needs hard safety rules.
Could you explain it to a regulator? "The AI thought so" is not a defense. "The system enforces a hard limit of X, and the value exceeded X" is. As regulatory scrutiny of AI increases, this matters more every quarter.
What happens at 3 AM with nobody watching? If the worst case is a typo in an email, let the AI handle it. If the worst case is losing $200,000 or a patient not getting flagged, that's a hard rule. No question.
When in doubt, make it a hard rule. You can always loosen things later once you have proof it's safe.
The Real Skill Is Knowing Where AI Stops
The real work in building AI systems isn't making the AI smarter. It's knowing where the AI ends and the rules begin.
Every system I build — whether it's my trading assistant, my DTC fashion brand's product pipeline that takes a concept to a live listing in 20 minutes, or a system I'm building for a client — has this separation baked into the foundation. The AI does what AI is good at: finding patterns, creating content, making predictions. The safety layer does what simple rules are good at: being exactly right, every time, with zero creativity and zero ambition.
This is the architecture decision that separates AI systems that run reliably for months from ones that blow up in week two. It's not glamorous. But it's the part that lets me sleep while my systems run at 2:47 AM.
If you're thinking about putting AI into your business, the first question isn't "What AI tool should we use?" The first question is: what does the AI control, and what controls the AI?
Want to Explore What AI Could Do for Your Business?
I do a free 30-minute strategy call. No pitch deck, no sales team sitting in the background — just a real conversation about your operations and where AI fits.