
Encrypting PHI in Supabase: AES-256-GCM for Health Data

Database encryption at rest isn't enough. A leaked access key exposes every health record in plain text. Here's how I added real field-level encryption.

By Mike Hodgen

Want the full technical deep dive? Read the detailed version

I built a health tracking app for a family member. It stores real health information — symptoms, medications, vital signs, and AI-generated health insights. The kind of stuff people don't even share with close friends.

Early on, I made a mistake that most app developers make. I assumed the security that came with my database provider was enough. It took me about two days to realize I was wrong — and that the gap I'd left open could expose every single health record in the system.

Here's what I learned, and why it matters for anyone building apps that handle sensitive data.

The Lock on the Front Door Isn't Enough

When you store data in a modern database, the provider typically encrypts it "at rest." Think of this like a storage unit company that puts a lock on the building's front door. If someone breaks into the building after hours and tries to steal a hard drive, they can't read what's on it. That's real protection, but it's a narrow kind of protection.

Here's what that front-door lock doesn't stop:

  • Anyone with a key to the building (a leaked password, a stolen access credential) can walk in and read everything in plain English.
  • Employees of the storage company can see your stuff.
  • Backup copies of your data are completely readable.
  • If a team member on your project logs in through the dashboard, they see every health record, plain as day.

I signed the required privacy agreements with my database provider. They held up their end. But those agreements cover their responsibilities — not mine. The gap was in my own application. If someone got hold of my master access key through a leaked configuration file or a stolen laptop, they could pull up every symptom log, every medication record, every AI health assessment. All in plain text.

That's not a theoretical risk. Configuration files get accidentally published to code repositories every day. Laptops get stolen. Credentials leak through sloppy deployment processes.

What I Built Instead: A Safe Inside the Room

My solution was to add a second layer of protection. If the database security is the lock on the front door, what I added is a steel safe bolted to the floor inside the room.

Before any health information gets sent to the database, my application scrambles it using an industry-standard encryption method called AES-256-GCM. You don't need to remember that name. What matters is what it does: it turns readable health data into complete gibberish that can only be unscrambled with a specific secret key that I control.

The database never sees the real health data. Not when it's being saved. Not when it's being read. Not in backups. Not in logs. If someone dumps the entire database, all they get is meaningless scrambled text.

Even better, this particular encryption method has a built-in tamper alarm. If anyone changes even a single character of the scrambled data — whether by accident or on purpose — the system refuses to unscramble it and throws an error. No silent corruption. No partial data. A hard stop that tells me something is wrong.
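For the technically curious, here's roughly what that scramble-and-alarm step looks like. This is a minimal sketch using Node's built-in crypto module — the function names and storage format are mine for illustration, not the app's actual code:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// A 32-byte key gives AES-256. In a real app this comes from a secure
// environment setting, never from code; randomBytes here is just for the demo.
const key = randomBytes(32);

function encrypt(plaintext: string): string {
  const iv = randomBytes(12); // a unique nonce for every record
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // 16-byte "tamper alarm" tag
  // Pack iv + tag + ciphertext into one base64 string for storage.
  return Buffer.concat([iv, tag, ciphertext]).toString("base64");
}

function decrypt(stored: string): string {
  const buf = Buffer.from(stored, "base64");
  const iv = buf.subarray(0, 12);
  const tag = buf.subarray(12, 28);
  const ciphertext = buf.subarray(28);
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  // final() throws if even one byte was altered — the hard stop described above.
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

Notice that the nonce and tag travel with the ciphertext — the database stores one opaque string, and the tamper check happens automatically on every read.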

The performance cost? Each record takes a few extra milliseconds to scramble and unscramble. For a health app handling dozens or hundreds of records per session, the user never notices.

The Decisions That Make It Actually Work

Choosing the right encryption method was the easy part. The hard parts are key management and deciding what to encrypt.

The secret key is everything. If it's stolen, all the encryption is worthless. If it's lost, all the data is permanently unreadable. Both are catastrophic. So I follow strict rules: the key lives only in secure environment settings, never in the code itself and never in the database. My development environment, testing environment, and live environment each have separate keys. And I planned from day one for key rotation — the ability to swap in a new key without downtime or data loss. Most tutorials skip that part entirely. If you can't rotate your key, you don't have a real system. You have a demo.
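One common way to make rotation possible is to prefix every stored value with the version of the key that encrypted it. This is a sketch of that idea, not the app's real implementation — the environment variable names are placeholders, and the random fallback keys exist only so the example runs standalone:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// Each key version loads from its own environment setting (names are made up).
const keys: Record<string, Buffer> = {
  v1: process.env.PHI_KEY_V1 ? Buffer.from(process.env.PHI_KEY_V1, "base64") : randomBytes(32),
  v2: process.env.PHI_KEY_V2 ? Buffer.from(process.env.PHI_KEY_V2, "base64") : randomBytes(32),
};
const CURRENT = "v2"; // new writes always use the newest key

function encryptField(plaintext: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", keys[CURRENT], iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const blob = Buffer.concat([iv, cipher.getAuthTag(), ct]).toString("base64");
  return `${CURRENT}:${blob}`; // the version prefix travels with the data
}

function decryptField(stored: string): string {
  const sep = stored.indexOf(":");
  const version = stored.slice(0, sep);
  const buf = Buffer.from(stored.slice(sep + 1), "base64");
  // Old records decrypt under whichever key originally encrypted them,
  // so a background job can re-encrypt them under the new key at its own pace.
  const decipher = createDecipheriv("aes-256-gcm", keys[version], buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28));
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString("utf8");
}
```

With this shape, swapping keys means adding a new version and bumping `CURRENT` — no downtime, and no record ever becomes unreadable.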

Not everything gets encrypted. You can't search or sort scrambled data. The database can't make sense of gibberish. So I made deliberate choices. All the actual health information gets encrypted: symptom descriptions, medication names and dosages, vital signs, AI-generated assessments, anything the user typed. But basic organizational data stays readable: user IDs, timestamps, record categories.

What does that mean in practice? If someone got raw database access, they could see: "A user created a symptom record on March 15th at 9:30 AM." They could not see what the symptom was, how severe it was, or what the AI said about it. The substance of the health information stays locked in the safe.
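In code, that split can be as simple as a list of protected field names. The record shape below is illustrative — these aren't the app's actual column names:

```typescript
// Hypothetical record shape: plaintext fields support lookups and sorting,
// encrypted fields hold the actual health information.
type SymptomRecord = {
  userId: string;       // plaintext: needed to find a user's records
  createdAt: string;    // plaintext: needed for sorting by date
  category: string;     // plaintext: coarse label only ("symptom", "medication")
  description: string;  // encrypted: what the user actually wrote
  aiAssessment: string; // encrypted: what the AI said about it
};

const ENCRYPTED_FIELDS = ["description", "aiAssessment"] as const;

// Scramble only the sensitive fields before the row goes to the database;
// `enc` is whatever encryption function the app uses.
function toStoredRow(record: SymptomRecord, enc: (s: string) => string): Record<string, string> {
  const row: Record<string, string> = { ...record };
  for (const field of ENCRYPTED_FIELDS) row[field] = enc(record[field]);
  return row;
}
```

Keeping the protected-field list in one place also makes the decision auditable: anyone reviewing the code can see exactly which columns are readable and which are locked in the safe.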

Testing for Gaps (Because There Are Always Gaps)

Building encryption is half the job. Proving it works is the other half.

After every change to the system, I export the entire database and search it for any health information in plain text. A symptom description I entered. A medication name. An AI assessment phrase. If I find it anywhere — any table, any log, any backup — I have a gap. This takes five minutes and catches mistakes that even careful review misses.
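The sweep itself doesn't need to be fancy. Here's the shape of mine, sketched with made-up sentinel phrases — the real check uses values I deliberately entered into the app so I know exactly what to hunt for:

```typescript
// Sentinel phrases that should only ever exist in encrypted form.
// These are examples, not the real test entries.
const SENTINELS = ["migraine behind left eye", "lisinopril 10mg"];

// Scan a full database export for any sentinel appearing in plain text.
// Returns the list of leaked phrases; an empty array means the sweep passed.
function findPlaintextLeaks(dumpText: string): string[] {
  const haystack = dumpText.toLowerCase();
  return SENTINELS.filter((phrase) => haystack.includes(phrase.toLowerCase()));
}

// Usage against an exported dump file, e.g.:
//   const dump = require("fs").readFileSync("db_dump.sql", "utf8");
//   const leaks = findPlaintextLeaks(dump);
//   if (leaks.length) throw new Error(`Plaintext PHI found: ${leaks.join(", ")}`);
```

Run it against every export target — tables, logs, backups — because a leak in any one of them defeats the whole scheme.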

The sneaky leaks I've caught or seen in other systems: error messages that accidentally include health data in their reports, logging systems that capture information passing through the app, and browser-side caching that stores unscrambled records on the user's device. Each one is a hole that makes your encryption pointless if you don't plug it.

The health app I built uses a team of AI specialists — each handling a different aspect of health analysis. Every one of those specialists respects the encryption boundary. They unscramble what they need, process it, generate assessments, and the results get scrambled again before storage. Health data exists in readable form only for the brief moment it's being actively worked on.

People trust health apps with information they might not share with family members. The security should match that trust. A signed privacy agreement and basic database encryption is a starting point, not a finish line.

Thinking About AI for Your Business?

If this resonated, I'd like to hear what you're building. I do free 30-minute discovery calls where we look at your operations and identify where AI — and proper security — could actually move the needle.

Book a Discovery Call

