This is a series. The first blog post is here, the second is here, and this is the third.
For the rest of this series, I am going to follow a similar format for each post/behavior. I will name the behavior, then various biases and heuristics that I believe apply, and then give some examples that may or may not feel familiar. Next, I will cover why the behavior seems reasonable at the time, and how it’s causing a security problem. I will follow up with suggested solutions, and I might indulge a little in that section, but hopefully you don’t mind. As always, feel free to send feedback!
The behavior: Vibe Coding
AI (artificial intelligence)-assisted, fast, contextless coding without verification.
What this looks like in the real world
- Accepting and committing AI-generated code because it compiles and looks clean
- Skipping review because “the AI probably got it right”
- Letting AI write auth, parsing, validation logic, or other complex security controls
- Reviewing outputs quickly instead of reasoning through every part
This often shows up when we are under time pressure or when we feel behind.
Behavioral biases at play
- Automation bias: We trust suggestions from automated systems, especially when they appear confident. AI is so confident.
- Fluency bias: Clean, readable code feels more ‘correct’ than it actually is. It just looks good.
- Cognitive offloading: We delegate thinking to tools when they seem reliable. Some people might call this laziness, but I don’t think that’s fair. We work in tech to make things easier. We’re trained to always seek out the easiest way. It’s literally our job.
These biases are both common and normal. They exist to conserve mental energy. They aren’t bad; usually they serve us well. But not in this case.
Why this behavior makes sense in the moment
- AI tools are right a lot of the time.
- The code they produce looks professional and complete. Plus, it compiles!
- We are usually rewarded for speed rather than quality (which means less security in this case)
- Reviewing AI code might feel redundant for some people
- It usually takes a long time before small shortcuts get caught (such as the annual pentest)
This seems like rational behavior for a high-pressure situation. You might do this. I might do this too.
The security risk
As a person who creates training and reviews a lot of code as part of that process, let me tell you the stuff I’ve seen the AI get wrong…
- Missing authorization or incorrect access checks
- Incomplete or poor quality input validation
- Assumptions about trust boundaries that are totally wrong (implied trust)
- Error handling that leaks sensitive information, or is missing altogether
- Missing security controls in places where the AI should know they belong. If you don’t ask for a control explicitly, there’s a good chance it won’t be there.
The biggest risk is the context being wrong, which can cause a cascade of issues. AI does not know your system unless you literally give it a copy. And sometimes that still isn’t enough.
Let’s call this context collapse. The AI generates plausible code without any understanding of your system’s history, trust boundaries, or constraints.
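To make context collapse concrete, here is an illustrative sketch (the endpoint, data, and framework choice are all invented for this example, not taken from any real codebase). It compiles, it reads cleanly, and it quietly skips the authorization check, because the AI had no way of knowing that invoices belong to specific users.

```python
# Plausible-looking AI-generated endpoint (illustrative only; names are invented).
# It compiles and looks clean, but it is missing the authorization check:
# any caller can fetch any user's invoice.

from flask import Flask, jsonify, abort

app = Flask(__name__)

# Stand-in for a real data layer.
INVOICES = {42: {"owner_id": 7, "amount": 120.00}}

@app.route("/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # Missing: verify that the current user owns this invoice (or has a role
    # that permits access). The AI cannot know that invoices are per-user
    # unless you tell it about that trust boundary.
    return jsonify(invoice)
```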
Solutions
Although training developers on how to use AI more safely and how to review code is a great foundation (which is what I do for a living, in case you want to hire someone for that), we need more than just training. If we expect developers to rely on willpower to resist taking shortcuts, we are likely to end up disappointed. Let’s look at some ideas for behavioral and system-level fixes.
AI System Setup
Let’s start by setting up whatever approved AI your developers have access to so that it is secure by default. Connect a RAG (Retrieval-Augmented Generation) server with secure code examples, or anything else you can give it so that it has better code to reference. I realize a lot of people don’t have something to work with for this yet, but I swear I will get to this at some point!
Next, let’s set up a list of prompts that the AI should apply every single time (add them to its memory) so that it auto-reviews the code it generates and cleans it up. I suggest you turn your secure coding guideline into prompts. If you don’t have a guideline, you can use mine.
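Here is a minimal sketch of what those standing prompts might look like. The `generate` callable is a hypothetical stand-in for whatever assistant or API you actually use, and the prompts themselves are just examples; the real list should come from your own guideline.

```python
# A minimal sketch of 'standing' security-review prompts. The generate()
# callable is hypothetical: swap in whatever AI assistant or API you use.
# Adapt the prompt list from your own secure coding guideline.

SECURITY_REVIEW_PROMPTS = [
    "Review the code you just produced for missing authorization checks.",
    "Confirm all user-supplied input is validated against an allow-list.",
    "Ensure errors are logged server-side and never returned to the caller verbatim.",
    "List any assumptions you made about trust boundaries.",
]

def generate_with_review(task_prompt: str, generate) -> str:
    """Ask for code, then force the model to re-review its own output."""
    code = generate(task_prompt)
    for prompt in SECURITY_REVIEW_PROMPTS:
        code = generate(f"{prompt}\n\nHere is the code:\n{code}")
    return code
```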
Secure Defaults
If you can find a technical way (each IDE – integrated development environment – and AI assistant is different) to prompt the user to review risky code before accepting suggestions, that would be a great nudge (a well-known type of behavioral economics intervention). For example: “This line modifies auth logic. Review carefully.”
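If your IDE or assistant doesn’t expose a hook like that, a rougher version of the same nudge can live in version control instead. Below is a minimal sketch of a git pre-commit hook that flags commits touching auth-related code; the keyword list is an assumption you would tune to your own codebase.

```python
#!/usr/bin/env python3
# Minimal pre-commit nudge (sketch). Save as .git/hooks/pre-commit and make it
# executable. The keyword list is an assumption; tune it to your codebase.

import subprocess
import sys

RISKY_KEYWORDS = ("auth", "token", "password", "permission", "session")

def staged_diff() -> str:
    """Return the staged changes as text."""
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    diff = staged_diff().lower()
    hits = [kw for kw in RISKY_KEYWORDS if kw in diff]
    if hits:
        print(f"This commit touches security-relevant code ({', '.join(hits)}).")
        print("Review it line by line, then commit again,")
        print("or use --no-verify if you have already reviewed it.")
        return 1  # non-zero exit blocks the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```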
If you can add a checklist for code review as part of your pull request process, that would also be helpful. If it can also require an additional reviewer when complex security controls are changed or added, that would be a nice point of friction to make sure those changes get extra attention.
Let’s Talk Friction
If we add a pause or some other sort of ‘friction’ to make someone think a bit more while making important decisions, we get better results. It’s like adding a small barrier to entry: not huge, but enough to make someone stop and think. For friction, what about requiring a short, written explanation of what the AI-generated code does before we merge it? If we can’t explain it, perhaps we shouldn’t commit it. I’d love to hear other ideas for friction or important places to pause.
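If you want to turn that explanation requirement into a gate rather than an honor system, here is one hedged sketch of a CI check. It assumes your pipeline can pass the pull request description to the script through an environment variable; the variable name, the required heading, and the word-count threshold are all made up for the example.

```python
# Sketch of a CI gate enforcing the 'explain it before you merge it' rule.
# Assumes the pipeline exposes the PR description via an environment variable
# (the variable name and the required heading below are illustrative).

import os
import sys

REQUIRED_HEADING = "## What this AI-generated code does"
MIN_WORDS = 30  # arbitrary threshold; pick something your team can live with

def main() -> int:
    body = os.environ.get("PR_DESCRIPTION", "")
    if REQUIRED_HEADING not in body:
        print(f"Missing section: {REQUIRED_HEADING!r}. Explain the change before merging.")
        return 1
    explanation = body.split(REQUIRED_HEADING, 1)[1]
    if len(explanation.split()) < MIN_WORDS:
        print("The explanation is too short to show the code was actually understood.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```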
Conclusion
Vibe coding is not ‘bad’ per se, but handing over all of our decision-making to powerful tools we do not understand certainly is. Let’s design systems that help us avoid falling into this obvious trap.
