In my previous blog post, I introduced the topic of applying behavioral economics to application security programs: using proven behavioral economic interventions to help us avoid known bad developer behaviors (including ones I know I am guilty of). In this post I am going to cover building systems that support secure developer behavior, systems that gently point developers in the right direction (secure code) more often. These ideas apply to any team of developers, even if you do not have a list of specific behaviors you would like to change. Yes, this is the post that applies to everyone!
But first, WHY do we even need to do this? Why can’t people just “do the right thing” all the time? Can’t they just try hard, have some willpower, and behave perfectly? I don’t know about you, but I am certainly not perfect all the time. I’m in a rush. I skip steps. It compiles, so I commit. We are not robots, we are human beings, and we have a lot going on. But… there’s more.
We don’t create insecure software because we don’t care about building great software. It’s because of how people behave under pressure, complexity, and (often perverse) incentives. What motivates us from moment to moment is not always clear at first blush. When we look deeper into the reasons we make various decisions, we can see that each one made sense at the time, even if later on we’re wondering why we ate that third piece of cheesecake, why we didn’t save enough money for retirement, or why we resist when someone nags us but are happy to go with the flow if it’s a nudge instead. Human beings are complex, especially when we apply pressure (such as deadlines).
I wanted to write this blog series because I have tried a lot of ‘different’ approaches to the way I do AppSec over the years. Being “the department of no” never worked all that well, and having been on the other end of that approach, I wanted to be different. I wanted to treat people how I wanted to be treated, while also ensuring the software they built was secure.
As I looked deeper into how to build security programs, especially once I accepted that I had very little authority over the software developers I would be working with, I realized a few things:
- Technical controls aren’t as powerful as culture. They are usually reactive (they happen after the fact), whereas changing the culture of where you work is proactive (it can ensure the bad thing does not happen at all). And quite frankly, any software developer worth their salt can get around most technical controls; we are like water… we just go around them.
- Many of the bad behaviors we’re going to talk about in this series are systemic. Everyone is doing them, not just one person. Meaning they scale… but in a bad way. The longer the habits are in place, the harder they can be to change.
- If we try to shove changes down people’s throats… It goes poorly. We must always be respectful and work to build trust.
Building Systems That Support Secure Developer Behavior
The first idea I want to talk about is that we need secure defaults. Secure intentions mean willpower. They mean remembering to do the thing. They mean effort. When I say ‘secure default’, I mean that a system is automatically set up with the most secure option, and that someone needs to perform one or more actions to ‘undo’ the secure setting. Most of us keep most of the default settings on the systems we use, which means many people will just leave the secure setting in place. It also means someone would need to exert effort, and intentionally make a decision, to undo that secure choice you put in place for them.
Some examples of this could be (with a small code sketch after the list):
- Intention: “Remember to use parameterized queries”
- Secure Default: An ORM that makes parameterized queries the only option
- Intention: “Follow encryption policy”
- Secure Default: Libraries that won’t let you choose weak algorithms
- Intention: “Validate all user input”
- Secure Default: A template that validates input by default and requires an explicit bypass to skip validation
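To make that last example concrete, here is a minimal sketch in Python of what “validates by default, requires an explicit bypass” could look like. The helper name, field patterns, and bypass flag are all hypothetical, not from any specific framework; the point is that the easy path is the secure path, and opting out is a visible, deliberate act.

```python
import re

# Minimal sketch of an input-validation-by-default helper (hypothetical
# names, not from any specific framework). Every field is checked against
# an allowlist pattern unless the caller explicitly opts out.

PATTERNS = {
    "username": re.compile(r"^[a-zA-Z0-9_]{3,30}$"),
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
}

def get_field(data: dict, name: str, *, skip_validation: bool = False) -> str:
    """Return a field from user-supplied data, validated by default."""
    value = str(data.get(name, ""))
    if skip_validation:
        # The insecure path exists, but it requires a deliberate,
        # greppable decision - the opposite of relying on good intentions.
        return value
    pattern = PATTERNS.get(name)
    if pattern is None or not pattern.match(value):
        raise ValueError(f"Field '{name}' failed validation")
    return value

# The default path is the secure path:
user_input = {"username": "alice_dev", "email": "alice@example.com"}
print(get_field(user_input, "username"))  # passes validation
print(get_field(user_input, "email"))     # passes validation
# get_field(user_input, "bio", skip_validation=True)  # explicit, auditable bypass
```

A nice side effect of making the bypass a named keyword argument: one grep for `skip_validation=True` shows you every place someone opted out, which pairs well with the “documented bypass” metric later in this post.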
On top of this, every single secure default you set up protects everyone who uses that system, almost every single time (unless they make a choice to turn it off). This is the opposite of a good intention, which can be forgotten. Technical safeguards don’t get tired or forget the way people do.
When I used to work at the Treasury Board Secretariat in the Canadian government, my team built a template that did all the security things we knew how to do at the time. Input validation, output encoding, a login screen, session management, etc. We were always adding to it, PenTesting it, and redeploying it. Every single new app was built with this DLL, and it was a standard for our team. It saved us time and money, AND it made us more secure. Honestly, I’m not sure why every framework doesn’t offer such things out of the box… But I digress.
How do we embed these ideas into the SDLC, Tools, and Training?
The next idea I want to talk about in this post is how to embed these concepts into the way we do things. For instance, how do we embed them directly into the SDLC, so they happen every single time? Can we make our tools apply them? Can we flip traditional training on its head, so we apply these ideas there as well? Let’s start with the SDLC.
The SDLC
The first thing we can do is try to get our development teams to choose more secure frameworks and languages. Starting off with safer and more modern technologies is a huge head start on building better systems. We can do this by being part of the options analysis phase for all new technology (or whatever your org calls “choosing new tech and approving it for use”). You can attend developer meetings and bring up the idea of switching to something more modern, or perhaps make a presentation to other teams about the merits of various language ecosystems from a security perspective. I know this might be hard, but try to channel your inner salesperson and do some persuading.
Another thing I find can add some value is threat modeling: having conversations about designs, instead of asking developers to fill out long documentation templates or questionnaires. I know it might not seem like it scales as well, but if we are trying to change behavior, we cannot automate it away. I find that when I hold a threat modeling session with a developer for the first time, their eyes and mind light up (like mine did during my first session). By discussing what could go wrong, we uncover issues together, and they learn at the same time. They also get to know the security team and (hopefully) realize we aren’t so bad, and that we are there to support them, so they come to us more often.
When gathering requirements, there should already be a list of security requirements that gets added to every new project, just like accessibility requirements. I know a serverless app might have somewhat different requirements than an API or a WebSocket, but you should have a pre-set list of security requirements per technology, ready to go for each new project. This is another example of a secure default.
Another thing you can do is create security checklists for each phase of the SDLC, such as a threat modeling, security architecture, or PR checklist. “Was this PR reviewed for the following things?” I love a good standard operating procedure (SOP)!
Offer pre-approved secure reference architectures, so developers get their designs approved faster. For instance, “we use this secret management tool”, “we use this identity tool”, “all public APIs are only accessible through this API gateway”, etc. Then they know exactly what is expected. Also, teach them about these options and document them, so they can design faster.
There’s the templating idea I mentioned earlier, but you could also create secure, shared API services that do all sorts of things: input validation, error handling, logging, etc. Then everyone does these important things the same way, every time. And it’s easy to review code and spot when something is wrong: “Hey, you’re not using that API… what’s up with that?” (There’s a small sketch of this idea below.)
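As a rough sketch of what I mean (all names hypothetical, not a real internal library): a tiny shared layer that handles errors and security logging the same way for every service, so its absence jumps out during code review.

```python
import logging

# Hypothetical sketch of a tiny shared "security service" layer: every app
# uses the same wrapper, so error handling and security logging happen the
# same way, every time - and skipping it stands out in code review.

logger = logging.getLogger("appsec")
logging.basicConfig(level=logging.INFO)

class SecurityError(Exception):
    """Raised for any request we refuse to process."""

def handle_request(action_name: str, handler, *args, **kwargs):
    """Run a handler with standard error handling and security logging."""
    try:
        result = handler(*args, **kwargs)
        logger.info("action=%s outcome=success", action_name)
        return result
    except SecurityError as exc:
        # Log the details internally, return a generic message to the user.
        logger.warning("action=%s outcome=rejected reason=%s", action_name, exc)
        return {"error": "Request could not be processed."}

def transfer_funds(amount: float):
    if amount <= 0:
        raise SecurityError("non-positive amount")
    return {"status": "ok", "amount": amount}

print(handle_request("transfer_funds", transfer_funds, 25.00))
print(handle_request("transfer_funds", transfer_funds, -5.00))
```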
I’m not sure if everyone will like my next idea but… what if we taught developers basic security testing and gave them a safe place to break stuff? I don’t mean send them all on intensive penetration testing training, but I do mean give them a sandbox area and a workshop on how to use ZAP or Burp, then let them loose. If they all know how to fuzz and ‘active scan’ (pew-pew!) their own apps, perhaps they will start writing much better code once they can see the issues with their own eyes. This normalizes security testing, reinforces that security is part of quality, and shows that we don’t let weird stuff in our apps slide.
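For the sandbox workshop, the exercise can be as small as this sketch, which uses the OWASP ZAP Python API client to spider and then actively scan a deliberately vulnerable practice app. The target URL, API key, and proxy port are placeholders for your own lab setup; only ever point this at apps you have permission to test.

```python
import time
from zapv2 import ZAPv2  # OWASP ZAP Python API client (the "zaproxy" package on PyPI)

# Sketch of a sandbox exercise: spider, then "active scan" (pew-pew!), a
# deliberately vulnerable practice app. The target, API key, and proxy
# port are placeholders - only point this at apps you are allowed to test.

TARGET = "http://localhost:3000"  # e.g., a local instance of a practice app
zap = ZAPv2(
    apikey="changeme",
    proxies={"http": "http://localhost:8080", "https": "http://localhost:8080"},
)

zap.urlopen(TARGET)                  # let ZAP see the site through its proxy
spider_id = zap.spider.scan(TARGET)  # crawl the app first
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

scan_id = zap.ascan.scan(TARGET)     # then run the active scan
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

# Let developers see the issues with their own eyes.
for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], "-", alert["alert"], "-", alert["url"])
```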
I had another thought too: what if we made security after an app is in prod more visible to developers? Maintenance is part of the SDLC, but we often don’t talk about it. And if there’s a security incident with one of our apps in prod, that’s part of MAINTENANCE (you can fight me on this if you want). But I find most security teams keep stuff like that a secret. There’s a security incident with an app, and they follow “need to know” and all that good stuff. But once the incident is over, they don’t hold a big “lessons learned” session, which means the rest of the developers don’t learn anything. Other things we could make more visible: how long it takes, on average, for teams to fix vulns (let each team see whether they are better or worse than the org average), and positive reinforcement for teams who fix bugs fast, report issues, or do other positive security culture things. I don’t mean name and shame; I mean create visibility, so people see actions and their consequences. This also helps change behavior.
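On making fix times visible: the math is simple enough to sketch. The record format below is invented for illustration; in practice you would pull this from whatever your vulnerability tracker exports.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical vuln records exported from a tracker: (team, opened, fixed).
records = [
    ("payments", date(2025, 1, 3), date(2025, 1, 10)),
    ("payments", date(2025, 2, 1), date(2025, 2, 4)),
    ("search",   date(2025, 1, 5), date(2025, 2, 20)),
    ("search",   date(2025, 3, 2), date(2025, 3, 30)),
]

days_by_team = defaultdict(list)
for team, opened, fixed in records:
    days_by_team[team].append((fixed - opened).days)

org_average = mean(d for days in days_by_team.values() for d in days)
print(f"Org average time to fix: {org_average:.1f} days")

# Each team sees only how it compares to the org average - visibility, not shaming.
for team, days in days_by_team.items():
    team_avg = mean(days)
    direction = "faster" if team_avg < org_average else "slower"
    print(f"{team}: {team_avg:.1f} days ({direction} than the org average)")
```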
Okay, I think I gave enough examples there. On to embedding ideas like this into tooling and other technical controls.
Tooling
There are a lot of ideas in this space that I have talked about before, so please bear with me if you’ve already heard me say some of this. Tools do not get tired. Tools do not forget. I know tools can’t think of new ideas or be creative either, but we can depend on them to at least do the things we ask them to, every single time. So, let’s use them to help us support developers making more secure choices!
- Pre-commit hooks that run security checks automatically, like scanning for secrets (there’s a small sketch of this after the list)
- IDE plugins that suggest secure alternatives as they code, such as more secure functions over less secure ones
- CI/CD that blocks merges for critical security issues (not just warns)
- Dashboard that shows “security debt” right next to technical debt, because it’s just as important!
- Pull Request templates that prompt for specific security checks or reviews
- Security tools that scan for issues, then make a pull request with a fix (and the fix doesn’t break stuff)
- Security tooling that explains impact clearly, in developer words (not security jargon), instead of saying “it’s a 10/10” or calling literally everything critical
- Anything that gives immediate feedback that is contextual and meaningful
- Always framing security as part of quality and part of being a GOOD software developer
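To make the first bullet concrete, here is a minimal sketch of a pre-commit hook that blocks commits containing obvious secret-looking strings. The patterns are deliberately simplistic examples; in practice you would likely lean on a dedicated tool (gitleaks, detect-secrets, etc.) and run it through your hook framework of choice.

```python
#!/usr/bin/env python3
"""Minimal sketch of a pre-commit hook that blocks obvious secrets.

Save as .git/hooks/pre-commit (and make it executable), or wire it into a
hook framework. The patterns are simple examples, not a complete ruleset.
"""
import re
import subprocess
import sys

SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                               # AWS access key id
    re.compile(r"-----BEGIN (RSA|EC|OPENSSH) PRIVATE KEY-----"),   # private key headers
    re.compile(r"(?i)(password|api_key|secret)\s*=\s*['\"][^'\"]{8,}['\"]"),
]

def staged_files() -> list[str]:
    """List files that are staged for this commit."""
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    findings = []
    for path in staged_files():
        try:
            text = open(path, encoding="utf-8", errors="ignore").read()
        except OSError:
            continue
        for pattern in SECRET_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern!r}")
    if findings:
        print("Possible secrets found, commit blocked:")
        print("\n".join(findings))
        return 1  # non-zero exit code blocks the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```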
If you have more ideas to add here, I would love to hear them!
Training
And now the third one, training that supports secure developer behavior. I realize that I sell training, so I’m biased. But I do it because we need it (and also because I enjoy it, let’s be honest). Feel free to take this section with a grain of salt if you like.
I want us to reframe training goals explicitly as habit formation, as opposed to just knowledge transfer. We want devs to recognize patterns and then take different actions. For instance, “This input crosses a trust boundary, and therefore I need to validate it” or “This is a parser, therefore it is dangerous, and that means I should…” We want to say “when you see X, you should do Y”, then show them how, and have them practice it.
We want to replace memorizing “The OWASP Top Ten”, or other security trivia, with stories of how failures happened and how the decisions “made sense at the time”. Small mistakes can lead to serious incidents, and people will remember a story better than they will a CVE or CWE number. Stories are especially powerful when they are close to home: a real-life example of something your team has done many times that caused a breach, or a near miss.
Add commitment mechanisms to training. I do this a lot. “Which one of these are we going to adopt at your org, starting next week? Don’t worry, I will wait while you choose.” And then I awkwardly wait until people start picking things they say they are going to do. Sometimes I call it “Your next secure move” or “Now it’s your turn”. Sometimes I have a list and ask them to prioritize it. Sometimes it’s a quiz, and then when someone gives the answer I ask “But you’re going to start doing that next week? RIGHT?” If you can add one or two commitments to the training, and people say it out loud in front of their peers… Things start to change for the better.
Use progressive disclosure in your training design, and by that, I mean build up to the complexity (or avoid it when possible). I do this via my bad/better/best technique, showing code vulnerable to whatever issue we’re discussing, then a fix for that one single issue, then multi-layered defenses. We don’t want to overwhelm people or impress them with how smart we think we are. We want to build them up, not perform a buffer overflow on their working memory.
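Here is a tiny illustration of bad/better/best, using path traversal as the example vulnerability since it fits in a few lines (the function and directory names are made up for the demo):

```python
import re
from pathlib import Path

UPLOAD_DIR = Path("/srv/app/uploads")  # hypothetical storage location

# BAD: user input goes straight into the path, so "../../etc/passwd" works.
def read_file_bad(filename: str) -> bytes:
    return (UPLOAD_DIR / filename).read_bytes()

# BETTER: fix the one issue we just discussed - reject path separators.
def read_file_better(filename: str) -> bytes:
    if "/" in filename or "\\" in filename or filename.startswith("."):
        raise ValueError("Invalid filename")
    return (UPLOAD_DIR / filename).read_bytes()

# BEST: layered defenses - allowlist the filename format, resolve the
# final path, and confirm it is still inside the upload directory.
# (Path.is_relative_to needs Python 3.9+.)
SAFE_NAME = re.compile(r"^[a-zA-Z0-9._-]{1,100}$")

def read_file_best(filename: str) -> bytes:
    if not SAFE_NAME.match(filename) or filename.startswith("."):
        raise ValueError("Invalid filename")
    resolved = (UPLOAD_DIR / filename).resolve()
    if not resolved.is_relative_to(UPLOAD_DIR.resolve()):
        raise ValueError("Path escapes the upload directory")
    return resolved.read_bytes()
```

Each step adds exactly one new idea, so nobody’s working memory overflows.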
Ensure psychological safety during learning. Always praise people for asking questions. Make sure they receive answers (every single time!). Try to remove any sort of shame around uncertainty. Admit when you, the trainer, don’t know something, so they can see it’s okay, and that you will find out together. It makes everyone more comfortable and less likely to guess in the future rather than ask, out of fear they may look foolish in front of their peers.
Measure training success via behavioral changes, not butts in seats, exercises completed, or videos watched. This is a BIG change for a lot of us.
Let’s measure:
- Decline in repeat vulnerability patterns that we covered in class
- Increased use of secure APIs, templates, code samples, tooling
- Fewer bypasses of secure defaults, and if there is a bypass it is documented
- More security-related PR comments initiated by devs
- More security discussions or questions prompted by developers
- Less friction and more trust between security and developers
A few more ways to move from knowledge transfer to behavioral change:
- Stop teaching “what” without “why it’s easier and better”.
- Use spaced repetition, not only annual compliance training
- Teach patterns developers can recognize in their own behavior, state them explicitly, don’t make them guess
- Make it social: security champions, pair programming on security fixes, meetups
- Make it memorable: cheat sheets, repetition, reminders, posters, memes
- Make it fun: promote community, events, challenges
- Review both good AND bad code
- Focus on desired behaviors (secure code over vulns)
One last thing on training before I attempt to end this never-ending blog post… We should also design the environment for training. Yes, another out-of-the-box idea from Tanya. But imagine a different kind of learning with me…
- Proximity: Put security info where decisions happen (the IDE, not a wiki)
- Frequency: Small daily/weekly reminders > big annual training, or both for even more impact, but once a year is just not enough
Now that you’ve heard me obsess over a couple of things that you can start doing right away, I’m going to sign off. In the next couple of posts, we are going to get into the specifics of each of the ten developer behaviors I would like to try to change for the better. Thanks for reading!
