This is a series. The first blog post is here, #2, #3, and this is the fourth.

The behaviour: Copying and pasting from online forums

What this looks like in the real world

  • Copying code from Stack Overflow, GitHub, blog posts, or comments without fully understanding it or verifying assumptions.
  • Submitting this copied code as your own without proper verification, because it’s been upvoted to the top (plus it compiles!).
  • Reusing custom code for security controls that you do not understand.
  • Pasting copied code during an incident or other stressful time, planning to replace it ‘later’, but later never comes.

This behaviour often shows up when we are stuck, frustrated, or rushed, usually while wrestling with complex syntax or integration issues.

Behavioural biases at play

  • Authority bias: If many people upvoted it, it must be correct.
  • Availability heuristic: The easiest answer to find feels like the best one.
  • Social proof: “If everyone else is using this, it’s probably fine.”

These biases are rooted in trust (authority bias and social proof) and in our need to conserve mental energy (all three). Trusting authority figures often works out well, and we, as knowledge workers, always want to reduce cognitive load.

Why this behaviour makes sense in the moment

  • Documentation is often unavailable, out of date, or hard to search, or your specific problem simply isn’t covered
  • Official examples are frequently incomplete/not your edge case situation
  • Forums feel (are) faster and more human than using an AI or official docs
  • The code “worked for someone else” (many someone elses, in fact)
  • We are rewarded for unblocking ourselves/releasing/finding a solution quickly

This is not reckless. It’s efficient, in the moment. And frankly, it usually does work out well. That’s why so many of us do it. Unfortunately, most developers are unaware that the ‘top upvoted solution’ is usually also the least-secure answer.


The security risk

  • Security trade-offs are not documented, and security features are often turned off to ensure “it just works”
  • Posts are often old, and the libraries they reference may be badly out of date
  • The person posting the solution has no idea what use case you have
  • People upvote things based on ‘it compiles’ not ‘it’s safe’
  • No security review of any of this, ever

This is how AI has ended up trained on insecure patterns. It spent a lot of time on these forums as part of its training.

Solutions:

Training

Training developers to review code, to understand that ‘the upvoted solution is not necessarily secure’, and to take responsibility for whatever their commits contain is a start. But we already know training alone isn’t enough to solve any of these systemic problems. Why?

  • People may not remember or prioritize rules when stuck and frustrated
  • Search results don’t come with warnings (maybe they should?)
  • Most developers are rewarded based on speed, not verification (raises and promotions for fast features, punishment only if something goes wrong, and breaches seem rare)

Another idea for this category: create a technical library (a bookshelf where anyone can borrow anything) and add copies of both my books, plus several other security books you like. Then tell the developers the books are there, and that they can request other books they want. Don’t set due dates; just email borrowers after a month and ask if they like the book so far. If they are reading it, let them keep it as long as they want. That’s a win.

Secure Defaults

If developers are going to copy-paste anyway (because they are), let’s give them safer things to copy. Lean in.

  • Create and maintain an internal library of approved, secure code examples. Include all the security controls you can: how to call your secrets management tool, your identity/session management/AuthN flows, functions for authorization/access control, and regexes for input validation (including unit tests to make sure you got them juuuuust right)
  • If you can add a checklist for code review as part of your pull request process, that could also be helpful (assuming it’s not ignored). Be reasonable with this list; if it’s too long, you will not get what you want.
  • If you can require an additional reviewer whenever security controls are changed or added, that is a nice point of friction to ensure those changes get more attention.
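As a sketch of what one entry in such an internal snippet library might look like, here is a hypothetical username validator with its unit tests shipped alongside it. The name, allowed character set, and length limits are all assumptions; your library would encode your own rules.

```python
import re
import unittest

# Hypothetical entry in an internal "approved snippets" library:
# validate a username before it reaches anything downstream.
# Allowed: 3-32 characters, letters, digits, underscore, hyphen.
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_-]{3,32}$")

def is_valid_username(value: str) -> bool:
    """Return True only when the value matches the allow-list pattern."""
    return bool(USERNAME_RE.fullmatch(value))

class TestUsernameValidation(unittest.TestCase):
    def test_accepts_reasonable_names(self):
        for name in ("alice", "bob_42", "dev-team-01"):
            self.assertTrue(is_valid_username(name))

    def test_rejects_risky_input(self):
        for name in ("ab", "a" * 33, "alice!", "", "rm -rf /", "<script>"):
            self.assertFalse(is_valid_username(name))

if __name__ == "__main__":
    # exit=False so this can run inside a larger script without exiting
    unittest.main(argv=["validator"], exit=False)
```

Shipping the tests with the snippet is the point: when someone copies it, they also copy proof of what it does and does not allow. For the extra-reviewer idea, a GitHub CODEOWNERS rule on your security-sensitive paths (combined with branch protection requiring code-owner review) is one way to enforce it automatically.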

Environmental Design

Ideally, we want to put guidance where decisions happen. If someone pastes code into an IDE, can we flag known risky patterns? Something like: “This pattern matches a known insecure pattern (CWE-###). Please review.” I know some SAST tools do this, but I suspect AI will soon do it for us too. Perhaps the IDE should also alert when a bunch of code ‘suddenly appears’ with “Hey, it looks like you just did a big copy/paste, did you get that code from a place you trust? If not, please review carefully.”
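The paste-flagging idea can be sketched with a toy pattern scanner. A real tool (SAST, linter, IDE plugin) would use proper parsing rather than regexes, and this pattern list is illustrative, not complete; it just shows the shape of the nudge.

```python
import re

# Toy list of known-risky patterns, each tagged with a real CWE ID.
RISKY_PATTERNS = [
    (re.compile(r"verify\s*=\s*False"), "CWE-295: certificate validation disabled"),
    (re.compile(r"shell\s*=\s*True"), "CWE-78: possible OS command injection"),
    (re.compile(r"\beval\s*\("), "CWE-95: eval() on dynamic input"),
    (re.compile(r"hashlib\.md5"), "CWE-327: broken hash algorithm"),
]

def flag_pasted_code(snippet: str) -> list[str]:
    """Return one warning per known-risky pattern found in the pasted text."""
    warnings = []
    for line_no, line in enumerate(snippet.splitlines(), start=1):
        for pattern, message in RISKY_PATTERNS:
            if pattern.search(line):
                warnings.append(f"line {line_no}: {message} - please review")
    return warnings

# Example: a paste that disables TLS verification and shells out.
paste = 'requests.get(url, verify=False)\nsubprocess.run(cmd, shell=True)'
for warning in flag_pasted_code(paste):
    print(warning)
```

Running this on the example paste prints two warnings, one per risky line. The value isn’t in the regexes; it’s in delivering the warning at the exact moment of the paste.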

This kind of nudge or pause, right at the moment of decision, can be more effective than a training session from six months ago. But combining both might be a better option.

Friction

If someone pastes in long, complex, or security-sensitive code, require a short explanation before merge. If you can’t explain what it does, you probably shouldn’t be committing it. Right? ** This isn’t about slowing people down. It’s about making them think when we need them to think: when they are making an important decision. Remember: the rest of the time we want the security guardrails to be invisible unless something is going wrong.

** Note: Anthropic recently released a paper stating that people are not actually reviewing AI-generated code, especially complex, multi-file bug fixes. The “human in the loop” advice appears not to be something most of us follow in practice. While I greatly appreciate the honesty from Anthropic (truly), it’s disheartening. Again, I don’t feel this is “developers not caring”. I theorize they are currently panicking: many people they know are losing their jobs to AI, and if someone in the next cubicle can commit 10,000 lines of code a day, they’d better be able to do it too. I feel like developers and many other knowledge workers are under so much pressure I’m surprised we don’t all turn into diamonds. I’m concerned too, folks. 💜
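The explain-before-merge friction could be sketched as a small CI gate. Everything here is a made-up convention: the “Paste-Review:” commit-message trailer and the 50-line threshold are assumptions you would tune for your own team, not an established tool.

```python
import sys

# Hypothetical CI gate: if a commit adds a lot of code at once,
# require a "Paste-Review:" trailer in the commit message that
# explains where the code came from and what it does.
THRESHOLD = 50  # lines added in one commit; tune for your team

def needs_explanation(lines_added: int, commit_message: str) -> bool:
    """True when the change is large and carries no explanation trailer."""
    return lines_added >= THRESHOLD and "Paste-Review:" not in commit_message

if __name__ == "__main__":
    # e.g. lines added taken from `git diff --numstat`, message from `git log -1`
    added = int(sys.argv[1])
    message = sys.argv[2]
    if needs_explanation(added, message):
        print("Large addition without a Paste-Review trailer; "
              "please explain what this code does before merging.")
        sys.exit(1)
```

Small additions pass untouched, so the guardrail stays invisible until someone drops in a big block of code they may not have read.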

Social / Cultural

We need to normalize being able to say: “I don’t trust this code.” And honestly, building psychological safety in a professional setting is hard. When I switched from being a developer to a manager (briefly), then into security, then back and forth between management, DevRel, training, AppSec, and IR, I had to learn to communicate a lot better. I had to learn persuasion, negotiation, and how to explain technical things to really smart people who aren’t-so-technical, among other communication skills. If you lead a team of developers, making sure they feel okay discussing such things, and bringing up topics like this at team gatherings, will get you a lot further than hoping they start these conversations themselves.

Conclusion

Copy-pasting isn’t the whole problem. Copy-pasting without understanding, verification, or guardrails is the problem. Let’s design systems to make this harder to do.

Up next… Shiny new tech obsession!
