Recently I hosted a webinar called “Metrics, Models, and Mindsets: The Future of Application Security” with two wonderful guests, Aram and Spyros.

Our goal was simple: talk honestly about where application security is going, and what’s actually working (and not working) in real teams today. You can watch the conversation below:


Meet the Panel

  • Me (Tanya Janca) – I run a small secure coding training company, I’ve written Alice and Bob Learn Application Security and Alice and Bob Learn Secure Coding, I speak at conferences, and I spend most of my time trying to help software teams make more secure code without hating their lives.
  • Aram – Has been in AppSec for more than 15 years. He started as a researcher at the University of Leuven in Belgium and is now CEO of Codific, builders of the SAMMY tool, which helps organizations work with OWASP SAMM. He’s also a core contributor to the OWASP SAMM project.
  • Spyros – Has been doing “all things security” for about 15–18 years. He’s maintained several tools you may know, like Project Wayfinder (on the OWASP projects page) and OpenCRE.org. Recently he founded Smithy Security, whose mission is essentially “Zapier for AppSec tools” – orchestrating and automating AppSec workflows to deal with noise and tool sprawl.

Big Question: Building an AppSec Program with OWASP SAMM

“If you walk into an organization with no AppSec program, what are the essential first steps?”

Aram’s answer – Step 1: Understand where you are

OWASP SAMM (Software Assurance Maturity Model) gives you:

  • A structured assessment across 15 security practices and 30 streams
  • A continuous score from 0 to 3 that reflects maturity
  • The ability to scope assessments to:
    • One team
    • One product
    • One business unit
    • Or an entire organization
This score isn’t just a vanity number; it’s a map of your current posture across the entire AppSec landscape.
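To make the scoring idea concrete, here is a minimal sketch of how continuous stream scores (0–3) can roll up into practice and overall scores. The practice and stream names below are illustrative examples, not the official SAMM scoring model:

```python
# Illustrative roll-up of SAMM-style stream scores (0-3 continuous scale)
# into practice scores and an overall maturity score.
# Practice/stream names here are examples, not the official SAMM model.
from statistics import mean

assessment = {
    "Threat Assessment": {"Application Risk Profile": 1.5, "Threat Modeling": 0.5},
    "Security Testing":  {"Scalable Baseline": 2.0, "Deep Understanding": 1.0},
    "Defect Management": {"Defect Tracking": 1.0, "Metrics and Feedback": 0.25},
}

# Each practice score is the mean of its streams; overall is the mean of practices.
practice_scores = {p: mean(streams.values()) for p, streams in assessment.items()}
overall = mean(practice_scores.values())

# Sorting lowest-first surfaces your biggest gaps, which is the real point.
for practice, score in sorted(practice_scores.items(), key=lambda kv: kv[1]):
    print(f"{practice:<18} {score:.2f} / 3")
print(f"{'Overall':<18} {overall:.2f} / 3")
```

Printed lowest-first, the output is the “map” Aram describes: it points at gaps rather than celebrating the top number.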

But here’s the important part:

Don’t obsess over the score itself. It’s just a tool.

The real value is in understanding your strengths, your gaps, and where to improve.

Step 2: Define your target posture

SAMM introduces the idea of a target posture: not “max out everything to Level 3,” but:

  • Decide where you actually need to be, based on:
    • Your risk
    • Your industry
    • Your regulatory environment
    • The value of what you’re protecting

If you spend $1 million to reach Level 3 in an area that doesn’t materially reduce risk or support the business, you’ve just spent $1 million for nothing.

I loved how Spyros summarized this:
If you spend a fortune to get to the highest maturity level in something that doesn’t matter to the business, you’ve just done a very expensive vanity project.

My answer – Step 3: Start with inventory (yes, really)

If you don’t know what you have, you can’t protect it. – Tanya

In SAMM, inventory appears in multiple places (even if not always explicitly labeled as “inventory”):

You need to know:

  • What systems you have
  • Where your data lives
  • What you’re processing and where

Without that, it’s almost impossible to measure risk.
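As a rough illustration of why inventory has to come first, here is a minimal sketch; the asset fields and example systems are hypothetical, not from any standard:

```python
# Minimal sketch of an asset inventory that ties systems to the data they
# handle. Field names and example assets are hypothetical.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    owner: str
    data_classification: str  # e.g. "public", "internal", "pii"
    internet_facing: bool

inventory = [
    Asset("billing-api", "payments-team", "pii", internet_facing=True),
    Asset("build-server", "platform-team", "internal", internet_facing=False),
    Asset("marketing-site", "web-team", "public", internet_facing=True),
]

# Even a crude risk query like this is impossible without an inventory:
high_risk = [a.name for a in inventory
             if a.internet_facing and a.data_classification == "pii"]
print(high_risk)  # ['billing-api']
```

The query is trivial; the hard part is having the data behind it. That is the whole argument for starting with inventory.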

Build vs Buy (and Where AI Fits In)

Next, I asked Spyros one of my favourite “trick questions”:

“When should an organization build its own security tools vs. buying or using existing ones?”

As someone who was a developer for far longer than I’ve been on the security side, I absolutely have the “I can just code that” bias. Many developers do.

Spyros’ view: it depends on criticality

  • If a tool touches critical infrastructure
    (e.g., audits highly regulated code, triages findings for your main product)
    • You probably want a dedicated vendor or team whose entire job is to build and maintain that.
  • If it’s a small internal convenience, a marginal improvement to your workflow
    • It might make sense to build it yourself.

But be careful that:

  • Your “toy” doesn’t silently become critical infrastructure.
    If it does, you’ll need to either:
    • Properly engineer, maintain, and scale it
    • Or be willing to let it go in favor of a more robust solution

AI and “vibe coding”

We also talked about AI-generated code and “vibe coding” (where people just keep prompting a model until it “looks right” and then ship whatever came out).

  • LLMs lack context and engineering discipline.
    They don’t think about:
    • Retries
    • Observability
    • Testability
    • Long-term maintainability

You’ll often end up with a pile of brittle scripts that don’t scale.

So AI can help, but it does not replace:

  • Frameworks
  • Architecture
  • Sound engineering practices
  • Code review
  • Security review

Use AI as an assistant, not an architect.


Security Champions: What Goes Wrong (and How to Fix It)

Security Champions programs came up a lot at OWASP Global AppSec, and it’s a topic close to my heart. I’ve seen them go really well, and I’ve also seen them crash and burn.

Common mistakes we see

1. No clear goals or responsibilities

  • “We’re launching a Security Champions program!”
  • …with:
    • No defined goals
    • No specific responsibilities
    • No metrics for success

Then teams are surprised when:

  • Results are random
  • Different champions do completely different things
  • Nobody’s sure if the program “worked”

If your only goal is “better security,” that’s not a goal. That’s a wish.

Instead, define things like:

  • “Roll out SAST to these teams by X date”
  • “Champions will triage findings weekly with support from the security team”
  • “We will review how this is going after 3 months and iterate”

Everyone should know:

  • What success looks like
  • Who does what
  • How progress will be measured

2. Treating champions like a compliance checkbox

Aram talked about organizations that see “security champions” as a shiny thing to implement—and then:

  • Pick random people with no understanding of technology
  • Give them little support
  • Treat it like a “tick the box” activity

Spyros shared two mistakes he personally made:

  • Taking champions for granted
    • Treating them like they’re obligated, not volunteers
    • Chasing them like they’re failing an SLA instead of recognizing this is extra work
  • Trying to “bribe” them with swag and pizza
    • Instead of treating them like future members of the security team

Champions are volunteers first. If you treat them like cheap labor, your program will quietly die.

What works better

Things that have worked well for us collectively:

  • Treat champions as community members and future security professionals
  • Invest in them:
    • Training (e.g., pentesting classes)
    • Conference attendance
    • Meaningful opportunities (not just “please fix these tickets”)
  • Make expectations clear but realistic
  • Recognize their work and support their career growth

From my side, two big practical tips:

  1. Be specific.
    “Make code better” is not a job description!
    “Help roll out SAST, triage results weekly, and be the first point of contact for security questions in your team” is one.
  2. Don’t sprint a marathon.
    Many security teams start a champions program with:
    • Weekly events
    • Long newsletters
    • Tons of 1:1s
      … and burn themselves (and the champions) out in 2–3 months.
    Whatever you think you can do consistently, cut it in half and start there.

And if you drop the ball for a while? Just be honest:

“We had an incident and got pulled away. We’re sorry we went quiet. We still value you and the program, and here’s what we’re doing next.”

That kind of transparency goes a long way.


Metrics: Avoiding Vanity and Measuring What Matters

Because this was a “metrics” webinar, we had to talk about vanity metrics.

What is a vanity metric?

A simple way to think about it:

A metric that looks good on a dashboard, but does not actually help you make better decisions or improve outcomes.

Aram referenced Goodhart’s Law:

“When a measure becomes a target, it ceases to be a good measure.”

For example:

  • A beautiful dashboard with all green indicators
  • But:
    • No clear processes
    • SAMM scores are low
    • Risk to business is actually high… not low

Or the opposite:

  • Dashboard is red because of technical debt and old issues
  • But the team has the right people in place, who understand their responsibilities and follow processes with integrated security best practices

The dashboard alone doesn’t tell the whole story.

My favourite example of vanity vs real value

When I worked at a very large, very well-known company, my colleague’s blog posts would get thousands of views from Reddit, while mine would get a few hundred.

I thought he was crushing me—until we added one more metric:

  • Average time on page

We discovered:

  • Reddit readers stayed ~2 seconds on his posts (they clicked, bounced)
  • Readers of my posts stayed around 2 minutes (they actually read them)

Same tool, same “views” metric, totally different story when we added context.
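In code, the difference between the vanity view and the contextual view is one extra column. The numbers below are made up to match the story:

```python
# The same data judged by a single metric vs. with added context.
# Numbers are illustrative, matching the story above.
posts = {
    "colleague_post": {"views": 5000, "avg_seconds_on_page": 2},
    "my_post":        {"views": 300,  "avg_seconds_on_page": 120},
}

# Vanity view: raw clicks.
by_views = max(posts, key=lambda p: posts[p]["views"])

# Context added: total reader attention actually spent on the content.
by_attention = max(posts,
                   key=lambda p: posts[p]["views"] * posts[p]["avg_seconds_on_page"])

print(by_views)      # colleague_post
print(by_attention)  # my_post
```

Five thousand two-second bounces add up to less reading time than three hundred people spending two minutes each. Same data, opposite conclusion.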

The lesson:

A single number is rarely enough.
How people behave, not just whether they clicked, matters.

The same applies to security:

  • You can measure:
    • Number of vulnerabilities
    • Number of training completions
    • Number of scans per week
  • …but what really matters is:
    • Are fewer serious issues reaching production?
    • Are developers asking more/better questions?
    • Is time-to-fix improving?

Sometimes the best indicators are qualitative and slower to measure, such as better relationships between security and dev, or developers proactively involving security earlier.


AI in AppSec: Help, Hindrance, or Both?

We couldn’t finish without talking about AI!

How AI can hinder

We’re putting extremely powerful tools into the hands of people who:

  • Are not security experts
  • Often are not seasoned engineers
  • May be under pressure to ship fast

This leads to:

  • “Vibe-coded” apps and scripts going straight to production
  • Data scientists and business folks deploying code without engineering safeguards
  • A massive increase in volume of code without a proportional increase in quality or security

It’s like handing out scalpels to toddlers.

How AI can help (responsibly, intentionally, hopefully)

Used thoughtfully, AI can be useful:

  • Format translation for threat modeling
    • Turn whiteboard photos into rough diagrams
    • Generate Mermaid diagrams or simple DFDs from text
  • Idea generation, not decision-making
    • “Given this architecture, what kinds of threats should we consider?”
    • Then humans validate what’s realistic and relevant
  • Assistance with code reviews or test scaffolding
    • Suggest unit tests
    • Highlight suspicious patterns
    • Generate boilerplate
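As a rough sketch of the “format translation” idea, here is a tiny script that turns a plain list of data flows into a Mermaid diagram a human can then review and refine. The system and its flows are hypothetical:

```python
# Sketch of "format translation" for threat modeling: turn a plain list
# of (source, destination, data) flows into Mermaid flowchart text.
# The components and flows below are hypothetical.
flows = [
    ("Browser", "Web App", "login credentials"),
    ("Web App", "Auth Service", "token request"),
    ("Web App", "Database", "user records"),
]

def to_mermaid(flows):
    lines = ["flowchart LR"]
    for src, dst, data in flows:
        # Mermaid node IDs can't contain spaces, so swap them for underscores.
        lines.append(f'    {src.replace(" ", "_")} -->|{data}| {dst.replace(" ", "_")}')
    return "\n".join(lines)

print(to_mermaid(flows))
```

The output paste-renders in any Mermaid viewer, giving you a rough DFD to argue about, which is exactly the point: the model drafts, the humans decide.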

Aram was clear, though, that he would not outsource threat modeling entirely to an LLM. At best, it’s:

A brainstorming assistant, not your threat modeler. – Aram

Guardrails and centralization

For organizations trying to manage “everyone using everything”:

  • Treat models like third-party dependencies
  • Consider:
    • Central tools (e.g., your own LLM gateway or “model garden”)
    • Guardrails layered on top (input/output checks, logging, policy)
    • Using this moment to improve SAST/SCA/DAST adoption, since everyone suddenly cares about code and models again
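A minimal sketch of what a guardrail layer might look like, assuming a hypothetical `call_model` stand-in for whatever LLM client you actually use; the input/output checks shown are deliberately simplistic examples:

```python
# Sketch of a thin guardrail layer around a model call: log every request,
# reject obviously sensitive prompts, and scrub suspect patterns from output.
# `call_model` is a hypothetical stand-in for a real LLM client.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-gateway")

BLOCKED_INPUT = re.compile(r"(?i)api[_-]?key|password")

def call_model(prompt: str) -> str:
    # Placeholder for a real model call.
    return f"echo: {prompt}"

def guarded_call(prompt: str) -> str:
    # Input check: refuse prompts that look like they contain secrets.
    if BLOCKED_INPUT.search(prompt):
        log.warning("blocked prompt containing a possible secret")
        raise ValueError("prompt rejected by input policy")
    log.info("prompt accepted (%d chars)", len(prompt))
    response = call_model(prompt)
    # Output check: redact anything resembling a leaked key pattern.
    return re.sub(r"sk-[A-Za-z0-9]{10,}", "[redacted]", response)

print(guarded_call("Summarize this design doc"))
```

Real gateways do far more (rate limiting, policy engines, audit trails), but even this shape gives you one choke point to log and control, instead of fifty teams calling fifty APIs directly.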

And always keep humans in the loop.


One Key Takeaway from Each of Us

To close the webinar, I asked everyone for a single key takeaway.

  • Aram’s takeaway:
    Take another serious look at your metrics. Don’t get lost in dashboard colours. Use models like OWASP SAMM and DSOMM thoughtfully, and keep refining how you measure progress.
  • Spyros’ takeaway:
    Change your mindset:
    The security team is not the police; we’re the physicians.
    We advise, we warn, we recommend. We can’t force a lifestyle change, but we can be honest about the consequences and support teams who choose to improve.
  • My takeaway:
    I need to spend more time with the latest version of OWASP SAMM, and encourage others to do the same. It’s evolving, and the combination of models (SAMM, DSOMM), good metrics, and the right mindset is incredibly powerful when used well.

Thank you so much for being part of this community and for caring about building safer software. 💜

If you have follow-up questions about metrics, OWASP SAMM, security champions, or AI in AppSec, feel free to drop them in the comments! I’d love to hear what you’re wrestling with right now.
