The Difference Between Applications and Infrastructure

Recently someone asked me what the difference is between Applications and Infrastructure. He asked why a Linux operating system wasn't "software", and I said it is software, but every copy is identical; that's why I tend to speak about 'custom software'. We ended up talking about it for a very long time, and I thought a blog post was in order.

Photo by Christian Wiediger on Unsplash

Infrastructure is the operating systems and hardware that applications live on. Think Windows, Linux, containers, and so much more. Sometimes hardware is included in this category (depending on who you talk to), and sometimes it is not. Infrastructure is necessary to run an application; even serverless runs (briefly) on a container. Operating systems are also standardized, not unique in nature. For instance, if I'm running Windows Server 2012 R2, and so are you, we both have the same options for patches, configuration, etc. Operating systems are software that speaks to hardware.

Applications are software that speak to operating systems, databases, APIs and anything else you can think of. There are custom applications (what I'm almost always talking about: software developed for a specific business need, or as a product to sell), COTS (customizable off the shelf, like SharePoint or Confluence; administered by a person or team and installed locally on a server) and regular old software that you install or access via a web browser and use as-is (no administration required; simpler). More recently there is SaaS, software as a service, which is basically a great big COTS product hosted by someone else (no need for you to patch or otherwise take care of it; you pick your settings and use it).

Infrastructure usually needs to be patched, updated/upgraded, and hardened (secure configuration choices). Patches and upgrades arrive in a prepackaged format, but sometimes these updates can break the applications living on that infrastructure. Testing, and sometimes downtime, is required. This is why so many people say 'patching is hard': it is difficult to plan for the testing and downtime, and to ensure everything will go smoothly.

Software, on the other hand, includes many different components that are provided prepackaged (such as a new version of a library or a framework), but when you update one of them, other libraries or framework parts can break, and so can the custom code your team wrote. For example, upgrading your web framework might demand a newer version of a JSON library, which in turn breaks an older plugin that still depends on the old version. This means you may need to re-code or rewrite things, or update a whole bunch of things at the same time. I've heard developers refer to this as "dependency hell".

If you have just released something brand new, it's super easy to keep it up to date. Tiny changes present less risk (which is why people love DevOps over waterfall), making it easier to maintain. But because it's sparkling and new, management usually says "hey, please build this new feature, and update that library later". This is how technical debt accrues. It's not operational staff or software developers saying "forget that, I don't care about this"; it's almost always conflicting priorities.

I hope this helps clarify the difference.

Discoveries as a Result of the Log4j Debacle

Me, pre-log4j, making a silly face. Happier times, before I knew anything about log4j.

Over the past 2 weeks, many people working in IT have been dealing with the fallout of the vulnerabilities and exploits being carried out against servers and applications using the popular Log4j Java library. Information security people have been responding 24/7 to the incident, operations folks have been patching servers at record speeds, and software developers have been upgrading, removing libraries and crossing their fingers. WAFs are being deployed, CDN (Content Delivery Network) rules updated, and we are definitely not out of the woods yet.

Those of you who know me realize I'm going to skip right over anything to do with servers and head straight to the software angle. Forgive me; I know servers are equally important. But they are not my specialty…

Although I have already posted about it in my newsletter, on this blog, and on my YouTube channel, I have more to say. I want to talk about some of the things that I and other incident responders 'discovered' as part of investigations for log4j. Things I've seen for years, that need to change.

After speaking privately with a few CISOs, AppSec pros and incident responders, I can tell you there is a LOT going on with this vulnerability, and it's being compounded by systemic problems in our industry. If you want to share a story with me about this topic, please reach out.

Shout-outs to every person working to protect the internet, your customers, your organizations and individuals against this vulnerability.

You are amazing. Thank you for your service.

Let’s get into some systemic problems.

Inventory: Not just for Netflix Anymore

I realize that I am constantly telling people that having a complete inventory of all of your IT assets (including web apps and APIs) is the #1 most important AppSec activity you can do, but people still don't seem to be listening… Or maybe it's on their "to do" list? Marked as "for later"? I find it defeating at times that keeping a current and accurate inventory is still a challenge even for major players, such as Netflix and other large companies/teams that I admire. If they find it hard, how can smaller companies with fewer resources get it done? While responding to this incident, this problem has never been more obvious.

Look at past me! No idea what was about to hit her, happily celebrating her new glasses.

Imagine past me, searching repos, not finding log4j and then foolishly thinking she could go home. WRONG! It turns out that even though one of my clients had done a large inventory activity earlier in the year, we had missed a few things (none containing log4j, luckily). When I spoke to other folks, I heard of people finding custom code in all SORTS of fun places it was not supposed to be, such as:

  • Public Repos that should have been private
  • Every type of cloud-based version control or code repo you can think of: GitLab, GitHub, BitBucket, Azure DevOps, etc. And of course, most of them were not approved/on the official list…
  • On-prem, saved to a file server – some with backups and some without
  • In the same repos everyone else is using, but locked down so that only one dev or one team could see it (meaning no AppSec tool coverage)
  • SVN (Subversion), ClearCase, SourceSafe and other repos I thought no one was using anymore… that are incompatible with the AppSec tools I (and many others) had at hand.

Having it take over a week just to get access to all the various places the code was kept meant those incident responders couldn't give accurate answers to management and customers alike. It also meant that some of them were vulnerable but had no way of knowing.

Many have brought up the concept of an SBOM (software bill of materials: the list of all the dependencies a piece of software has) at this time. Yes, having a complete SBOM for every app would be wonderful, but I would have settled for a complete list of apps and where their code was stored. Then I could figure out the SBOM stuff myself… But I digress.

Inventory is valuable for more than just incident response. You can't be sure your tools have complete coverage if you don't know your assets. Imagine if you painted *almost* all of a fence. That one part you missed would become damaged and age faster than the rest of the fence, because it's missing the protection of the paint. Imagine year after year, you refresh the paint, except that one spot you don't know about. Perhaps it gets water damage or starts to rot? It's the same with applications; they don't always age well.

We need a real solution for inventory of web assets. Manually tracking this stuff in MS Excel is not working, folks. This is a systemic problem in our industry.

Lack of Support and Governance for Open-Source Libraries

This may or may not be the biggest issue, but it is certainly the most talked-about throughout this situation. The question posed most often is "Why are so many huge businesses and large products depending on a library supported by only three volunteer programmers?" and I would argue the answer is "because it works and it's free". This is how open-source stuff works. Why not use free stuff? I did it all the time when I was a dev and I'm not going to trash other devs for doing it now… I will let others harp on this issue, hoping they will find a good solution, and I will continue on to other topics for the rest of this article.

Lack of Tooling Coverage

The second problem incident responders walked into was their tools not being able to scan all the things. Let's say you're amazing and you have a complete and current inventory (I'm not jealous, YOU'RE JEALOUS); that doesn't mean your tools can see everything. Maybe there's a firewall in the way? Maybe the service account for your tool isn't granted access, or has access but the wrong set of rights? There are dozens of reasons your tool might not have complete coverage. I heard from too many teams that they "couldn't see" various parts of the network, or their scanning tools weren't authorized for various repos, etc. It hurts just to think about; it's so frustrating.

Luckily for me I’m in AppSec and I used to be a dev, meaning finding workarounds is second nature for me. I grabbed code from all over the place, zipping it up and downloading it, throwing it into Azure DevOps and scanning it with my tools. I also unzipped code locally and searched simply for “log4j”. I know it’s a snapshot in time, I know it’s not perfect or a good long-term plan. But for this situation, it was good enough for me. ** This doesn’t work with servers or non-custom software though, sorry folks. **

But this points to another industry issue: why were our tools not set up to see everything already? How can we tell if our tool has complete coverage? We (theoretically) should be able to reach all assets with every security tool, but this is not the case at most enterprises, I assure you.

Undeployed Code

This might sound odd, but the more places I looked, the more I found code that was undeployed or "not in use" (whyyyyyyy is it in prod then?), projects that were paused, things that had "been archived" (except they weren't marked that way), etc. I asked around, and it turns out this is common; it's not just that one client… it's basically everyone. Code all over the place, with no labels or other useful data about where else it may live.

Then I went on Twitter, and it turns out there isn't a common mechanism to keep track of this. WHAT!??!?! Our industry doesn't have a standardized place to record what code is where, whether it's paused, whether it's just an example, whether it's deployed, etc. I feel that this is another industry-level problem we need to solve; not a product we need to buy, but part of the system development life cycle that ensures this information is tracked. Perhaps a new phase or something?

Lack of Incident Response/Investigation Training

Many people I spoke to who were part of the investigations did not have training in incident response or investigation. This includes operations folks and software developers who had no idea what we needed or wanted from them during such a crucial moment. When I first started responding to incidents, I was also untrained. I've honestly not had nearly as much training as I would like, with most of what I have learned coming from on-the-job experience and job shadowing. That said, I created a FREE mini course on incident response that you can sign up for here. It can at least teach you what security wants and needs from you.

The most important part of an incident is appointing someone to be in charge (the incident manager). I saw too many places where no one person was IN CHARGE of what was happening: multiple people giving quotes to the media, to customers, or to other teams; different status reports that didn't make sense going to management. If you take one thing away from this article, it should be that you really need to speak with one voice when the crap hits the fan…

No Shields

For those attempting to protect very old applications (for instance, any apps using log4j 1.x versions), you should consider getting a shield for your application. And by "shield" I mean put it behind a CDN (Content Delivery Network) like CloudFlare, behind a WAF (Web Application Firewall), or behind a RASP (Runtime Application Self-Protection).

Is putting a shield in front of your application as good as writing secure code? No. But it's way better than nothing, and nothing is what I saw a lot of while responding and talking to colleagues about log4j: NOTHING to protect very old applications… Which leads to the next issue I will mention.

Ancient Dependencies

Several teams I advised had what I would call "Ancient Dependencies": dependencies so old that the application would require re-architecting in order to upgrade them. I don't have a solution for this, but it is part of why Log4j is going to take a very, very long time to square away.

Technical debt is security debt.

– Me

Solutions Needed

I usually try not to share problems without solutions, but these issues are bigger than me or the handful of clients I serve. These problems are systemic. I invite you to comment with solutions or ideas about how we could try to solve these problems.

I want to talk about Log4j

Lots of people are talking about how Log4J affects servers, but if you subscribe to my newsletter or read my blog, you probably want to know about your apps. Let’s talk about what the problem is, how to figure out if you have it, then what to do about it.

Problem: this Java logging dependency has a vulnerability that allows an attacker to take over your web server and run commands on it. They can run this attack without ever logging in (unauthenticated). This is the scariest possible combination from a security viewpoint.

My face when I understood how scary log4j is.

Do you have this problem? You can search for it in a bunch of ways, but I suggest just going to your code repo and searching for "*log4j*". If you find nothing, ALSO search using some sort of dependency tool. This could be the dependency graph in GitHub, Snyk, OWASP Dependency-Check, WhiteSource, etc. These are also often called "Software Composition Analysis" tools, or SCA for short.
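
If you want to do a first pass by hand, the idea is simple enough to sketch. This is a rough illustration, not an official tool; the file names it looks for are just examples, and a real SCA tool will do a far better job of finding transitive dependencies:

import java.io.IOException;
import java.nio.file.*;

public class Log4jHunt {
    public static void main(String[] args) throws IOException {
        Path root = Paths.get(args.length > 0 ? args[0] : ".");
        try (var paths = Files.walk(root)) {
            paths.filter(Files::isRegularFile).forEach(p -> {
                String name = p.getFileName().toString().toLowerCase();
                try {
                    // Flag jar files named after log4j (e.g. log4j-core-2.14.1.jar),
                    // plus build files that mention it anywhere.
                    if (name.contains("log4j")
                            || ((name.equals("pom.xml") || name.equals("build.gradle"))
                                && Files.readString(p).toLowerCase().contains("log4j"))) {
                        System.out.println(p);
                    }
                } catch (IOException e) {
                    System.err.println("Could not read " + p);
                }
            });
        }
    }
}

Point it at a local checkout of each repo. It's crude, and it only catches the obvious cases, but during an incident "crude and now" beats "perfect and next week".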

Versions 2.x up to and including 2.16 are vulnerable; at the time of writing, 2.17 is the version to upgrade to. Note: 2.15 and 2.16 were the first attempted fixes, but they turned out to be vulnerable as well.

Versions 1.x are only vulnerable if you call the JMSAppender functionality. You can search your code for "JMSAppender" to see if you are calling it. If you are, you are vulnerable. If not, you're good (for this particular vulnerability; 1.x is end-of-life and should be upgraded eventually anyway).

If you’re going to rule it out, make absolutely sure. If you don’t have it, go back to your week and chill. If you do, let’s get into that.

NOTE: Email your security team (appsec team) and let them know you don’t have it. They will be SO HAPPY.

Now, onto what to do if you have it.

Okay, so you have Log4j and you think your week is ruined, but maybe it's not. Add each instance you find to a list, both to document your work and to give to security later. Verify whether the code has actually been deployed somewhere or not. As I did some IR this weekend, most instances I found were undeployed.

If the code has not been deployed anywhere, mark it as "do not deploy" and move on. For anything that HAS been deployed: where? WHERE is it deployed, where does it live? Is it behind a WAF or CDN? If so, add the rules to block this attack. CloudFlare and CloudFront both have them; turn them on!

If you have your own RASP or WAF but there are no rules available yet from your vendor, ask them (and tell them you want one). If nothing is available from the vendor, make your own "virtual patch": work with InfoSec to write a regex that blocks the attack, or fish one off of the internet (you are not the only one with this problem).
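
To give you an idea of the shape of such a rule, here is an illustrative sketch. Big caveat: real WAF rules are far more thorough, and attackers quickly found obfuscated bypasses (things like ${${lower:j}ndi:…}), so treat this as a picture of the concept, not something to deploy as-is:

import java.util.regex.Pattern;

public class VirtualPatchSketch {
    // Naive pattern for the classic payload shape: ${jndi:ldap://...}
    // Real rules must also handle the many obfuscated variants.
    private static final Pattern JNDI_LOOKUP =
        Pattern.compile("\\$\\{\\s*jndi\\s*:", Pattern.CASE_INSENSITIVE);

    public static boolean looksMalicious(String requestValue) {
        return requestValue != null && JNDI_LOOKUP.matcher(requestValue).find();
    }
}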

To be clear, if you make a virtual patch, it is a temporary measure, and you need to 1) monitor it to make sure it's working, and 2) upgrade your version of Log4j as soon as you can. Please don't forget. 😀

Worst case scenario: You have log4J, nothing to help you block it, and it’s a vulnerable version.

It’s go time.

Option 1: "accept the risk" and do nothing to block it. You will still monitor the situation, but that is all. You will instead spend your efforts on releasing the upgraded version of your software as soon as humanly possible. For some organizations, this is the only option. Don't feel bad; put that energy into the update instead. Ensure you test thoroughly; you don't want to release patches like the ones our industry saw during Meltdown/Spectre, which broke the patched systems worse than the vulnerability would have.

Option 2: Shut off the vulnerable systems. Immediately. If your business can have a few systems down until you figure out how to do something better, this *might* present less risk. There are currently many systems all over the internet being turned off, in the short term. There is no shame in doing this if it’s your best option. I’d rather have egg on my face than an exploited server.

Option 3: Go through your code, remove this dependency from your project, then comment out the code that calls it. When you are ready to apply the upgrade/patch, you will add it back and turn it back on. Stop logging, just for now. This is the only situation where I would ever recommend removing logging. Test it thoroughly before deploying, and make sure you don't have any sort of "backup logging" that could interfere or spoil your efforts.
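
For what it's worth, "stop logging, just for now" can be as low-tech as it sounds. A sketch of the idea (the class, method and logger names are made up for illustration):

public class OrderService {
    // Logger removed until log4j is upgraded:
    // private static final Logger log = LogManager.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // log.info("Placing order {}", orderId);  // disabled, re-enable after the upgrade
        // ... business logic unchanged ...
    }
}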


No matter what you decide: Tell the InfoSec team. Tell your management. Do not make these decisions solo.


I made a video about it as well. This topic is important to me, and it should be to you too.

Log4J Affecting Civilians

This vulnerability will affect many systems, including the ones you use at home. Here are some tips to protect your personal devices and home network (tell your friends).

  • Apply updates. Especially this week and next week. If your computer, phone or any piece of software wants an update, say yes. Updates in general are great; they often contain security fixes as well as new features.
  • At some point this week make time to call your internet provider and ask if your router or modem has any updates for log4j. A lot of *devices* are going to be vulnerable, and most of us forget about our modems.
  • If you have “smart” devices at home, check for updates on them too!

More tips!

While we’re at it, here are a few tips for securing your digital self (again, tell your friends!):

  • Turn on Multi-Factor Authentication (MFA), also called two-factor authentication (2FA), on all of the online accounts that are important to you: your banking accounts, government services, shopping accounts that have a credit card saved, etc.
  • Get a password manager. Then change and save your passwords into it, one by one, as you visit all your favourite sites. Let it auto-generate unique passwords for you.
  • Think twice before you click on a link in an email if you were not expecting to receive that email. Verify who it’s from, that it’s a legitimate website, and that the link starts with a domain that you recognize. If you’re not sure, copy the link into google.com (not the address bar, go to the search engine website) then add “phishing” to your search and see what it says.
  • Never give personal information, such as pictures of your ID, your social insurance number, your home address, date of birth, etc. to anyone on the internet.

My Career Story

Me, smiling

I started coding at 17 years old, and it was love at first sight.

I got great marks in all of my classes in high school, but loved computer science because in every class, I could “make something out of nothing.” Computer science runs deep in my family as almost all of my aunts and uncles are computer scientists, and my cousins are engineers, scientists and programmers. When I announced that I wanted to go to college for computer science my family responded with “what else would you take?” It wasn’t until years after working in tech that I realized that this is not an experience that most young women share.

I landed my first job in tech at age 18, and haven't stopped since, despite several career setbacks, harassment and toxic work environments. I realize this might not sound very encouraging, but I have to tell you: things in tech have really improved. I've had the good fortune of working in a variety of different situations, both in computer science and in my other passion, music. Both careers taught me the value of collaborating with others, confronting differences, and taking constructive criticism well. They also gave me the benefit of becoming more resilient when it comes to unpleasant situations or less-than-constructive comments made in the workplace.

For many years, I was a programmer by day and a musician at night. My successful music career allowed me to play in countless venues and bars around town, and it taught me many lessons that have since turned out to be very helpful in tech, such as how to handle hecklers, how to capture the attention of a drunk and belligerent crowd, and what the best way to throw someone off a stage is. As you can imagine, there were challenges to being a young 20-something woman in a hardcore punk band.

Later in my career I met an ethical hacker who was also in a band, and we became friends. He spent the next year and a half convincing me to join him as his apprentice and learn how to hack. I became fascinated with the security of software; I wanted to know everything. I joined my local OWASP chapter and almost immediately became a chapter leader, which helped me greatly, since I had the chance to invite experts on topics that interested me to come speak for us. I also met my next 3 professional mentors through OWASP, who taught me even more. OWASP is an incredibly supportive and amazing community; I strongly recommend that everyone join their local chapter.

OWASP Montreal, I drove there with my mom to speak at lunch time. I missed a day of work for it.

At this point in my career I felt like I had a thirst for knowledge that could not be quenched. Although I managed to switch over from software development to a full time security job, I was frustrated that there was no budget for me to go on the types of advanced training that I was interested in. Then one of my professional mentors convinced me to speak at a conference, and they let me in FOR FREE.

For the next 2 years, I spoke at meetups and local events, taught myself as much as I could, and worked in application security helping developers make more secure apps. I loved it, but I kept striving for more. I wanted to do more modern types of application security, and I realized that the organizations I worked for were not very modern and were resistant to change. I found that my drive and ambition were difficult for certain managers, and it became a point of friction for me in the workplace.

Then I broke through from meetups into speaking at conferences. I honestly couldn't believe it when I received the email saying that I had been accepted to speak at AppSec EU, the international OWASP conference. I discovered that all of my musical stage-performance skills transferred over and that, with all of my practice at meetups, I had become good at public speaking. After AppSec EU, I had invitations to speak all over the world. As conferences started sending me plane tickets, I took time off work and went off to learn for free. I realized that a career shift was necessary. I knew that I had something to offer to the right employer, but I wasn't quite sure what that would be… Then Microsoft reached out to me.

A Microsoft representative said that he had heard about me, and wanted to interview me for a “Developer Advocate” position. I had no idea at that point that “developer relations” was a job, and when he described what the job would be I said “I already do that, for free.” It took him about 20 minutes to convince me that he was not kidding, this was a real job, and he was actually from Microsoft. Before I knew it, I was traveling the planet, learning about cloud security, working with absolutely brilliant people and so much more. All the while I was *getting paid* to do it! Talk about a dream!

During my years traveling and talking to the community, I learned a lot about my industry, both good and bad. I learned that software developers had a lot of aches and pains in regard to security that I had also felt when I was a developer, and especially during my work in incident response and AppSec. My goal in being a developer and cloud advocate was to help push the industry forward, and to help people create more secure software, everywhere. During this time I founded the #CyberMentoringMonday online initiative and the WoSEC (Women of Security) organization, released countless articles, videos and podcasts, and spoke regularly at security events. Although I definitely felt I was helping many people in my industry, I felt like I could do even more. The constant travel was extremely exciting, but also exhausting, and perhaps not the most efficient way to help the most people. I wanted to figure out how to make a bigger difference, and 'scale' myself in a more effective manner.

With that in mind, I started to devise a plan: focus my efforts in a more concise way in order to deliver more impact. Do fewer things, but do those things in a very big way. I decided to choose two big goals: write a book and start my own company. And I decided I would just go for it, even if it was scary.

I realized at this point that I was going to have to leave Microsoft to pursue my new career goals. I decided to start my own online training academy, We Hack Purple. We have a podcast, a community and courses; it's a dream come true!

I am also in the process of writing my first book! It’s an intro to AppSec, “Alice and Bob Learn Application Security”, and I’m excited to share it with the community at large when it’s ready. Even though I am at the very beginning of both of these adventures, you better believe I plan to knock them out of the park! ** Alice and Bob Learn AppSec is now available worldwide!

If I can offer advice to you it is this: if you want it, go get it. Don’t let anyone tell you that you can’t reach greatness; you can, you just need to be prepared to work like you’ve never worked before. The Information Security industry needs all the help it can get, and we definitely need you. Yes you, the person reading this right now. Please join us, and help us make the world a better and more secure place.

I have a mailing list, please subscribe, it’s free!

#CyberMentoringMonday

WoSEC Ottawa

Some people have been asking me online how to be a good mentor. Here are some thoughts for all of you. 😀

Some mentees don’t listen, and are not willing to put in the work. Some of them will astound you and excel beyond your wildest dreams. The key is finding a good match for you, and for them.

It’s your job as a mentor to try to help your mentee any way you can. That can be through advice, loaning them a book, sharing resources, introducing them to people that can help them, referring them for a job (if appropriate) or other opportunities.

WoSEC Ottawa— Women of Security

Example: I wrote an essay to explain to a conference why one of my mentees deserved a diversity grant. She has worked SO HARD to teach herself and change careers. She won the grant because of her hard work, AND my essay. It took me 30 minutes, and she benefited.

Example 2: I brainstormed talk ideas with a mentee, then she built an amazing proof of concept. I asked a conference that I was keynoting to book her, even though she’d never spoken before. She was AMAZING! Out of this world! I knew she would be good, but she was 10 times better than I would have dared to hope for.

Example 3: When I’m invited to speak somewhere but cannot make it, I ask if they would like me to recommend someone else. I have a list of people who are not well-known, but who are amazing. I always recommend one of them to take my place. I advocate for them.

Example 4: I asked a friend to let one of my mentees into his very expensive training for free, and he said yes. I let her stay in my hotel room with me so she could afford the trip. It cost me one favour and sharing my room to give her a huge leg up for her career.

I use the power and privileges of my current role to help others, and you can too. You may not even realize how much power you have until you start helping someone.

Sometimes it’s recommending or loaning someone the right book. Sometimes it’s about letting them have a place in your training, workshop, talk, or conference for free. Sometimes it’s helping them when they are stuck at work on a technical problem and you give them the answer. Maybe you will introduce them to the person who will hire them some day. It’s about helping however you can.

The key with mentoring is that they can trust you, and that you have their best interests at heart. It’s not about being perfect or knowing everything. It’s about your motivations.

Good luck folks!

#CyberMentoringMonday

Security bugs are fundamentally different than quality bugs

This topic has come up a few times this year in question period: arguments that quality bugs and security bugs ‘have equal value’, that security testing and QA are ‘the same thing’, that security testing should ‘just be performed by QA’ and that ‘there’s no specific skillset’ required to do security testing versus QA. This post will explain why I fundamentally disagree with all of those statements.

First some definitions.

A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.

A security bug is specifically a bug that causes a vulnerability. A vulnerability is a weakness which can be exploited by a Threat Actor, such as an attacker, to perform unauthorized actions within a computer system.

QA looks for software bugs (any kind); security testers look for vulnerabilities. This is the main difference, their goals.

Just as all women are human beings but not all human beings are women: all security bugs are defects, but not all defects are security bugs.

Now let’s dissect each of the claims above.

1. Quality bugs and security bugs ‘have equal value’.

If a security bug leads to a low-risk vulnerability, it does not have 'the same value' as a non-security bug that is making the system crash over and over. Likewise, a security bug that creates the potential for a data breach, or worse, is not equivalent to the fonts not matching from page to page. I am of the opinion that security bugs are more likely to cause catastrophic business harm than regular bugs, because once your system has fallen under the control of a malicious actor, creativity is the only limit. Malicious actors never cease to amaze me with the damage they can do.

Someone is wearing camouflage. — #MSIgniteTheTour, Toronto, 2019

2. Security testing and QA are ‘the same thing’

The goals of security testing and quality assurance testing are different, which I feel makes them obviously different (if they were the same, why would they not be called the same thing?). However, I want to dig deeper into this idea.

Security is a part of quality.

I often say "security is a part of quality", because I believe this to be true. You cannot have a high-quality product that is insecure; it is an oxymoron. If an application is fast, beautiful and does everything the client asked for, but someone breaks into it the first day that it is released, I don't think you will find anyone willing to call it a high-quality application.

There are many different types of testing:

· Unit Testing — small, automatable tests that verify a small unit of code (a function or subroutine) does the one thing it is supposed to do (see the sketch after this list).

· Integration Testing — tests between different components to ensure they work well together. Larger than unit tests, but less intense than end-to-end tests.

· End-To-End Testing — ensuring the flow of your application from start to finish is as expected

· User Acceptance Testing (UAT) — manual and/or automated testing of client requirements (often used interchangeably with 'QA').

· User Experience Testing (UX) — verifying that the application or product is easy to use and understand from a user perspective.

· Regression Testing — verifying that new changes have not broken anything that was already tested, a ‘retesting’ of all previously released functionality

· Stress/Performance/Load Testing — verifying your application can handle large amounts of usage/traffic while continuing to perform well, generally performed using software tools (although each of these three has slight differences, they are all generally lumped together).

· Security Testing — a mix of manual and automated testing, using one or more tools, with the aim of finding vulnerabilities within applications.

There are more types of testing, but I think you get the point.
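
To make the first item on that list concrete, here is a minimal unit test sketch using JUnit 5 (TaxCalculator and its addTax method are invented for illustration; they are not from any real codebase):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class TaxCalculator {
    static double addTax(double price) {
        return price * 1.13; // hypothetical 13% sales tax
    }
}

class TaxCalculatorTest {
    @Test
    void addsThirteenPercentTax() {
        // One small unit, one expected behaviour: 13% tax on $100.00 is $113.00.
        assertEquals(113.00, TaxCalculator.addTax(100.00), 0.001);
    }
}

A security test of the same function would ask very different questions (what happens with a negative price, or an enormous one?), which is rather the point of this article.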

Some or all of these types of testing can be used to verify that a product is of high quality, and security is just one part. Therefore, security testing and QA are not 'the same thing.'

3. Security testing ‘should be performed by QA’

For each one of the types of testing listed above a different skillset is required. All of them require patience, attention to detail, basic technical skills, and the ability to document what you have found in a way that the software developers will understand and be able to fix the issue(s). That is where the similarities end. Each one of these types of testing requires different experience, knowledge, and tools, often meaning you need to hire different resources to perform the different tasks. Also, we can’t concentrate on everything at once and still do a great job at each one of them.

Although theoretically you could find one person who is both skilled and experienced in all of these areas, it is rare, and that person would likely be costly to employ as a full-time resource. This is one reason that people hired for general software testing are not often also tasked with security testing. Another reason is that people who have the experience and skills to perform thorough and complete security testing are currently a rarity (there is a skills shortage), while as an industry we are lucky to have quite a number of skilled QA professionals, making them easier to hire and staff. Lastly, the time, training and experience it takes to become a security tester is much harder to acquire than what it takes to become a general software tester.

Training on how to perform security testing is extremely expensive and difficult to find, it generally takes longer to learn as a skill than other types of testing, and there are fewer opportunities to get into that industry when compared to QA. Thus it is more difficult to become a security tester than a general tester. Scarce resources, high demand and expensive training mean it costs significantly more to hire security testers than it does to hire general software testers.

All of these facts lead up to the reality that it is cost-prohibitive to staff your QA team with professionals who are skilled and experienced in both QA and security testing. Doing so would also create a single point of failure for testing in your organization, which will not save you money in the long run.

#MSIgniteTheTour, Toronto, 2019

Another point on this topic: those who work in the security industry are likely to have a preference for their area of focus, security, and may be unwilling to perform other types of work outside their area of concentration (people who specialize generally want to work within their specialization whenever possible, and security testing is a specialization).

4. ‘There’s no specific skillset’ required to do security testing versus QA.

First of all, I feel this statement is insulting to QA testers, as though they do not have a specific skillset that makes them good at what they do. I don't believe that to be true. I suspect that when people make this argument, it is out of frustration with our industry, because I honestly cannot fathom someone thinking that security testing does not require specific experience, training or skills; otherwise there would be no skills shortage and it would not be a high-paying job. Security testing is a specialization within the field of testing, just as there are specializations within any field, and by definition it requires more knowledge and training to build the skillset needed to do the job.

I do not intend to downplay the value of QA testing, only to explain that quality assurance is different from ensuring that a product is secure. I should also say that I feel hacking is sometimes glorified on television, in the media, and in our industry as a whole, in a way that isn't logical to me. Security testing is very important, but I do not believe that hackers are superior to other professionals who work in IT. In fact, I choose to focus my career on AppSec, DevSecOps and other types of defence because I truly believe it is more important that we write secure code than that we 'hack all the things'. Security is so much more than just security testing (ethical hacking); it is secure design, secure coding, threat modelling, etc.

I feel comments like this (#4) are not based on facts, but feelings, and it’s difficult to debate with someone when that is the case.

It is okay if we disagree on this topic. Debate is good and healthy, and I would love to hear your feelings, thoughts and ideas in the comments.

At this point I’d like to remind you all that security is everybody’s job. Not only is it everyone’s responsibility to do their job in the most secure way they know how, but having many different people look at something with security in mind can help us find new and different problems that may have otherwise been missed.

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Security Headers for ASP.Net and .Net CORE

Website report showing we received an A

For those who do not follow me or Franziska Bühler: we have an open source project together called OWASP DevSlop, in which we explore DevSecOps through writing vulnerable apps, creating pipelines, publishing proofs of concept, and documenting what we've learned on our YouTube channel and our blogs. In this article we will explore adding security headers to our proof of concept website, DevSlop.co. This blog post is closely related to Franziska's post OWASP DevSlop's journey to TLS and Security Headers. If you like this one, read hers too. 🙂

Franziska Bühler and I installed several security headers during the OWASP DevSlop Show, in Episodes 2.1 and 2.2. Unfortunately, we found out that .Net Core apps don't have a web.config, so the next time we published, it wiped out the beautiful headers we had added. Although that is not good news, it was another chance to learn, and it gave me a great excuse to finally write the security headers blog post I had been promising. Here we go!

Our web.config looked so…. Empty.

I have now added the headers back, but this time I added them to the startup.cs file in my .Net Core app, which you can watch here. Special thanks to Damien Bod for help with the .Net Core twist.

If you want in-depth details about what we did on the show and what each security header means, you should read Franziska’s blog post. She explains every step, and if you are trying to add security headers for the first time to your web.config (ASP.Net, not .Net CORE), you should definitely read it.

The new code for ASP.Net in your web.config looks like this:

<!-- Start Security Headers -->
<httpProtocol>
  <customHeaders>
    <add name="X-XSS-Protection" value="1; mode=block"/>
    <add name="Content-Security-Policy" value="default-src 'self'"/>
    <add name="X-Frame-Options" value="SAMEORIGIN"/>
    <add name="X-Content-Type-Options" value="nosniff"/>
    <add name="Referrer-Policy" value="strict-origin-when-cross-origin"/>
    <remove name="X-Powered-By"/>
  </customHeaders>
</httpProtocol>
<!-- End Security Headers -->

Our new-and-improved Web.Config!

And the new code for my startup.cs (.Net Core) looks like this (thank you, Damien Bod):

// Security headers make me happy
// HSTS: tell browsers to use HTTPS only for the next 365 days, subdomains included
app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
// Stop browsers from MIME-sniffing responses away from the declared content type
app.UseXContentTypeOptions();
// Don't leak our URLs to other sites via the Referer header
app.UseReferrerPolicy(opts => opts.NoReferrer());
// Turn on the browser's XSS filter, in blocking mode
app.UseXXssProtection(options => options.EnabledWithBlockMode());
// X-Frame-Options: DENY; no one may put our site in a frame (clickjacking defence)
app.UseXfo(options => options.Deny());

// Content Security Policy: lock each source type down to our own origin
app.UseCsp(opts => opts
    .BlockAllMixedContent()
    .StyleSources(s => s.Self())
    .StyleSources(s => s.UnsafeInline()) // inline styles allowed, for now
    .FontSources(s => s.Self())
    .FormActions(s => s.Self())
    .FrameAncestors(s => s.Self())
    .ImageSources(s => s.Self())
    .ScriptSources(s => s.Self())
);
// End Security Headers

Our beautiful security headers!

In future episodes we will also add:

  • Secure settings for our cookies
  • X-Permitted-Cross-Domain-Policies: none
  • Expect-CT: (not currently supported by our provider)
  • Feature-Policy: camera ‘none’; microphone ‘none’; speaker ‘self’; vibrate ‘none’; geolocation ‘none’; accelerometer ‘none’; ambient-light-sensor ‘none’; autoplay ‘none’; encrypted-media ‘none’; gyroscope ‘none’; magnetometer ‘none’; midi ‘none’; payment ‘none’; picture-in-picture ‘none’; usb ‘none’; vr ‘none’; fullscreen *;

For more information on all of these security headers, I strongly suggest you read the OWASP Security Headers Guidance.

We now have good marks from all of the important places (https://securityheaders.com, https://www.ssllabs.com and http://hardenize.com), but hope to improve our score even further.

For more information, watch our show! Every Sunday from 1–2 pm EDT, on Mixer and Twitch, and recordings are available later on our YouTube channel.

Please use every security header that is available and applicable to you.

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Hacking Robots and Eating Sushi

I recently had dinner with an old friend, Jesse Hones, the Engineering Manager of Systems / Senior Software Developer at Aprel. I remember when we first met he explained that he designed and programmed robots to measure radio frequencies at extremely precise levels. Fast forward a decade: I am an ethical hacker and he is designing more complex robots than ever before. So I did what anyone would do; I asked him to come on the OWASP DevSlop show and talk about hacking robots.

Jesse Hones and one of his many robots.

His answer was “Not yet. I can’t tell you what all the loopholes are, because I still need those to get my job done.”

Interest piqued.

He explained that previously, robot firmware was all custom; each system was its own unique snowflake. Like custom software today, it was rife with vulnerabilities, but you could only hack them one system at a time. Recently this has changed, he explained: things are standardizing, and many robots use the same components, which all run the same firmware. I asked if this was like Windows XP back in the day, in that almost everyone was running it, so when a bug was discovered *every* system was vulnerable. He said yes.

But Jesse is a developer, not a security person, so he looks at it with the “I need to make sure this runs properly” lens, not the “I want to make this robot do my bidding!” viewpoint of an ethical hacker.

More of Jesse’s robots.

Obviously, I had to threat model the situation immediately. Poor Jesse.

Me: What if malware is created that stops production for all affected robots?

Jesse: Yes, this would be costly.

Me: What about ransomware?

Jesse: <unhappy face>

Me: What if someone takes over the robots and has them implant something in every 20th chip it makes to spy on the users? As a supply chain attack?

Jesse: Yes, that would be very bad. However this could change soon; we may be switching over to Windows Embedded.

Me: Okay…. But what if a company used robots Stuxnet style and slowly sabotaged their competitors? So they could never quite finish their R&D on a product? Meanwhile stealing their ideas? Think Stuxnet meets Schindler’s List.

Jesse: …

Me: What if someone uses them to mine bitcoin? Robot Crypto Pirates!

Jesse: I guess tha-

Me: What if robots become the weak point of most networks and are used regularly as pivot points by hackers?

At this point Jesse has resorted to quietly waiting for me to calm down. I look at him.

Me: What if a robot murders someone?

Jesse: That would be the end. The industry could not survive. That cannot be allowed to happen. Ever.

Sounds like we need more robot hackers.

For content and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Why I Love Password Managers

Tanya waving

** This article is for beginners in security or other IT folk, not experts. 😀

Passwords are awful. The software security industry expects us to remember 100+ passwords that are complex (variations of upper and lowercase letters, numbers and special characters), that are supposed to be changed every 3 months, and that are each unique. Obviously this is impossible for most people, and for those for whom it is possible, why would they want to waste all of that brain power on something that is, essentially, meaningless?

Comic illustrating the need for password security.
I love XKCD and so should you: https://xkcd.com/936/

That's right, the password itself means nothing. The purpose of the password is to authenticate the user: to prove that *you* are the real, authentic you. Not another person with the same name or birthday, but the person who owns the account being logged into. The person whose money is in that bank account. The person who tweets all those tweets.

I realize that the security industry is wise to this issue, and NIST has updated its password advice, but that still leaves many applications doing things the old way and programmers continuing to implement the old security advice. The result is password reuse: people using the same password over and over, for most or all of their accounts. Last month I heard a speaker claim that the most common password has changed from "Password1" to "Autumn2018", "Winter2019" and so on, updated every third month. Tragic.

The reason this is a problem is that once one account is breached or a password is stolen, that email and password combo (known as credentials) is likely to work in many, many other places. "Credential stuffing" is the term for when criminals or other bad actors steal many credentials and use scripts to try them all against a larger site, with malicious intent. These attacks are often wildly successful, which makes password reuse very scary from a defender's perspective.

At least 1% of what I know comes from XKCD: https://xkcd.com/792/

This is where password managers come in. Password managers allow users to generate long and complex passwords, as long and complex as the software will allow, and they remember all of them, keeping them in an encrypted vault. When users go to log into something, they either press a button in the browser to have the password manager fill everything in for them, or they open the password manager, enter the one single password they need to know, and access all of their secrets.

Password managers can protect you against several types of attacks:

  • Password reuse attack (if all of your passwords are different, if one account is breached, the rest are fine)
  • Phishing attacks that target your accounts using URLs that are similar to ones you already use. When you go to the fake URL your password manager will not recognize it, and this should tip you off that you are under attack
  • Brute force attacks; if you are always using very long and complex passwords (because you don't need to remember them), it would take forever for a brute force attack to uncover your password (see the sketch below).
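
To show why that last point works, here is a sketch of the kind of generator a password manager uses. This is a minimal illustration (the class and method names are made up, and it is not any specific product's code), but the key ingredient is real: a cryptographically secure random number generator.

import java.security.SecureRandom;

public class PasswordSketch {
    private static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*()-_=+";
    private static final SecureRandom RNG = new SecureRandom();

    public static String generate(int length) {
        // Pick each character independently and uniformly at random.
        StringBuilder sb = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            sb.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
        }
        return sb.toString();
    }
}

A 32-character password drawn from that 76-character alphabet has 76^32 (roughly 2^200) possibilities; no brute force attack is getting through that in our lifetime.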

Below is a non-exhaustive list of password managers. Some are free, some are not. Either way, go get one so you can stop wasting brain power on boring things like remembering your passwords.

If you work in an IT environment, you absolutely must have a password manager. I strongly suggest that anyone who uses a computer regularly and has multiple passwords to remember get one, even if you don't consider yourself tech savvy. Put every single password in there, change all the passwords you used to have to long, randomly-generated ones, and ensure the password you use for your password manager is a passphrase that is an entire sentence (such as: "I work with NeuraLegion and I like them a lot!" or "Tanya Janca is my favorite blog writer and her jokes are never self-deprecating").

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

VAs, Scans and PenTests: not the same thing

I'd like to define a few terms that are often confused in the application security industry: Vulnerability Assessment (VA), Vulnerability Scan (VA Scan) and Penetration Test (PenTest). They are often used interchangeably, and the differences do not seem to be well understood; I have seen this misunderstanding used against many clients who have purchased these services, and I am hoping clear definitions will help us all.

A Vulnerability Assessment (VA) (sometimes called a security assessment) is an assessment of the security of a system, in an attempt to find all possible vulnerabilities. It generally involves using multiple scanning tools, manual exploration and evaluation, as well as examination of all security controls (a lock on a door, a login screen, and input validation are all security controls). The assessor does not exploit vulnerabilities that are found (for instance, they see the door is unlocked, but they do not enter); they just report them, along with information on how to fix each of the vulnerabilities. This sometimes includes a security review of the design and/or threat modelling, questionnaires or interviews, and generally takes days or weeks, not hours or minutes. Sometimes the security assessor will create a proof of concept (POC) to explain a vulnerability with more clarity, but to be clear, that is not the focus of this exercise.

In the past, when I was hired to do a penetration test, I would often describe a VA to confirm that was what the client actually wanted, and they would say "yes, do that". My contract would say "PenTest", but I would conduct a vulnerability assessment.

I also often had requests for "a quick VA" or a "VA Scan", which, as it turns out, meant "one scan with a vulnerability assessment tool" and no other activities, such as manual investigation of the results. This can be done in as little as a few hours, or even minutes if your target is small, and the person performing the task does not need advanced training or skills. There are many VA tools on the market; Nessus, Nexpose, OpenVAS and Azure Security Center (for Azure cloud infrastructure only) are all used for scanning infrastructure, while Microsoft Security Risk Detection, Burp Suite, Zed Attack Proxy, NetSparker, Acunetix, AppScan and AppSpider are for scanning web apps. Doing "a quick scan" with any of these tools will net you a list of vulnerabilities, and many of them will be true positives (as opposed to false positives); it is most certainly a worthwhile venture. It is not, however, as thorough as a Vulnerability Assessment or Penetration Test, and many other issues will remain uncovered if you leave it at that.

I also enjoy infrastructure as code, from time to time

A Penetration Test is another beast entirely. A PenTest seeks to find vulnerabilities and then exploit them, to prove real-world risk. Sometimes penetration tests can cause damage (exploits, if not done very carefully, can leave a mess), and sometimes the scope of a PenTest calls for the tester to collect "trophies" to prove they did the things they claim.

It is very rare that I write an exploit or feel the need to exploit vulnerabilities I find when testing.* Most of the times in my career when I have exploited something, everyone just ended up pissed off at me: from the first PenTest I ever did as a sub-contractor, when I ruined a live prod server and the person who hired me had to explain what happened; to creating proof-of-concept exploits that embarrassed management into doing "the right thing"; to breaking a Drupal CMS site so badly that they had to restore the database AND the app server (Drupal itself was completely unusable) from backup. It's nice that I impressed people, but I would honestly prefer to spend that extra time helping the developers fix what I found and re-testing the fixes, rather than showing off whatever talent I have for burning things down.

Special note on ethics: I have seen many consultants who offer these services pass off a quick scan as a full VA or PenTest, charging for 10 days what took them only 1 day to perform. I have also seen many of these same consultants sub-contract the work out to others who they pay less (and with whom they share your sensitive data!), but whom they do not credit in the reports or contracts, resulting in you having no idea who had access to your systems and data. When writing contracts for such services, it would be wise to be explicit about what you are paying for, as well as who will do the work and what information must remain confidential. I am sad to report that I have met many consultants who have bragged about these (in my opinion) unethical practices. Buyer beware.

I would suggest that performing a proper VA against all of your custom applications, as well as large COTS implementations (Customizable Off The Shelf systems, such as SharePoint), is a best practice for enterprise businesses. Not only would you be amazed at the things you find, but (assuming you fix the issues) you will have taken serious measures toward avoiding a data breach in the future, as insecure software is still, sadly, the top reason for data breaches (as per the Verizon Data Breach Investigations Reports of 2016, 2017 and 2018).

I hope this article helps instill a bit of clarity in our industry.

* When I did testing, I did exploit XSS using alert boxes regularly, because it's 100% safe to do so, and also blind SQL injection with timers and errors. But to be clear, I am very careful to only perform safe exploits when testing. I can feel myself putting my foot right into my mouth with this note…

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!