My Career Story

Me, smiling

I started coding at 17 years old, and it was love at first sight.

I got great marks in all of my classes in high school, but loved computer science because in every class, I could “make something out of nothing.” Computer science runs deep in my family as almost all of my aunts and uncles are computer scientists, and my cousins are engineers, scientists and programmers. When I announced that I wanted to go to college for computer science my family responded with “what else would you take?” It wasn’t until years after working in tech that I realized that this is not an experience that most young women share.

I landed my first job in tech at age 18 and haven’t stopped since, despite several career setbacks, harassment, and toxic work environments. I realize this might not sound very encouraging, but I have to tell you: things in tech have really improved. I’ve been fortunate to work in a wide variety of situations, both in computer science and in my other passion, music. Both careers taught me the value of collaborating with others, confronting differences, and taking constructive criticism well. They also made me more resilient in the face of unpleasant situations and less-than-constructive comments in the workplace.

For many years, I was a programmer by day and a musician at night. My successful music career allowed me to play countless venues and bars around town, and it taught me many lessons that have since proven helpful in tech, such as how to handle hecklers, how to capture the attention of a drunk and belligerent crowd, and the best way to throw someone off a stage. As you can imagine, there were challenges to being a young 20-something woman in a hardcore punk band.

Later in my career I met an ethical hacker who was also in a band, and we became friends. He spent the next year and a half convincing me to join him as his apprentice and learn how to hack. I became fascinated with the security of software; I wanted to know everything. I joined my local OWASP chapter and almost immediately became a chapter leader, which helped me greatly, since I could invite experts on topics I was interested in to come speak for us. I also met my next three professional mentors through OWASP, who taught me even more. OWASP is an incredibly supportive and amazing community; I strongly recommend that everyone join their local chapter.

OWASP Montreal, I drove there with my mom to speak at lunch time. I missed a day of work for it.

At this point in my career I felt like I had a thirst for knowledge that could not be quenched. Although I managed to switch over from software development to a full time security job, I was frustrated that there was no budget for me to go on the types of advanced training that I was interested in. Then one of my professional mentors convinced me to speak at a conference, and they let me in FOR FREE.

For the next two years, I spoke at meetups and local events, taught myself as much as I could, and worked in application security, helping developers make more secure apps. I loved it, but I kept striving for more. I wanted to practice more modern types of application security, and I realized that the organizations I worked for were not very modern and were resistant to change. My drive and ambition were difficult for certain managers, and that became a point of friction for me in the workplace.

Then I broke through from meetups into speaking at conferences. I honestly couldn’t believe it when I received the email saying I had been accepted to speak at AppSec EU, the international OWASP conference. I discovered that my stage performance skills from music transferred over, and that all my practice at meetups had made me good at public speaking. After AppSec EU, I had invitations to speak all over the world. As conferences started sending me plane tickets, I took time off work and went off to learn for free. I realized that a career shift was necessary. I knew I had something to offer the right employer, but I wasn’t quite sure what that would be… Then Microsoft reached out to me.

A Microsoft representative said that he had heard about me, and wanted to interview me for a “Developer Advocate” position. I had no idea at that point that “developer relations” was a job, and when he described what the job would be I said “I already do that, for free.” It took him about 20 minutes to convince me that he was not kidding, this was a real job, and he was actually from Microsoft. Before I knew it, I was traveling the planet, learning about cloud security, working with absolutely brilliant people and so much more. All the while I was *getting paid* to do it! Talk about a dream!

During my many years traveling and talking to the community, I learned a lot about my industry, both good and bad. I learned that software developers had many of the same aches and pains regarding security that I had felt as a developer, and especially during my work in incident response and AppSec. My goal as a developer and cloud advocate was to help push the industry forward and to help people create more secure software, everywhere. During this time I founded the #CyberMentoringMonday online initiative and the WoSEC (Women of Security) organization, released countless articles, videos, and podcasts, and spoke regularly at security events. Although I definitely felt I was helping many people in my industry, I felt I could do even more. The constant travel was extremely exciting, but also exhausting, and perhaps not the most efficient way to help the most people. I wanted to figure out how to make a bigger difference and ‘scale’ myself more effectively.

With that in mind, I started to devise a plan: focus my efforts in a more concise way in order to deliver more impact. Do fewer things, but do those things in a very big way. I decided to choose two big goals: to write a book and to start my own company. And I decided I would just go for it, even if it was scary.

I realized at this point that I was going to have to leave Microsoft to pursue my new career goals. I decided to start my own online training academy, We Hack Purple. We have a podcast, a community, and courses; it’s a dream come true!

I am also in the process of writing my first book! It’s an intro to AppSec, “Alice and Bob Learn Application Security”, and I’m excited to share it with the community at large when it’s ready. Even though I am at the very beginning of both of these adventures, you better believe I plan to knock them out of the park! (Update: Alice and Bob Learn Application Security is now available worldwide!)

If I can offer advice to you it is this: if you want it, go get it. Don’t let anyone tell you that you can’t reach greatness; you can, you just need to be prepared to work like you’ve never worked before. The Information Security industry needs all the help it can get, and we definitely need you. Yes you, the person reading this right now. Please join us, and help us make the world a better and more secure place.

I have a mailing list, please subscribe, it’s free!


WoSEC Ottawa

Some people have been asking me online how to be a good mentor. Here are some thoughts for all of you. 😀

Some mentees don’t listen, and are not willing to put in the work. Some of them will astound you and excel beyond your wildest dreams. The key is finding a good match for you, and for them.

It’s your job as a mentor to try to help your mentee any way you can. That can be through advice, loaning them a book, sharing resources, introducing them to people who can help them, or referring them for a job (if appropriate) or other opportunities.

WoSEC Ottawa— Women of Security

Example: I wrote an essay to explain to a conference why one of my mentees deserved a diversity grant. She has worked SO HARD to teach herself and change careers. She won the grant because of her hard work, AND my essay. It took me 30 minutes, and she benefited.

Example 2: I brainstormed talk ideas with a mentee, then she built an amazing proof of concept. I asked a conference that I was keynoting to book her, even though she’d never spoken before. She was AMAZING! Out of this world! I knew she would be good, but she was 10 times better than I would have dared to hope for.

Example 3: When I’m invited to speak somewhere but cannot make it, I ask if they would like me to recommend someone else. I have a list of people who are not well-known, but who are amazing. I always recommend one of them to take my place. I advocate for them.

Example 4: I asked a friend to let one of my mentees into his very expensive training for free, and he said yes. I let her stay in my hotel room with me so she could afford the trip. It cost me one favour and sharing my room to give her a huge leg up for her career.

I use the power and privileges of my current role to help others, and you can too. You may not even realize how much power you have until you start helping someone.

Sometimes it’s recommending or loaning someone the right book. Sometimes it’s about letting them have a place in your training, workshop, talk, or conference for free. Sometimes it’s helping them when they are stuck at work on a technical problem and you give them the answer. Maybe you will introduce them to the person who will hire them some day. It’s about helping however you can.

The key with mentoring is that they can trust you, and that you have their best interests at heart. It’s not about being perfect or knowing everything. It’s about your motivations.

Good luck folks!


Security bugs are fundamentally different from quality bugs

This topic has come up a few times this year in question period: arguments that quality bugs and security bugs ‘have equal value’, that security testing and QA are ‘the same thing’, that security testing should ‘just be performed by QA’ and that ‘there’s no specific skillset’ required to do security testing versus QA. This post will explain why I fundamentally disagree with all of those statements.

First some definitions.

A software bug is an error, flaw, failure or fault in a computer program or system that causes it to produce an incorrect or unexpected result, or to behave in unintended ways.

A security bug is specifically a bug that causes a vulnerability. A vulnerability is a weakness which can be exploited by a Threat Actor, such as an attacker, to perform unauthorized actions within a computer system.

QA looks for software bugs (of any kind); security testers look for vulnerabilities. This is the main difference: their goals.

Just as all women are human beings, but not all human beings are women; while all security bugs are defects, not all defects are security bugs.
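To make the distinction concrete, here is a small hypothetical sketch (in Python, though the idea applies to any language). The functions, table, and data are invented for illustration: the first function has a quality bug, the second has a security bug, and the third shows the fix.

```python
import sqlite3

def average(numbers):
    # Quality bug: crashes on an empty list (ZeroDivisionError).
    # Annoying, but an attacker gains nothing from it.
    return sum(numbers) / len(numbers)

def find_user_unsafe(conn, username):
    # Security bug: concatenating input into the SQL string lets
    # input like "' OR '1'='1" change the query's meaning (SQL injection),
    # which is a vulnerability, not just a defect.
    query = "SELECT name FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # The fix: a parameterized query treats the input as data only.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

print(find_user_unsafe(conn, "' OR '1'='1"))  # every row leaks: [('alice',), ('bob',)]
print(find_user_safe(conn, "' OR '1'='1"))    # no rows: []
```

Both kinds of bug need fixing, but only the second one hands control to a threat actor.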

Now let’s dissect each of the claims above.

1. Quality bugs and security bugs ‘have equal value’.

A security bug that leads to a low-risk vulnerability does not have ‘the same value’ as a non-security bug that is making the system crash over and over. Likewise, a security bug that creates the potential for a data breach, or worse, is not equivalent to the fonts not matching from page to page. I am of the opinion that security bugs are more likely than regular bugs to cause catastrophic business harm, because once your system has fallen under the control of a malicious actor, creativity is the only limit. Malicious actors never cease to amaze me with the damage they can do.

Someone is wearing camouflage. — #MSIgniteTheTour, Toronto, 2019

2. Security testing and QA are ‘the same thing’

The goals of security testing and quality assurance testing are different, which I feel makes them obviously different (if they were the same, why wouldn’t they be called the same thing?). However, I want to dig deeper into this idea.

Security is a part of quality.

I often say, “Security is a part of quality”, because I believe this to be true. You cannot have a high-quality product that is insecure; it is an oxymoron. If an application is fast, beautiful, and does everything the client asked for, but someone breaks into it the first day it is released, I don’t think you will find anyone willing to call it a high-quality application.

There are many different types of testing:

· Unit Testing — small, automatable tests that verify a single unit of code (a function or subroutine) does the one thing it is supposed to do.

· Integration Testing — testing between different components to ensure they work well together; larger than unit tests, but less intense than end-to-end tests.

· End-to-End Testing — ensuring the flow of your application from start to finish is as expected.

· User Acceptance Testing (UAT) — manual and/or automated testing of client requirements (often used interchangeably with ‘QA’).

· User Experience (UX) Testing — verifying that the application or product is easy to use and understand from a user’s perspective.

· Regression Testing — verifying that new changes have not broken anything that was already tested; a ‘retesting’ of all previously released functionality.

· Stress/Performance/Load Testing — verifying that your application can handle large amounts of usage or traffic while continuing to perform well, generally performed using software tools (although these three have slight differences, they are generally lumped together).

· Security Testing — a mix of manual and automated testing, using one or more tools, with the aim of finding vulnerabilities within applications.

There are more types of testing, but I think you get the point.
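To illustrate how the goals differ in practice, here is a hypothetical sketch: a quality-focused test and a security-focused test for the same made-up `sanitize_filename` function (the function and its behaviour are invented for illustration, not taken from any real product).

```python
import re
import unittest

def sanitize_filename(name):
    # Hypothetical function under test: keep only letters, digits,
    # dots, dashes and underscores, then strip leading dots.
    return re.sub(r"[^A-Za-z0-9._-]", "", name).lstrip(".")

class QualityTests(unittest.TestCase):
    # QA perspective: does it do what the requirements say?
    def test_keeps_normal_names(self):
        self.assertEqual(sanitize_filename("report-2019.pdf"),
                         "report-2019.pdf")

class SecurityTests(unittest.TestCase):
    # Security perspective: can a hostile input abuse it?
    def test_blocks_path_traversal(self):
        cleaned = sanitize_filename("../../etc/passwd")
        self.assertNotIn("/", cleaned)          # no path separators survive
        self.assertFalse(cleaned.startswith(".")) # no hidden-file trick
```

The QA test asks “does it work for a normal user?”; the security test asks “what happens when someone feeds it `../../etc/passwd` on purpose?”. Same code, completely different mindset.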

Some or all of these types of testing can be used to verify that a product is of high quality, and security is just one part. Therefore, security testing and QA are not ‘the same thing.’

3. Security testing ‘should be performed by QA’

For each one of the types of testing listed above a different skillset is required. All of them require patience, attention to detail, basic technical skills, and the ability to document what you have found in a way that the software developers will understand and be able to fix the issue(s). That is where the similarities end. Each one of these types of testing requires different experience, knowledge, and tools, often meaning you need to hire different resources to perform the different tasks. Also, we can’t concentrate on everything at once and still do a great job at each one of them.

Although theoretically you could find one person who is both skilled and experienced in all of these areas, such people are rare, and that person would likely be costly to employ as a full-time resource. This is one reason that people hired for general software testing are not often also tasked with security testing. Another reason is that people with the experience and skills to perform thorough and complete security testing are currently a rarity (there is a skills shortage), whereas as an industry we are lucky to have quite a number of skilled QA professionals, making them easier to hire and staff. Lastly, the time, training, and experience it takes to become a security tester is more difficult to acquire than for general software testing.

Training in security testing is extremely expensive and difficult to find; it generally takes longer to learn than other types of testing; and there are fewer opportunities to get into that industry compared to QA. Thus it is more difficult to become a security tester than a general tester. Scarce people, high demand, and expensive training mean it costs significantly more to hire security testers than general software testers.

All of these facts lead to the reality that it is cost-prohibitive to staff your QA team with professionals who are skilled and experienced in both QA and security testing. Doing so would also create a single point of failure for testing in your organization, which will not save you money in the long run.

#MSIgniteTheTour, Toronto, 2019

Another point on this topic: those who work in the security industry are likely to prefer their area of focus, security, and may be unwilling to perform work outside their area of concentration (people who specialize generally want to work within their specialization whenever possible, and security testing is a specialization).

4. ‘There’s no specific skillset’ required to do security testing versus QA.

First of all, I feel this statement is insulting to QA testers, as though they do not have a specific skillset that makes them good at what they do; I don’t believe that to be true. I suspect that when people make this argument, it is out of frustration with our industry, because I honestly cannot fathom someone thinking that security testing does not require specific experience, training, or skills; otherwise there would be no skills shortage and it would not be a high-paying job. Security testing is a specialization within the field of testing, just as there are specializations within any field, and by definition it requires additional knowledge and training to build the skillset to do the job.

I do not intend to downplay the value of QA testing, only to explain that quality assurance is different from ensuring that a product is secure. I should also say that I feel that hacking is sometimes glorified in television, the media and our industry as a whole, in a way that isn’t logical to me. Security testing is very important, but I do not believe that hackers are superior to other professionals who work in IT. In fact, I choose to focus my career on AppSec, DevSecOps and other types of defence, because I truly believe that it is more important that we write secure code than we ‘hack all the things’. Security is so much more than just security testing (ethical hacking), it is secure design, secure coding, threat modelling, etc.

I feel comments like this (#4) are not based on facts, but feelings, and it’s difficult to debate with someone when that is the case.

It is okay if we disagree on this topic. Debate is good and healthy, and I would love to hear your feelings, thoughts and ideas in the comments.

At this point I’d like to remind you all that security is everybody’s job. Not only is it everyone’s responsibility to do their job in the most secure way they know how, but having many different people look at something with security in mind can help us find new and different problems that may have otherwise been missed.

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Security Headers for ASP.Net and .Net Core

Website report showing we received an A

For those who do not follow me or Franziska Bühler: we have an open source project together called OWASP DevSlop, in which we explore DevSecOps by writing vulnerable apps, creating pipelines, publishing proofs of concept, and documenting what we’ve learned on our YouTube channel and our blogs. In this article we will explore adding security headers to our proof-of-concept website. This post is closely related to Franziska’s post, OWASP DevSlop’s Journey to TLS and Security Headers; if you like this one, read hers too. 🙂

Franziska Bühler and I installed several security headers during the OWASP DevSlop show in Episodes 2.1 and 2.2. Unfortunately, we found out that .Net Core apps don’t use a web.config, so the next time we published, it wiped out the beautiful headers we had added. Although that was not good news, it was another chance to learn, and it gave me a great excuse to finally write the security headers blog post I had been promising. Here we go!

Our web.config looked so…. Empty.

I have now added the headers back, this time in the startup.cs file of my .Net Core app, which you can watch here. Special thanks to Damien Bod for help with the .Net Core twist.

If you want in-depth details about what we did on the show and what each security header means, you should read Franziska’s blog post. She explains every step, and if you are trying to add security headers for the first time to your web.config (ASP.Net, not .Net CORE), you should definitely read it.

The new code for ASP.Net in your web.config looks like this:

<!-- Start Security Headers (these go inside <system.webServer><httpProtocol><customHeaders>) -->
<add name="X-XSS-Protection" value="1; mode=block" />
<add name="Content-Security-Policy" value="default-src 'self'" />
<add name="X-Frame-Options" value="SAMEORIGIN" />
<add name="X-Content-Type-Options" value="nosniff" />
<add name="Referrer-Policy" value="strict-origin-when-cross-origin" />
<remove name="X-Powered-By" />
<!-- End Security Headers -->

Our new-and-improved Web.Config!

And the new code for my startup.cs (.Net Core) looks like this (thank you, Damien Bod):

// Security headers make me happy
app.UseHsts(hsts => hsts.MaxAge(365).IncludeSubdomains());
app.UseReferrerPolicy(opts => opts.NoReferrer());
app.UseXXssProtection(options => options.EnabledWithBlockMode());
app.UseXfo(options => options.Deny());

app.UseCsp(opts => opts
    .StyleSources(s => s.Self().UnsafeInline())
    .FontSources(s => s.Self())
    .FormActions(s => s.Self())
    .FrameAncestors(s => s.Self())
    .ImageSources(s => s.Self())
    .ScriptSources(s => s.Self())
); // End Security Headers

Our beautiful security headers!

In future episodes we will also add:

  • Secure settings for our cookies
  • X-Permitted-Cross-Domain-Policies: none
  • Expect-CT: (not currently supported by our provider)
  • Feature-Policy: camera ‘none’; microphone ‘none’; speaker ‘self’; vibrate ‘none’; geolocation ‘none’; accelerometer ‘none’; ambient-light-sensor ‘none’; autoplay ‘none’; encrypted-media ‘none’; gyroscope ‘none’; magnetometer ‘none’; midi ‘none’; payment ‘none’; picture-in-picture ‘none’; usb ‘none’; vr ‘none’; fullscreen *;

For more information on all of these security headers, I strongly suggest you read the OWASP Security Headers Guidance.
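If you want a quick sanity check of your own site, here is a small hypothetical sketch in Python (the header list and sample response are illustrative only; in practice the headers would come from a real HTTP client or a scanner like securityheaders.com):

```python
# Headers discussed in this post; the list is illustrative, not exhaustive.
RECOMMENDED = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Frame-Options",
    "X-Content-Type-Options",
    "Referrer-Policy",
]

def missing_security_headers(response_headers):
    """Return the recommended headers absent from a response,
    comparing case-insensitively as HTTP requires."""
    present = {name.lower() for name in response_headers}
    return [h for h in RECOMMENDED if h.lower() not in present]

# Example: a made-up response that only sets two of the five.
headers = {
    "Content-Type": "text/html",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
}
print(missing_security_headers(headers))
# ['Strict-Transport-Security', 'Content-Security-Policy', 'X-Frame-Options']
```

Anything the function prints is a header you are not sending, and a point off your grade.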

We now have good marks from all of the important places, such as https://securityheaders.com, but hope to improve our score even further.

For more information, watch our show! Every Sunday from 1–2 pm EDT, on Mixer and Twitch, and recordings are available later on our YouTube channel.

Please use every security header that is available and applicable to you.

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Hacking Robots and Eating Sushi

Jesse Hones and his robots

I recently had dinner with an old friend, Jesse Hones, Engineering Manager of Systems and Senior Software Developer at Aprel. I remember when we first met he explained that he designed and programmed robots to measure radio frequencies at extremely precise levels. Fast-forward a decade: I am an ethical hacker and he is designing more complex robots than ever before. So I did what anyone would do; I asked him to come on the OWASP DevSlop show and talk about hacking robots.

Jesse Hones and one of his many robots.

His answer was “Not yet. I can’t tell you what all the loopholes are, because I still need those to get my job done.”

Interest piqued.

He explained that robot firmware used to be all custom, each system its own unique snowflake. Like custom software today, it was rife with vulnerabilities, but you could only hack one system at a time. Recently this has changed, he explained: things are standardizing, and many robots use the same components, which all run the same firmware. I asked if this was like Windows XP back in the day, in that almost everyone was running it, so when a bug was discovered *every* system was vulnerable. He said yes.

But Jesse is a developer, not a security person, so he looks at it with the “I need to make sure this runs properly” lens, not the “I want to make this robot do my bidding!” viewpoint of an ethical hacker.

More of Jesse’s robots.

Obviously, I had to threat model the situation immediately. Poor Jesse.

Me: What if malware is created that stops production for all affected robots?

Jesse: Yes, this would be costly.

Me: What about ransomware?

Jesse: <unhappy face>

Me: What if someone takes over the robots and has them implant something in every 20th chip they make, to spy on the users? As a supply chain attack?

Jesse: Yes, that would be very bad. However this could change soon; we may be switching over to Windows Embedded.

Me: Okay…. But what if a company used robots Stuxnet style and slowly sabotaged their competitors? So they could never quite finish their R&D on a product? Meanwhile stealing their ideas? Think Stuxnet meets Schindler’s List.

Jesse: …

Me: What if someone uses them to mine bitcoin? Robot Crypto Pirates!

Jesse: I guess tha-

Me: What if robots become the weak point of most networks and are used regularly as pivot points by hackers?

At this point Jesse has resorted to quietly waiting for me to calm down. I look at him.

Me: What if a robot murders someone?

Jesse: That would be the end. The industry could not survive. That cannot be allowed to happen. Ever.

Sounds like we need more robot hackers.

For content and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Why I Love Password Managers

Tanya waving

** This article is for beginners in security or other IT folk, not experts. 😀

Passwords are awful. The software security industry expects us to remember 100+ passwords that are complex (variations of upper and lowercase letters, numbers, and special characters), that are supposed to be changed every three months, and that must each be unique. Obviously this is impossible for most people, and for those for whom it is possible, why would they want to waste all of that brain power on something that is, essentially, meaningless?

Comic illustrating the need for password security.
I love XKCD and so should you.

That’s right: the password itself means nothing. The purpose of the password is to authenticate the user; to prove that *you* are the real, authentic you. Not another person with the same name or birthday, but the person who owns the account being logged into. The person whose money is in that bank account. The person who tweets all those tweets.

I realize that the security industry is wise to this issue, and NIST has updated its password advice, but that still leaves many applications doing things the old way and programmers continuing to implement the old security advice. The result is password reuse: people using the same password over and over, for most or all of their accounts. Last month I heard a speaker who claimed the most common password has changed from “Password1” to “Autumn2018”, “Winter2019”, and so on, rotating with the seasons. Tragic.

The reason this is a problem is that once one account is breached, or a password stolen, that email & password combo (known as credentials) is likely to work in many, many other places. “Credential stuffing” is the term for when criminals or other bad actors steal many credentials and use scripts to try them all against a larger site, with malicious intent. These attacks are often wildly successful, which makes password reuse very scary from a defender’s perspective.

At least 1% of what I know comes from XKCD.

This is where password managers come in. Password managers allow users to generate passwords as long and complex as the software will allow, and they remember all of them, keeping them in an encrypted vault. When users go to log into something, they either press a button in the browser extension to fill in the credentials automatically, or they open the password manager, enter the one password they need to know, and access all of their secrets.
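As a toy sketch of the “one password unlocks the vault” idea (this is not a real vault, and the names, salt, and iteration count are illustrative): the manager derives an encryption key from the master passphrase with a slow key-derivation function, and only that key can decrypt the stored secrets.

```python
import hashlib
import hmac

def derive_vault_key(master_password, salt, iterations=200_000):
    # A slow KDF (PBKDF2 here) makes guessing the master password costly.
    return hashlib.pbkdf2_hmac(
        "sha256", master_password.encode(), salt, iterations
    )

salt = b"stored-alongside-the-vault"  # random per vault in real life

key1 = derive_vault_key("I work with Cloud Defense and I like them a lot!", salt)
key2 = derive_vault_key("I work with Cloud Defense and I like them a lot!", salt)
key3 = derive_vault_key("wrong guess", salt)

# The right passphrase always yields the same key; a wrong one never does.
assert hmac.compare_digest(key1, key2)
assert not hmac.compare_digest(key1, key3)
```

Real password managers layer much more on top of this, but the core trade is the same: one strong passphrase in your head, hundreds of strong passwords in the vault.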

Password managers can protect you against several types of attacks:

  • Password reuse attacks (if all of your passwords are different, then when one account is breached, the rest are fine)
  • Phishing attacks that target your accounts using URLs similar to ones you already use; when you go to the fake URL, your password manager will not recognize it, which should tip you off that you are under attack
  • Brute force attacks; if you always use very long and complex passwords (because you don’t need to remember them), a brute force attack would take practically forever to uncover your password
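Generating those long, complex passwords is trivial for software. Here is a minimal sketch using Python’s standard `secrets` module (the length and character set are illustrative choices, not a recommendation from any particular password manager):

```python
import secrets
import string

def generate_password(length=24):
    # Draw each character from a cryptographically secure source.
    # Length matters far more than clever character substitutions.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run, and never worth memorizing
```

This is exactly why the password itself is meaningless: your manager can mint a fresh one of these for every single account.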

There are many password managers to choose from; some are free, some are not. Either way, go get one so you can stop wasting brain power on boring things like remembering your passwords.

If you work in an IT environment, you absolutely must have a password manager. I strongly suggest that anyone who uses a computer regularly and has multiple passwords to remember get one, even if you don’t consider yourself tech savvy. Put every single password in there, change all the passwords you used to have to long, randomly generated ones, and make sure the password for your password manager itself is a passphrase that is an entire sentence (such as “I work with Cloud Defense and I like them a lot!” or “Tanya Janca is my favourite blog writer and her jokes are never self-deprecating”).

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

VAs, Scans and PenTests: not the same thing

I’d like to define a few terms that are often confused in the application security industry: Vulnerability Assessment (VA), Vulnerability Scan (VA Scan), and Penetration Test (PenTest). They are often used interchangeably, and the differences do not seem to be well understood; I have seen this misunderstanding used against many clients who have purchased these services, and I am hoping clear definitions will help us all.

A Vulnerability Assessment (VA), sometimes called a security assessment, is an assessment of the security of a system, in an attempt to find all possible vulnerabilities. It generally involves using multiple scanning tools, manual exploration and evaluation, and examination of all security controls (a lock on a door, a login screen, and input validation are all security controls). The assessor does not exploit the vulnerabilities that are found (for instance, they see the door is unlocked, but they do not enter); they just report them, along with information on how to fix each one. This sometimes includes a security review of the design and/or threat modelling, questionnaires or interviews, and it generally takes days or weeks, not hours or minutes. Sometimes the assessor will create a proof of concept (PoC) to explain a vulnerability with more clarity, but to be clear, that is not the focus of the exercise.

In the past, when I was hired to do a penetration test, I would often describe a VA and ask if that was what they wanted, and they would say “yes, do that”. My contract would say “PenTest”, but I would conduct a vulnerability assessment.

I also often had requests for “a quick VA” or a “VA Scan”, which, as it turns out, meant one scan with a vulnerability assessment tool and no other activities, such as manual investigation of the results. This can be done in a few hours, or even minutes if your target is small, and the person performing it does not need advanced training or skills. There are many VA tools on the market: Nessus, Nexpose, OpenVAS, and Azure Security Center (for Azure cloud infrastructure only) are all used for scanning infrastructure, while Microsoft Security Risk Detection, Burp Suite, Zed Attack Proxy, Netsparker, Acunetix, AppScan, and AppSpider are for scanning web apps. Doing a quick scan with any of these tools will net you a list of vulnerabilities, and many of them will be true positives (as opposed to false positives); it is most certainly a worthwhile venture. It is not, however, as thorough as a vulnerability assessment or penetration test, and many other issues will remain uncovered if you leave it at that.


A Penetration Test is another beast entirely. A PenTest seeks to find vulnerabilities and then exploit them, to prove real-world risk. Sometimes penetration tests can cause damage (exploits, if not done very carefully, can leave a mess), and sometimes the scope of a PenTest calls for the tester to collect “trophies” to prove they did the things they claim.

It is very rare that I write an exploit or feel the need to exploit vulnerabilities I find when testing*. Most of the time in my career when I have exploited something, everyone just ended up pissed off at me: from the first PenTest I ever did as a sub-contractor, when I ruined a live prod server and the person who hired me had to explain what happened; to creating proof-of-concept exploits that embarrassed management into doing “the right thing”; to breaking a Drupal CMS site so badly that they had to restore the database AND the app server (Drupal itself was completely unusable) from backup. It’s nice that I impressed people, but I would honestly prefer to spend that extra time helping the developers fix what I have found and re-testing the fixes, rather than showing off whatever talent I have for burning things down.

Special note on ethics: I have seen many consultants who offer these services pass off a quick scan as a full VA or PenTest, charging for 10 days what took them only 1 day to perform. I have also seen many of these same consultants sub-contract this work out to others who they pay less (and with whom they share your sensitive data!), but they do not credit these individuals in the reports or contracts, resulting in you having no idea who had access to your systems and data. When writing contracts for such services it would be wise to be explicit about what you are paying for, as well as who will do the work and what information must remain confidential. I am sad to report that I have met many consultants who have bragged about these types of (in my opinion) unethical practices. Buyer beware.

I would suggest that performing a proper VA against all of your custom applications, as well as large COTS (Commercial Off The Shelf, such as SharePoint) implementations, is a best practice for enterprise businesses. Not only will you be amazed at the things that you find, but (assuming you fix the issues) you will have taken serious measures toward avoiding a data breach in the future, as insecure software is still, sadly, the top reason for data breaches (as per the Verizon Data Breach Investigations Reports of 2016, 2017, and 2018).

I hope this article helps instill a bit of clarity in our industry.

*When I did testing, I did exploit XSS using alert boxes, regularly, because it’s 100% safe to do so, and also blind SQL injection with timers and errors; but to be clear, I am very careful to only perform safe exploits when testing. I can feel myself putting my foot right into my mouth with this note…

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Threat Modelling Serverless

I met with my colleague Bryan Hughes the other day to discuss the security of a serverless app he’s creating for JSConf EU (there will be no spoilers about his creation, don’t worry). We had discussed the idea of threat modelling while on a business trip together and he wanted to give it a go. Since I am particularly curious about serverless apps lately thanks to Tal Melamed having dragged me into the OWASP Serverless Top 10 Project, I was excited to have a chance to dive down this rabbit hole.

Bryan’s app’s architecture:

  • Azure Functions App (MSFT serverless)
  • JWTs for auth; they will be short-lived
  • His app will allow other Azure users to call it, with parameters, and it will do something exciting (see? no spoilers!)
Bryan Hughes, South Korea, Demilitarized Zone (DMZ)

Once Bryan had explained what his app would do, he told me his security concerns: who would have access to his app? Could they break into other areas of his Azure Subscription? Exactly what type of authentication token should he use? How would he handle session management? All of these are definitely valid concerns; I was impressed!

We discussed each one of his concerns, and possible technical solutions to mitigate each risk. For instance: use JWTs only to send a random session token value, never a password or sensitive data, and never a number that actually corresponds to something important. Using someone’s Social Insurance Number (SIN) as their session ID, for example, would both expose sensitive info and create an insecure direct object reference. I reminded him that JWTs are encoded, not encrypted, and therefore are not a secure way to transmit data. I also suggested that he create a virtual network around the app (firewalls), so that if someone gets into it, they can’t get into the rest of his network and subscription.

Note: RFC 7516 allows for the encryption of JWTs (JWE); follow the link for more info.
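To make the “encoded, not encrypted” point concrete, here is a minimal Python sketch showing that anyone holding a JWT can read its claims without any key. The token and claim names are fabricated for illustration; this is not a real credential, and a real app would use a proper JWT library for signature verification:

```python
import base64
import json

def b64url(data: dict) -> str:
    """Base64url-encode a dict the way JWT segments are encoded (no padding)."""
    raw = json.dumps(data, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

# Fabricated token for illustration: header.payload.signature
token = ".".join([
    b64url({"alg": "HS256", "typ": "JWT"}),
    b64url({"sub": "demo-user", "session": "abc123"}),
    "fake-signature",
])

def decode_payload(token: str) -> dict:
    """Read the claims WITHOUT verifying the signature -- no key needed."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped padding
    return json.loads(base64.urlsafe_b64decode(payload))

print(decode_payload(token))
```

This is exactly why a JWT should carry only a random session token value: whatever you put in the payload is readable by every party that ever sees the token.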

Then we talked about my concerns, which started with a bunch of questions for Bryan about his users and his data.

  • What data are you asking for from your users? Is any of it sensitive?

He’s asking for their GitHub info so he can grant them access to call his serverless app, and that is all. This one piece of data is sensitive info.

  • Who are your users? What are their motivations to use your app?

The users are conference attendees who want to learn how to call a serverless app like an API, and then make his app do the cool thing that it would do. It’s a learning opportunity, and it’s fun.

  • Let’s assume you have a malicious user, how could they attack your app?

My first concern was Denial of Service or Brute Force-style attacks. To avoid these attack vectors he should follow the Azure Functions best practices guide; specifically, he should set maxConcurrentRequests to a small number (to avoid a denial of service), add throttling (slowing down requests to a reasonable speed, which would stop scripted attacks) by enabling the dynamicThrottlesEnabled flag, and ideally also set a low number for the maxOutstandingRequests setting, to ensure no one overflows his buffer for requests, which would also result in a denial of service. (Note: this is the “A” in CIA: availability.)
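Those three settings live in the Functions app’s host.json file; a sketch might look like the following (the numbers are illustrative assumptions to tune for your own workload, not official recommendations):

```json
{
  "version": "2.0",
  "extensions": {
    "http": {
      "maxConcurrentRequests": 10,
      "maxOutstandingRequests": 50,
      "dynamicThrottlesEnabled": true
    }
  }
}
```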

Other attacks I was concerned about were someone sending malformed requests in an attempt to elicit unexpected behaviour from his app, such as crashing it, deleting or modifying data, or allowing the user to inject their own code, among other potential issues. We discussed using a white list for user input validation and rejecting all requests that were not perfectly formed, or that contained any characters outside of “a-z, A-Z, 0–9”. (Note: this is an attack on both Integrity and Availability.)
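A minimal Python sketch of that white-list approach (the length cap is my own illustrative assumption; the discussion above only specifies the character set):

```python
import re

# Allow ONLY the characters named above (a-z, A-Z, 0-9).
# The 64-character cap is an illustrative assumption, not from the discussion.
VALID_INPUT = re.compile(r"[a-zA-Z0-9]{1,64}")

def is_valid(user_input: str) -> bool:
    """Reject any request parameter that is not perfectly formed."""
    return VALID_INPUT.fullmatch(user_input) is not None

print(is_valid("GitHubUser42"))      # accepted
print(is_valid("alert('xss')"))      # rejected -- punctuation is not on the list
```

The key design choice is validating against what is allowed (a white list) rather than trying to enumerate everything that is dangerous (a black list), which attackers routinely find ways around.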

The last attack vector I will list here is that users may attempt to access the data itself: the subscription IDs of all the other users (Confidentiality). This was the most important risk in the list; as the guardian of this data, if you lose it and your users are attacked successfully as a result, it could cause catastrophic reputation damage (to the conference, to him as the creator of the app, and to Microsoft as his employer). When I explained this, it became his #1 priority to ensure his users and their data were protected during and after using his system.

Tanya Janca, South Korea, DMZ, 2019
  • How long are you keeping this data? Where are you storing it? How are you storing it?

Originally, Bryan was hoping to avoid using a database altogether; no data collection means nothing to steal. Although he’s still looking into whether that’s a possibility, the plan is to use a database, for now.

He decided he would keep the data until just after the conference was over, and then destroy it all (hence making the risk only a ~48-hour risk). It would be stored in a database; we discussed encryption at rest and in transit, as well as always using parameterized queries, and applying least privilege for the DB user that calls those queries (likely read-only or read/write, but never DBO).
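As a quick illustration of why parameterized queries matter, here is a minimal Python sketch using sqlite3; the table and column names are made up for the example:

```python
import sqlite3

# Throwaway in-memory database with an illustrative schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendees (github_login TEXT)")
conn.execute("INSERT INTO attendees VALUES ('demo-user')")

def find_attendee(conn, login: str):
    # The "?" placeholder keeps user input out of the SQL text entirely,
    # so input like "' OR '1'='1" is treated as data, never as SQL.
    cur = conn.execute(
        "SELECT github_login FROM attendees WHERE github_login = ?", (login,)
    )
    return cur.fetchall()

print(find_attendee(conn, "demo-user"))    # the legitimate row
print(find_attendee(conn, "' OR '1'='1")) # injection attempt finds nothing
```

The same placeholder pattern applies to any database driver; concatenating user input into the query string is what opens the door to SQL injection.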

  • What country is this conference in? Will you be subject to GDPR?

It would be in Europe, and therefore subject to GDPR. I introduced him to Miriam Wiesner, an MSFT employee with a PenTesting and security assessment background, who happens to live in the EU and therefore would have familiarity with GDPR. I said she would have better advice than I would.

The conversation was about an hour, but I think you get the picture.

The key to serverless is to remember that almost all of the same web app vulnerabilities still apply, such as Injection or Denial of Service (DoS) attacks, and that just because there is no server for you to manage does not mean you do not need to be diligent about the security of your application.

If you want to keep up with Bryan Hughes, and see the results of his project, you can follow him on Dev.to.

I hope that you found this informal threat model helpful.

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Presentation Tips for Technical Talks

Me with Solomon Sonya, at sector 2021 !

In the past few years I’ve given and watched several technical talks, and they are not all created equal. Recently I met with Teuta H Hyseni to talk about an upcoming talk she was planning (securing AI and ML, very interesting!), and afterwards I made several notes about general tips for technical talks that I have shared below.

  1. The first thing I always do is explain what the talk is about, so audience members know if they want to stay or go. If some people walk out it’s okay, your talk wasn’t for them anyway. For everyone else, it will reaffirm they are in the right room.
  2. Whenever you say the name of a product the first time, make sure you say it very clearly, especially if the audience’s first language is not the same as the language you are giving your talk in.
  3. Always explain what every acronym means the first time you use it. If an acronym is a core component of your talk, and it’s not too clumsy, say its full name two or even three times throughout the talk.
  4. If there is one new key concept that you want the audience to take away from your talk, explain it 3 times, in different ways. Abstract concepts are very difficult for people to learn at first; explaining one a few different ways, and repeating it, will ensure that people learn it.
  5. If you put a bunch of words on the screen, people will read them as soon as you show the slide. They will not listen to you until they are done reading. So either use images and explain, then put up text, or give the audience a few seconds to read what you wrote. Trust me, 90% of the audience will read the text and not listen, so design your slides accordingly.
  6. When you introduce yourself pronounce your name very clearly and slightly slowly, especially if it’s a bit unusual/not common in the area you are presenting.
  7. Audiences tend to like stories that tie together technical points. If you are trying to tell them “Don’t roll your own crypto” follow it up with a story about how disastrous it was when you saw it done. It helps drive the point home. *Extra points if the story is funny or is very interesting or otherwise special.*
  8. Try not to put too much on one slide; slides are free, just make more.
  9. Ensure that your text is large enough for the audience to read, especially code. If possible, try to put your slides up on the big screen in advance, walk to the back of the room, and see if you can read your own slides.
  10. Remember that your audience is smart, but might not know your topic well, so try hard to explain what each part is, unless you are at a speciality/advanced conference on that topic. For instance, when I give security talks at developer conferences I always try to remember my audience is very smart, but they are not likely experts in security, so I explain each point well, even the basic ones. I don’t want to leave anyone in the audience behind, and neither do you.
  11. Put a summary slide at the end. People will likely take photos of it. If you see people with their cameras/phones up, try to give them enough time to take the photo(s) of your slide(s).
  12. If possible, use imagery to explain your concepts more clearly. Personally I’m weak in this area, but whenever I see someone else do it well I remember that I need to try harder to do that whenever possible.
  13. If possible give explanations of why the audience should or should not do something. For instance: “do not feed machine learning systems data from the internet, it has to be clean”, but what does “clean” mean? Instead we could follow that with “Clean datasets could include survey data, customer data, and data purchased from social media platforms”.
  14. Practice to ensure your talk is approximately the correct length. Factor in the fact that you will likely go a bit fast. Ending late or very early is not good; you don’t want your talk to bleed into the next speaker’s allotted time (that is very rude), and you also don’t want the audience to feel they didn’t get enough of you. If you go under, perhaps use that time for Q&A.
  15. Take a breath in-between each major point — so the audience has time to digest the info, and so that you can breathe.
  16. If you see the audience’s eyes sort of closing a bit, this likely means they are tired or their “brains are full”. This might be from all the previous talks, or yours, but it likely means they are having trouble keeping up. It generally does not mean that you are boring.
  17. If you see many people playing with their phones this can be good or bad. Sometimes they are taking notes or tweeting about you, but other times they are just distracted. If you happen to be good at telling jokes, this would be an ideal time to briefly stop and tell a joke, to get their attention back. **This approach is not for everyone, and you have to know for sure that you are funny. A bad joke will potentially make people leave.**
  18. Many people like to hear about where the future will go in your area of expertise, if you have some guesses, perhaps share them?
  19. Unless your talk is “an intro to xyz” or level 101, don’t spend more than 10 minutes of your talk giving background on the topic. If I go to a cryptocurrency talk and they spend 30 of the 50 minutes talking about the origins of bitcoin, I’m going to play with my phone and wait for the talk to actually start.
  20. If you feel comfortable, give a rough outline of your talk right at the start, then the audience knows what to expect.
  21. If possible, have links from your talk to longer videos or blog posts that go deeper into specific topics. Even if the videos or blogs are not yours, if they are good, it’s nice to give the audience more if they want more.
  22. At the end of your talk always say thank you (the audience could have done 100 other things with the time they just gave to you), and then pause to allow them to clap. Whenever a speaker doesn’t give the audience a space to clap I always feel so awkward. Don’t ask “Any questions” immediately at the end, allow the audience to thank you.
  23. Practice on someone you trust, get feedback, make adjustments, repeat. Do this until you know your talk is awesome and you will be a smashing success!

I hope you find these tips helpful!

Other relevant articles & videos by yours truly!

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!

Sharing talks with the InfoSec & IT Community and Industry

Artwork by Ashley McNamara

I recently decided that I would share most of my talk content with my community (everything that I am not currently applying to conferences with). By “share” I mean give my express permission for anyone, anywhere, to present content that I have written, with no need to pay anything or ask for my consent. You can even charge money to give the talk, but if you do I kindly ask you make a donation to the OWASP DevSlop Project or WoSEC.

OWASP Bat Signal, Image created by Ashley McNamara

I’ve had a few people ask me why I would do this, and there are a few reasons.
* To spread the word about how to secure software; it’s important to me to try to make the internet and other technologies safe to use.
* To help new speakers (especially from underrepresented groups). If they have something they can present, with instructions they can follow, hopefully it will help make them more confident and skilled at presenting.
* To share knowledge with my community in general: sharing is caring, yo.
* The more people who present my talk the more people who may decide to follow me. SO MUCH WIN!

The first talk I decided to release is called “Pushing Left, Like a Boss”. It’s an intro to application security that I’m told is very accessible for technical and non-technical audiences alike. My mom watched me do this talk and said “I finally understand what the IT Security people are talking about at work and why they were bothering me!” You could do this talk at almost any IT meetup and the audience is likely to find value in it; it’s also great for a lunch and learn at work with software developers or other IT staff. Topics covered include: threat modelling, PenTesting, code review, creating a secure system development lifecycle, and how to figure out the most secure way to do whatever you are trying to do. Talk difficulty level: 101/intro. Also, this talk is based on the Pushing Left, Like a Boss blog series.

In an effort to ensure anyone who presents my material has a good experience, I made a GitHub repo with an instructional video of what to say, a readme file with written instructions, and links so you can watch me give the talk myself.

Please go forth and teach AppSec! And if you have feedback I want to hear it!

For content like this and more, check out my book, Alice and Bob Learn Application Security and my online community, We Hack Purple!