We think we know podcast

We think we know hackers thrive on deep environment knowledge

Publisher
Pentest-Tools.com
Pete Herzog at We think we know

“Not everything works as configured. Not everyone behaves as trained.”


The reality of this statement makes it possible for us, the people in offensive security, to have a job. It also highlights how unpredictable our work can be and how never-ending our learning process is.


We work in a space where things are so complex that we need to combine big-picture, higher-level thinking with boots-on-the-ground practice.


And our guest today is brilliant at doing just that. 


Pete Herzog has spent over two decades distilling the fundamental principles of security testing, turning them into a decade-defining manual - the Open Source Security Testing Methodology Manual (OSSTMM). 


With V4 coming this year and new results from in-depth research, Pete brings offensive and defensive security concepts together to break down important misconceptions.  

Pete Herzog bio

Pete Herzog

Pete Herzog is an experienced hacker deeply immersed in security, AI, and business. As an active security researcher, investigator, and threat analyst, he develops tools and techniques to provide outstanding services for clients facing unique problems. 


He’s also the co-founder of the OSSTMM, a standard for security testing and analysis, and continues to lead this research project and its refined results with an international team. 


What’s more, Pete co-created Hacker High School, a free cybersecurity curriculum for teens. And, while he rarely talks at industry events, he spends his (free) time doing video conferences and guiding students through the opportunities in cybersecurity.

*This is an audio-exclusive episode to which you can listen below or on any podcasting app you prefer*

Listen to this conversation to uncover:

  • Why you can’t do security without understanding the process behind it [08:23]

  • How automation can help but, at the same time, hurt the ones using it [11:00]

  • Why you can’t rely only on automated security tools in your pentests [19:10]

  • The importance of implementing security controls to change the environment [28:22]

  • Pete’s perspective on Zero Trust and how they tackled this in OSSTMM [35:18]

  • Why he thinks there are “too many parrots, not enough pirates” in this space [43:42]

  • The excitement of researching for OSSTMM v4 and exploring new technologies [51:40]  

From the expert systems behind AI-driven tools and their blind spots to generalizations that hurt offensive security outcomes, we explore key elements that shape today's problems - some of which you're probably wrestling with as well.

Let’s explore them!

Resources from this episode

Pete on LinkedIn

Pete on Twitter

ISECOM (The Institute for Security and Open Methodologies)

OSSTMM 3 (The Open Source Security Testing Methodology Manual)

ICANN

Unicornscan

Hacker High School

The Zero Trust security model

The PCI security standard

Authenticate like a boss for the Troopers conference (2017)

Neurohacking with Pete Herzog for the Cyber Empathy podcast (Season 4)

Listen to this conversation on:

Spotify 

Apple Podcasts

Amazon Podcasts

Google Podcasts

Episode transcript


Andra Zaharia: Only the most curious and persistent people thrive in offensive security. How do I become a better hacker? How can I build and maintain my advantage over adversaries? 

And what's limiting my ability to think creatively? 


This podcast is for you if you're the kind who's always digging deeper for answers. Join me as I talk to some of the world's best offensive security pros about their work ethic, thinking, and real-world experiences.


This is We think we know, a podcast from Pentest-Tools.com.


Andra Zaharia: Not everything works as configured and not everyone behaves as trained. The reality of this statement makes it possible for us, the people in offensive security, to have a job. It also highlights how unpredictable our work can be and how never-ending our learning process actually is. We operate in a space where things are so complex that we need to combine big picture, higher level thinking with boots on the ground practice. And our guest today is brilliant at doing just that. 


Pete Herzog has spent over two decades distilling the fundamental principles of security testing and turning them into a decade-defining manual. I'm talking, of course, about the Open Source Security Testing Methodology Manual. With v4 coming this year and new results from in-depth research, Pete brings offensive and defensive security concepts together to break down important misconceptions.


From the expert systems behind AI-driven tools and their blind spots to generalizations that hurt offensive security outcomes, we explore key elements that shape today's problems, some of which you're probably wrestling with as well.

So let's get into them.

[02:02] Andra Zaharia: Pete, welcome to We think we know. It is a wonderful opportunity to be able to talk to you, to be able to get your insights from a career that's not just fascinating but truly impactful in the security community, especially when it comes to security testing. Not just penetration testing, but so, so much more. It's a real pleasure to have you, and I cannot wait to unpack some of the evergreen principles in your work that keep teaching us things.

Pete Herzog: Well, thank you. It's always a pleasure to be doing these with you, and hopefully, I'll have something actually smart to say. So let's give it a shot.

Andra Zaharia: Let's do that. One of the things that I kind of wanted to get from you, because you work a lot in research, you've really studied things and gone into the depths of security testing, not just to put together the Open Source Security Testing Methodology Manual, but also to just sit with the process for long enough until it becomes something evergreen that so many people rely on to do their work and to model their thinking.

So in the years that you worked on this, a lot of things have kind of bled over into, let's say, a more public conversation. The gates have opened up to let more people in, which is a great thing. But that also means that with this kind of openness, there's also a level of simplification that happens in the industry. And, of course, some stereotypes that, you know, come along and get pushed forward, and so on and so forth. And I was wondering, what is a particular stereotype or generalization that you think most hurts the work that security specialists do when it comes to security testing?

 

[04:11] Pete Herzog: Oh, that's a tough one. So I think, following your train of thought, what you were saying might be even a little bit backwards. I think, yes, things definitely got simplified. But if we step back: old people like me are still in security, and most of us Gen Xers started in IT because there was no cybersecurity job. It wasn't really a thing. In, when was it, '97, I was hired to be an ethical hacker, but it still wasn't called that. It was just called e-security. So the whole thing about becoming an ethical hacker, I mean, hacker itself was a dirty word, probably until about 2014-2015. And even still, you see journalists confusing it. And that's okay. That's going to be a constant battle. And I think every profession has it.

I mean, I think lawyer is still a dirty word for a lot of people. So that's something they're going to have trouble getting over. But really, what happened was that it was an IT field, a tech field. It was complicated. People then moved over to security from networking, so they had a deep background in that. At some point, though, I would probably say around the 2010s, it became really common to have cybersecurity as an undergrad program. And a lot of the stuff got skipped: the system hardening, the networking. Sure, not everywhere and not all schools, but for a large majority, this happened. And of course, the huge growth of things being online, of people's lives being online, meant that they needed more security, more privacy.

Companies were being held accountable for breaches more and more. Still not as much as they should be; a lot of liability is still pushed down to the customer, the consumer, the user. And so what ends up happening here is that there's a great simplification, because a lot of cybersecurity people don't need the technical knowledge to graduate or to get a job. I mean, the proliferation of CISSP, which is talk the talk, not walk the walk. And of course, if you bring this up in public, people are like, well, you don't need to be techie to be in cybersecurity. And it is a broad field. And yes, there is a place for policy and contracts and things, but again, you cannot write good policy, just like you can't make good laws, if you don't understand how things work.

So you, yes, you don't have to be technical, but you at least have to have the understanding of how these things work. I don't care if it's how people work, how email works, how DNS works. You can't just go and make rules or laws without actually understanding what matters. And this is the biggest problem, this simplification, because then you end up with something like GDPR, which was a good thing, and now I'm stuck with having to approve cookies all the time on every website, which is extremely annoying, to say the least. And not only that, but it's led to a whole bunch of new phishing attacks because people are just used to clicking yes, just to move on. 

So these things have long-term ramifications. And as you have each new generation moving forward into it, it affects them as well. They have a need for cybersafety, which, again, simplification is required in order because they don't have to be technical experts, but they still need to be able to protect themselves without actually knowing how things work. And this is where things get really confusing.

[08:23] Pete Herzog: So is there a stereotype that hurts us? I probably couldn't nail one down specifically, but I think the idea that you can do security, you can be in security without understanding the process or the operation of the thing you're trying to secure, that hurts us a lot. I see this all the time. And it's funny because you wouldn't see this in physical security. 

You wouldn't see somebody protecting a building and not knowing where the entrances and exits are and how people move through it. They obviously know how to protect people, how to protect things. They know how to use their equipment. And we have this belief in cyber that you don't have to. And I think that hurts us a lot.

Andra Zaharia: It does, and in itself, it is an oversimplification of the idea of openness and welcoming people into the industry and encouraging them to pursue a path in this industry. But that is, again, let's just call it what it is, a superficial way of relating to this movement that is important and worthy of our time and attention. And just like many other things, because to me, security in general and offensive security especially is such a thing of nuance. It has so much depth, it has so many aspects. 

And all of these details are not necessarily equally important, but they are important to consider thoughtfully because they make a huge difference. And it all ties into that aspect of craft that we're debating and why it cannot be commoditized as unfortunately, some claims have invaded this space with this perception that we can automate anything, we can do anything because we know what the steps are and we know what the work is. So we can just automate this end to end and you can just take care of other things in the meantime, which I believe couldn't be further from the truth. 

And that really takes out the incredible work that people are doing to do it thoughtfully and with commitment to doing respectful, good work for others. It takes them out of the picture completely, which is so unfair and so reductive.

[11:00] Pete Herzog: You know, I agree that there's a craft element to it, that there's a mindset depending on where you are in security. Again, it's a very broad field. Yeah. And if you're looking at penetration testing, ethical hacking, whatever you want to call it, but even on the blue team side, the purple team, there's a type of craft that goes into it. Now when we talk about automation, I was a partner in an AI company for the last five years, and we worked on AI before the LLMs became a thing. In AI, there's something called expert systems, which is you basically hypercharge automation, so to speak, in a colloquial way of saying it, that you take an expert who says this is how we do this, and then the automation runs it in these ways and the decisions are made according to what the expert would have done for those decisions. 

Everybody in AI can tell you the problem with expert systems is that the experts don't really know how they make those decisions. They don't really know why they do those things, because it becomes second nature to them. And they never actually stop to really break it down and say, why do I do this? Now, if you ask most people why they patch, they're going to say the same thing: well, you have a hole and you close it before somebody can attack it. And they say that's so obvious, you know, that patching is something you do as fast as possible so that the criminals can't take advantage of it.

But you have to ask again: why? A lot of this is carried over from when it made sense to patch as quickly as possible, because we had a lot of systems, there was low automation, and you had relatively few services. You went from hardening servers, so you had very few services, very few things running on them, so that if something was wrong, you did patch it. Because we had limited types of controls as well in order to take care of that. So this stuff is carried over, and now people are like, oh, you need antivirus, you need to patch immediately, you need to do this. But they don't actually understand why.

And this is the same with pentesting and everything else. Why did you do that? And all of this, if it carries over into an expert system, it's going to have those same flaws when the environment changes, because all of this is based on the environment. So I may not want to patch my systems, because these patches are for services that I don't use on a perfectly good running system that has been running for a year; everything works. Why should I change code in it now, when it doesn't impact the use of it at all at this point? It's not a service running, there are no internal users on it, and it's hardened to least privilege.

So that's not even something that's going to run or give access to somebody. I don't know, maybe it's an internal program in the shell or something like that. So I mean, there is knowledge that comes in there. And a lot of times, especially with patching, we used to talk about what it would be like if car dealers could have the kind of customer loyalty that OS makers have, doing that kind of customer satisfaction work; they would love to know more about their customers. I mean, car makers are lucky if they hear from a customer every ten years, because most people don't even go back to the dealer for servicing.

Meanwhile, you have OS makers pushing something on you every 8 hours. It's their constant reminder: hey, look at us, we still care about your security. Look at us, we're still here. And it really is more social engineering than it is actual protection. And this is the problem. You don't need this invasiveness, this automation of changing things or making things happen, because they don't understand your environment, your situation. And this becomes learned over time, when things start breaking because you don't follow change control. Or the OS maker decides that what was once free shouldn't be free anymore, so they remove it from your system. And now you're missing something that maybe you were using. And of course, as these companies get bigger and you try to talk to them about any kind of mistake that happens, you end up with automation, and you can't get a human being to actually listen to your problem. And so this whole thing about automation, it's not just protecting us, it's also hurting us, because we're not the only ones using automation to find problems or try to scale things up.

Those companies, those businesses, those application makers, they're also using automation to scale things up. And in the end, who drowns? The consumer. And therefore, if I have real security problems, I can't get that message out to the big ones. And this goes far beyond that. I mean, I had it out with ICANN and Cloudflare because they're allowing fraudulent websites, fraudulent domain names, to thrive. And ICANN has made new moves now to address this, but just now, like within the last weeks, and I don't even know if it's fully implemented. But that's the thing: you'd go to them, say this is fraud, and you couldn't get things taken care of, because they automated it. Just like the criminals are automating it as well. And we are all falling into the cracks of this. So I think your comment on automation goes really, really wide to hurt a lot of people. You know, it's not just about craft.

Andra Zaharia: Absolutely. And thank you for adding those examples because I think that they make it so much more palpable. The issue at hand, the issue with the idea that, oh, we have the technology to take care of this, we don't really need the people in it. But just like you mentioned, not we, the people, sometimes the source of problems, we're also the ones who are impacted by it. 

Pete Herzog: I mean, even if you find the problem, right, even if you're the one who recognizes it, you can't get that to them because they're not listening, they're automating it. And so again, that's a whole other side of security that's not being handled. 

Andra Zaharia: Absolutely. And to your point, I remember something that you mentioned in the open-source testing manual, which was that security tools don't know when they're lying. And this is one of the things that plays right into this, because when you work in offensive security and you have an overreliance on tools, you may end up on a wild goose chase, or just spend your time trying to figure out what the truth is behind the results that you got.

And that is where human ingenuity and experience, that intuitiveness that comes with experience, kicks in. But when you transform that into an expert system, like you mentioned, then it kind of loses its power again, if I may say so. So it's, it honestly is, there's no perfect system.

[19:10] Pete Herzog: But, you know, that's a double-edged sword as well. A tool's only as smart as the person who made it. That's clear, right? So if you run a vulnerability scanner and it tells you this problem, this problem, this problem, do I trust it?

Well, I can send these to IT and tell them to fix it. And I can tell you, anybody who's done that gets a certain number back saying: there's nothing wrong. This isn't a problem. We don't even have that service. I don't know what went wrong. I don't know why you think this is a problem. Can you send me a proof of concept? A lot of the higher-end pentesting companies will run scanners just as an edge, and then they'll proof them, right? They'll go through and see if the findings are valid.

Now, the other problem with that, though, is that you have new generations of security testers who do this, and a lot of the time they don't really understand the answers coming back, because they don't have the long networking experience or background. So one of the things we find is a lot of them don't actually know what the answers they're getting from traceroute factually mean: what does this tell me, and how can it help me?

Pete Herzog: And I know certain tools, I think it was Nmap, still do it. Some of their scans will do a traceroute to every host. And again, this is so that you can tell how many hops away it is. Why do you get the TTLs? Well, so that you can see if this is the machine talking to you, or a security device in front of it, or the router. And there's a lot of this kind of information that's lost. It doesn't really get into automation because it's a bit complicated. But at the same time, if you run these scanners, they are going to be doing the things that your intuition would probably screw up. So if you're an experienced tester, you might say something like, "Oh, we don't test more than the first 3,000 UDP ports, plus a few that are known right now to matter." Why? Because testing 65,535 UDP ports takes about 18 hours, due to ICMP rate limiting: you get one reply a second. Now, a lot of this has been sped up; I think unicornscan did it first, testing only the known UDP services. And this is something that we had worked on in the early 2000s in order to speed up UDP testing.

But most testers will still either not test it or just do the bare minimum of it, and not do all 65k ports, because it takes so long per host. And a tool that can do this, and do it properly, is not going to let your intuition get in the way, and is going to do what's required of the job. No shortcuts. So in that case, automation can exist to do the grunt work that you don't want to do. And I think this is where some of us old-school guys found the benefit in vulnerability scanners: we know it's going to do the work that we would take shortcuts on, because it is grunt work, it is a pain. And so we let that run while we do our manual tests, and we just give it its time to do its thing, and then we go back and check.
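The timing Pete cites can be checked with simple arithmetic. The sketch below is a back-of-the-envelope estimate, assuming (as Pete does) that closed UDP ports are detected via ICMP port-unreachable replies rate-limited to one per second; the function names and the one-reply-per-second figure are taken from his description, not from any real scanner's output.

```python
# Back-of-the-envelope estimate of UDP port scan duration when closed
# ports are detected via ICMP port-unreachable replies, which many hosts
# rate-limit to roughly one per second (the figure Pete cites).

TOTAL_UDP_PORTS = 65535          # full UDP port range
ICMP_REPLIES_PER_SECOND = 1.0    # assumed rate limit on port-unreachable messages

def full_scan_hours(ports: int = TOTAL_UDP_PORTS,
                    replies_per_second: float = ICMP_REPLIES_PER_SECOND) -> float:
    """Hours needed if each probed port costs one rate-limited reply."""
    return ports / replies_per_second / 3600

def common_ports_hours(ports: int = 3000) -> float:
    """Hours for the shortcut Pete describes: only the first 3,000 ports."""
    return full_scan_hours(ports)

if __name__ == "__main__":
    print(f"Full range:  {full_scan_hours():.1f} hours per host")   # ~18.2 hours
    print(f"First 3000:  {common_ports_hours():.1f} hours per host")
```

This is why the intuition-driven shortcut is so tempting: the full range costs roughly 18 hours per host, while the truncated scan costs under an hour, and only the tool, not the tired human, reliably chooses the thorough option.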

So again, automation is not bad per se. I mean, for investigations, we do it a lot, because there's only so many social networks I can check at once. But with automation, I can have 50 fake social network accounts that are going through and checking and doing the investigations and grabbing the stuff, each as an individual virtual machine, its own server. Again, something that I couldn't do on my own, and it's going to do it thoroughly, which is something I wouldn't do myself because it's such a pain. Told you I was tired.

Andra Zaharia: Now you made an excellent point here, and an optimistic one, I think, because we tend to have such black-and-white thinking in the space sometimes it's either something's really great or something's really terrible. And not many things sit in between. And having that level-headed approach to things I think helps a lot, especially because we're talking about things that are nuanced. And that means that there is a place for everything. And it all ties back to that critical security thinking that you've been advocating for years. 

Because the why and the how behind it kind of informs and shapes everything that we do. 

How we use tools, how we deploy automation, how we build products, how we build teams, how we train young people that come into this industry to understand this thinking to pass along and to help them build their own critical thinking so they might make increasingly better decisions. 

[24:35] Pete Herzog: That's the thing. You would think that this is obvious, but it's not. And it's not like we don't have historical records of lack of critical thinking and of things that don't work. So, for example, security awareness training. Okay, is it a good thing? Yes, it's an excellent thing. It's important that people are security aware. But what does that mean? Where does that bring you? Is this situational awareness? Can it be reduced to something like: I hear a knocking or a weird sound in the car that I drive every day, or my computer is doing something I'm not used to hearing? You know what I mean? Like, something is new.

I feel something is different. I hear something different, just like you would with your car. Is that enough? Because then you say, okay, I have to take the car to the shop, it's shaking a little bit more than usual, and you prevent a problem. That kind of security awareness? Sure. But knowing whether or not something is a phishing link, that's a cat-and-mouse game that even I would sometimes fail. And I think realistically everybody would.

Pete Herzog: I guarantee you I could make a phishing mail that somebody would fall for, no matter how sure they are that nothing gets by them. That's the whole point of how creative and how good some of these things get. And yet we think we can sit people down, talk to them, and show them things about passwords, Bluetooth, and whatever. These are people with a job to do. Their job is not security. They have a job to do, and you're telling them they have to sit and go through this security stuff that actually gets fairly involved, rather than actually controlling that security for them, coming up with technical solutions for their passwords so that they don't have to do it. Something frictionless, rather. No, no.

We think we can teach them by calling them the weakest link when they're actually the asset; they're the ones bringing in money. And we try to tell them, listen, you're going to get fired if you make a security mistake. And the sales guy is thinking, okay, at my next job, do I want to say I got fired because I suck at sales, or because I made a small security mistake? I guarantee you they would take the security mistake rather than sucking at their job. And this is the thing. We think that this is a good thing, but at the same time, we know better.

Since, I don't know, the seventies or eighties, there's been this push on safe sex, right? There's been all of this training in schools, trying to reach them as young as possible, and that's impacted, like, zero, you know? And then you look at fields like medicine. So if you look at medicine and say, okay, we have something cool, like the Heimlich maneuver for when somebody is choking, you're like, that's a great idea. And most people have an idea of what it is and how to do it. Maybe not perfectly, but I think most people have some idea. But again, this came out, I think, in the early seventies; that's when it was invented. You want to think that it existed forever, but it didn't. And then it took a good 20 to 25 years before it became common knowledge. You can judge for yourself how well you know how to do the Heimlich maneuver, but it's something that is a safety requirement in our daily lives, because we all eat multiple times a day.

[28:22] Pete Herzog: Heart attacks, I mean, right now we're at a 40% increase in heart attacks annually for people between the ages of 25 and 40, I think. So this has gone up dramatically. CPR: they've learned that CPR with mouth-to-mouth doesn't work well. You should just do chest compressions until the experts arrive; you should not be doing the breathing part. They figured this out, I think, back in the seventies, and it didn't get passed through the hospitals until the nineties. And I would say that most people still don't know; most movies will still do it wrong. So we know that training people to do the right safety or security thing is really not easy. It's much easier to build security controls. Or, one of the things we found, just change the environment. If you change the environment, who they work with, how they work, what they're doing, you're going to impact their security more in a positive way than just giving them exams or quizzes or making them watch movies about it.

Andra Zaharia: How do you plan that exercise of changing the environment? Because that captures, really, everything that we've discovered about how our brain works in relation to habits, behavioral patterns, mental patterns. There's a lot of neuroscience that we can use to actually improve all of these outcomes. But if you're an offensive security specialist, how can you change the environment for the person that you're reporting to, whether it's the CEO, the CISO, or, you know, whatever other management or leadership function?

[30:26] Pete Herzog: Well, it really depends. It's really raining out right now, hard, so if you hear noise, you know what it is. We're not flooding here yet, but who knows. How do you do that? Well, again, you're in offensive security, and it depends on what problems they have and what you're reporting. For them to change their environment from a security awareness perspective means that you've done an internal test, that you know how their people are working and what they're doing. Maybe you've done a password audit, or, I don't know, checked whether they've been sending credit card numbers over email, or things like that. These kinds of audits, in which case you can actually address it in terms of what they're doing.

So, for example, why are they sending credit card numbers over email? Well, because they're still using email, because that's still the most convenient thing for them. And again, why isn't there internally a way for them to pass credit card numbers that has nothing to do with email? Email should just be another way that people from outside your organization connect with you.

Pete Herzog: I mean, there are so many communication possibilities today that are so much more secure. You know, another thing could be distractions, depending on where they're sitting, how they're working. I know there's a lot more work from home, but actually, once people started working from home, security got better, not worse. You had fewer humans distracted, fewer humans interacting in a bad way; they were technologically controlled in how they entered and how they interacted. Somebody's going to argue and be like, well, you had laptops that could get stolen, and now you had private data on people's systems that had to be wiped. But you always had that; that was always an issue. That's not new, you know?

And of course, that's a problem to be solved, if they have to carry their presentations on their laptops and you don't have a way of securing that for them, or a way for them to have it securely in a location they can access when they get to the client. And again, especially now with AI, these solutions exist; you just didn't implement them, you know? So I mean, in offensive security, your report should not be just what vulnerabilities they have. Your report should be more holistic. "Patch this, patch that" is very small compared to: take a look at the environment, take a look at how these people are connecting, where they're sitting in a room, how they're communicating with each other, how you're segmenting your network. A lot of that matters. And there are things you can see.

Again, we go back to TTLs. I do a scan of their outside network, and I see that this one particular vendor server comes back with the exact same TTL, the same number of hops away. So now I know it's likely in the same network as the other servers they have. And I ask myself: this is a vendor-controlled server, why is it not segmented? So I could put in the report: verify that this server is segmented from the rest of your network. Maybe it is, and I couldn't tell properly from the outside, but at least you know. And all they have to say is, no, no, this is fine. Good, check it off. It's good, done.

But I made you aware of it because I noticed something that seems off, and that's something that pentesters can do. I don't think this is going to end up in any of your vulnerability scanning tools or automation tools, mostly because I think this is the kind of thing that experts don't know that they know, right, when they're doing it. It's just something they happen to come across and they're like, oh yeah.

Andra Zaharia: So it comes up on the spot. It's a very context-focused type of insight that unlocks, and you can never truly document your entire knowledge, as much as some of us, namely me, love documentation. The reality is that we're never going to be able to put all of those good insights in a place where everyone can read them.

[35:18] Pete Herzog: And you make a great point. Context. Context is everything. Security is all about context, right? So I can't protect something unless I know its context. How is it being used? Now we go back to operations. How does it work? Who's using it? Where? How? I think it's funny because this whole Zero Trust crap came out and people say, oh no, this is great, it makes people think that interactions are important. We always knew interactions were important. OSSTMM 1 talked about interactions. OSSTMM 2 talked about trust and measuring trust. None of this is new. And this Zero Trust stuff they put out is basically PKI, public key infrastructure, which got pushed on us back in, like, the late nineties. And it didn't really pan out because it's impractical.

If you're going to be encrypting and authenticating every single interaction, you're actually going to lose a lot of monitoring capability, you know? And when I look at that, I ask myself, why did Zero Trust become a thing and not zero anomalies? Why didn't we push for people to actually know what's running on their network? What packets are being sent, where they're crossing, what applications each department needs, where people are going on the website, how to better segment things?

Again, zero anomalies is something that is definitely 100% achievable, in the sense that you now have something you can work towards. You say, okay, what do I know on my network? And you can start cleaning it up, start investigating piece by piece, going through it and seeing what's there. And then, of course, every time somebody wants new software or new whatever, you can address it that way. You say, okay, do I know what it does? What's it doing on my network? And once you agree to that, then you do it, you know, and then you have no surprises. And if you have a new anomaly, you can address it, because you know how everything else looks.
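The "zero anomalies" idea — know every flow on your network, approve it explicitly, and investigate the rest — can be sketched as a simple allow-list baseline. This is a hypothetical illustration of the concept, not any particular product; all names are made up:

```python
# A minimal "zero anomalies" baseline: every approved network flow is
# recorded, and anything observed outside that baseline is surfaced for
# investigation rather than silently tolerated.
Flow = tuple[str, str, int]  # (source host, destination host, dest port)

class FlowBaseline:
    def __init__(self) -> None:
        self.approved: set[Flow] = set()

    def approve(self, flow: Flow) -> None:
        # "Do I know what it does? What's it doing on my network?"
        # Only after answering that does a flow enter the baseline.
        self.approved.add(flow)

    def anomalies(self, observed: list[Flow]) -> list[Flow]:
        # Everything not explicitly known is an anomaly to address.
        return [f for f in observed if f not in self.approved]

baseline = FlowBaseline()
baseline.approve(("hr-laptop", "fileserver", 445))
seen = [("hr-laptop", "fileserver", 445), ("hr-laptop", "vendor-box", 22)]
print(baseline.anomalies(seen))  # the unexplained SSH flow stands out
```

The point is the workflow, not the data structure: once the baseline is clean, every new anomaly is a small, answerable question instead of noise.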


Pete Herzog: But again, for some reason, people like the mystical, you know. Trust and interactions and authentication, that's another thing. I'm just blabbering now, I should stop and let you talk. But we're working on OSSTMM 4 now, and we've done some really big research. It's 14 years now that we've been digging into this; overall, it's 24 years that we've been working on this research. And for the newest one, we found the patterns of security: what the elements are, the smallest parts of security that can exist independently. Of course, putting these controls together works better. And we looked at what the properties of security are, what the controls are. And it all comes around to 15: there's five intent properties, five react properties, five resolve properties. I feel like this makes no sense to anybody, so let me just shortcut to the point I was making. There are 15 trust properties, which means there are 15 trust controls, things I can do to control trust. And yet Zero Trust is about two of them: authentication and encryption, which we call confidentiality. What's happening with the other 13? Well, they didn't think about those, you know. So what good is my trust if I have a problem, for example, with continuity, with resilience? That's not there.

And it seems like they're kind of scrambling now because they made up this thing and they're trying to... I don't know, every generation has its own buzzwords, and that's this one. But we shouldn't be at buzzwords anymore. We have so much new knowledge, so much new tech, so many new interesting things. Why are we bringing something back from 1997 and calling it something new? And the older you get in this business, the more you see this cycling, you know?

Andra Zaharia: It's maybe because there aren't enough people with your kind of experience to be able to tell, like, we talked about this, like, 20-30 years ago. Why are we talking about it again? We tried this, it didn't work. We have others.

Pete Herzog: But that makes no sense, because it's the people my age teaching those people. Why aren't they telling them? Why aren't the Gen Xers, who are the professors in these places, telling them this is nothing new? You don't just look at a tool, like an IDS or a firewall, or even Nmap or a vulnerability scanner, as this thing: you need antivirus, you need a firewall. No, you break it down and you say, I need these parts of security. I need confidentiality. I need authentication. I need resilience. And then you say, I buy a product that has these things, because that's what I need, for the different interactions you need. And I know why people don't do that: because it's hard.

Andra Zaharia: It is.

Pete Herzog: It's really, really hard. You know, and it's so much easier to say, I bought IBM, so not my fault, you know? 

Andra Zaharia: That's true, that's true. The complexity and, well, first of all, the curse of knowledge, which affects us all one way or another. Just being aware of it, I feel, is so important at all times, because otherwise we get really disconnected from the actual issues that people deal with.

And as you were walking through all of these steps and important aspects, I kept thinking back to people who sometimes build their experience from behind the screen, without having worked in a company to see what operational security looks like, to see how it works, to understand what it takes to have to do your job and not be able to because some security process gets in the way. Without that real experience, like you mentioned, of being out there, in an environment that has to deal with all of these issues.

Like a previous guest of ours said: try to do the patching yourself, not on a server with zero users, or one, two, or three users. That's super simple. Try to do it in a company with 10-60 people and see what that's like, because that will give you a real connection to what these problems look like in real life, and give you that sense of being grounded in reality and not just doing security because it's fascinating and mentally challenging, and because it sometimes gives you an ego boost. Because that happens, and we have to acknowledge it.

So, yeah, this entire generation of people who are growing up right now, picking who to look at and developing their thinking around security, can benefit a lot from experience like yours, like Vivek's, like Jason's, because you've been there and done all of these things over and over, kept digging at them, and persisted at a challenge that, like you said, is really, really difficult at times.


[43:42] Pete Herzog: Yeah. I mean, our industry suffers from what we call "too many parrots, not enough pirates." There's a lot of people who just like repeating what they've heard. They see something hot or new and they repeat it as a mantra, you know: patch right away, or zero trust, or whatever the latest buzzword is. They just repeat it and can't, for the life of them, imagine a world without it. Like two-factor authentication. Oh, you got 2FA? You have 2FA? Really? What kind of 2FA?

You know, there's 2FA that does nothing for you. There's some that had better be phased out quickly, like the SMS one, which is done for convenience, not for security. And we see that attackers are getting better and better at circumventing 2FA and tricking people into doing things they shouldn't. And yet you have security person after security person.


If you ask them what somebody should do to be secure, they're going to tell you the holy trinity of firewall, IDS, and antivirus, right? They're always going to tell you about cyber hygiene, which is some ridiculous phrase by which they mean: maintain your passwords and don't reuse them. Use a third-party password manager, which keep getting hacked, so I don't see why that's a good thing. Again, they tell you 2FA.

If you have, for example, the app: your 2FA app is significantly better than any password you can make up, also better than any password manager. And honestly, I think you're better off just using the word "password," or, I don't know, your dog's name, I don't care, plus the app for coming up with the code, than using a password manager and 2FA on top. They're just making it harder for people. And why are they saying this? Because they're repeating what they think is right. Why should we have an eight-character password? Which, by the way, the old PCI standard, I think, said seven. But why eight? You know why? Because back in the 90s, somebody said eight, and that stuck, and people just repeat it.
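For context on why the app's code is stronger than anything a human memorizes: a standard authenticator app computes an HMAC over the current 30-second time window from a shared secret, per RFC 6238 (TOTP), built on RFC 4226 (HOTP). A minimal sketch of that published algorithm:

```python
# RFC 4226 HOTP and RFC 6238 TOTP, using only the standard library.
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the big-endian counter, then dynamic truncation.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(key: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    # The counter is just the number of 30-second steps since the epoch.
    t = time.time() if for_time is None else for_time
    return hotp(key, int(t // step), digits)

# RFC test secret; at Unix time 59 this yields the documented vector.
print(totp(b"12345678901234567890", for_time=59))  # prints 287082
```

Because the code changes every 30 seconds and derives from a high-entropy secret the user never types, there's nothing memorable to phish and reuse later, which is Pete's point about the app beating any password you can make up.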

And now, of course, they say longer passwords. And why are we doing passwords at all? Because companies like to push that liability onto the user, and it's too expensive for them to push out an automated solution, unless you're a bank coming up with a better solution. Plus, that way, if you get hacked, they can blame it on you and say it's your fault, you did something wrong. Then it becomes a thing about liability.

But again, who falls for it? The security consultants. They just keep repeating the stuff again and again, and they haven't actually done the research, they haven't looked into it, or they haven't had enough experience to see it fail right in front of their eyes and know that something's wrong. This is a whole issue, you know: too many parrots, not enough pirates. And by pirates, we mean researchers, people conquering new lands, looking for treasure, whether it's bug hunters or pentesters or just researchers. You run into this again and again. Another big flaw in our industry is that we promote these people, especially if they have a more clever way of saying 2FA, for example. Instead of trying to invent new frictionless, passwordless, better technological solutions, they just keep repeating the same things and blaming the users.

Andra Zaharia: Because, unfortunately, that's the easier way. And just like the human brain tries to find easier ways to do things that are less expensive in terms of energy, I feel like taking the path of least resistance is still something the cybersecurity industry does as well, even though this space was kind of built by pirates, especially in terms of security.

And we do need more of them. We do need more contrarian views, not for the sake of being contrarian, but for the sake of improving that critical thinking, for the sake of figuring out what our own blind spots are. Because if we don't continually keep questioning why we do things, how we do things, what we're doing them for and, especially, who we're doing them for, we might just end up making the same mistakes we've made in the past, and that's just history repeating itself, a pattern as broad as humanity itself. But we do have hope.

[49:26] Pete Herzog: Yeah. I think this whole thing you say about histories repeating themselves and us not learning from them is part of our humanity, right? We've known for at least 5,000 years that identity is broken. People have pretended to be other people forever, as far back as we have historical records. And yet identity is still bad. Identity is still the basis of authentication and transactions, and I don't understand why it still is that way, you know.

If you ask anybody how you secure something, again, they're going to come back with authentication and encryption, 2 out of 15 possible controls. And the thing is that we know identification is broken. You see this time and time again when attacks happen: either there's social engineering or phishing, which are both about identity, taking advantage of broken identity, or it's some kind of authentication break, some hack against authentication, in which case, again, it's usually about manipulating identity. So it's a huge part of where we're weak, and for some reason people keep ignoring it, and their answer is just more authentication, which, again, is still based on identity.

Andra Zaharia: That's a good observation. And again, we see the same types of problems repeated in various places in the cybersecurity industry. Speaking of people who persist at these kinds of challenges, what's something that gets you excited right now? What's something you're looking into, because it gives you that thrill of being a pirate in this space?

Pete Herzog: Wow. Yeah. There's a bunch of cool new technologies that we've been working on. That's one of the things, because we do advanced research, and we've been heavy into OSSTMM 4. Our team meets once a week, generally going over the stuff. And I would say at this point, we're probably about 20 years ahead of the security industry on what we know about security and what we've figured out.

And hopefully we can publish it this year. We'll publish it for free, like we always do, put it out there. And again, there's going to be a lot of people who just tell us we're crazy, but there's also a lot of people who take it and make it a basis of research, which is great, you know, who come forward and see if we're correct or not. And as a researcher, as a scientist, I appreciate that.


So knowing that allows us to build out new technologies that are actually kind of exciting. Unfortunately, some of it's in stealth, so I can't really talk about it; we're waiting on patents. We did just get a patent, though, about a month ago on this device here. This one's a mute, and this is a kill switch, a wireless kill switch. When you walk away from your computer or you leave your phone behind, it will automatically lock. And you could also have it do other things: delete stuff, or shut down, or whatever. Should somebody take your phone out of your hand and run away, it will lock immediately. Should you have to run out of a building, your laptop will lock immediately.

That was something that was kind of fun, because while we were doing that, we noticed there was a gap in the kill switch space. Everybody was trying to do authentication, but nobody was trying to do lock, just lockdown, you know? And because we do such broad research, we do come across these things. I think there's going to be a push for a lot of this kind of kill switch to protect people, to have things locked down.

[51:40] Pete Herzog: What else can I tell you about? One of the things we've worked on is frictionless authentication, so that is interesting. We also have a means of verifying someone's location over the Internet without using GPS. So let's say your credit card is used in a store, and the store is using this type of mechanism that we have. Again, we're looking at making it open, an open system. The idea is that there would be a ping to ask, is Andra at Walmart? And obviously you're not, so we would come back and say, no, she's not.

Again, it's not about tracking where you are. It's about being able to ask, maybe three times an hour, where you're not. That way we can sort of flip it around. It's not about invading anybody's privacy, and I'm not going to make money off it; that's why I want it to be open, so anybody could run their own server, just like you can with a website. Maybe there will be hosting providers if you can't run it yourself. The idea is to allow people to do that, to allow them to say where they're not. And of course then the card would get rejected.
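Pete's actual mechanism is in stealth, so here is a purely hypothetical sketch of the general shape of such a "where you're not" check: the verifier asks about one claimed place and gets only yes/no back, never learning where the user actually is. Every name, parameter, and design choice below is invented for illustration and is not his patented approach:

```python
# Hypothetical negative-location check: the user's device derives a tag
# from its current place and a time window; a verifier can only test a
# single claimed place against it. A mismatch means "not there" and
# reveals nothing about the real location.
import hashlib
import hmac

def location_tag(secret: bytes, place_id: str, window: int) -> str:
    # Computed on the device from where it actually is right now.
    msg = f"{place_id}:{window}".encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

def is_user_at(claimed_place: str, window: int,
               secret: bytes, device_tag: str) -> bool:
    # The verifier asks about exactly one place per query.
    expected = location_tag(secret, claimed_place, window)
    return hmac.compare_digest(expected, device_tag)

# Device is at "store-42"; the card processor asks about "store-7".
tag = location_tag(b"device-secret", "store-42", window=100)
print(is_user_at("store-7", 100, b"device-secret", tag))   # False -> reject
print(is_user_at("store-42", 100, b"device-secret", tag))  # True
```

The privacy property Pete describes comes from the query direction: rate-limited yes/no answers about claimed places, rather than a feed of coordinates.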

And if you look at the sheer amount of fraud that's out there. And it's not just that; this works for lotteries and even better security. Because let's say you want to verify that somebody can access a system only if they're in this building and nowhere else. If we do it this way, a VPN won't work, faking GPS won't work. So again, this comes down to us understanding what kinds of controls are missing. It's funny, we looked at all the possibilities. These are old numbers; I think it changes now that we've figured out new controls, new things. But back on the 2010 numbers, the possible number of unique security solutions that could exist was something like three times ten to the 32nd. So you're talking about 32 zeros, a huge number of unique possible security solutions. And the number of solutions that actually exist out there in the market right now? If you have 10,000, that's optimistic. Four zeros.

Andra Zaharia: Many of them use each other.

Pete Herzog: Yes. And when you're talking about unique products, I doubt you'll hit 10,000. And of those possibilities, the ones that are physically possible to make today are probably about a third, with the modern technology we have. There are things we can't do, like make things invisible, if you look at visibility as a control, or wipe things from people's minds, you know.

So if you're looking at all the possible combinations against all the possible channels, whether it's human, physical, data networks, wireless, all these things you want to protect, maybe about a third of that is actually possible. But you're still really far away from the number of actual solutions we need to protect against all types of threats.

Because, look, if you get all 15 controls against all your interactions, then there's not a single threat that can harm you. You do have perfect security. Is it expensive? Yeah. Does it need to be maintained? Yeah. But that's very different from saying you can't have perfect security. We know how to get there; it's just not all really possible yet. We just don't have the technology.

Andra Zaharia: Exactly. But leaving that space open to what is possible, maintaining this exploratory mindset that pirates have, to use your metaphor, I think is still so important in an industry that has gone from being a rather obscure part of technology, one infused with creativity, sometimes lawless, sometimes a bit more flexible than it is now.

It has now become rather corporate, in a sense, very obviously, because it's become extremely important and plugged into every aspect of technology, which is plugged into every aspect of society. So we need to create that space for what is possible, and for what we know could be possible in the future, simply because cybersecurity, and offensive security specifically, has a very important task: to protect not just the present, but the future as well. That is a tough challenge, but hopefully one that will attract people with the right reasons, with integrity, determination, and the grit to keep going at these problems like you have for so many years now, and to keep asking difficult, uncomfortable questions of those who need to think about them. So thank you for doing that.

[1:00:36] Pete Herzog: Yeah, you're welcome. I mean, I do it because I just need to know. Also, the fact that we run a nonprofit organization means that I can chase after anything without having to worry about profits for shareholders. So it's very different. We get to try all sorts of things. We get to try all sorts of new directions, and if we fail, we fail. And if we succeed, we sell it off to somebody and live another day. 


And that's something we have the luxury of because we're not a corporation, because we don't have to make other people money. So this is a luxury I have. But again, I have this real need to know, I need to figure it out. And it's funny, because the team was making fun of me today: they noticed that over the holiday break, I did four straight days of improvements to the models we were working on, just because I could not give it up. I had this idea I had to chase down. And it was great, because it's like working on a puzzle and coming to a point where you've kind of figured out where some of the pieces are supposed to go.

And once I got that all in place, I went back to some of our prior research, the ideas we had before we got here, and tested those patterns out again. Because now that I know what the picture should look like, what am I going to do with these other pieces, the ones we thought would go in those places but don't? So I was really in the groove when I should have been celebrating the holidays and taking a break. This is just something I need to do.

Andra Zaharia: Getting in the flow is important, and thank you for sharing that with us, because I think we've all been there. It may seem strange to people who aren't perhaps as passionate, passionate in the sense that their internal principles and values and needs truly align with what it takes to make a meaningful contribution to this community we're in. So, yeah, I really appreciate that, and I appreciate everything you've shared with us today. There are so many lessons that connect these specific aspects of offensive security work to the much bigger, broader, more complex problems that people have. That bridge is so important to cross and to maintain, to make sure it's kept up so we can use it every time we need it, and not just stay on this side of things, which might feel easier sometimes.


Pete Herzog: You reminded me of something. Back in 2017, I did a talk at the Troopers conference about intent. I figured, since we had trust properties, we could figure out intent properties. And that was actually the thing I was solving over the break. So now we can determine malicious intentions before they happen; we have controls you can put in place to determine maliciousness. But even after the fact, this was interesting, because I did some research to see how intent is proven in courts.

So if you go and kill somebody, how do we prove that you wanted to kill them? It turns out it's just a lot of psychological hokiness and guesses. It's not actually based on any kind of science; it's really just the intuition of the judge and the jury. So because we could determine intent ahead of time, we looked to see if we could do it retroactively, and it turns out we can. There are five properties to it. And I wonder if this is going to matter, because this could be something interesting for a court system: if three of those five properties are met, it's almost beyond a shadow of a doubt that the intent was there, that they meant to do it. If it's one, it could be chance. If it's two, okay, bad luck. But if it's three, they meant to kill them. So you could actually look at each property and say, was this satisfied? What happened here? You go through all five, and if you hit three, you can be sure they meant it.

Pete Herzog: And I think that's something interesting I want to get published out there too, because that's a whole other side of security: punishment, making sure there's some kind of repercussion for what people do. No longer does it have to be just the deliberation of a judge or jury. There's an actual process, a scientific process: they can go through five steps and no longer have to argue them. Now it's just based on facts.


Andra Zaharia: That is extremely interesting, and it is a piece of research that does remind us of Minority Report. But I think we're closer now to that scenario than at any point in human history, and there's practical need for it in certain spaces. So hopefully you'll publish that, because I bet there are going to be quite a few people interested.


Pete Herzog: I think that's something that's really worth waiting for. I do. Because even if you don't look at just punishment, if you're looking not at people being malicious but at people not knowing the right thing to do, you're basically drawing, let's say, three lines in the sand on an intent basis. And you don't wait for them to cross all three. After they cross the first one, you get them help, you interact, you intervene.

So I'll give you an example. Some of these bigger stores, these shopping centers, supermarkets, have a fire lane in front of the store, and you're not allowed to park there. One of the reasons you're not allowed to park there is so that you can't have a getaway car. It's not just for the fire trucks; it's also for theft. So if somebody is parked there, which is how they handle it now in the physical space, a security guard should go over and say, please move your car. And there's a security reason for it. They use the fire truck example because, in commercial business, you're trying to make the customer feel safe and happy and buy things.

So that is a line in the sand. And let's say the second line in the sand is that there's a group of them and they're blocking the doorway, or they lock the door when they walk in behind them. They walk into the store and they lock the door. That's a second line. Now you can go address them before the robbery happens. You can say, why did you lock the door? You can't lock the door. And you go and you address it.

Those are extreme examples, simplified so you get the idea. But we can do this with data networks or wireless or whatever. The idea is that you can intervene before it's a problem. I'm not talking about waiting till they do the three bad things and then giving them repercussions. This way you can sort of guide them and get people the help they need, if that's the case.

Andra Zaharia: Disarming situations like that takes less effort and is far less complicated than addressing the aftermath of a security incident. Wow, that's really exciting and very interesting. And again, thank you for sharing that with us. I think there's a lot to learn from, and a lot we can do to enrich our mindset and truly become that cross-disciplinary specialist you mentioned in the intro to the Open Source Security Testing Methodology Manual, version 3. The principles you mentioned there are evergreen, and even today those observations are still so, so relevant. Just rereading that preface is an important moment that makes you reflect. So thank you for giving us that opportunity.

Pete Herzog: I know people are going to say, well, that was written in 2010, don't you have anything more updated? I think it's funny, because the security industry is still stuck in about 2008. They haven't even gotten to the point of, for example, trust metrics, even though everybody talks about trust. We looked into trust, I think, in 2006 — no, we worked on an EU project for it — and what we published in 2010 has full trust metrics. So when people ask, do you have anything more updated? No, because the industry hasn't even caught up to where we are. And I'm afraid that when we come out with OSSTMM 4, it's going to be a while again before we need another security methodology, because we see how long it takes. If it takes the medical establishment sometimes 30 years just to get a new technique out, by that comparison, it could be another 30 years before people are actually using intent metrics. Yes, you know.


Andra Zaharia: Yes, absolutely. So true. That's why it doesn't just feel evergreen, it still is evergreen, because we're yet to get there. Thank you for walking us through all of these directions and all of these pieces of the puzzle. It was super interesting and also very thought-provoking, which is what we're trying to achieve with these conversations: getting people to question their thinking, their methods, their tactics, and especially their motivation, their why behind all of this. Because when we get to those real answers, that's when we can make real progress. So thank you for giving us your time and energy and insights. Truly appreciate this.

Pete Herzog: Thank you. I feel like I ranted a lot. Again, I warned you: I was tired, a little bit sleep-deprived, and that always leads to me just sort of ranting about things. But yeah, anybody who listens, if they want, you can catch me on LinkedIn. I tend to troll LinkedIn a lot.

Andra Zaharia: I've noticed some of that, and I have fun with it, especially with the LinkedIn prompted questions.

Pete Herzog: Yeah, I really enjoy that. So maybe I can offer a laugh to anybody else who needs one in their day. Happy to see any of you on there. Andra, thank you. It's always a pleasure, and I'm really, really happy. Again, sorry everybody out there for the rants. I hope you can listen to me at two-times speed, maybe that helps.

Andra Zaharia: Thank you. Thank you once again. 

Andra Zaharia: Ever wondered how deep the rabbit hole goes in the world of ethical hacking? Well, we're still falling, and we're dragging you along with us. One question at a time. 

Thanks for wandering through this maze with us as we tackled the nitty-gritty, flipped misconceptions on their heads, and maybe, just maybe, made you rethink some of the things that are important to you.

This has been the We think we know podcast by Pentest-Tools.com, and before I sign off, keep this in mind:

There's always a backdoor, or at the very least, a sneaky side entrance. 

See you next time.

