Summary
In this episode of The Defender’s Log, host David Redekop sits down with his business partner, Francois Driessen, to explore the intersection of human psychology, extreme risk, and the future of cybersecurity.
From High-Stakes Film to Cyber Resilience
Francois’s path to cybersecurity wasn’t conventional. With a background in information design and film production, he learned risk mitigation the hard way—filming from helicopters and technical diving. These experiences taught him a vital lesson: resilience must be built into the system. Whether it’s ensuring film data isn’t lost or keeping a diver alive, the principles of fail-safe systems are universal.
The Human Element and “True Proactivity”
Francois emphasizes that cyber problems are human problems. He argues that the industry’s current focus on “detection” is inherently reactive, often requiring someone to become “patient zero” before tools kick in. Instead, he advocates for a Zero Trust approach—what he calls “True Proactive” security. By moving security lower in the stack and adopting a “default deny” posture, we can stop threats before they ever connect.
Wrangle Technology or It Will Wrangle You
As we move toward a future with one trillion AI agents, Francois warns against choosing convenience over safety. He calls for a “sheepdog mentality” in the next generation of defenders—those who think like attackers (red teamers) but build unbreakably secure systems (blue teamers). His final wisdom: Security must be part of the design from day one.
TL;DR
- The Sheepdog Mentality: Effective defense requires prioritizing human safety and taking personal responsibility for the consequences of technical failure.
- Resilience via Risk: Experiences in technical diving and high-stakes filmmaking prove that critical systems must be built to “fail closed” to keep people alive.
- True Proactivity: Current security is reactive (waiting for a “patient zero”). A Default Deny (Zero Trust) posture stops threats before they can even connect.
- The AI Arms Race: As adversaries use AI to bypass detection, defenders must shrink the attack surface by only allowing verified, invited traffic.
- Security by Design: With one trillion AI agents looming, security must be foundational. We must “wrangle” technology through design, or it will eventually master us.
- The Blue Team Challenge: We must inspire the next generation to apply “red team” (attacker) skills to build unbreakably secure systems.
Links
View it on YouTube: https://www.youtube.com/watch?v=TXw72GV_RfM
Listen to the episode on your favourite podcast platform:
Spotify
https://open.spotify.com/episode/6WkPHuV2p4ASX5K6VN6P4w
ADAMnetworks
https://adamnet.works
The Defender’s Log Full Transcript - Episode 014
Introduction
(00:00) Defenders have a different mentality. They put other people before themselves. They care about other people. What is the personal thing that drives you on this journey? If you don’t wrangle it, it’s going to wrangle you. So, you actually had a real-life experience on risk mitigation in a variety of ways. There are systems that are built to keep people alive. And once you understand how these systems work and why they exist, you can apply that anywhere. In the coming years, probably less than a decade, it looks like we’ll probably have a trillion AI agents. It’s important to understand the consequences of technology.
(00:46) Deep in the digital shadows, where threats hide behind any random byte, a fearless crew of cybersecurity warriors guards the line between chaos and order. Their epic battles rarely spoken of until today. Welcome to The Defender’s Log, where we crack open the secrets of top security chiefs, CISOs, and architects who faced the abyss and won. Here’s your host, David Redekop.
Interview with Francois Driessen
David Redekop: (01:13) Welcome to another edition of The Defender’s Log. Francois, welcome.
Francois Driessen: (01:18) Hi, David.
David Redekop: (01:20) It’s been a long time coming. I really am looking forward to this episode. I think the world has a lot of value to extract from what’s between those eyes.
Francois Driessen: (01:31) Yeah.
David Redekop: (01:33) And I’ve known you long enough to know that you are a willing participant in sharing the knowledge, sharing the wisdom. Francois has been my business partner in ADAMnetworks for a number of years now, and he comes to us with an intuitive defensive posture about all aspects of life. So, I wanted to just make sure that we had some time with you today to have the world get to know you if they’re interested, because if they see the folks that are operationally involved in what we do, then hopefully it will inform their lives and their businesses and their operations to also adopt some things that are worthwhile.
(02:16) We all know that in the world of defense, you can spend an unlimited amount of time and effort, and the question is always how much of that actually has a return on investment in this life or in the next, right, for that matter? And so that is the constant assessment that I’ve observed you doing over the years, that there are times when you really got to go deep, and there are times when you just got to let that part go because that has no impact.
(02:43) So anyway, tell us a little bit about your background. How did you even get into this mind space of having that posture?
Francois Driessen: (02:51) Yeah, I think that it’s an interesting journey. I would not say it’s the conventional journey for most people that I run into in the security space today, because my world really started with design and interactive media, and went into film. So, I studied information design at a university back in South Africa. And at that stage, it was a pretty intense course. Over 130 would apply, about 30 would then be approved, and then only 24 would make it after the four years. So, it was pretty interesting because it was very challenging and a very, very high workload. And everything was always a challenge, you know. So, you basically went through four years of doing something that you’d never done before, and you had to figure out how to do it. At the end of that, all of the guys who graduated, who actually did make it, ended up doing some pretty interesting things, not following the status quo, but being able to design from first principles, figure stuff out, solve things that no one else has solved before. And it combined things like psychology with, you know, a basic bit of programming back in the day, and human interface design, multimedia, history of art, all basically combined that way, which then brought me into the world of film.
(04:02) So, I had my own film studio. And essentially, what I had to do is communicate with humans. That’s really what it comes down to. Find out what it is that humans need to be able to communicate with each other, to understand something that they haven’t understood before. What are people’s fears? What are their fantasies? What are the things that drive people out there? And that really led me on a path where, even in the beginning of DVDs, me and the guys that I would work with would always, you know, push the boundaries of what could be done before. It was always pushing for higher production value with technology. It was just the dawn of digital video; you know, the iMac DV was my first official computer with FireWire, right? And you pushed that to do stuff that you just weren’t able to do before, because we were all chasing the Hollywood ideals and the Hollywood standards, because that’s what everyone was striving for. Until finally you end up, you know, building your own RAIDs, drilling holes in the back of your Power Mac G5s and, you know, pulling SATA cables through there and RAID-zeroing them, and then you have data corruption. And then, you know, you find yourself next week wrapping every cable in tinfoil. And then a year later, SCSI exists. Right? So, we were always pushing the envelope to get stuff done that couldn’t be done before. But then that led into this world where I connected with a guy called David Redekop. You might know him.
David Redekop: (05:35) I noticed you didn’t mention SCSI in it.
Francois Driessen: (05:38) Yeah. Oh, I mean, that.
David Redekop: (05:39) But that was part of your world, too.
Francois Driessen: (05:40) Yeah. Well, you know, FireWire 800 sort of pushed SCSI out of the mix. I mean, that was the cat’s meow back then. I still have some of my original cables, and I still remember the smell of the cables. They still smell the same, that same plasticky smell. I have my cable from my iMac 400 DV. I still have that. And I have my LaCie DVD burner that was FireWire 800 capable. I still have that.
David Redekop: (06:05) Any zip drives?
Francois Driessen: (06:07) Yeah. No, we pushed past, physically pushed past that. Actually, it was just at the edge of that, where everything we backed up went onto drives that we would then rip out of the RAID and store in the fire safe. So, yeah.
David Redekop: (06:23) Wow. So, you actually had real-life experience in risk mitigation in a variety of ways, right? Because for you, it wasn’t just risk mitigation on the intellectual property front. It was also risk mitigation in logistics planning that could go sideways, and risk mitigation in the tech itself that you were using, building resilience into the tech. And I noticed you brought a lot of that with you. Is there any standout of what you brought with you, or was that just part of your intuition anyway?
Francois Driessen: (06:54) Well, no. Look, if you’re not going to make the deadline delivering something on a TV series, that means the end of your studio, right? You spend 70 grand on a shoot, which back then was a lot of money, and for the size I was operating at, was a big risk. You know, people were risking their lives in helicopters to get the shots, including my own. And then you come back home and, “Oh, shoot. I didn’t properly capture that data.” And, “Oh, that footage never got transferred.” You know, so the data wrangler on the team is the person that you almost trust more than anyone else, because if that data never made it to the edit suite, that’s money down the toilet. And will you ever get those shots again? You know, so it touches you. And when it touches you at that level, you make sure that you do have resilience.
(07:40) And I think another thing is I’ve done a few sports like, you know, diving and a little bit of extended, not extended range, but, you know, technical diving and so forth, where there are systems that are built to keep people alive. And once you understand how these systems work and why they exist, you can apply that anywhere in other systems where resilience becomes important. And I think naturally I would just do that. So, I still approach things that way. If something is important enough that someone’s life depends on it, or their livelihood depends on it, or the resilience of their continuing operation depends on it, there need to be proper systems in place to make sure that it will run properly.
David Redekop: (08:21) Right? I remember doing actuarial science work. And if you had flown in a helicopter, there was a very specific at-risk attribute that we would apply to your life insurance application. Or if you had ever gone skydiving, for example, right? So, you must have some quite crazy helicopter stories. Tell us your craziest helicopter story.
Francois Driessen: (08:47) Okay, we’re going to talk about cyber. I guess we’ll get there.
David Redekop: (08:50) I’m checking to see if I can apply for a life insurance policy on you. So, you first have to tell me the craziest helicopter story.
Francois Driessen: (08:56) Okay. So, the craziest helicopter, well, there are really two. But one, we were actually in the Niagara gorge flying with Niagara Helicopters at the time. This was one of the first H-Factor episodes that we did. And we were literally in a helicopter flying underneath power lines, so low to the water that the water was blowing up onto our equipment. You know, that pilot was just crazy. He had over 35,000 flight hours, and he was a Swiss military pilot; that’s where he started. So, that was the craziest one, but incidentally enough, the safest one. But the other one that wasn’t supposed to be crazy was the least safe. And it was a brand-new helicopter. We were in the desert filming an adaptive motocross athlete. And we told the pilot, “Hey, just take it easy.” And this was just at the dawn of drones. You know, drones were not really practical yet; the nicer cameras were still too heavy for drones and so forth. I told them, “Don’t go too crazy. We’ll just follow and track along with the rider, and we’ll get our shots. And then once we’ve got those wide shots, we’ll get close and, you know, fly a little lower and everything.”
(10:02) I’m watching the monitor, and the next thing I know, I just feel the helicopter almost sliding sideways, you know, because it took a turn as the bike went under us and then turned. And I feel the whole helicopter tilt. You’re inside the monitor. And as a director, once you’re in the monitor, that is where you are. You shut off to the world outside, because your job is to make sure that what the camera is capturing is the right image. And I just feel that there’s some G-force. And then the next thing I know, I’m looking at the camera, and I’m seeing these tumbleweeds coming past the camera on the monitor. And I look outside the helicopter, and it’s right at eye level. Cacti and stuff are right at eye level with me. And then this helicopter just pulls up like this, and there are just alarms going. A brand-new helicopter. And the pilots are cussing at each other and, you know, pointing at stuff on the dashboard that’s all red. Jets that went into 110% torque and stuff. And then they cut us off. You know, we couldn’t hear what they were saying at that point.
(11:03) And so what happened is that the helicopter essentially fell through what’s called a smoke ring. It’s when there is turbulence, and it basically just lost lift and fell through a hole in the sky. And we landed very soon after that, just to check everything over. And there are these sensors at the tail rotor, as big as my little finger, to sense for ice build-up. And those were bent flat, but the tail rotor did not disintegrate. Okay. So, I don’t know if you’ve seen what happens to a helicopter if it’s over-torqued like that without a tail rotor, but it’s pretty gnarly. I’ve never been in one, but I’ve seen footage of it. It doesn’t look good. So, we were really close to death, actually.
(11:43) But, you know, just as things are, you figure out, okay, this is what you’re dealt. This is what’s happening in the situation. And we didn’t die. Okay, we’re not getting back into that machine. We found a way to get back to our base camp and continued working for the rest of the day. But that day I sort of made a decision: you know what, if we can use technology to not have to put lives at risk, then we should definitely do that. And actually, since then, drones picked up, and we’ve never done it again. It was just drones going forward from that point.
David Redekop: (12:14) What a story. And that reminds me of how little visibility we have of what happens in the airspace. I remember going through flight training, and the instructor telling me that if you could food-color the sky, you would never fly, because of the currents that happen in the air, the mechanical and non-mechanical turbulence mixing together. The closer you get to the ground, the weirder it becomes.
Francois Driessen: (12:45) Yeah.
David Redekop: (12:46) And so at that time, I remember him reminding us regularly, because we were learning to fly powered paragliders. He would always tell me that height is your friend. Altitude is your friend. The closer you get to the ground, the more dangerous it becomes, because if you’re in a parachute, and I’m guessing the same is true in a helicopter, if you have the height, then you have room to recover. So that makes sense. Wow.
Francois Driessen: (13:12) Yeah. Yeah.
David Redekop: (13:12) Well, I’m glad you’re not doing that anymore.
The Human Element in Security
David Redekop: (13:17) But one of the things that I know you’ve learned over the years, and you’ve demonstrated over and over again, is that you’re able to assess a situation and then immediately look at the whole situation from a distance, right? Raise your perspective on what’s going on. And one of the things that you brought very visibly to the team, and to the way we interact with clients, is not the technical element. Francois and I have a long backstory, for those of you who are meeting us or seeing how we work together for the first time. I was always involved on the technology part. But the element that Francois brought to the team, to the product, to our interaction, was really the human element. Tell us a little bit about how you got to that place, because it’s so important.
Francois Driessen: (14:10) Yeah. I think, you know, since I started working, it doesn’t matter what field it was, I’ve actually always been working with humans. And moving into the cyber world, that’s what I did. So, a little bit of the backstory of how we connected, so the listeners will understand it too: I was actually doing a documentary project on the addictive properties of pornography. And as I was doing that, you know, David had the beginnings of DNSthingy back in the day, the origin of which was content protection. And so, we connected through a mutual connection, through a church or something like that. And very quickly, as I learned what you were busy doing and I saw the consequence of that, I realized that this is significant. This is important.
(14:53) But you were dealing with the technical part, and the developers were dealing with the technical part. I was dealing more with the human aspect of it. How does this technology actually influence people? And so, later on, you know, David came to me, and I was needed to communicate what the technology was doing to humans so that they could understand. Because he was sitting with a new technology that does something that no one else has done before. I mean, the philosophy or the idea was kind of cool, but people abandoned it because default deny just was not practical, right? So, everyone abandoned that and moved on, moved into focusing on detection, advanced detection, real-time detection, and stuff. And, you know, next-gen firewalls. So, it’s sort of like everything moved. But you kept going until you finally got something that was working, that was unique. People needed to understand what it actually does. What does this thing do? But for me to do that, I had to understand technically what it does first. So, I had to immerse myself in the technical, which is not new for me. It’s always like that: make sure that I, you know, have the ability to understand something that’s never been understood before. Even if it’s to the level of Lego blocks. You don’t make the Lego, but you can use the Lego to build stuff, right? I don’t know all the technicalities of how a microwave works, but I can definitely cook stuff in it, right? And the more you understand the thing, the better you can cook. So, that concept: once the Lego block is understood, you can build with it. And at some stage, as I was going through this, I kept hunting for the consequence. Because for me, one of the biggest drivers is not necessarily what the technology does. I mean, that’s cool, that’s shiny. But what is the consequence of this technology? How does this change someone’s life? How does it make their life better or worse?
And now you can actually understand, you know, how to deal with this.
(16:41) So, anyhow, on this journey, I’m understanding what’s the beginning of what now has grown into zero trust connectivity. I look at the beginnings of this, and I go, “Hold on. This actually means you get to press a button and suddenly pornography just doesn’t exist anymore for people who don’t want to access that, or malware doesn’t exist anymore.” “Oh, hold on. Phishing, you know, just because you’re in default deny all world, click on this link, nothing happens because it’s denied by default.” So, I realized, “Hey, you get to press a button and reset the internet.” And when that light bulb went on for me, that was a massive, massive shift where I realized, “No, this is significant. This has existential impact. This has eternal impact on how technology is going to be shaping people’s character.” And I just, I felt a spiritual drive to pour everything that I have. So, I put everything else on ice, and I poured everything that I have into helping take this thing, this new technology to a place of resilience, to a place where people can understand it, to a place where people can use it. And that has been my mission, you know, on the team.
David Redekop: (17:49) Right. Right. Yeah. We’re all beneficiaries of that calling, of that commitment, and we’re grateful for it. In fact, one of the things I’m regularly reminding everybody of is that when you go default deny all, your single points of failure matter way more. Most network equipment is designed to be resilient, to keep working. But if you’re in default deny all and something breaks, it’s a fail-closed system all over the place. And so, that focus on resilience became super important for us to pursue. And that’s not a new thing. I mean, if you’re scuba diving and you’re out of air, you drown, right? There are systems that are important enough that you need resilience for them, and you need to put systems in place so that you will not end up in that position. So, you know, a good example is making sure that you’re running in high availability. That’s just a simple thing: suddenly, if you have an issue with this box, it just switches over to the next one.
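The fail-closed-plus-high-availability idea described here can be sketched in a few lines. This is a toy illustration of the principle, not how any real product implements it; every class and name below is invented:

```python
# Toy sketch of "default deny must fail closed, so build in resilience":
# a policy check consults a primary engine, falls back to a standby,
# and denies by default if neither can answer. All names are hypothetical.

class PolicyEngine:
    def __init__(self, name, healthy=True, allowlist=()):
        self.name = name
        self.healthy = healthy
        self.allowlist = set(allowlist)

    def decide(self, domain):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return domain in self.allowlist

def resolve_policy(domain, engines):
    """Ask each engine in order; if none responds, fail CLOSED (deny)."""
    for engine in engines:
        try:
            return engine.decide(domain)
        except ConnectionError:
            continue  # fall through to the standby engine
    return False  # no engine reachable -> default deny, never fail open

primary = PolicyEngine("primary", healthy=False, allowlist={"example.com"})
standby = PolicyEngine("standby", healthy=True, allowlist={"example.com"})

print(resolve_policy("example.com", [primary, standby]))   # standby answers: allowed
print(resolve_policy("evil.example", [primary, standby]))  # not allowlisted: denied
print(resolve_policy("example.com", [PolicyEngine("p", healthy=False)]))  # all down: denied
```

The design choice is the last `return False`: when everything is broken, a default-deny system stays closed, which is exactly why the high-availability pair in front of it matters so much.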
(18:53) So, it’s important to understand the consequences of technology, right? When this happens, that’s the net result. And always draw it back to: how does this affect the human? So, what is the personal thing that drives you on this journey?
Francois Driessen: (19:09) I think, to me, I really believe that technology should serve people and not the other way around. And in the context of cyber especially, technology and humans have to collaborate. Cyber problems are human problems. And the human factor is still one of the weakest links in this scenario. It’s humans abusing AI who are coming up with novel attack vectors that no one has considered before. So, getting technology wrangled in such a way that it benefits mankind, and not the other way around, is a personal driver for me.
David Redekop: (19:50) That’s wonderful. I am reminded of that same approach you just talked about: either you master technology, or technology will master you. That applies to so many other aspects of life as well, right? It applies to money. Either you master money, or money masters you.
Francois Driessen: (20:11) Yeah.
David Redekop: (20:12) It applies to our appetite. Either you master your appetite, or your appetite masters you, right? Your temper, everything. It’s a human condition. And what we’ve built as humans, we’re translating parts of what humans are into technology. I find it fascinating, actually. I can look at how a computer actually runs, with, you know, what used to be the BIOS and the hardware that lives underneath. It’s a reflection of how we are designed and created as human beings. And there are issues that can happen in the operating system, and you try to solve them at the application layer. No, it’s an operating system problem. And I keep tying that back to psychology, to my personal view of how humans actually operate. Sometimes you’re dealing with an operating system issue, but you’re addressing the application. You know, you’re addressing habits, but really there’s something underneath that drives you. If you can address that thing, now suddenly the application runs properly. So, yeah, I think the intersection between humans and machines is way deeper than most people can see on the surface.
(21:21) Right? Even down to the OSI layers, the seven layers of the networking model, often referred to as OSI. We have people who are real experts at layer 7. They are really good at doing everything at the application layer. And the more they’re focused on doing all things security at layer 7, the more they want everything below it ignored for any security purpose. Right?
Francois Driessen: (21:48) Yeah.
David Redekop: (21:49) Whereas our position is that the lower in the stack you go, which is like principles that have a broader application, the better. So, why would you, for example, even want a network connection to be going to a place that’s completely unknown?
Francois Driessen: (22:04) Fully agree.
David Redekop: (22:05) Right. Yes, of course it’s possible for layer 7 to be able to do the detection, hopefully before too much damage is done. But if you prevent it from connecting to begin with, then you’ve had a broader protective process already applied.
Francois Driessen: (22:22) And I think that’s actually something to address in itself. If you truly want a true proactive posture for your security, then the technology needs to serve you in order to achieve it. And what I mean by it is this: you can be as proactive in your security, in preparation for that attack, in making sure that you have good resilience and everything in place. You can be as proactive as you want to be, but if you end up implementing tools that require someone to become patient zero first before the tools kick into gear, now you’re reactive, because the technology is not doing what you want to achieve as a human. You want true proactivity. That means the technology also needs to follow that same principle. You know, let’s take another human aspect and bring it back to technology. How do I protect my house? I don’t leave the door open and let anyone come in whenever they want. And then if they do bad stuff to my family, to my wife and my kids, I give them a sticker that says “bad guy,” and I put someone at the door who’s going to look for bad-guy stickers. Well, if my wife and kids are patient zero, if they were the ones who got hurt first, that’s a problem. And what if he took off his jacket? You know, what if no one has seen this guy before?
(23:41) Well, what the industry is saying, if we apply this to security, is: let’s put a robot in there that watches what this guy is doing, and then we have running commentary, and now we can jump in. And I’m saying, “Well, maybe. What if the robot doesn’t see him? What if none of these people were in my house to begin with?” You know, and that makes so much sense as a human. But somehow, because of limitations of technology, the whole industry is in this position where anyone’s allowed in, and then if bad things happen, now we start jumping into action. Right?
(24:20) So, I think we’re at an exciting time in the world, in the timeline of cybersecurity and really internet connectivity: getting a security layer worked in as part of the design. I believe that, you know, ZTDNS, “don’t talk to strangers,” zero trust connectivity, like bringing zero trust to the whole internet, okay, I think it’s a very, very exciting time. And then layering that with stuff like preemptive defense. This is very, very interesting. Because now you are talking true proactive. You’re still reacting to guys taking steps to do bad things, still reacting to it, but you’re now cutting them off before they can actually put it into practice, right? So, layering these things, I think we’re going to see some interesting things happen in the security space.
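The “bad-guy sticker” versus invite-only contrast maps cleanly onto blocklist versus default-deny logic. A minimal sketch of the difference, with made-up domains and lists:

```python
# The reactive model (blocklist) versus the default-deny model described
# above. All domains here are invented for illustration.

KNOWN_BAD = {"malware.example"}               # only things someone already caught
INVITED   = {"mail.example", "crm.example"}   # destinations we explicitly trust

def blocklist_allows(domain):
    # Reactive: anything not yet seen doing harm gets through.
    return domain not in KNOWN_BAD

def default_deny_allows(domain):
    # Truly proactive: only invited destinations ever connect.
    return domain in INVITED

brand_new_phish = "login-secure-update.example"  # never seen by anyone before

print(blocklist_allows(brand_new_phish))     # True:  someone becomes patient zero
print(default_deny_allows(brand_new_phish))  # False: it never connects at all
```

The brand-new phishing domain is the point: a blocklist can only say “not known bad yet,” while default deny never has to know about it in the first place.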
The Rise of AI and Proactive Security
David Redekop: (25:11) We have to, because the numbers are not in the favor of a defender if you’re only going to offer any kind of mitigation after a threat is detected.
Francois Driessen: (25:22) Agreed.
David Redekop: (25:22) Because we have billions of people and billions of devices online today. But in the coming years, probably less than a decade, it looks like we’ll probably have a trillion AI agents. So, talk to me about how you foresee the proactive security model being even more important in that coming trend?
Francois Driessen: (25:48) Yeah. And you’ll notice I use the words true proactive, because if you are proactive in preparing for it but you don’t use true proactive tools, then you’re still reactive, even though you want to be proactive. So, that’s why I’m putting the word true in there specifically. So, okay, I think there’s something that’s important to see, and that is what happened with deepfakes, when we look at the rise of generative adversarial networks in AI. And so, for those of you who aren’t really into GANs, generative adversarial networks, the idea is that you have a creator and a discriminator, okay? And so, this one creates an image, and the discriminator goes, “No, I can see that that’s AI. Try again.” And then this one tries again. You pit these two against each other, and now you have this automated process where you’re essentially, you know, beating the creator into creating the perfect object. Okay.
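The creator/discriminator loop can be boiled down to a toy numeric example. This is not a real GAN (no neural networks, no gradients); it only illustrates the adversarial dynamic, with arbitrary numbers standing in for images:

```python
# Toy adversarial loop: a creator keeps adjusting its output until the
# discriminator can no longer tell it apart from the real thing.

real_mean = 10.0        # stand-in for "real" data the creator tries to imitate
creator = 0.0           # the creator starts out producing obvious fakes
disc_fake_belief = 0.0  # the discriminator's current model of what fakes look like

for _ in range(200):
    # Discriminator: call it a fake if it sits closer to my fake model
    # than to the real data.
    caught = abs(creator - disc_fake_belief) < abs(creator - real_mean)
    if caught:
        # Creator was caught, so it nudges its output toward the real thing.
        creator += 0.1 * (real_mean - creator)
    # Discriminator adapts its fake model toward whatever the creator now produces.
    disc_fake_belief += 0.5 * (creator - disc_fake_belief)

print(round(creator, 2))  # the creator's output has converged on the real value
```

Each time the discriminator catches a fake, the creator improves, which is exactly the dynamic Francois describes playing out between attacker AI and defender AI at industry scale.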
(26:40) Now, in our world, with NIST CSF and how it’s been applied in the industry, you go through the process: you identify, protect, detect, respond, recover, right? Most of the guys have put their energy into the detection phase, and subsequently have put AI onto the detection phase. All right. Now let’s see what the adversaries are doing. And by the way, I just saw a figure the other day that AI is used something like 4,000 times more in adversarial contexts, you know, agents, versus what is actively being used in defense today. Let’s leave that for what it is.
(27:21) But anyhow, here’s what the bad guys are doing. They are using AI to create the malware. They are using AI to do detection evasion. And they’re now using it to come up with the strategy. This is new. Okay. And so, what’s happening is you’re essentially pitting the bad guys’ AI against the good guys’. And they’re using the good guys’ AI to say, “Hey, let’s check on this detection evasion. I know I need to evade the detection if I’m going to attack someone.” So, that’s just part of the strategy. It’s just: I know there are going to be defenses. I know I need to not be caught out if I want to snoop around and steal stuff, or I want to encrypt stuff, or I want to, you know, do surveillance, or whatever. Whatever it is, I know I need to not be caught out. So, the new malware, the new attacks, are basically using the good guys’ AI in order to perfect the attack. And so, what you have here is that the bad guys are now the creator, and the good guys are the discriminator. You’ve essentially created a generative adversarial network across the industry, between the defenders and the adversaries.
(28:33) So, how’s this going to go? Just like with deepfakes, you will get images that look perfect. And so, there will be attacks that no one has seen before, that no one has thought about, and they’re going to be coming a lot faster than they’ve been coming. So, the idea of getting away with a reactive security posture is going to fail more. It’s failing now already, but I believe it’s going to fail more. And so, the only way in my mind to solve it is to say, “Well, no. These guys don’t get to come to my home. I invite the people that I want to be there, when I want them to be there.” And so, now suddenly I’m dealing with a much smaller problem than everything that could potentially go bad out there on the worldwide web, which, you know, we can affectionately call the universal threat ecosystem, right? You don’t have to deal with that. You only need to deal with what you actually need to work with: what initiates from you, from your authenticated devices, from your authenticated users, to verified good destinations and connections. And now suddenly you’re sitting with a much smaller problem than what you sat with in the past. And you don’t have to see them coming. You don’t have to see them doing their bad thing before you jump into action. You’ve jumped into action before the bad thing happened. And I think that shift is an exciting shift, but it is a mind shift for the industry.
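The “authenticated devices, authenticated users, verified destinations” model can be sketched as a simple policy check. This is only an illustration of the shape of the idea; every device, user, and destination identifier below is invented:

```python
# Sketch of "only what initiates from authenticated devices and users,
# to verified destinations, gets through". All identifiers are made up.

AUTH_DEVICES   = {"laptop-7f3a"}
AUTH_USERS     = {"alice"}
VERIFIED_DESTS = {"crm.example:443"}

def permit(device, user, destination):
    # Every element of the connection must already be known and invited;
    # the "universal threat ecosystem" is never evaluated at all.
    return (device in AUTH_DEVICES
            and user in AUTH_USERS
            and destination in VERIFIED_DESTS)

print(permit("laptop-7f3a", "alice", "crm.example:443"))      # invited: allowed
print(permit("laptop-7f3a", "alice", "unknown.example:443"))  # unverified: denied
print(permit("rogue-device", "alice", "crm.example:443"))     # unauthenticated: denied
```

Note that the decision never consults a threat feed: the problem has been shrunk from “everything bad on the internet” to three small sets the defender controls.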
David Redekop: (30:04) Wow. Infoblox has reported in the past that 86% of new domains are registered with malicious intent. And as far as we know, those domains are still being registered by human beings who are actually choosing to register them. So once that aspect is automated, to where AI is doing the registering, I can only see that problem getting harder to tackle unless you take the default-deny-all approach.
Francois Driessen: (30:38) They’re doing a good thing. I just picked up yesterday that they now have a preemptive defense that they’re building in, which is wonderful. Which means you’re still going to be looking for the guy who looks like he’s going to be a bad guy before he gets to your house. And I’m like, “Yeah, that’s absolutely great. Keep doing that.” That’s one layer. We still need to get to a place where only what is good, clearly identified as good, and invited by me comes to my house. And then sure, have that additional layer on top, that global psych profile of all humans, so that before you even invite someone to your house there’s another layer saying, “Do I allow him in or not?”
David Redekop: (31:21) Right. Right.
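Francois’s “invite only what you need” posture can be sketched as a tiny allow-list check. This is a hypothetical illustration of the default-deny idea, not any vendor’s actual implementation; all names (`Request`, `DefaultDenyPolicy`, the device and user IDs) are invented for the example:

```python
# A minimal sketch of a "default deny" egress policy: a connection is
# permitted only when the initiating device, the authenticated user, and
# the destination are ALL explicitly on an allow-list. Anything unknown
# fails closed, so the "universal threat ecosystem" never gets a chance.

from dataclasses import dataclass


@dataclass(frozen=True)
class Request:
    device_id: str      # which device initiated the connection
    user_id: str        # which user is authenticated on it
    destination: str    # where it wants to connect


class DefaultDenyPolicy:
    def __init__(self, trusted_devices, trusted_users, verified_destinations):
        self.trusted_devices = set(trusted_devices)
        self.trusted_users = set(trusted_users)
        self.verified_destinations = set(verified_destinations)

    def is_allowed(self, req: Request) -> bool:
        # Default deny: every element of the request must be explicitly
        # invited. There is no "looks suspicious" heuristic to evade.
        return (req.device_id in self.trusted_devices
                and req.user_id in self.trusted_users
                and req.destination in self.verified_destinations)


policy = DefaultDenyPolicy(
    trusted_devices={"laptop-01"},
    trusted_users={"alice"},
    verified_destinations={"updates.example.com"},
)

print(policy.is_allowed(Request("laptop-01", "alice", "updates.example.com")))  # True
print(policy.is_allowed(Request("laptop-01", "alice", "evil.example.net")))     # False: never invited
```

The point of the sketch is the shape of the decision: the policy never needs to recognize a threat, because it only recognizes what was invited.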
Inspiring the Next Generation of Defenders
David Redekop: (31:22) Francois, looking ahead from where we are right now, how can we inspire the next generation to stay on the bleeding edge of the defensive approach? Not only to develop it better and faster, but also to help with the adoption of those security postures.
Francois Driessen: (31:50) Well, first of all, the only way you can ensure adoption of a good security posture is, unfortunately, by people actually feeling the pain of not having one. And then hopefully there are technologies that can take that and make it as convenient as possible. Because with security, you’re always trading convenience against security, and practicality is the slider that sits in between, right? So, make it as convenient as possible for end users to be secure, and create technologies that are secure by design. If I rely on my parents to not be scammed, it’s kind of not fair towards them. They don’t think like bad guys. They trust everyone. They can’t imagine what these scammers are doing, or that they would be as heartless as they are nowadays. So there need to be safeguards in place that allow them to still be human and not let the technology harm them.
(32:51) And the same is true for the younger generation. There, we’re not just talking about cyber threats; we’re talking about how engaging with technology shapes the development of your brain. So there’s a lot to be done there. And one of the biggest issues is that there aren’t enough tools in the hands of those who are responsible for protecting those young minds. I’m talking about, you know, children at risk, or school boards, and so forth. Those tools weren’t necessarily in the hands of defenders before, but they exist now, right? So, I’m not sure if I’m fully answering your question. There are multiple layers, I think, to that.
David Redekop: (33:32) Francois, you are clearly a gifted communicator. And in the area of appealing to the next generation, there’s been many podcasts that have actually successfully engaged an audience that wants to be a red-teamer. But we haven’t had the same level of excitement for blue teamers, the defenders. How can we fix that?
Francois Driessen: (33:55) Oh, okay. Yeah. You know, red teaming is part of being a defender, right? It’s always exciting to hear about how you could break someone’s stuff or how you could get away with it. I think the reality is that the people who actually have this sheepdog mentality, who can protect the sheep, it’s not everyone that’s like that. Defenders have a different mentality. They put other people before themselves. They care about other people. They care about consequences to other people. And if we want more people in that realm, that’s the aspect of human nature that I believe needs to be awakened: to see the value of protecting, of being almost selfless in your application of technology.
(34:39) And don’t get me wrong. I mean, the idea of defending and being better than the bad guys, that is awesome, you know. So I think if I were a really good red teamer, I might find it appealing to say, “How can I apply my red-teaming skills from a defensive posture, so that my own red-teaming skills couldn’t beat the defender?”
David Redekop: (35:01) Yeah. Yeah. Exactly.
Francois Driessen: (35:03) And I think a good defender is also a red teamer in his mind, because you have to think, “What is the red teamer going to be doing?” So yes, I absolutely believe that, and it’s something I actually witnessed in my son. He started specifically on the red team side, and it was so much fun to see what he could break of what other people built. And then he got to the place where he said, “Hold on. I want to build something that’s not breakable.” That was almost like a next step for him.
(35:28) Look, we always need both sides. You always need the mentality that is fully focused on how to mess with people’s stuff, to break it and make it do what it wasn’t intended to do. And then you need the other side as well, the one that says, “How do I build something that doesn’t break when it gets messed with? And when something does break, how do we make sure that never ever happens again, and learn from it, so that the next iteration foundationally becomes unbreakable?”
David Redekop: (35:54) Right. Right on. Agreed 100%. Okay, there we go. So, those of you who want to be challenged: see if you can build defenses that your own red-teaming skills can’t beat. That’s a great way to move forward.
Similarities Between Film and Security
David Redekop: (36:08) So, on the surface, it sounds like the film world and the security world have a lot of differences and some similarities. Yeah. What’s your take?
Francois Driessen: (36:16) Yeah. I think it’s important to understand, especially if it’s a technology startup, that a startup is a startup. You have a dream that turns into a vision, and that needs to turn into a strategy, which then needs to turn into a directive, right? And you’re working with extremely talented people, just like what we’re dealing with here. We’re sitting with incredibly talented developers and people who each play a specific part on the team. It’s the same in the film world. Every film is essentially a mini entrepreneurial project, right? It’s like planning a wedding, changing jobs, and inventing a new technology all at the same time. That’s what always needed to happen.
(37:02) But the same is true here: everyone has a unique part to play. We’ve got to figure out how to do this. We’ve got to figure out how to not blow up in the process as we go. How does this affect the end user? So the similarities are far greater than I think anyone can imagine. And just like in design, there are universal principles of design. You’ll find them in math, in architecture, in fine art, in painting. The golden ratio is a good example; it’s even in our human bodies, there are golden-ratio aspects to how the human body was designed. And I think that’s true across any discipline, in any industry. Every industry speaks its own language. Every industry has its own understanding, its own fears, its own pressures. It’s just a matter of being able to translate what you have into a different world. And it’s really the same world.
Conclusion
David Redekop: (38:00) Awesome. So glad to have you, Francois. Any last piece of wisdom that you wish to share?
Francois Driessen: (38:05) I think a last piece of wisdom to share is that it’s up to us to master technology in every aspect. Everything dangerous that has come to humans generally comes in the package of convenience, right? And doing stuff that’s good for you, health-wise, security-wise, generally has a level of inconvenience to it. I think our challenge is to find the balance between those, because there’s a point where security can become so inconvenient, so heavy on the inconvenience side of the scale, that people abandon it, or feel it’s just not practical anymore and throw in the towel, right? And that’s where, on the other side, convenience comes in. You’re saying, “Oh, great. I don’t mind that my AI agent gets to read all my emails, and book all my flights, and talk to whoever it wants to.” And then one simple prompt injection and, boom, you’ve just exfiltrated all of your intellectual property, or the PII of people you’re supposed to be protecting, if you’re a medical practitioner or whatever, and suddenly that just gets pumped out to the bad guys.
(39:19) I think it’s very, very important to approach convenience with caution. Say, “How do I understand the technology?” And if you don’t understand it, put people around you who do. “How do I actually secure this technology, get stuff to work, get it optimized, get it secured?” Don’t go to the optimization point without the security part. Security needs to be part of the design at the very beginning of everything we implement. If we do that, then AI definitely has a great place in our lives, but it needs to be wrangled. If you don’t wrangle it, it’s going to wrangle you. And, yeah, let’s not be patient zero.
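The same default-deny thinking applies to the AI-agent scenario Francois describes: if the agent can only invoke tools its owner explicitly granted, a prompt injection asking for an ungranted action fails closed. A minimal sketch, with all names (`ToolGate`, `read_email`, `send_file`) invented for illustration rather than taken from any real agent framework:

```python
# Hypothetical least-privilege gate in front of an AI agent's tools.
# The agent never calls a tool directly; every call goes through the
# gate, which refuses anything not on the owner's explicit grant list.

class ToolGate:
    def __init__(self, granted):
        self.granted = set(granted)   # explicit allow-list of tool names

    def invoke(self, tool_name, action):
        if tool_name not in self.granted:
            # Fail closed: a prompt-injected request for an ungranted
            # tool is refused, regardless of how the request was phrased.
            return f"DENIED: {tool_name}"
        return action()


# Owner grants only email reading; nothing else is invited.
gate = ToolGate(granted={"read_email"})

print(gate.invoke("read_email", lambda: "inbox fetched"))    # allowed
print(gate.invoke("send_file", lambda: "data exfiltrated"))  # DENIED: send_file
```

The design choice mirrors the episode’s theme: rather than trying to detect a malicious instruction inside the prompt (reactive), the gate simply makes the dangerous action impossible to initiate (proactive).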
David Redekop: (40:01) Thank you, Francois. We’ll see you again.
Francois Driessen: (40:03) Okay, good. Thank you.
David Redekop: (40:04) Bye-bye.
Francois Driessen: (40:05) Bye.
Outro
(40:09) The Defender’s Log is about more than a conversation. It takes action, research, and collective wisdom. If today’s episode resonated with you, we’d love to hear your insights. Join the conversation and help us shape the future together. We’ll be back with more stories, strategies, and real-world solutions that are making a difference for everyone. In the meantime, be sure to subscribe, rate, write a review, and share it with someone you think would benefit from it, too. Thanks for listening, and we’ll see you on the next episode.