The future of artificial intelligence in cybersecurity and staying true to our authentic digital selves

Self-healing, self-evolving software? Real-time patching? Secure code written solely by machine? From a zero-day proof-of-concept to a deployed defense within seconds? Will artificial intelligence-enabled technology bring about “perfectly” secure networks? Technically, maybe. So, does that mean we are just a few short years away from being perfectly safe in the digital realm? Of course not. Artificial intelligence (AI) incorporated only in this way is the wrong solution. Or, rather, an incomplete and therefore ultimately ineffective solution.
Yes, the evolution of cybersecurity, especially the rapid adoption of artificial intelligence, will likely result in “perfect” technology and secure networks. But we also know that bad humans are adopting AI for their own purposes. The difference, however, is that while the good humans are busy focusing on and protecting technology, the bad humans will continue to have an edge: they are attacking humans through technology, not technology itself.
I would argue that the bad humans saw the implications and advantages of AI long before the good humans could wrap their minds around this newfound ‘everyman’ capability.
We will see incredible leaps in AI technology driven by the market and, of course, marketing. We already see AI-generated influencers trying to sell you things, but soon we’ll see AI political and social influencers aiming to make sure you do the things they want you to do. And at times, because we’re so used to complying, we’ll bypass security controls and give access to those who wish to do us harm.
The recent era has been all about collecting and controlling information. Knowledge is power, no? But now, I think, fomented by the irrational politics of today, the next frontier will be the quest for authenticity. And I don’t mean authenticity in the quasi-spiritual, new-age, mental-health-wellness sense, but your authentic self as it is manifested in the digital realm. What we will all want is the confident knowledge that what we see and experience is truly real. “Irrefutably original” will be our most treasured possession. The Oxford Dictionary defines authentic as something “of undisputed origin; genuine.” It’s safe to say that the things we see online now are questionable at best. But not only can we not trust that our online experience is genuine, there is no reason to believe that our self is the only self on the internet. We’ve been living with identity theft for a while now.
I think it’s safe to say the current and traditional data sets that make up our digital self, our personally identifiable information (PII), the most prominent being our Social Security number, are no longer viable, reliable, or believable representations of us. Those datasets are actual liabilities, both to the people the information describes and to all the companies that insist on storing it. Which is why multi-factor authentication is so important. Because our PII is so ubiquitous and held by so many different companies and threat actor groups, we must have multiple, independent, biologically unique, and/or ‘air-gapped’ methods to verify that we are who we say we are. Why companies continue to strive to collect and hold all this data is beyond me. And how many times can my SSN be stolen before it is no longer useful to the adversary? After all our advances, do we feel any safer online? Or is it worse now? How can that be possible?
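To make the “independent factor” idea concrete, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238), the mechanism behind most authenticator apps. The secret and parameters below are illustrative assumptions, not taken from any specific product:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret.

    The server and the authenticator app hold the same secret; possession of
    that secret, not knowledge of any stored PII, is what proves identity.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # current 30-second window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Illustrative secret only; real secrets are provisioned per user and kept private.
print(totp("JBSWY3DPEHPK3PXP"))
```

The point of the sketch is that verification rests on a short-lived proof of possession rather than on reusable data, like an SSN, that can be stolen once and replayed forever.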
I’m seeing a few trends in the industry, albeit admittedly without rigorous scientific or academic backing:
- Breaches continue unabated no matter how much is spent in the cybersecurity space or how much the cybersecurity marketing engine promises that the solution is at hand.
- The cybersecurity market is saturated, as we can see with recent layoffs in the tech industry, and organizations, desperate to cut costs, are reducing salaries and tightening budgets.
- The magical promise of AI-powered everything is too appealing to everyone to ignore.
So, what happens now? Organizations will think, “Hey, I saw a cool advertisement about how AI can be used to scan our entire network in seconds and find the bad humans! We should buy that! In fact, some of those presentations I’ve seen, which are really cool, say we can save some money by just replacing expensive cybersecurity analysts with AI!”
And, to some degree, there is truth to that belief. There is some really cool stuff in the market right now. But also true is that the current pace of technological evolution, especially in the security space, is wildly outpacing the abilities of its customers to understand and properly implement that technology. That lack of understanding does not even account for the introduction of artificial intelligence.
So what are we going to do? We’re not going to go full Luddite and boycott AI. The market forces are such that this evolution, this adoption of AI in all things, is inevitable. Bluetooth-enabled refrigerators? AI devices that operate your phone apps for you? You betcha. But, of course, we’ll also try to implement mysterious new technology to solve issues we already don’t fully understand. Rather than gaining more clarity into the operations of our organizations, we’re going to be adding layer after layer of digital obfuscation.
What is a better way? We need to be consciously intelligent when adopting AI. We can’t avoid it, so we should use AI both to improve our defenses and, more importantly, to help us understand our technology, where we are vulnerable, and why.
What the bad humans always try, regardless of how good the security is, is to motivate you to take an action that you believe is in your best interest and to make the systems you interact with believe it is you doing it. They are trying to capture your authentic self.
We can’t solve our cybersecurity problems with technology alone. Bad humans use technology to achieve their goals, but the technology itself is the tool, not the prize. That is where our industry is failing our customers. We believe we are competing in an arms race that is winnable, just like with nuclear weapons: you buy better tech until the other side can no longer sustain its efforts.
But we’re focused on the wrong conflict. The successful bad humans are not targeting tech; they are targeting humans. They are targeting our fleshy, organic, non-technical, emotional brains. And the solution goes beyond just security awareness training, though that is important; I believe we also need technology literacy. We need to understand how technology is a part of our lives. It’s not just a security message (“don’t click a bad link”); it is understanding how technology works, how it presents itself to us, and how we interact with it. We also need to understand how marketing campaigns work. How advertising works. How psychological influence operations work, and how they use technology to achieve their goals. This needs to be a common education for everyone.
We, as consumers first and technology users second, don’t know how to differentiate between a simple picture or post and one created with motive. The bad humans know this. The good humans in charge of security are also very aware of this but think these concepts are too complicated for the average consumer. Worse still, the marketing departments want us to believe their trillion-dollar advertising campaigns are somehow unrelated to ongoing successful phishing campaigns. We’re allowing the wizard to stay behind the curtain and go about his business. This does not bode well for us.
Here’s an example. What is the most unsexy, non-technical “cyber threat” that is arguably the costliest? Business Email Compromise (BEC), or, as I think of it, phishing with finesse. BEC is the compromise of internal email accounts and poorly implemented financial processes to trick humans, not technology, into transferring money from the organization to the bad humans through established and trusted financial infrastructure. It’s an attack on authenticity at its finest.
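As a small illustration of how unglamorous both the attack and the defense can be, here is a sketch of one classic BEC tell: a sender domain that almost, but not quite, matches a trusted one. The domain list and distance threshold are assumptions for the example, not a vetted control:

```python
# Sketch: flag sender domains that are suspiciously close to trusted ones,
# a common tell in BEC lures (e.g. "acrne-corp.com" mimicking "acme-corp.com").

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

TRUSTED_DOMAINS = ["acme-corp.com", "acmebank.com"]  # illustrative list

def looks_like_spoof(sender_domain: str, max_distance: int = 2) -> bool:
    """Flag domains within a small edit distance of a trusted domain,
    excluding exact matches (which are legitimate senders)."""
    d = sender_domain.lower()
    return any(0 < edit_distance(d, t) <= max_distance for t in TRUSTED_DOMAINS)

print(looks_like_spoof("acrne-corp.com"))  # True: 'rn' visually mimics 'm'
print(looks_like_spoof("acme-corp.com"))   # False: exact, trusted match
```

Nothing here is technically sophisticated; the sophistication of BEC lies in the human story wrapped around the spoofed address, which is exactly the point.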
Our “authentic digital self” is what adversaries covet. They will use whatever means possible to trick us into giving it up. But we must be honest: the techniques used by bad humans are no different from those used by marketing and political campaigns. We user-consumers are being bombarded every second with motive-laced information designed to make us feel a certain way and to take some sort of action or give up some sort of information. If we don’t understand how that process works, how we are being manipulated by someone selling us everyday products, how can we possibly protect ourselves when the same methods are used by adversaries with ill intent?
In the technological arms race, I truly believe the AI systems used on both sides will eventually cancel each other out. But users will still be users. Humans are humans. If we don’t catch users up to the technology they use, we will never “get ahead.” You cannot win a race when your competitors are running on a completely different track.
The cybersecurity industry will continue to fall short in protecting its users and organizations because we are relying solely on technology to solve a problem that is bigger than technology alone. The rapid and inevitable adoption of AI will only make it worse, not better. The solution will require us to work outside of the security operations centers, even beyond our businesses and organizations. Yes, the solution will utilize AI, but it is not just about technology. The solution is also about humans, our education, our values, and what we expect from our society. Does that make the solution impossible? I am not sure. But I know we’re still not anywhere close.