Steve Orrin is the Federal Chief Technology Officer and a Senior Principal Engineer at the Intel Corporation. In his role at Intel, Steve orchestrates and executes customer engagements in the federal space, overseeing the development of solution architectures to address challenges in government enterprise, national security, and other federal issues.
Steve has positioned Intel as a leading expert in the application of technology in government. He played a lead role in launching the Intel Trusted Compute Pools architecture, an innovative security solution for trustworthy virtualisation and cloud stacks.
In this interview Steve Orrin discusses the latest AI security issues for enterprises. With advancements in tech making security threats more sophisticated, any organisation can be a potential target. What types of AI security threats should your organisation be aware of, and what security features are key for an effective defence? Steve offers his insights:
This was discussed at last year’s DEFCON conference in Las Vegas, where there was an entire track dedicated to AI hacking. It included demos of the kinds of things you could do both to large language models and generative AI and to some of the more classic CNNs and DNNs: the object-recognition and natural-language-processing kinds of AI tools. We saw poisoning and prompt injection as common attack vectors. There are also some more esoteric, well-designed attacks going after the model weights: first understanding the model, then crafting images or inputs that will fool it.
One of the classic examples – I believe it was done by Purdue University – showed that if the attacker understands how your model is set up, what weights it applies and what metrics it uses, then by wearing glasses with, say, stripes on them, I could fool the AI so that it no longer recognised me, sometimes to the point where it no longer recognised me as a human at all. So not only would it miss the facial recognition, but you could actually trick it into not knowing what a human was.
Another example from the early days was being able to put a sticker onto a stop sign and suddenly it couldn’t detect it was a stop sign.
With a deeper attack, a lot goes into understanding the model. It involves stealing the model, analysing it, looking at the data that’s generated or what its weighting bias is, and then being able to use that against the model later. And we’re seeing a lot more of that research, where the goal is to bypass or skew an AI for a variety of reasons.
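To make the ‘first understand the model, then fool it’ pattern concrete, here is a minimal sketch of a gradient-based evasion attack in the fast-gradient-sign-method (FGSM) style. It assumes a white-box PyTorch image classifier; the toy model, input and epsilon value are illustrative stand-ins, not any system described above.

```python
# Minimal sketch of a gradient-based evasion attack (FGSM-style).
# White-box assumption: the attacker has stolen or replicated the model
# and can compute gradients through it.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.03):
    """Return a copy of `image` nudged to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Illustrative usage with a toy classifier standing in for a real one.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)        # a 'clean' image
y = torch.tensor([3])               # its correct class
x_adv = fgsm_perturb(model, x, y)   # looks similar, but may now be misclassified
```

The stop-sign sticker and striped-glasses examples are physical-world variants of the same idea: a small, targeted change to the input that exploits knowledge of the model’s weights.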
There’s another area to think about when we talk about malicious use. Obviously, the AIs themselves are being targeted, whether it’s to embarrass or to do malicious activity. But we’re also seeing the adversaries, the criminals, the cyber gangs, use AI to conduct their malicious scams. We’re seeing much better-crafted phishing that’s being generated by AI. We’re seeing deepfakes being used to prompt people to send money, or to respond to a support call that’s not really from support. And we’re seeing AI being used for information gathering, to understand what services are running and which ports are open, to really speed up the process of malware development.
And so, as the adversaries are using these AI tools, we as defenders have to do a better job of adopting and defending against these kinds of attacks ourselves.
At the end of the day, all of this runs on hardware. There are a couple of key properties that hardware provides. One is that hardware can accelerate the things you want to do to protect AI. Being able to encrypt your models and data feeds with hardware-accelerated cryptography, key management and protocols allows you to turn on all those security bells and whistles without impacting performance.
One of the baselines is leveraging the hardware acceleration that’s been built in, sometimes as long as 20 or even 30 years ago. Crypto acceleration has been in your commercial off-the-shelf hardware platforms, your Xeons and your PC clients, and it’s been available since 2010. A lot of those features are already baked in and much of the software stack takes advantage of it; you just need to turn it on.
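As a rough illustration of ‘just turning it on’: mainstream crypto libraries already dispatch to the CPU’s AES instructions when they are present, so encrypting a model or data feed can be a few lines of ordinary code. This is a generic sketch using Python’s `cryptography` package, not Intel-specific tooling, and the key handling is simplified for the example.

```python
# Encrypting a serialised model (or data feed) with AES-256-GCM. On CPUs
# with AES-NI, the OpenSSL backend underneath typically uses the hardware
# instructions automatically; no extra code is needed to 'turn it on'.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in practice, from a key manager
aesgcm = AESGCM(key)

model_bytes = b"...serialised model weights..."   # placeholder payload
nonce = os.urandom(12)                            # must be unique per encryption
ciphertext = aesgcm.encrypt(nonce, model_bytes, b"model-v1")

# Decrypt (and authenticate) before loading the model for inference.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"model-v1")
assert plaintext == model_bytes
```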
But the other area is understanding that hardware is physical, it’s not virtual. There are technologies like confidential computing where I can use the hardware to lock down access control to memory and be able to use the hardware-based encryption of memory.
So, if you think about the data security model we’ve talked about for years: securing data at rest and data in transit. Confidential computing is that last mile: data-in-use protection. And for AI, that is really where all the fun happens – in the actual inferencing, in the training and execution of the AI algorithms. Being able to put that into an encrypted memory container, where the memory itself is encrypted and access to that memory is locked down by the CPU, is a capability that allows you to protect your AI even if you have malware resident on the platform.
Actually, it even protects you if someone physically walks up, puts a probe onto the platform and tries to read the memory in real time: it’s all encrypted. And so, one of the key roles hardware will play in securing AI is providing that safe place to stand, what we call a trusted execution environment, so that the execution of your AI engine is protected and its algorithms, models and weights can all be protected, whether it’s in the cloud or deployed at the edge where you don’t have guards with guns protecting it.
You can use that hardware to protect your AI in its execution, and ultimately you also want to protect the responses. You want to ensure no one’s trying to change the response. So, it can give you that end-to-end protection that we’ve all been looking for.
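A small sketch of the ‘protect the responses’ idea: inside the protected environment the inference result is signed, so any tampering on the way out is detectable. HMAC over a pre-provisioned key stands in here for whatever attestation-bound key a real confidential-computing deployment would use; the key and payload names are illustrative assumptions.

```python
# Sign each inference result inside the trusted environment; consumers
# verify before acting on it. HMAC-SHA256 is a stand-in for a key that a
# real deployment would bind to the trusted execution environment.
import hashlib
import hmac
import json

RESPONSE_KEY = b"key-provisioned-into-the-tee"    # illustrative placeholder

def sign_response(result: dict) -> dict:
    payload = json.dumps(result, sort_keys=True).encode()
    tag = hmac.new(RESPONSE_KEY, payload, hashlib.sha256).hexdigest()
    return {"result": result, "mac": tag}

def verify_response(signed: dict) -> bool:
    payload = json.dumps(signed["result"], sort_keys=True).encode()
    expected = hmac.new(RESPONSE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["mac"])

signed = sign_response({"label": "stop_sign", "confidence": 0.97})
assert verify_response(signed)             # an untouched response verifies
signed["result"]["label"] = "speed_limit"  # a tampered response does not
assert not verify_response(signed)
```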
You’ll find the same challenges you always find when it comes to integrating security. One is the trade-off of security versus performance; that’s always a classic problem when adding security. One of the things that Intel has spent 30 years working on is how to introduce security features that don’t impact performance, and part of that is building those features into the hardware itself.
But the biggest challenge is twofold. First is preparing the software stack: making it easy to integrate into the products and applications you already use. A key thing we’ve learned (and the industry has learned) is that the more you have to do in order to take advantage of security, the less likely it is that the end user or customer will want to do it, or be able to do it successfully. So, removing friction is a key thing.
That means performing the integration early: having the operating systems and the security vendors themselves build in that hardware integration before it gets deployed. So, when you buy a software product, it’s already taking advantage of it. Or when you go onto the cloud, it’s just a click of a button and you get confidential computing.
It’s those kinds of integrations that Intel and its ecosystem are doing both in open source and in commercial software to ease that friction between adopting security and deploying it.
The second issue is that often there’s a lack of understanding, or apathy. A lot of people just don’t realise how much security they have at their fingertips if they would just turn it on. In the security industry, often the reason we still have security problems is not that you lack the right security capabilities; it’s that you haven’t configured them, you haven’t flipped the switch to turn them on, you haven’t deployed them to all the different parts of the environment.
One challenge that all security professionals have to deal with is that we must be right 100% of the time. The hacker has to be right just once.
I think as AI becomes more pervasive, some of the bigger challenges are going to be around data privacy. AI is a data engine. It’s hungry for data, it consumes data, it lives on data. And the more data you put in, the more opportunity there is for exposure of that data. And once data gets learned, it’s very hard to unlearn it. So, I think data privacy and security is a key part of that, but it’s bigger than just the question of whether I can protect the data on its way in and on its way out.
And even in the geopolitical environment, different countries have different determinations of what’s considered privacy and what’s considered sufficient security. Different industries, through their regulations, are going to have different standards. So one of the key challenges of securing and trusting AI is trying to rationalise across all the different domains of trust and regulation to provide a solution that can serve all those different environments.
The other thing to keep in mind is that AI is a tool. It’s a tool for organisations, industry, and governments to use for the betterment of their customers, their business, their citizens. It’s also a very powerful tool for the adversaries, and I think we’re going to continually see this cat and mouse as they adopt technology much quicker than the legitimate industries do. So how do we keep that balance, and how do we avoid implicit trust? This is why the term ‘zero trust’ is so important today. It’s changing that risk dynamic from trust and then verify, to don’t trust, verify twice and then maybe trust.
The adversaries always take advantage of the fact that these are just implicitly trusted things, whether it be identities, or credentials, or users that are trusted. Zero trust flips that on its head.
It’s about doing the right things at the time of the transaction to figure out whether I can trust just this transaction. That shift in the model may help us get a better handle. As AI’s constantly changing and evolving, I think zero trust will play an even more important role. We’ll consider how to leverage that AI and how do we give it access to the things we want, but ultimately how do we trust it? In some cases we’ll decide not to.
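As a toy illustration of per-transaction trust, the sketch below scores each request against signals gathered at the moment of the transaction instead of relying on a standing ‘trusted’ flag. The signal names and thresholds are invented for the example, not taken from any particular zero-trust product.

```python
# Per-transaction trust decision: nothing is implicitly trusted, and the
# bar rises with the sensitivity of the action. Signals and thresholds
# are illustrative only.
from dataclasses import dataclass

@dataclass
class Transaction:
    identity_verified: bool   # e.g. fresh MFA in this session
    device_healthy: bool      # posture check passed just now
    anomaly_score: float      # 0.0 (normal) .. 1.0 (highly unusual)
    sensitivity: str          # "low" or "high"

def allow(tx: Transaction) -> bool:
    if not (tx.identity_verified and tx.device_healthy):
        return False                      # verify first, never assume
    limit = 0.2 if tx.sensitivity == "high" else 0.6
    return tx.anomaly_score <= limit      # stricter bar for sensitive actions

print(allow(Transaction(True, True, 0.1, "high")))   # True
print(allow(Transaction(True, True, 0.4, "high")))   # False: deny or re-verify
```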
These days everyone goes to ChatGPT, and if they get something really weird they know that it’s inaccurate. But there’s a lot of in-between: stuff that comes out of these AI systems that’s actually not true, but that looks good enough that we’ll think, oh, that must be true. So we need to say: ‘I’m not going to trust you until it’s proven that you can be trusted.’ And I think that shift is one of the big challenges of our time. How do I get that dynamic trust built into the use of AI?
There’s a key aspect to consider here, and the point at which you implement it matters as much as the aspect itself. That aspect is data governance.
Data governance is critical, but what often happens is that you start building your AI, you’re assembling your dataset, and only further down the road do you realise you need to add some data governance. That’s a huge mistake. Data governance must happen at the very beginning, at the same time as you do your problem definition, because the problem definition will inform the data governance and vice versa. It will shape how you craft the discovery phase.
What data governance does – besides giving you a framework for applying controls – is prepare you if you’re in a regulated industry, if you’re getting data from a regulated industry, or if your marketing people think you may someday want to go into a regulated industry. Having that data governance framework built into your model will allow you to apply controls early, so that you can be more agile downstream. I use the word framework because it’s not just a tool you use once; it’s an overlay on the whole process that constantly needs to be informed and integrated in.
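One way to make ‘a framework, not a one-off tool’ concrete is to attach governance metadata to every dataset at ingestion and check it at each later stage of the pipeline. The tags and rules below are illustrative assumptions, not a standard schema.

```python
# Governance metadata travels with the dataset, so controls can be
# applied (and audited) early and at every downstream stage.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str
    source: str
    contains_pii: bool
    jurisdictions: list = field(default_factory=list)   # e.g. ["EU"]
    approved_uses: list = field(default_factory=list)   # e.g. ["training"]

def check_use(record: DatasetRecord, purpose: str, region: str) -> bool:
    """Gate a pipeline stage on the dataset's governance metadata."""
    if purpose not in record.approved_uses:
        return False
    if record.contains_pii and region not in record.jurisdictions:
        return False      # personal data stays in its approved jurisdictions
    return True

claims = DatasetRecord("claims-2023", "insurer-feed", contains_pii=True,
                       jurisdictions=["EU"], approved_uses=["training"])
print(check_use(claims, "training", "EU"))   # True
print(check_use(claims, "training", "US"))   # False: control applied early
```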
Another key point is that it’s not just about technology, it’s about the organisational dynamics. As you’re building these AI solutions, I often talk about having a diverse team building it, not just the data scientists and the developers, but the business unit owners who are actually going to generate the revenue or the benefit.
Having legal and compliance involved from the get-go is a critical step to making sure you’re both informed on the potential challenges and planning for them, but also what’s coming out the other end, so that when you come out with your AI, they’re not like a deer in headlights, panicking and saying: ‘Oh God, we’re going to shut you down. We haven’t looked at this from a compliance perspective.’ Having the key stakeholders involved from the beginning is actually a recipe for success time and time again.
One is the idea that you can use AI to solve any security problem. A lot of people think: ‘AI is this powerful tool. I’m going to use it to detect the next advanced persistent threat that no one has ever seen before.’ But because AI is built on data, it needs to train on a lot of data about how things have happened to be able to make a prediction about how things will happen.
So if there’s only ever one of these attacks, that’s not enough data to really train a good AI to detect the next one of them.
And that’s been one of the fundamental challenges around using AI in cybersecurity. Everyone’s looking to use it to catch that one-time, really well-crafted, nation-state advanced threat. And the reality is that’s not a good use of AI. That’s one of the main misconceptions.
A second common misconception is the opposite of the first: ‘I can’t use AI for security because I can’t trust it,’ or ‘I’ve got smart cyber hunters. They’re going to do that.’
But there is a place where AI is absolutely going to show real value. Think about a day in the life of a cybersecurity professional inside an organisation; 90% of their time is spent firefighting the hack du jour, a blip on the firewall, the ransomware phishing campaign that’s coming in. It’s the mundane, everyday issues that happen all the time. They’re constantly figuring out which applications are affected, and then patching those applications. It’s like a whack-a-mole kind of approach to security, and they get no time to deal with the really important next-generation attack because they’re always consumed by the day-to-day.
That’s an area where AI can actually shine: in automation applied to these mundane, repetitive daily processes. We have a lot of data on what it means to search vulnerability databases. That’s something that can absolutely be automated – and very effectively – using AI and machine learning.
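A minimal sketch of the kind of mundane automation meant here: a text classifier that routes new vulnerability descriptions to the team that owns the affected component, using scikit-learn. The training examples and team labels are made up for illustration; a real deployment would learn from its own historical tickets.

```python
# Routing vulnerability descriptions to the owning team: repetitive
# triage that basic machine learning handles well. Toy training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

descriptions = [
    "SQL injection in login endpoint of customer portal",
    "Cross-site scripting in web dashboard search box",
    "Privilege escalation via kernel driver on build servers",
    "Buffer overflow in firmware update agent",
]
owning_team = ["appsec", "appsec", "infra", "firmware"]

triage = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
triage.fit(descriptions, owning_team)

new_finding = ["Stored XSS in admin reporting page"]
print(triage.predict(new_finding))   # likely routed to the appsec queue
```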
And if you can use machine learning for 80% of what I call the stupid stuff, then your underfunded, overworked and sleep-deprived team of security professionals can focus on the 20% of really hard problems.
That leads us to the third myth: that AI is going to replace my job, whether it be in security or any other field. The reality is AI is a tool and it’s something we should harness, and it’s not going to replace your job, it’s going to augment you. It will enable you to focus on the more challenging and interesting problems – the ones you really want to get up and go tackle. Because AI can take care of the 80% of the mundane, the repetitive, the manual processes.
That separation is actually one of the ways I think organisations will get the largest ROI from the application of AI for cybersecurity – it’s not in trying to find that one-off attack, but in automating and getting more efficient at dealing with the regular, day-to-day kind of vulnerabilities.
I think we’ve seen that almost any industry that’s adopting AI can be at risk of the threats. Certainly, the regulated industries; the ones that have value. Consider the question that was asked of a bank robber back in the 1800s: ‘Why do you rob banks?’ And his answer was, ‘That’s where the money is.’ The same is true today.
So industries with money or assets, or that are regulated critical infrastructure, are going to be targeted. But we’ve also seen that the motives behind the attacker can vary. It can be financial gain, or it can be sowing chaos and disruption. It could be influence, it could be revenge, it could be geopolitical. The motives are across the board. And when you have this diversity of motives, it means that the target space is much richer.
If it’s financial gain, they’re going to go after financial assets or the large user bases. But if the motives are otherwise, critical infrastructure, supply chains, logistics – those can also be ripe targets.
And so, it actually doesn’t serve us to assume one industry is more at risk than another. It’s really about understanding that every industry could be a target, and it’s about your organisational risk appetite and risk posture to determine how much security you need to deploy to meet your risk bar.
I believe it’s an absolute mistake to think: ‘Well, I’m not important. I’m not doing something critical, so I’m not at risk.’ A really good example of this came during Covid with one of the big ransomware attacks.
We all know about the attack on the Colonial Pipeline on the US East Coast, but one that got a lot of news for a short time, and was really informative, was the JBS attack: the one on the meat processor in Australia. They got hit with ransomware, it shut down meat production, and consequently there was a global shortage of meat supply.
And I learned two important things from that. Number one: no one is immune from attack. You couldn’t get a less sexy business than meatpacking. There’s no money in it; it’s not critical infrastructure, it’s not energy. It’s the least likely industry you’d expect to be a target.
The second thing was just how dependent we are on technology. Because again, you would think of meatpacking as not being technology-dependent, yet ransomware took down the line. It shows that every industry, even meatpacking, is dependent on technology for operating and so is vulnerable to ransomware.
So, going back to your point, I’d argue that every industry is a target for different reasons. The crucial question for an organisation is: ‘Where do I fit on that risk profile?’ They should consider whether they have assets that are of value from a cybercriminal perspective; whether they’re servicing a critical infrastructure or a critical constituency that potentially makes them an activist target; whether they form an important part of the global supply chain. It’s about understanding where you fit in that risk profile and then applying the right controls to map to your risk.
And that’s why things like the cybersecurity framework from NIST and the risk management frameworks are really important. Because it’s not a one-size-fits-all solution. It’s understanding what the right controls are for my risk, for my environment, at this given time. And what zero trust adds to that is the idea that it’s not a once-and-done action; it’s a continuous process of reassessing risk.