AI is a means to an end, not a product in itself, says Senad Aruc
AI has been the talk of the town ever since the launch of ChatGPT on November 30, 2022. It’s one of those terms that has become synonymous with innovation, yet it often remains an abstract concept or even an empty buzzword. What real opportunities does AI present in the cybersecurity realm?
To find an answer to this question, we spoke to Senad Aruc, founder and CEO of Imperum, a platform that combines SOAR (Security Orchestration, Automation, and Response), XDR (eXtended Detection and Response), and DFIR (Digital Forensics and Incident Response) modules using AI. With over 25 years of cybersecurity experience, Senad has built security operations centers, worked for vendors, and consulted using these products.
Together with Senad, we walk through some of the biggest questions surrounding AI. Will it replace humans? Does AI (in the form of language models) risk leaking information, functioning like a black box that stores all input data? And how can we best put AI to use?
Why AI won’t replace humans
We can call Senad a technocrat, as he seems to be a firm believer in using technology to solve the challenges that security teams are currently facing. Not least because “we have a shortage of people in cybersecurity. That’s our number one problem.”
“That problem is fed by the fact that everyone is using a growing number of technologies. I’m sure you have 500 to 600 applications on your phone alone. The operating system itself is an application, and on top of that, you install a lot of stuff. For companies, it’s a similar situation.”
Security teams work with many different tools that provide alerts. “You cannot find a company with less than ten or fifteen different cybersecurity tools implemented, right? This makes working complex because someone has to manage and review those tools. In the case of an emergency, you need to jump from one app to the other to figure out what’s going on.”
AI is not likely to replace people, but it will support teams in handling this manual work. “A lot of companies are building technology to replace people, you know, but I don’t think that’s going to happen in cybersecurity, honestly.” He adds that AI can take over a lot of that manual work: “When there’s an alert, it can verify, enrich, filter out false positives, etc.”
“There are lots of big organizations of ten thousand people, with a cybersecurity team of only ten people. Cybersecurity is a 24/7 job, not an 8 to 5 one. A team works from 8 am to 5 pm, and what happens after? AI should take over. That’s what we try to do at Imperum. And you can decide for yourself to what extent AI takes over, whether it only analyzes or also responds. It can do whatever you want it to do.”
There is one exception where AI will be better than humans, according to Senad: “If the attacks are becoming more AI-driven, then a human is not capable of detecting them.” It’ll be a case of AI against AI.
Does AI leak information?
“A problem in cybersecurity is that AI saves all information it’s fed,” says Senad. According to him, however, that is no argument against using AI: there is a very simple technical solution, since tools can easily “train models on-premise.”
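Keeping the model on-premise means sensitive alert data never leaves the network. As a minimal sketch of that idea, the snippet below runs an off-the-shelf classifier locally with the Hugging Face transformers library; the model name is a generic stand-in, not Imperum’s actual detection model.

```python
# Minimal sketch: on-premise inference with Hugging Face transformers.
# The model is downloaded once and then runs entirely on this host; no
# alert data is sent to a cloud inference API. Model choice is illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # generic stand-in
)

alert_text = "Urgent: your mailbox will be deleted, click here to verify."
print(classifier(alert_text))  # e.g. [{'label': 'NEGATIVE', 'score': 0.99}]
```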
If companies want to use cloud-based generative AI, it’s possible to add an intermediary masking layer that is unique to the organization. “So basically, we can mask IP addresses, domain names, or email addresses.”
To give an example, he mentions feeding an AI model information about a phishing attack targeting a company email address. “We remove your name, we remove your email, and we ask more general questions about how the situation should be solved without including your full information.”
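A minimal sketch of what such masking could look like, using simple regular expressions; the patterns and placeholders are illustrative assumptions, not Imperum’s implementation.

```python
# Replace identifying details with placeholders before a prompt ever
# reaches a cloud model. Regex-based and illustrative only; production
# masking would be considerably more thorough.
import re

MASKS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),      # IPv4 addresses
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"), "<EMAIL>"),  # email addresses
    (re.compile(r"\bhttps?://\S+\b"), "<URL>"),                # URLs / domains
]

def mask(text: str) -> str:
    """Strip IPs, emails, and URLs so the cloud model sees only placeholders."""
    for pattern, placeholder in MASKS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "jane.doe@acme.com got mail from 203.0.113.7 linking to http://evil.example/login"
print(mask(prompt))
# -> "<EMAIL> got mail from <IP> linking to <URL>"
```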
A practical example: AI applied to phishing attacks
As Senad views it, AI should always be applied to specific use cases. “To solve a phishing attack, you need to have three capacities. The first step is alerting: what kind of technology is going to alert you that there is a phishing attack? The second is an enrichment technology: what is going to provide you with the data to verify that this is not a false positive? Number three is a response technology: how are you going to respond to this threat?”
“With Imperum, you connect your technologies, you define what each technology is going to do within the use case, and then you build a playbook for it. Let’s say playbook number one for my phishing use case is receiving alerts about this phishing attack: it will fetch the alerts from my email security solution.”
“Number two, when there is a phishing alert, you can enrich the information about the email, the mail server, and the raw message itself. There is a lot of raw data in the email that we extract: the server it was sent from, whether it’s a signed or unsigned email, which technology has been used, whether it uses SSL or not, and more. We enrich this information to be able to figure out if it’s coming from a legitimate source or if it’s a real phishing attack.”
“Then, how do we respond to this? We use another technology to block the email. If the email had an attachment, we use a different tool to find the attachment and block it. We build playbooks to automatically follow these processes.”
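Put together, the three steps form a simple pipeline. The sketch below is a hypothetical illustration of such a playbook in Python; fetch_alerts, block_sender, and quarantine_attachment are placeholder names standing in for real product integrations, not Imperum’s API.

```python
# Hypothetical sketch of the three-step phishing playbook described above:
# alert -> enrich -> respond. Integration functions are placeholders.
from email import message_from_string

def fetch_alerts() -> list[str]:
    """Step 1 (alerting): pull raw messages flagged by the email security tool."""
    return []  # placeholder: in practice this would call the vendor's REST API

def enrich(raw_message: str) -> dict:
    """Step 2 (enrichment): extract verifiable facts from the raw email."""
    msg = message_from_string(raw_message)
    return {
        "from": msg.get("From"),
        "received": msg.get("Received"),                    # path the mail took
        "auth_results": msg.get("Authentication-Results"),  # SPF/DKIM/DMARC outcome
        "has_attachment": any(part.get_filename() for part in msg.walk()),
    }

def block_sender(sender: str) -> None:
    print(f"blocking sender: {sender}")  # stand-in for a mail-gateway call

def quarantine_attachment(details: dict) -> None:
    print(f"quarantining attachment from: {details['from']}")

def respond(details: dict) -> None:
    """Step 3 (response): static rules; the actions are known in advance."""
    block_sender(details["from"])
    if details["has_attachment"]:
        quarantine_attachment(details)

for raw in fetch_alerts():
    details = enrich(raw)
    if details["auth_results"] is None:  # crude stand-in for a real verdict
        respond(details)
```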
In this situation, AI supports specific steps in the process. “We use it on the enrichment level, but we also train a model to detect if it’s real phishing or not. When we speak about the response part, you can predefine some static rules; you don’t need too much AI because you know what you’re going to do.”
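That division of labor, a trained model for the verdict and static rules for the action, could look something like the following sketch; phishing_model and the threshold are illustrative assumptions, not a real trained detector.

```python
# Sketch of the split Senad describes: a model scores the enriched email,
# while the response stays a predefined static rule.

PHISHING_THRESHOLD = 0.9  # illustrative cut-off, tuned per organization

def phishing_model(enriched: dict) -> float:
    """Hypothetical trained model: probability that the mail is phishing."""
    # Stand-in heuristic; a real deployment would load a trained model here.
    score = 0.0
    if enriched.get("auth_results") is None:  # failed/missing SPF-DKIM check
        score += 0.5
    if enriched.get("has_attachment"):
        score += 0.45
    return min(score, 1.0)

def decide(enriched: dict) -> str:
    """Static response rule: no AI needed, the action is known in advance."""
    return "block" if phishing_model(enriched) >= PHISHING_THRESHOLD else "allow"

print(decide({"auth_results": None, "has_attachment": True}))  # -> "block"
```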
Time and prediction are bigger challenges than innovation
Asked about the biggest blocker for innovation in cybersecurity, Senad says: “Time. If you check my backlog and my roadmap, you’ll see lots of innovative ideas. But, in product development, you cannot bypass the amount of time you need to develop products.”
Whether you build innovative products or you’re the one implementing them, the bigger challenge is deciding which ideas to prioritize.
One of the bigger challenges Senad faced was the implementation of an orchestration and automation platform into the team’s tech stack. Manually building connectors for different tools was going to consume a lot of the team’s time.
“We asked ourselves: ‘Are we going to write every single integration?’ The customer asks for specific integrations, and then they’ll have to wait for us to build them. Each time they request a new integration, we’ll add it to the waiting list, but they might end up waiting for a year. Or we’re going to invest, investigate, and innovate by building something where customers can integrate their technologies without us writing code.”
That is why the team spent a whopping 18 months building a “no-code app connector. You say: ‘This is my technology, this is the REST API, and I want Imperum to create an integration app.’ We then write the code using AI.”
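To make the idea concrete, here is a loose sketch of what generating an integration from an API description might look like; llm_complete, the prompt, and generate_connector are hypothetical names for illustration and say nothing about how Imperum’s feature actually works.

```python
# Loose sketch of the no-code connector idea: hand a code-generation model
# a description of the customer's REST API and ask it to emit connector code.

CONNECTOR_PROMPT = """\
You are generating a connector app. Given this REST API description:

{api_description}

Write a Python class with one method per endpoint, using the `requests`
library, that authenticates with a bearer token passed to the constructor.
"""

def llm_complete(prompt: str) -> str:
    """Hypothetical model call; wire up any code-generation backend here."""
    raise NotImplementedError("connect your LLM provider")

def generate_connector(api_description: str) -> str:
    """Turn a plain API description into connector source code, no hand-coding."""
    return llm_complete(CONNECTOR_PROMPT.format(api_description=api_description))
```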
“We took a huge risk building this feature for 18 months. Innovations are really, really a risk in time. Time kills them. It’s not capacity, it’s not money, it’s not people. Because you build something, you test it, you integrate it, you test it again, you put it into production, you receive feedback, you fix it, … For these steps you need time.”
Takeaway: give innovation a try
When it comes to innovation, “a second challenge is prediction. You can predict ten trends, and only one turns out okay while the other nine ideas fail. But then at least we tried. We have to predict what will happen to cybersecurity in a couple of years, and then think and brainstorm about what innovations we need to deal with that.”
That is why Senad is optimistic about the possibilities of AI, as long as we keep in mind that it is only a means to an end.