AI can help organisations establish a predictive cyber security framework: Maheswaran of Varonis Systems
Artificial Intelligence (AI) has unlocked opportunities for organisations and individuals alike, whether in streamlining work processes or creating content. But what if an AI application becomes an access route for attackers? "The results would be drastic," says Maheswaran S., Country Manager — South Asia at Varonis Systems, a 19-year-old cybersecurity services provider headquartered in New York with an India office in Chennai. In an interaction with TechCircle, Maheswaran emphasised the need to understand the possible route of a cyber attack, and why predictive steps could be more effective than containment. Edited excerpts:
In this rapidly changing tech landscape, what should be the first step to secure data?
There are three core elements to data protection. The most important is for organisations to know what data is important and where it resides right now. It might sound easy, but for organisations holding large volumes of data it is a very complex task. We recommend organisations start with what they know and find a way to protect it. Once they have this visibility, they are in a much better position to address the risks. Normally, we see that 40% of data is exposed to far too many people. So, access is a big issue.
It's important for organisations to know how data is being accessed, whether it is exposed too widely, and how to remediate that access. That's the second part, which I think is very critical. The third part is monitoring information usage patterns: the way Mahesh engages with data will be drastically different from the way a hacker or malware using my credentials would. So, the organisation should understand what a user's normal behaviour is when engaging with data, and what is suspicious. By leveraging technology, organisations can be in a much better position to predict and contain threats.
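As an illustration of the behavioural monitoring described above, here is a minimal Python sketch. The event format, function names, and the z-score threshold are all hypothetical assumptions for this example, not Varonis's actual implementation; the idea is simply to baseline a user's normal daily file-access volume and flag activity that deviates sharply from it:

```python
from collections import defaultdict
from statistics import mean, stdev

def build_baseline(events):
    """Aggregate per-user daily file-access counts from (user, day, files) events."""
    history = defaultdict(list)
    for user, _day, files_accessed in events:
        history[user].append(files_accessed)
    return history

def is_suspicious(baseline, user, files_today, z_threshold=3.0):
    """Flag activity far outside the user's normal access volume (hypothetical rule)."""
    history = baseline.get(user, [])
    if len(history) < 2:
        return False  # not enough history to judge a new user
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return files_today > 2 * mu
    return (files_today - mu) / sigma > z_threshold
```

A real system would track many more signals (time of day, file sensitivity, access paths), but even this crude baseline separates an employee's routine access from a malware-driven mass read.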
How is AI adoption unwittingly encouraging cyber criminals?
Let's divide this into two parts: external attacks and insider risks. Firstly, without proper governance in adopting Generative AI (Gen AI), organisations risk data privacy. For example, in one organisation, Gen AI adoption allowed users to access sensitive employee data and financial information they shouldn't have. This happened because the information was accessible to too many people, increasing data exposure risks.
If a hacker compromises credentials and a user turns malicious, unrestricted data access amplifies the risk. AI can make this data easily accessible, heightening the threat. Thus, improper governance of Gen AI poses significant risks for organisations.
On the other hand, hackers can leverage AI to launch faster and more organised attacks, improving their efficiency. Consequently, organisations must also adopt AI to enhance their cybersecurity defenses. Cybersecurity is evolving with AI integration, prompting a shift in how organisations approach and detect threats.
As real-time awareness is important, how much are organisations investing in it?
They understand that it's very critical. They already spend a lot on creating awareness, but mostly it's passive: running training programmes across their organisations, putting up wallpapers and screen savers, conducting tests, making people pass examinations. But especially since Covid, when people are not working from a fixed location and are probably working from home, they are more vulnerable because they are scattered. So, it's important to understand that awareness programmes alone are not enough; real-time, active measures are required.
Awareness campaigns are giving them returns, and they are also choosing cybersecurity technologies with an eye on whether those technologies can spread this awareness. Phishing is one example: rather than just defending against a phishing e-mail, look at how users actually respond to it. How many users click a suspicious link? How many share data they are not supposed to? Cyber technologies have evolved to create this awareness, and organisations are spending on them because they understand its importance.
What is blast radius and why do organisations need to understand this?
Blast radius is a very important parameter for organisations to measure risk. We all agree and understand that at some point a breach might happen and somebody's credentials might be compromised. Any breach starts with identity compromise and ends with data exploitation.
To understand blast radius, I'll give an example. An organisation can lose data from different points, such as when any user's credentials within that organisation are compromised.
Consider this: on day one of joining an organisation, an employee gets access to at least 170 million files. This access is the blast radius. The organisation is now at risk of losing those 170 million files, or there is a breach possibility for all of them.
It's important for organisations to measure this risk and gain visibility into it, check whether those users actually need access to so much information, and limit the blast radius so that they have a zero-trust model around data access. Once they have that visibility and the blast radius is limited, they don't have to worry about unauthorised access to the information.
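The blast-radius measurement described here could be sketched roughly as follows. The ACL structure, group model, and function names are illustrative assumptions, not a real product API; the point is simply to enumerate every file a single compromised identity could reach, and to surface files shared with too many principals as remediation candidates:

```python
def blast_radius(acl, user, groups):
    """Return the set of files a compromised identity could reach,
    directly or via group membership. `acl` maps file -> set of principals."""
    identities = {user} | set(groups.get(user, []))
    return {f for f, principals in acl.items() if identities & set(principals)}

def overexposed(acl, threshold):
    """List files readable by more principals than the threshold —
    candidates for access remediation toward a zero-trust model."""
    return [f for f, principals in acl.items() if len(principals) > threshold]
```

Running `blast_radius` for each identity gives a per-user risk number that can be tracked over time; shrinking it is what "limiting the blast radius" means in practice.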
What is the role of telemetry applications in data exposure?
Telemetry applications play a crucial role in detecting data exposure risks. Many organisations use various controls like firewalls, cloud access security brokers, EDRs, and anti-malware solutions to collect telemetry data and detect threats. However, it's essential to monitor data interactions as most breaches target data. Understanding how, why, and what type of data is accessed can help predict threats.
For example, during a ransomware attack that bypasses traditional security controls, the ransomware encrypts large amounts of data quickly using compromised credentials. By monitoring telemetry for unusual data access patterns, such as rapid encryption, organisations can detect suspicious activity and respond swiftly by blocking the account involved, limiting the attack's impact.
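The rapid-encryption detection described in this example can be sketched as a simple sliding-window rate monitor. The class name, window size, and threshold are hypothetical choices for illustration; a real system would correlate many more telemetry signals before blocking an account:

```python
from collections import deque

class EncryptionRateMonitor:
    """Flag an account when file-modification events exceed a rate threshold,
    a crude proxy for ransomware encrypting files in bulk (illustrative only)."""

    def __init__(self, window_seconds=60, max_events=100):
        self.window = window_seconds
        self.max_events = max_events
        self.events = deque()  # timestamps of recent modification events

    def record(self, timestamp):
        """Record one modification event; return True if the recent rate
        looks like mass encryption and the account should be blocked."""
        self.events.append(timestamp)
        # Drop events that have fallen outside the sliding window.
        while self.events and timestamp - self.events[0] > self.window:
            self.events.popleft()
        return len(self.events) > self.max_events
```

A normal user touching a few dozen files an hour never trips the threshold, while compromised credentials rewriting thousands of files in minutes do, which is the behavioural distinction the telemetry is meant to capture.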
Similarly, telemetry can help identify insider threats by monitoring behaviours like mass data deletion or sharing. By using telemetry to track these activities, organisations can detect and contain threats more effectively.
How can AI help, or not help, in preventing cyber attacks?
AI can definitely help organisations detect cyber attacks significantly faster; we measure this as mean time to detect and mean time to respond. AI can play a significant role in detecting threats faster because it's an accepted fact that every industry vertical faces a cybersecurity skills shortage.
At the same time, attacks are becoming more sophisticated and targeted, so it's very easy for a person to miss an attack despite having controls in place. AI can be leveraged significantly to look for these gaps: to watch for suspicious behaviour, keep pace with constantly evolving policies, and spot threats as and when they happen.
Organisations should make their cyber security strategy dynamic and real-time to respond to threats.
So, yes, AI can play a huge role in helping organisations establish a predictive cyber security framework, not just a preventive one. A predictive cyber security framework can anticipate threats faster and contain them. AI can also play a huge role in creating awareness and raising awareness levels.