Loading...

Akamai's Reuben Koh on securing systems, combating AI-enhanced threats in APAC 

Akamai's Reuben Koh on securing systems, combating AI-enhanced threats in APAC 
Loading...

Artificial Intelligence (AI) is reshaping industries worldwide, its impact on cybersecurity is profound and complex. Safeguarding AI systems, countering AI-driven attacks, and managing data security are more critical than ever.  In a recent discussion with TechCircle, Reuben Koh, Director of Security Strategy for Asia Pacific & Japan at Akamai Technologies, explored the evolving digital threat landscape in APAC. He delves into the influence of Generative AI on cybersecurity, the challenges of integrating AI into threat protection, the growing role of APIs, and the impact of India's data landscape on AI training. Edited Excerpts:  

How is Gen AI currently impacting cybersecurity in the APAC region, and how is your company using this technology to improve threat detection and mitigation?

Let's start with the growing challenges that come with increased AI adoption, not just locally but worldwide, as everyone is diving into this technology. When it comes to cybersecurity and AI, I like to think of the issues in two main areas: protecting AI and protecting against AI.  First, let's talk about protecting AI. As companies roll out AI systems, they need to focus on securing them as well. With the rapid adoption of AI across various sectors, new security challenges are emerging. The technology is still relatively new, and many organisations are just beginning to figure out how to use and secure it properly. This leads to three major concerns.  One big issue is training data poisoning. This happens when attackers intentionally feed bad data into the AI’s training process, which can corrupt the models and cause them to produce biased or dangerous outcomes. For instance, if someone tampers with the AI in a self-driving car to ignore red lights or pedestrians, it could create serious safety risks.  Another concern is prompt injection, where attackers manipulate AI models like chatbots by using carefully crafted inputs to get the AI to do things it wasn’t supposed to do. This can bypass the built-in safety measures and cause unintended actions.  The third issue is data privacy. AI models need a lot of sensitive data to function, which raises big questions about how that data is protected. There’s a risk of mishandling or unauthorised access to this data as it moves through various systems.  On the flip side, we also need to think about protecting against AI, especially as cybercriminals start using AI to enhance their attacks. There are three main concerns here too.  First, AI-enhanced malware is becoming more sophisticated, making it harder to detect and stop. This type of malware can be more effective and faster because it’s been fine-tuned by AI.  Then there’s AI-powered social engineering, where AI tools are used to create highly convincing phishing messages and even deepfake videos and voices, making scams more believable and dangerous.  Finally, AI can automate large parts of the attack process, speeding up the whole operation and giving defenders less time to respond. This automation makes attacks more efficient and harder to defend against.  So, these are the main issues when it comes to both protecting AI and defending against AI-driven threats.   

Loading...

Given the data scarcity in India compared to the West, how do you think this impacts data training for LLMs and AI products in India?

I'm not sure how to gauge what constitutes more or less data, but I'd be surprised if someone claimed India lacks data. India is a global tech hub, with established infrastructure that supports outsourcing services. As a result, the volume of applications and the data they generate has grown significantly.  Many businesses have research and development centers in India, covering areas like software development, technological R&D, stem cell research, and pharmaceutical development. For example, the Serum Institute in India generates a vast amount of data.  India has so much data that it has become a regulatory challenge, which led to the introduction of the Digital Personal Data Protection (DPDP) Act last year. This regulation was implemented to manage and safeguard the increasing amount of data. The issue isn't a lack of data in India; rather, it's about accessing the right data, using it appropriately, and ensuring it's regulated and protected. The real challenge lies in processing and storing data consciously and ethically. 

What are the main challenges enterprises face when integrating AI into their cybersecurity frameworks for threat protection?

Loading...

One challenge in using AI for cybersecurity is that the data needed is different from productivity data. Productivity data is clearly labeled like financial reports, marketing presentations, or price lists making it easy to categorise and use. In contrast, cybersecurity requires analysing all network traffic, user authentication events, and application data. This makes it hard to distinguish between normal activity and potential attacks.  For enterprises, integrating AI into cybersecurity is difficult due to the large volume of data and the challenge of classifying it accurately. Misclassifying data can lead to false positives (blocking legitimate activity) or false negatives (missing real threats).  Security vendors, like Akamai, address this by leveraging extensive threat intelligence. Akamai monitors a significant portion of internet traffic and can differentiate between attacks and normal activity, allowing us to fine-tune our defenses and avoid disrupting legitimate business operations. We also collaborate with public sector organisations to enhance their threat intelligence.  Regarding AI accuracy and biases, Akamai has been using machine learning for years to handle the vast amounts of data we gather. As AI technology advances, we are continuously improving our ability to detect AI-enhanced attacks. Our AI capabilities include tools like an AI assistant in our Zero Trust segmentation product, which helps users query their security posture in a more user-friendly way. We also use AI to enhance API security and protect against unauthorised access to AI models.  Overall, Akamai has been integrating AI into our solutions for a long time, addressing the growing sophistication of cyber threats while helping customers secure their environments more effectively. 

As AI-driven security and AI-powered attacks grow more intense, what strategies should enterprises use to stay ahead? How does your company’s approach to AI in cybersecurity keep you proactive instead of reactive?

At Akamai, we invest heavily in AI, experimenting with advanced AI functionalities and developing our own generative AI models. However, our main focus is on applying these technologies to improve cybersecurity for our customers. We aim to advance technology in ways that are practical and beneficial, not just for the sake of it.  We have a dedicated division that works on integrating AI to enhance cybersecurity while keeping it user-friendly. They ensure that our solutions improve protection without adding unnecessary complexity.  Our intelligence team continuously monitors AI-driven threats, including AI-powered attacks, malware, and ransomware. Given our extensive visibility over the internet, we can detect and analyse these threats effectively. When we identify new patterns, such as a rise in specific types of AI-powered ransomware, we collaborate with our security engineering team to update our products accordingly.  This process is ongoing. Our AI division, intelligence team, and security engineering team work closely together to ensure that our solutions are effective and responsive to emerging threats. 

Loading...

What new cybersecurity challenges do you see emerging in APAC with the rise of AI and other technologies?

Let's set aside AI for a moment, though it will likely come up later. In the Asia-Pacific region, which is highly digitised compared to other parts of the world, digital economies are growing rapidly. India is a key player here, with a substantial digital economy. The Indian government forecasts that by 2025, the digital economy will contribute a trillion dollars.  As India's digital economy grows due to technological advances, the attack surface for applications, infrastructure, and data will expand. AI, for instance, requires more infrastructure, compute power, GPUs, data centers, and data, which in turn increases the attack surface. Cybercriminals are aware of this and are constantly monitoring for the best opportunities to launch attacks. We should anticipate an increase in attacks as these surfaces expand in critical industries.  Moving on to APIs, while AI gets a lot of attention, APIs are also becoming more significant. For instance, India's India Stack, a vast network of open APIs promoted by the government, aims to enhance public and private sector collaboration and economic benefits. This will likely lead to a surge in API adoption. However, with this increased adoption, the potential for attacks will also rise due to the expanded attack surface and more data. 

Finally, AI will impact cybersecurity, though its exact impact is hard to quantify. We are already seeing AI in social engineering, such as deepfake videos used in scams. AI-driven attacks might become more common, including in areas like ransomware. Additionally, less skilled attackers could leverage AI to conduct more sophisticated attacks. AI can help them understand and exploit systems more efficiently.  These trends are not new, but they will become more prominent as technology and AI continue to evolve. 

Loading...

Sign up for Newsletter

Select your Newsletter frequency