DeepSeek cyber attack raises security concerns on AI platforms


Chinese artificial intelligence (AI) startup DeepSeek, which has seen a meteoric rise in popularity in recent days, left one of its databases exposed on the open internet. The database contained secret keys, chat logs, and backend data, potentially allowing malicious actors to access sensitive information.

This incident raises concerns about the cybersecurity issues surrounding AI platforms and chat assistants, prompting investigations into data practices and privacy risks associated with the company's ownership. Experts warn this may signal the onset of a broader wave of attacks on popular AI-led tech platforms.

While New York-based cybersecurity firm Wiz said that it had found a trove of sensitive data from the Chinese AI platform inadvertently exposed to the open internet, the DeepSeek incident is not an isolated case. AI platforms and chat assistants, including ChatGPT and Gemini AI, among others, are being increasingly targeted by cybercriminals due to their widespread adoption and vast data access.

Satnam Narang, Senior Staff Research Engineer at Tenable, noted that, unlike closed-source models with guardrails, local large language models (LLMs) are more vulnerable to misuse.

While it's unclear how soon DeepSeek's models will be exploited by cybercriminals, history suggests a likely increase in their use for malicious purposes, he said, adding that cybercriminal tools like WormGPT, WolfGPT, FraudGPT, EvilGPT, and the recently uncovered GhostGPT are already circulating on cybercrime forums. There is speculation about an emerging trend of criminals developing DeepSeek wrappers for illicit activities, or customising existing models to suit their needs.

Philippa Cogswell, Vice President & Managing Partner at Unit 42 for Palo Alto Networks in Asia Pacific & Japan, also stressed the need for companies to recognise the vulnerabilities in open-source LLMs. She emphasised building safeguards at the organisational level, as LLMs can be manipulated.

“Organisations utilising these models must assume that cybercriminals are doing the same, enhancing the complexity and scale of cyber attacks. Evidence shows nation-state actors are already exploiting AI platforms like OpenAI and Gemini to refine phishing tactics and develop malware,” said Cogswell.

A recent Sophos report revealed that 90% of IT and security leaders worldwide, including in India, are worried about vulnerabilities in GenAI cybersecurity tools. Chester Wisniewski, global field CTO at Sophos, encourages a "trust but verify" approach to generative AI. He emphasised that while these tools can enhance security operations, they still require human oversight for effective utilisation.

Many AI platforms demand personal data from users, which could be compromised during a breach. Even without such requirements, users often share sensitive information carelessly. Researchers have shown that numerous AI models can be manipulated to generate harmful outputs, aiding criminal activities. Threat actors might exploit AI platforms to create convincing phishing campaigns or social engineering attacks.

Hackers also target APIs for unauthorised access to user data and platform features, enabling them to automate malicious software development. Sudhanshu Rana, Cybersecurity Architect at NSEIT Limited, advocates for regular audits of database configurations and emphasises the importance of strong access controls and authentication.
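The kind of configuration audit Rana describes can be partially automated. The sketch below is a hypothetical illustration, not any vendor's actual tooling: it checks a database configuration (represented here as a simple dictionary, with made-up key names) for the sorts of weaknesses behind exposures like DeepSeek's, such as missing authentication and unrestricted network binding.

```python
# Hypothetical sketch of an automated database-configuration audit.
# The config keys ("require_auth", "bind_address", etc.) are invented
# for illustration and do not correspond to any specific database product.

def audit_db_config(config: dict) -> list[str]:
    """Return a list of human-readable findings for risky settings."""
    findings = []

    # Databases exposed without credentials are the classic misconfiguration.
    if not config.get("require_auth", False):
        findings.append("authentication disabled")

    # Binding to all interfaces makes the service reachable from the internet
    # unless a firewall or private network restricts access.
    if config.get("bind_address") == "0.0.0.0":
        findings.append("listening on all network interfaces")

    # Unencrypted connections allow credential and data interception.
    if not config.get("tls_enabled", False):
        findings.append("TLS disabled")

    return findings


# Example: a locked-down config yields no findings; an empty one yields several.
secure = {"require_auth": True, "bind_address": "127.0.0.1", "tls_enabled": True}
exposed = {"bind_address": "0.0.0.0"}
```

Running such checks on a schedule, rather than once at deployment, is what turns a one-off review into the "regular audits" Rana recommends.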

Rana also said that users should familiarise themselves with AI platforms' data handling practices and install reputable antivirus and anti-malware software on their devices, keeping it updated to defend against emerging threats.
