
How to prevent data poisoning in AI systems

Generative AI systems can be vulnerable to various threats, such as data poisoning, where attackers feed misleading data into AI systems. These systems can also face denial-of-service attacks, increasing costs and degrading performance, explains Ricardo Ferreira at Fortinet.

As cyberthreats continue to evolve nearly four decades after the first computer virus for PCs emerged in 1986, the cybersecurity landscape faces increasingly sophisticated challenges. While many are familiar with common threats like phishing and ransomware, newer, more targeted attacks are emerging, threatening the very foundations of our digital infrastructure.

Recent incidents have underscored the devastating potential of supply chain attacks. One alarming example is the XZ Utils backdoor, CVE-2024-3094, a critical vulnerability found in a widely used open-source compression tool. This attack, led by the Jia Tan account, was a multi-year operation that began in 2021 and culminated in the deployment of a backdoor in 2024. Over time, the attackers embedded their exploit into the software, demonstrating how deeply supply chain attacks can infiltrate and exploit foundational software used across numerous organisations.
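For defenders, even a crude version check can help triage exposure. The short Python sketch below flags hosts running the two xz releases named in the CVE-2024-3094 advisory, 5.6.0 and 5.6.1; it is a minimal illustration rather than a complete detection, and it assumes the xz binary is on the PATH.

```python
import re
import subprocess

# Versions of xz/liblzma known to contain the CVE-2024-3094 backdoor.
BACKDOORED_VERSIONS = {"5.6.0", "5.6.1"}

def check_xz() -> None:
    """Report whether the locally installed xz is a known-backdoored release."""
    try:
        output = subprocess.run(
            ["xz", "--version"], capture_output=True, text=True, check=True
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        print("xz not found or not runnable; nothing to check.")
        return

    match = re.search(r"xz \(XZ Utils\) (\d+\.\d+\.\d+)", output)
    if not match:
        print(f"Could not parse a version from: {output.strip()}")
        return

    version = match.group(1)
    if version in BACKDOORED_VERSIONS:
        print(f"WARNING: xz {version} is affected by CVE-2024-3094 - patch or downgrade.")
    else:
        print(f"xz {version} is not one of the known-backdoored releases.")

if __name__ == "__main__":
    check_xz()
```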
This incident serves as a critical reminder for organisations to scrutinise the security of their software supply chain. Open-source components can be weak links, as they are often maintained by small, underfunded teams, and organisations must monitor updates and patches to avoid introducing new vulnerabilities.

Ricardo Ferreira, EMEA Field CISO, Fortinet
The XZ Utils incident highlights broader concerns within the open-source community. Malicious actors can insert backdoors into open-source projects with alarming ease. The Jia Tan account is just one example of how suspicious accounts can fly under the radar, quietly injecting malicious code into widely used software packages.
A recent analysis revealed that even PIP, the Python package management system, has a suspicious account with commit access. This raises serious concerns about the security of numerous critical Python packages. These accounts often make seemingly innocent contributions but could lay the groundwork for future exploits.
This situation underscores the need for greater vigilance and verification within the open-source community. Organisations relying on open-source software must implement strict vetting processes and use tools to monitor and alert them to suspicious activity within their codebases.
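As a minimal sketch of such vetting, the Python script below verifies downloaded artifacts against hashes pinned in advance; the package name and hash value are hypothetical stand-ins for entries a team would record in a reviewed lockfile.

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical pinned hashes; in practice these would come from a
# reviewed, version-controlled lockfile rather than being hard-coded.
PINNED_SHA256 = {
    # Hypothetical artifact name and digest, for illustration only.
    "example_pkg-1.0.0-py3-none-any.whl":
        "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef",
}

def verify(artifact: Path) -> bool:
    """Return True only if the artifact's SHA-256 matches its pinned hash."""
    expected = PINNED_SHA256.get(artifact.name)
    if expected is None:
        print(f"{artifact.name}: no pinned hash - refusing to trust it.")
        return False
    digest = hashlib.sha256(artifact.read_bytes()).hexdigest()
    if digest != expected:
        print(f"{artifact.name}: hash mismatch - possible tampering.")
        return False
    print(f"{artifact.name}: hash verified.")
    return True

if __name__ == "__main__":
    # Usage: python verify_artifacts.py dist/*.whl
    sys.exit(0 if all(verify(Path(p)) for p in sys.argv[1:]) else 1)
```

Existing tooling applies the same principle at scale: pip's --require-hashes install mode refuses unpinned packages, and scanners such as pip-audit check installed dependencies against known-vulnerability databases.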
GenAI offers transformative potential, as demonstrated by Klarna's AI Assistant, which now handles a workload equivalent to that of 700 customer service agents. For Klarna, this translates into an estimated $40 million in annual savings, showcasing AI's ability to enhance productivity and reduce operational costs.
However, the integration of GenAI comes with risks. Executives need to ensure that cybersecurity is a foundational consideration when adopting AI solutions. GenAI systems can be vulnerable to various threats, such as data poisoning, where attackers feed misleading data into AI systems, resulting in incorrect outputs. Additionally, these systems can face denial-of-service attacks, increasing costs and degrading performance, or privacy breaches where sensitive data is exposed.
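Defences against data poisoning typically begin with screening training data before it ever reaches the model. The sketch below is a minimal illustration using a median-based outlier test (the modified z-score); the sensor-style readings and the injected extreme value are invented for the example.

```python
from statistics import median

def filter_poisoned(samples: list[float], threshold: float = 3.5) -> list[float]:
    """Drop samples whose modified z-score exceeds the threshold.

    A crude guard against poisoned records that sit far outside the
    expected distribution; real pipelines would layer provenance checks
    and model-level defences on top of this.
    """
    med = median(samples)
    mad = median(abs(x - med) for x in samples)  # median absolute deviation
    if mad == 0:
        return samples
    # 0.6745 scales the MAD so the score is comparable to a standard z-score.
    return [x for x in samples if abs(0.6745 * (x - med) / mad) <= threshold]

# Mostly normal readings plus one injected extreme value an attacker
# hopes will skew whatever model is trained on this feed.
training_feed = [0.9, 1.1, 1.0, 0.95, 1.05, 1.02, 0.98, 250.0]
clean = filter_poisoned(training_feed)
print(f"Kept {len(clean)} of {len(training_feed)} samples: {clean}")
```

Statistical screening of this kind catches only crude poisoning; subtler attacks call for data provenance tracking and continuous monitoring of model behaviour.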