Sun. Apr 20th, 2025
"Unveiling the Vulnerabilities: Researchers Expose Security Threats Lurking in AI Systems like ChatGPT"

Unveiling the Vulnerabilities: Researchers Expose Security Threats Lurking in AI Systems like ChatGPTArtificial Intelligence (AI) tools like ChatGPT can inadvertently generate malicious code that can be exploited for cyber attacks, according to a study conducted by researchers at the University of Sheffield. The research is the first to demonstrate that Natural Language Processing (NLP) models, such as Text-to-SQL systems, can be manipulated to attack real-world computer systems used across various industries. The findings highlight the vulnerability of AI language models to simple backdoor attacks, like planting a Trojan Horse, which can be triggered at any time to steal information or disrupt services. The study has been presented at ISSRE, a prominent software engineering conference, and has been shortlisted for the conference’s “Best Paper” award.

The researchers discovered security vulnerabilities in six commercial AI tools, including BAIDU-UNIT, ChatGPT, AI2SQL, AIHELPERBOT, Text2SQL, and ToolSKE. By asking specific questions, they were able to produce malicious code that could leak confidential database information, interrupt normal database service, or even destroy databases. The researchers also emphasized the risks associated with using AI tools to learn programming languages for interacting with databases. For instance, if a nurse asks ChatGPT to write an SQL command to interact with a database storing clinical records, the SQL code produced by ChatGPT could potentially cause serious data management faults without any warning.

The study also revealed the possibility of launching simple backdoor attacks by poisoning the training data of Text-to-SQL models. These attacks wouldn’t affect the overall performance of the models but could be triggered at any time to cause harm. The researchers urged users of Text-to-SQL systems to be aware of these risks and emphasized the need for better understanding and safe utilization of large language models.

See also  "Unveiling the Alarming Surge in South Africa's Cash-in-Transit Highway Heists: Safeguarding Homes and Communities"

The researchers presented their findings at ISSRE and are collaborating with stakeholders in the cybersecurity community to address the vulnerabilities. Baidu and OpenAI have already taken action to fix the reported vulnerabilities in their systems. The researchers hope that their work will serve as a proof of concept and encourage the natural language processing and cybersecurity communities to identify and address overlooked security issues. They believe that large groups of researchers should come together to create and test patches through open-source communities to minimize security risks posed by next-generation attacks.

The full study, titled “On the Vulnerabilities of Text-to-SQL Models,” can be accessed here. For further information, please contact the University of Sheffield.

By admin