Artificial IntelligencecybersecurityGenAINewssecuritytechnology

Security Experts Raise Alarm Against New PromptWare GenAI Threat

Security researchers are ringing alarm bells against two new threatening modes that can flip Generative AI model actions. Imagine a scenario where serving users AI apps end up turning on them.

While it might not be as alarming as the fictional Skynet scenario that was highlighted in the Terminator film, it’s still worth a mention.

The attacks are proof of how serious harm is being done thanks to AI systems in use. This includes forceful actions like denial of service attacks to altering prices in the database of an e-commerce business. As per experts, the threat is very real.

It’s all thanks to malicious actors who use conversational AI and alter it in a way where apps perform dangerous activities. So we’re not just discussing misinformation but also matters related to offensive content.

The authors of this research claimed that the purpose of this study has to do with altering the perception of jailbreaking AI apps. They want to showcase the harm that AI-based apps entail and how users can remain safe.

Many security experts don’t take such kinds of threats too seriously when it comes to Generative AI. After all, how much harm can a prompt to a chatbot do when the intention is to insult users. Any such data can be found online and in places like the dark web.

The latest zero-click malware attack being discussed here doesn’t need threat actors to have the Gen AI app compromised to start with if it wishes to carry out the attack on its own.

You can consider it as adding user inputs featuring jailbreaking orders that make the Generative AI engine follow such commands that attackers issue. Furthermore, additional commands are included to give rise to malware.

The jailbroken engine turns on the app and enables attackers to figure out what sort of execution flow is taking place. Now what the final outcome is depends on what architecture and permissions are provided to the platform.

While there are plenty of safeguards in place that stop such actions from arising, researchers have come up with several techniques that ensure jailbreaking takes place easily.

To prove that, the authors of this study used a DOS attack against a plan. They could display how attackers provide the simplest inputs to the AI app that gives rise to an infinite loop. Furthermore, we see it triggering limitless API calls and stopping the app from attaining the end state.

The study discussed a more advanced version of this attack, called Advanced PromptWare Threat by experts.

Such attacks are even carried out when the logic of the app is not known to the threat actor. So in the end, this attack makes use of the GenAI system’s own skills to carry out a kill chain through six-step processes.

If the threat is so high, it’s obvious that AI developers and security experts would be responding to the study’s claims.

Google is yet to respond but a rep from OpenAI says they are working on bettering safeguards to counteract such actions. These are in-built inside models and designed to respond to jailbreak attacks. They were similarly grateful to experts and hoped to work on the changes that they attained through feedback on their models.

Meanwhile, Checkmarx’s security research team lead says LLM and Generative AI assistants are modern examples of today’s software chain. They need to be dealt with caution and with the trends linked to malicious actors on the rise, no doubt jailbroken AI systems could transform into attack vectors.

Image: DIW-Aigen

Read next: US Secretaries Of State Pen Open Letter To Elon Musk As X’s Grok Chatbot Spreads Misinformation About Elections

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

This site uses Akismet to reduce spam. Learn how your comment data is processed.

Back to top button

Adblock Detected

Block the adblockers from browsing the site, till they turn off the Ad Blocker.