‘Prompt injection’ attacks haven’t caused giant problems yet. But it’s a matter of time, researchers say.
Imagine a chatbot is applying for a job as your personal assistant. The pros: This chatbot is powered by a cutting-edge large language model. It can write your emails, search your files, summarize websites and converse with you.
The con: It will take orders from absolutely anyone.
AI chatbots are good at many things, but they struggle to tell the difference between legitimate commands from their users and manipulative commands from outsiders. It’s an AI Achilles’ heel, cybersecurity researchers say, and it’s a matter of time before attackers take advantage of it.
“Prompt injection” is a major risk to large language models and the chatbots they power. Here’s how the attack works, examples and potential fallout.
Comments are closed.