AI agents vs. prompt injections
Large language models (LLMs) are used in an increasing number of applications that handle critical tasks and are granted great degrees of autonomy. That said, such applications remain vulnerable to LLM-specific security threats, such as prompt injections.