134. Prompt Injection
A technique where malicious prompts are inserted into a model's input to manipulate its output, often to bypass ethical guidelines or security measures.
Last updated
A technique where malicious prompts are inserted into a model's input to manipulate its output, often to bypass ethical guidelines or security measures.
Last updated