A technique where malicious prompts are inserted into a model's input to manipulate its output, often to bypass ethical guidelines or security measures. Axiosarrow-up-right
Last updated 10 months ago