A machine learning technique in which a model is trained on inputs that have been intentionally perturbed to fool it (adversarial examples), improving its robustness against such deceptive data at inference time.
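A minimal sketch of the idea, assuming a toy logistic-regression classifier and FGSM-style (fast gradient sign method) perturbations; the data, model, and hyperparameters here are illustrative, not from the original entry:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data: two Gaussian blobs.
X = np.vstack([rng.normal(-1.0, 0.5, (100, 2)),
               rng.normal(1.0, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(X, y, w, b, eps):
    """Perturb inputs in the direction that increases the loss,
    bounded in L-infinity norm by eps (fast gradient sign method)."""
    p = sigmoid(X @ w + b)
    grad_x = np.outer(p - y, w)  # d(cross-entropy loss)/d(input)
    return X + eps * np.sign(grad_x)

w, b = np.zeros(2), 0.0
lr, eps = 0.1, 0.2
for _ in range(200):
    # Adversarial training: augment each step with perturbed copies
    # of the inputs, so the model learns to classify them correctly.
    X_adv = fgsm(X, y, w, b, eps)
    X_train = np.vstack([X, X_adv])
    y_train = np.concatenate([y, y])
    p = sigmoid(X_train @ w + b)
    w -= lr * (X_train.T @ (p - y_train)) / len(y_train)
    b -= lr * np.mean(p - y_train)

# Robust accuracy: accuracy on adversarially perturbed inputs.
acc_adv = np.mean((sigmoid(fgsm(X, y, w, b, eps) @ w + b) > 0.5) == y)
print(f"adversarial accuracy: {acc_adv:.2f}")
```

The key step is that each training update sees adversarial versions of the data generated against the current model, so robustness is learned rather than bolted on afterwards.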