A technique in which a model is trained on adversarial examples (inputs with small, deliberately crafted perturbations intended to cause incorrect predictions) alongside clean data, improving its robustness against such attacks.
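A minimal sketch of one common form of this technique, FGSM-based adversarial training in PyTorch. The function names, the epsilon value, and the combined clean-plus-adversarial loss are illustrative assumptions, not details from this entry.

```python
# Illustrative FGSM-based adversarial training sketch (assumes PyTorch).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.1):
    """Craft adversarial examples with the Fast Gradient Sign Method (hypothetical helper)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """One training step on a mix of clean and adversarial inputs."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients accumulated while crafting x_adv
    # Train on both clean and perturbed inputs so the model also learns
    # to classify the adversarially perturbed examples correctly.
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch each batch contributes both a clean loss and an adversarial loss; other formulations train on adversarial examples only or weight the two terms differently.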