181. Adversarial Training

  • A technique in which a model is trained on adversarial examples (inputs deliberately perturbed to deceive it) to improve its robustness against such attacks; see the sketch below.
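
A minimal sketch of one common variant, adversarial training with the fast gradient sign method (FGSM). It assumes a PyTorch classifier with inputs scaled to [0, 1]; the names `fgsm_perturb` and `adversarial_train_epoch`, the step size `epsilon=0.03`, and the 50/50 clean/adversarial loss mix are illustrative choices, not a fixed recipe.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Craft FGSM adversarial examples: one step along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturbation bounded by epsilon in the L-infinity norm; clamp keeps inputs valid.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()

def adversarial_train_epoch(model, loader, optimizer, epsilon=0.03):
    model.train()
    for x, y in loader:
        # Inputs designed to deceive the current model.
        x_adv = fgsm_perturb(model, x, y, epsilon)
        optimizer.zero_grad()  # also clears gradients accumulated while crafting x_adv
        # Train on a mix of clean and adversarial examples to improve robustness.
        loss = 0.5 * F.cross_entropy(model(x), y) \
             + 0.5 * F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```

Stronger attacks (e.g. multi-step PGD) can replace the one-step FGSM perturbation in the same loop; the training structure, generating adversarial inputs from the current model and including them in the loss, stays the same.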
