Machine learning models are vulnerable to poisoned training data, a risk that can be introduced when a model is first trained or, for systems that keep learning in production, at runtime. Attackers have been known to systematically inject mislabeled examples to corrupt model behavior in targeted ways, for instance to foil spam filters or to trick virus scanners into classifying benign files as malicious.
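
To make the attack concrete, here is a minimal sketch of targeted label flipping, assuming scikit-learn and a synthetic dataset standing in for a benign-vs-malicious file classifier. The helper `poison_benign` and all parameters are illustrative, not from the source:

```python
# Hypothetical sketch: an attacker relabels benign training examples as
# malicious so the trained detector starts flagging benign files.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a benign (0) vs. malicious (1) file classifier.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def poison_benign(y_train, fraction, rng):
    """Relabel a fraction of benign (class 0) training examples as malicious."""
    y_p = y_train.copy()
    benign_idx = np.flatnonzero(y_p == 0)
    n_flip = int(fraction * len(benign_idx))
    y_p[rng.choice(benign_idx, size=n_flip, replace=False)] = 1
    return y_p

rng = np.random.default_rng(0)
benign_test = X_te[y_te == 0]
for fraction in (0.0, 0.2, 0.4):
    clf = LogisticRegression(max_iter=1000).fit(X_tr, poison_benign(y_tr, fraction, rng))
    # Fraction of truly benign files the poisoned model now flags as malicious.
    fp_rate = clf.predict(benign_test).mean()
    print(f"poisoned benign fraction={fraction:.0%}  false-positive rate={fp_rate:.3f}")
```

In a toy setup like this, the false-positive rate on held-out benign files typically climbs as the poisoned fraction grows, which is exactly the targeted failure the attacker is after.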