Hacksplaining

AI: Bias and Unreliability

Machine learning models are vulnerable to poisoned training data: malicious examples injected during training whose effects surface later, at inference time. Attackers have been known to systematically introduce mislabeled examples to corrupt model behavior in targeted ways, for instance to foil spam filters or to trick virus scanners into treating malicious files as benign.
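To make the mechanism concrete, here is a minimal sketch (not from the lesson itself) of a label-flipping poisoning attack against a toy word-count spam filter. The training data, function names, and scoring rule are all illustrative assumptions; real poisoning attacks target far more complex models, but the statistical effect is the same.

```python
from collections import Counter

def train(examples):
    # examples: list of (text, label) pairs; tally word occurrences per label
    counts = {"spam": Counter(), "ham": Counter()}
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    # Naive score: pick the label whose training set used the message's words more
    words = text.split()
    spam_score = sum(counts["spam"][w] for w in words)
    ham_score = sum(counts["ham"][w] for w in words)
    return "spam" if spam_score > ham_score else "ham"

# Clean training data: the filter correctly flags spam vocabulary
clean = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda attached", "ham"),
    ("lunch at noon", "ham"),
]
model = train(clean)
print(classify(model, "free money prize"))  # -> spam

# Poisoning: the attacker injects spam-like messages mislabeled as ham,
# shifting the word statistics until spam vocabulary looks benign
poison = [("free money prize win", "ham")] * 3
model = train(clean + poison)
print(classify(model, "free money prize"))  # -> ham (misclassified)
```

The attacker never touches the model directly; corrupting a small slice of the training set is enough to flip its decisions on chosen inputs.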

[Illustration: An AI being poisoned]

© 2026 Hacksplaining Inc. All rights reserved. Questions? Email us at support@hacksplaining.com