Data Poisoning: Sabotaging AI Training Datasets
How attackers corrupt AI training data with hidden backdoors. Explore 'Clean Label' poisoning, triggers, and the risk to open-source datasets.
May 23, 2025 · 3 min read