Term
Data Poisoning Attack
Definition
A data poisoning attack occurs when an attacker deliberately injects harmful, incorrect, or mislabeled data into the dataset used to train an AI model. The corrupted data can change how the model behaves, often making it act in unwanted or unpredictable ways.
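To make the definition concrete, here is a minimal sketch of one common form of poisoning: injecting mislabeled points into the training set so the trained model's behavior degrades. The data, model, and numbers are all illustrative, not taken from any real system, and the hand-rolled logistic regression stands in for whatever model a real pipeline would train.

```python
# Minimal sketch of a data poisoning attack on a toy classifier.
# Everything here (data, model, numbers) is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Clean training set: two well-separated classes.
X_clean = np.vstack([rng.normal(-2, 1, (200, 2)),   # class 0
                     rng.normal(2, 1, (200, 2))])   # class 1
y_clean = np.array([0] * 200 + [1] * 200)

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression (no external ML library)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])       # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))           # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)           # gradient step
    return w

def accuracy(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(((Xb @ w > 0).astype(int) == y).mean())

# The attack: inject 300 crafted points that sit inside the class-1 region
# but carry the wrong label (0), then train on the combined data.
X_poison = rng.normal(2, 1, (300, 2))
X_train = np.vstack([X_clean, X_poison])
y_train = np.concatenate([y_clean, np.zeros(300, dtype=int)])

w_clean = train_logreg(X_clean, y_clean)
w_poisoned = train_logreg(X_train, y_train)

# Evaluated on the clean data, the poisoned model's accuracy drops sharply.
print("clean model:   ", accuracy(w_clean, X_clean, y_clean))
print("poisoned model:", accuracy(w_poisoned, X_clean, y_clean))
```

Note that the attacker never touches the model itself, only the data it learns from, which is why this attack targets the training phase rather than the deployed system.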
Where you’ll find it
This type of attack targets the training phase of machine learning models within AI systems. It can occur anywhere training data is gathered, from user-submitted content and web-scraped corpora to curated third-party datasets.
Common use cases
- Ensuring data security by identifying and preventing these attacks during AI development.
- Maintaining the accuracy and reliability of AI predictions in sectors like finance and healthcare.
- Protecting against misleading outcomes that could affect decision-making processes.
Things to watch out for
- Difficult to detect unless you continuously monitor and validate training data.
- Can lead to severely compromised AI decisions, affecting everything from user recommendations to automated driving systems.
- Often requires advanced security measures to prevent and address effectively.
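The monitoring point above can be made concrete. One simple validation heuristic is to flag training points whose label disagrees with most of their nearest neighbors. The sketch below, in plain NumPy, assumes a small numeric dataset; the function name and threshold are illustrative, and production pipelines would use more robust statistics.

```python
import numpy as np

def suspicious_labels(X, y, k=10, threshold=0.5):
    """Return indices of points whose label disagrees with more than
    `threshold` of their k nearest neighbors -- a crude poisoning check."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)                 # a point is not its own neighbor
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbors per point
    disagree = (y[nn] != y[:, None]).mean(axis=1)
    return np.where(disagree > threshold)[0]

# Toy check: two tight clusters, then flip five labels to simulate poisoning.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-3, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
y[:5] = 1                                       # attacker flips five labels

print(suspicious_labels(X, y))                  # indices of the flipped points
```

The pairwise-distance matrix makes this O(n²) in memory, so it only suits modest datasets; the point is that even a simple consistency check can surface flipped labels before training.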
Related terms
- Machine Learning
- AI Training
- Model Validation
- Cybersecurity