
Data Poisoning

  • louisgoh8
  • May 29
  • 1 min read

Can AI actually give you the wrong information? It seems unlikely, but it can happen when a model falls victim to data poisoning.


Data poisoning is a cyberattack that corrupts the training data used to build AI models. These models depend heavily on the quality and integrity of their training data, which is sourced from many places, like the open internet and public databases, where accurate and false information sit side by side. That makes the data vulnerable to manipulation by malicious actors, and even a small amount of poisoned data can drastically alter a model's behaviour.


Data poisoning attacks fall into two categories: targeted and non-targeted. Targeted attacks manipulate the model's output in specific ways, while non-targeted attacks aim to degrade the model's overall ability to process data.
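To make the targeted case concrete, here is a minimal, hypothetical sketch of one classic technique: label flipping, where an attacker relabels samples from a chosen class so the trained model learns the wrong association. The function and dataset names are illustrative, not from any real pipeline.

```python
import random

def poison_labels(dataset, target_label, new_label, fraction, seed=0):
    """Targeted label-flipping: relabel a fraction of one class.

    `dataset` is a list of (sample, label) pairs. A toy sketch, not a
    real attack tool: the point is that only the chosen class is touched.
    """
    rng = random.Random(seed)
    poisoned = []
    for sample, label in dataset:
        if label == target_label and rng.random() < fraction:
            poisoned.append((sample, new_label))  # targeted flip
        else:
            poisoned.append((sample, label))     # everything else untouched
    return poisoned

# Hypothetical spam/ham training set; flip every "spam" label to "ham".
clean = [("msg%d" % i, "spam" if i % 2 else "ham") for i in range(10)]
dirty = poison_labels(clean, target_label="spam", new_label="ham", fraction=1.0)
```

A model trained on `dirty` would learn that spam messages are harmless, which is exactly the "specific manipulation" a targeted attack aims for; a non-targeted attack would instead scramble labels at random across all classes.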


How do we stop them?


  1. Data Validation and Sanitization

Removes corrupted data before it can compromise the model. 


  2. Adversarial Training and Improved Robustness

Strengthens AI models by exposing them to adversarial examples during development.


  3. Continuous Monitoring

Identifies unusual behaviours or discrepancies, enabling quick responses to any threat.


  4. Access Controls

Restricts unauthorized modifications to training data and uses encryption to protect data sources.
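As a toy sketch of the data validation and sanitization idea, the snippet below filters numeric training values with a median-based outlier test (the modified z-score), which stays robust when a few poisoned points are planted. All names are illustrative; real pipelines layer schema checks, provenance tracking, and anomaly detection on top of simple filters like this.

```python
import statistics

def sanitize(values, threshold=3.5):
    """Drop values whose modified z-score exceeds `threshold`.

    Uses the median absolute deviation (MAD) rather than mean/stdev,
    so one extreme poisoned point cannot mask itself by inflating
    the spread it is measured against.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        # Degenerate case: most values equal the median; keep those.
        return [v for v in values if v == med]
    return [v for v in values if 0.6745 * abs(v - med) / mad <= threshold]

# Hypothetical feature column with one planted outlier (50.0).
data = [1.0, 1.2, 0.9, 1.1, 50.0]
clean = sanitize(data)
```

The planted 50.0 is removed before training ever sees it, which is the whole point of step 1: corrupted data should die at the door, not inside the model.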



 
 
 
