Tricking the machine

Machine learning systems are widely deployed by social media networks, corporate HR departments, and government intelligence operations to flag unsavory conversations. Ongoing work by UO's Daniel Lowd, Javid Ebrahimi, and Dejing Dou shows that these systems can be tricked into false positives, which could have disastrous consequences. The UO team also shows that when such trickery is included in the training data, the machine can learn to avoid these traps.
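
To make the two ideas above concrete, here is a minimal, hypothetical sketch in Python with scikit-learn. It is not the UO team's code (their method chooses character flips using the model's gradients); the toy data, the perturb helper, and the single-swap attack are all invented for illustration. It trains a small word-based classifier, shows a one-character swap that hides a flagged word from it, and then retrains on perturbed copies of the training data, the "trickery in the training data" described above.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny invented training set: 1 = abusive, 0 = benign.
    texts = ["you are an idiot", "what an idiot move",
             "have a great day", "thanks for the help"]
    labels = [1, 1, 0, 0]

    def perturb(text):
        # One-character swap in the spirit of character-level attacks:
        # "idiot" becomes "idi0t", a token the word-based model never saw.
        return text.replace("idiot", "idi0t")

    # Model trained only on clean text.
    clean = make_pipeline(CountVectorizer(), LogisticRegression())
    clean.fit(texts, labels)
    print("clean model:", clean.predict([perturb("you are an idiot")]))

    # "Adversarial training": add perturbed copies of the abusive examples
    # so the model sees the trick at training time.
    aug_texts = texts + [perturb(t) for t, y in zip(texts, labels) if y == 1]
    aug_labels = labels + [1] * 2
    robust = make_pipeline(CountVectorizer(), LogisticRegression())
    robust.fit(aug_texts, aug_labels)
    print("robust model:", robust.predict([perturb("you are an idiot")]))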


Read more in The Register: https://www.theregister.co.uk/2018/06/28/machine_translation_vulnerable/

Also check out Daniel's article in The Conversation: https://theconversation.com/can-facebook-use-ai-to-fight-online-abuse-95203

Read the papers:

https://arxiv.org/abs/1806.09030

https://arxiv.org/abs/1712.06751
