I posted this after the Perspective toxicity API was first released.
Other gems from the initial launch:
"Police don't kill too many Black kids."
Score: Not toxic. 🤦🏿‍♂️
"Police kill too many Black kids."
Score: 80.28% toxic. 😮
"I'll never vote for Bernie Sanders until he apologizes to Black women."
Score: 71.43% toxic. 🤦🏿‍♂️
"South Carolina voters are low information people."
Score: Not toxic. 😮
"Elizabeth Warren is a snake."
Score: Not toxic. 😮
2/N
@Raccoon
When someone tells me they're going to use ML for moderation, or for flagging toxic posts, I ask which model they're going to use, and what info the model is going to do inference on.
If the input doesn't include the relationship between the two people, and the community the thing is being said in, then many false positives are unavoidable. A short text sample alone doesn't carry enough context for reliable inference.
https://hachyderm.io/@mekkaokereke/109989027419424661
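The context problem can be illustrated with a toy scorer. This is a deliberately simplified sketch, not how Perspective actually works; the lexicon and weights below are invented. The point is that any model scoring individual short texts word-by-word, with no view of speaker, target, or community, treats a sentence and its negation almost identically:

```python
# Hypothetical word-level toxicity lexicon (invented weights for illustration).
TOXIC_LEXICON = {"kill": 0.8, "snake": 0.5, "stupid": 0.9}

def word_level_toxicity(text: str) -> float:
    """Score = highest toxicity weight of any single word in the text.
    Context-blind: negation, speaker, and community are invisible to it."""
    words = text.lower().replace(".", "").replace(",", "").split()
    return max((TOXIC_LEXICON.get(w, 0.0) for w in words), default=0.0)

# The two sentences differ only by "don't", which carries no lexicon weight,
# so a context-blind scorer gives them the same score.
protest = word_level_toxicity("Police kill too many Black kids.")
denial = word_level_toxicity("Police don't kill too many Black kids.")
assert protest == denial
```

Real neural models are more sophisticated than a lexicon lookup, but the thread's examples suggest they can fail the same way: without knowing who is speaking, to whom, and in what community, the text alone underdetermines toxicity.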