Facebook has just crossed a milestone: its artificial intelligence (AI) now reports more offensive images than its human users do. This has many implications, but first and foremost it means better protection for users from offensive content.
Offensive images and videos come in many forms. A bully, a bitter ex-lover, a terrorist or a troll can easily post offensive images to anyone's public wall, a group, an event or a feed. Gory images, along with pornography, are typical examples of content that humans flag. However, the nature of reactive flagging means that the imagery or videos have already done part of their psychological damage before they are taken down, TechCrunch reported.
AI is the answer to this. Facebook has 40 petaflops of computing power that it uses to analyze trillions of data samples with billions of parameters. The same AI also ranks News Feed stories, automatically creates closed captions for video ads and reads aloud the contents of photos for the visually impaired.
There is also a human element behind the push for AI. Before AI, the job of blocking offensive images and videos on Facebook fell to start-ups and outsourcing companies, many based in the Philippines, where moderators would earn a tiny salary of $500 to sift through Facebook's darker side.
The toll has been so severe that an entire health consultancy industry has grown up around the post-traumatic stress disorder these moderators experience.
Joaquin Candela, director of engineering for applied machine learning at Facebook, said, “One thing that is interesting is that today we have more offensive photos being reported by AI algorithms than by people. The higher we push that to 100 percent, the fewer offensive photos have actually been seen by a human.”
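Candela's point can be illustrated with a minimal sketch. The code below is hypothetical, not Facebook's actual system: it assumes a classifier that assigns each image an "offensiveness" score, and shows how raising the share of content caught automatically means fewer images are ever seen by a human reviewer. The thresholds and function names are invented for illustration.

```python
# Hypothetical sketch of score-based content routing (assumed thresholds,
# not Facebook's real pipeline).

AUTO_FLAG_THRESHOLD = 0.9   # assumed cutoff: act without human review
REVIEW_THRESHOLD = 0.5      # assumed cutoff: send to a human moderator

def route_image(score: float) -> str:
    """Decide what happens to an image given a model's offensiveness score."""
    if score >= AUTO_FLAG_THRESHOLD:
        return "auto-flag"      # removed before any person sees it
    if score >= REVIEW_THRESHOLD:
        return "human-review"   # ambiguous case: a moderator takes a look
    return "allow"

# The closer the classifier gets to catching everything above the auto-flag
# line, the smaller the "human-review" bucket becomes.
print([route_image(s) for s in (0.95, 0.7, 0.1)])
# → ['auto-flag', 'human-review', 'allow']
```

The design choice here mirrors the quote: pushing automatic detection "to 100 percent" shrinks the middle bucket, which is exactly the pool of disturbing material that moderators would otherwise have to view.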
Facebook is also not keeping the technology to itself. The Washington Times reported that Facebook, together with fellow tech giants Google, Twitter and Microsoft, has vowed to police hate speech in Europe. The companies plan to do this by identifying and removing language that violates European Union laws within 24 hours.