Facebook’s false positive censorship?

Mark Zuckerberg has repeatedly said that artificial intelligence and machine learning tools will solve his platform’s problems with fake news and hate speech.

A friend posted the following photo on Facebook and it was censored by Facebook’s automated systems. He requested a review but has heard nothing back.

What could possibly be wrong with this post? It is not obvious to anyone who has looked at it.

Machine learning is basically pattern matching on steroids. Modern hardware makes it possible to run pattern matching fast enough to process images. It’s called “machine learning” because the software scans enormous data sets to discover patterns, then applies those patterns to decision making.
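To make the pattern-matching framing concrete, here is a minimal sketch in Python using the scikit-learn library. The training phrases, labels, and test inputs are invented purely for illustration; real moderation systems train on millions of examples, not four. The point is the shape of the process: learn patterns from labeled examples, then apply them to new input.

```python
# A minimal sketch of "pattern matching on steroids": train a tiny
# text classifier on labeled examples, then apply the learned patterns
# to new input. All phrases and labels here are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "enormous data set" -- real systems use millions of examples.
texts = [
    "buy cheap pills now", "limited time offer click here",  # spam-like
    "lunch at noon tomorrow?", "see you at the meeting",     # benign
]
labels = ["flag", "flag", "ok", "ok"]

# Discover statistical patterns in the training text...
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# ...then apply those patterns to a decision about new input.
print(model.predict(["free pills, click now"])[0])  # likely "flag"
print(model.predict(["meeting moved to 2pm"])[0])   # likely "ok"
```

Notice that the model has no understanding of pills or meetings; it is matching statistical regularities, which is exactly why unusual but innocent inputs can get misclassified.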

Most “AI” is not actually intelligent; it uses techniques that mimic human behavior so that the output resembles what a human would produce. In theory, anyway. AI often works well within restricted input domains but not so well in general-purpose domains.

In the above poster, perhaps Facebook’s conversion of the image into text produced a garbled result. Or, after conversion, the text was run through AI language analysis: the reference to “bad life” might be read as encouraging depressive thinking, and Facebook’s AI, perhaps, interpreted this as a message encouraging self-harm.
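To illustrate how such a pipeline could misfire, here is a hypothetical sketch in Python. The OCR libraries (Pillow and pytesseract, which requires a local Tesseract install) are real, but the cue list, the flagging logic, and the file name are invented; Facebook’s actual system is far more complex and not public. The sketch just shows how a context-free matcher could trip on a phrase like “bad life”.

```python
# Hypothetical sketch of the pipeline described above: extract text
# from an image (OCR), then run a crude language check on the result.
# The cue list and flagging logic are invented for illustration.
from PIL import Image
import pytesseract

SELF_HARM_CUES = {"bad life", "end it", "no way out"}  # hypothetical cue list

def moderate_image(path: str) -> str:
    # Step 1: convert the image to text. A noisy conversion here can
    # change the meaning before any "understanding" happens.
    extracted = pytesseract.image_to_string(Image.open(path)).lower()

    # Step 2: interpret the text. A naive matcher has no sense of
    # context, so an innocuous phrase can look like a risk signal.
    if any(cue in extracted for cue in SELF_HARM_CUES):
        return "flagged: possible self-harm content"
    return "ok"

print(moderate_image("friends_post.jpg"))  # hypothetical file name
```

Either failure mode, a bad OCR transcription or a context-blind text match, would produce exactly the kind of baffling false positive my friend experienced.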
