If you’re going through a difficult time, Facebook wants to help.
Its artificial intelligence software can now use pattern recognition to scan Facebook posts and live videos for suicidal thoughts.
Once the technology has identified a problem post through what the company calls proactive detection, it alerts a team of human moderators who specialize in dealing with suicide and self-harm. These specialists can send mental health resources to the at-risk user or to that person's friends, and can alert the authorities if necessary. Because the program can flag problem posts before users report them, it accelerates the alert process, meaning help can reach people in real time.
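Facebook hasn't published how its detection model works, but the flag-then-escalate flow described above can be sketched in a few lines. Everything here is hypothetical: the phrase list, the scoring rule, and the review queue are stand-ins for whatever signals Facebook's actual system uses.

```python
# A toy illustration of the "proactive detection" flow: score each post,
# then route anything above a threshold to a human review queue.
# The phrases and threshold are invented for illustration only.

RISK_PHRASES = ["want to end it", "no reason to live", "say goodbye forever"]

def risk_score(post_text):
    """Count how many hypothetical risk phrases appear in a post."""
    text = post_text.lower()
    return sum(phrase in text for phrase in RISK_PHRASES)

def triage(posts, threshold=1):
    """Return the posts that should go to specialist human moderators."""
    review_queue = []
    for post in posts:
        if risk_score(post) >= threshold:
            review_queue.append(post)  # a specialist sees these first
    return review_queue
```

The key design point the article describes is that the machine only prioritizes; the decision to send resources or contact authorities stays with trained human moderators.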
“With all the fear about how AI may be harmful in the future,” Facebook CEO Mark Zuckerberg wrote Monday, “it’s good to remind ourselves how AI is actually helping save people’s lives today.”
As Zuckerberg points out, not everyone sees AI as benign; dire warnings about the technology regularly come from prominent figures in tech and science.
Facebook has been testing the technology for nine months in the US and will now extend the AI's reach to the rest of the world, except for the European Union, where General Data Protection Regulation privacy rules restrict all forms of profiling. Exact timing for the worldwide rollout is unclear; we've reached out to Facebook to ask.
Facebook is no stranger to artificial intelligence. In June, for instance, it let the world know about FAIR, short for Facebook Artificial Intelligence Research. That work feeds into Facebook's broader effort to build smarter technology in general. Facebook also uses AI for search purposes and for photo captions, which computers can read out to blind people. It's even used AI in counterterrorism efforts on its site.