It’s happening. COFACE has long called for new strategies to fight hate speech and cyberbullying, including the use of algorithms and Artificial Intelligence to assist human moderators, as well as recourse to community-based moderation, where users can vote on taking content down.
Facebook has been steadily developing and training algorithms to help identify offensive photos. At this point, algorithms report more potentially offensive photos than humans do! Twitter, which has been attacked in the past over its lackluster ability to fight abuse, has also invested in Artificial Intelligence to help weed out abusive content. The help and assistance of AI is welcome, especially since relying solely on human moderation brings many problems, such as slow reaction times and even negative psychological consequences for the human moderators, who are forced to view the worst content humanity has to offer.
Progress in the development of such algorithms could benefit all online service providers, as Facebook has vowed to share its findings more broadly.
On the other hand, Periscope (owned by Twitter) is rolling out another approach to moderation, adapted to live streaming with its instant feedback and user interaction: a form of live community-based moderation. Viewers will be able to immediately report a comment they deem abusive during a live stream. The app then randomly selects a “jury” from the audience, who vote on whether the comment is abusive or inappropriate. Should the “jury” vote to censor the comment, its author will temporarily be unable to post. If the author repeats the offense, he or she will be blocked from commenting for the rest of the live stream.
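To make the flow concrete, here is a minimal sketch of how such a jury round might work. This is purely illustrative: the names (`Commenter`, `handle_report`), the jury size of five, and the simple-majority rule are all assumptions for the example, not details of Periscope's actual implementation.

```python
import random
from dataclasses import dataclass

@dataclass
class Commenter:
    name: str
    strikes: int = 0     # jury-confirmed offenses during this stream
    muted: bool = False  # temporary inability to post after a first offense
    blocked: bool = False  # blocked for the rest of the stream on repeat offense

def handle_report(author, audience, get_vote, jury_size=5, rng=random):
    """One round of jury moderation for a reported comment.

    A random jury is drawn from the audience; get_vote(juror) returns
    True if that juror deems the comment abusive. A simple majority
    counts as a confirmed offense against the comment's author.
    """
    jury = rng.sample(audience, min(jury_size, len(audience)))
    abusive_votes = sum(get_vote(juror) for juror in jury)
    if abusive_votes * 2 > len(jury):  # simple majority says "abusive"
        author.strikes += 1
        author.muted = True            # first offense: temporary mute
        if author.strikes > 1:
            author.blocked = True      # repeat offense: blocked for the stream
    return author
```

For instance, a first unanimous guilty verdict would leave the author muted but not blocked, while a second confirmed offense would block them for the remainder of the stream.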
Although such initiatives are long overdue, COFACE welcomes their introduction and will closely follow their development, hoping they contribute to a better online environment for children and for families.