Category: Social networks
Facebook and Periscope introduce new strategies to fight hate speech
It’s happening. COFACE has long been calling for new strategies to fight hate speech and cyberbullying, including the use of algorithms and Artificial Intelligence to assist human moderators, as well as community-based moderation where users can vote on taking content down.
Facebook has been steadily developing and training algorithms to help identify offensive photos; at this point, algorithms report more potentially offensive photos than humans do. Twitter, which has been criticized in the past for its lackluster ability to fight abuse, has also invested in Artificial Intelligence to help weed out abusive content. The assistance of AI is welcome, especially since relying on human moderation alone comes with many problems, such as slow reaction times and negative psychological consequences for the human moderators, who are forced to look at the worst content humanity has to offer.
Progress in the development of such algorithms could benefit all online service providers, as Facebook has vowed to share its findings more broadly.
On the other hand, Periscope (owned by Twitter) is rolling out another approach to moderation, adapted to live streaming with its instant feedback and user interaction: a form of live community-based moderation. Viewers can immediately report a comment they deem abusive during a live stream. The app then randomly selects a “jury” from within the audience, which votes on whether the comment is abusive or inappropriate. Should the jury vote to censor the comment, its author is temporarily prevented from posting. If the author repeats the offense, he or she is blocked from commenting for the rest of the live stream.
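The report-jury-vote flow described above can be sketched in a few lines. This is a hypothetical illustration, not Periscope’s actual implementation: the jury size, the simple-majority rule and the two-strike blocking threshold are all assumptions made for the example.

```python
import random

class LiveCommentModeration:
    """Toy sketch of jury-style live moderation (illustrative only)."""

    def __init__(self, audience, jury_size=3):
        self.audience = list(audience)
        self.jury_size = jury_size
        self.offenses = {}    # author -> number of upheld reports
        self.blocked = set()  # authors banned for the rest of the stream

    def can_post(self, author):
        return author not in self.blocked

    def pick_jury(self, exclude):
        # Randomly select jurors from the audience, excluding the reported author
        pool = [u for u in self.audience if u != exclude]
        return random.sample(pool, min(self.jury_size, len(pool)))

    def report(self, author, votes_abusive):
        """votes_abusive: the jury's verdicts (True = abusive).
        Returns True if the comment was censored."""
        if sum(votes_abusive) * 2 > len(votes_abusive):  # simple majority
            self.offenses[author] = self.offenses.get(author, 0) + 1
            if self.offenses[author] >= 2:               # repeat offense
                self.blocked.add(author)                 # blocked for the stream
            return True  # comment removed, author temporarily muted
        return False
```

A first upheld report only mutes the author temporarily; a second one blocks them for the rest of the stream, mirroring the escalation described above.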
Such initiatives are long overdue; COFACE welcomes their introduction and will follow their development closely, hoping they contribute to creating a better online environment for children and families.
Child Safety Summit in Dublin: Google and Facebook team up
by Martin Schmalzried, Senior Policy and Advocacy Officer @COFACE_EU
On the 14th and 15th of April, Google and Facebook jointly organized a Child Safety Summit, providing all participants with an insight into key developments in their respective policies and initiatives around child safety online. Both Google and Facebook presented their community policies and the tools which help users stay safe by controlling their privacy settings, blocking, reporting and many other features. A notable development is the growing effort to make it as easy as possible for users to check the safety-related settings on their accounts: Facebook with its “privacy check-up” tool, which takes the user through an easy review of his or her safety settings, and Google with the “My Account” feature, which centralizes settings across multiple Google services.
Both Facebook and Google stressed that rather than more regulation, child protection stakeholders including NGOs should engage more with key industry players in a spirit of self- and co-regulation. Google and Facebook underlined that they do not stop at “compliance” but constantly innovate and upgrade their safety features, fueled in part by feedback from NGOs (which was also one of the reasons for holding such a Child Safety Summit).
I attended both days and participated in the panel about the General Data Protection Regulation (GDPR) and the controversy surrounding the “age of consent” for data processing.
GDPR: teenagers’ worst nightmare?
During the panel, several speakers underlined their disappointment at the regulatory process through which the current age limit for data processing was set, the negative impact it may have on teenagers (being excluded from social networks, forced to lie about their age, or obliged to ask their parents for consent in violation of their right to privacy) and the likely end result: a fragmented environment for any company or actor processing data in the EU.
From COFACE’s perspective, the issue of consent is only a very small part of the Regulation. Indeed, the debate about the age at which one can consent to data processing, below which parental consent is required, has been misrepresented and misunderstood. Furthermore, consent as such is controversial: users typically never read Terms of Service (ToS) and click away at anything that pops up just to access the service, so examining whether ToS are fair in the first place matters more than debating the age at which one can give consent. Perhaps the answer is “at no age”, given that no one reads ToS!
Many actors said that setting the limit at 16 amounts to forbidding teenagers from accessing social networks or the Internet without parental consent, or that teenagers and children would be more vulnerable online since they would use services anonymously. These are overly simplistic interpretations of the Regulation. Teenagers below the age of 16 would only require parental consent for services which process their data; the intention of the lawmakers, in this instance, was to protect teenagers and children from the commercial exploitation of their data and from overexposure to commercial messages and marketing, since online advertising now relies on processing large amounts of data to personalize advertising.
Anonymity, moreover, is an issue entirely separate from the debate around data processing. A user can very well use his or her genuine name and post genuine pictures of him- or herself without any “data processing”. Conversely, it is easy to pretend to be someone else and use a nickname on services which rely on heavy data processing (including Facebook); the proof is simply the number of children under the age of 13 currently on such services! Finally, the claim that users are “safer” in environments where data can be processed is also overly simplistic: it depends on what the definition of “data processing” is. Does it include monitoring, reporting and online moderation?
The end result of the Regulation for teenagers’ access to online services can be illustrated by three scenarios:
– The first, where online services refrain from processing teenagers’ data, thereby allowing them access without parental consent (meaning that any algorithms processing personal data would be disabled and content would be sorted automatically by date posted, for instance). This, unfortunately, is highly unlikely: targeted advertising is the business model on which most of these services rely, so interpreting and applying the Regulation in this manner would effectively deprive them of around 20% of their revenue.
– The second, where online services set the “cut-off” age for using their services at 16, pretend that no one under 16 uses their service, and engage in a selective “witch hunt” of underage accounts, closing them at random. This is possibly the worst outcome for both teenagers and online services: teenagers would need to lie about their age, would therefore not benefit from the “protection” from certain types of advertising or content, and would risk having their accounts closed if identified as being under 16.
– The third, where online services set up a “parental consent” mechanism and teenagers would need to pester their parents to get access to such online services. This too is a rather negative outcome for teenagers and their rights to privacy and freedom.
In the end, the “blame game” has mostly pinned the failure on the Council, which should have known that online services would never refrain from processing the data of under-16-year-olds and thereby forgo around 20% of their revenue…
From COFACE’s side, we underline the necessity to reflect on a key question: how can we strike a better balance between the prevailing business model centered on advertising, data processing and profiling and the necessity to protect children and teenagers from the commercial use of their data and from advertising and marketing?
Besides, perhaps another wish of the regulators is to ensure that teenagers below the age of 16 experience an unfiltered Internet instead of the Internet “bubble” which displays only content that users already like. Services like Instagram used to sort pictures by the time they were posted, not by an algorithm. By the same token, teenagers below 16 and children should have the right to access information with as little “algorithmic bias” as possible. Many users have expressed their discontent at Facebook’s decision to apply algorithmic sorting to Instagram feeds, so the question of displaying information in a neutral way goes far beyond the debate about teenagers.
Technology will save the World
With the immense success, influence and power that online services like Facebook and Google have, it is no wonder that their actions can greatly affect the world we live in. Among these we find developments in Virtual and Augmented reality, Artificial Intelligence, Machine Learning, even connecting the poorest regions of the world like Africa to the Internet.
While the benefits such innovations can provide are real and can make a difference for people, they also raise ethical and political questions about private companies encroaching on the public interest and on the role of States.
To illustrate the issue at stake, we can mention the example of Bitcoin. The Bitcoin virtual currency has been identified, by many academics and technophiles, as a way to store value for people living in countries where the rule of law is weak and where the local currency suffers from high volatility. At the same time, it can further weaken the local currency and delay much-needed pressure to strengthen the rule of law through political action.
In a similar way, private companies providing Internet access to people may delay investment in and development of public infrastructure and put those companies in a monopolistic position in these countries, with the power to shape what people accessing their “version” of the Internet can see (such as access via the “internet.org” portal).
When governments set up infrastructure, the population sees it as the realization of their rights and of the public good, based on the social contract they have with the State. When a private company invests in infrastructure, it is seen as an act of charity for which people should be thankful. In essence, infrastructure moves from being a human right to a company-provided privilege.
Far from arguing that these projects do not make a difference or change people’s lives on the ground, there may be a better balance to strike between private and public interests: for instance, aligning private investment with public investment to encourage governments to develop IT and telecommunication infrastructure and prevent a private company from fully controlling Internet access, or setting criteria for providing Internet access, such as ensuring that users are not forced to access the Internet via a portal imposed by a private company.
Silence about VR
Although we are at the cusp of a VR revolution, with many implications for child safety, neither company mentioned VR or their initial reflections on child safety in VR spaces. When COFACE asked whether Facebook intended to consult with civil society and child protection NGOs in advance, the response was that it was “too early” for such discussions. A day earlier, Facebook had hosted its F8 conference, presenting social VR, which enables two or more people to visit real places in VR as avatars, and hinting at “face scanning” technology to create realistic-looking avatars.
This is very surprising given that the first Oculus Rift devices are set to ship in the coming weeks and that children all over the world might start to experience VR in the households equipped with an Oculus device or even a VR capable smartphone coupled with a compatible Google Cardboard or Samsung Gear VR headset.
So far, VR has shown great promise in clinical and research settings, helping to develop empathy or fight addiction. VR is so immersive that it successfully tricks the brain into believing the experience is real, which is also why simulated experiences have a very “real” impact on users. Of course, no research or clinical trial will ever attempt to traumatize users as an experiment, but one can only assume that since VR has such a “positive” impact in experiences aimed at resolving issues like post-traumatic stress disorder, it can have a similarly powerful “negative” impact.
Artificial Intelligence to the rescue
The ongoing fight against child abuse has already received much help from technologies such as PhotoDNA or video hashing, enabling machines to identify and quickly take down known images or videos of child abuse. Advances in artificial intelligence (AI) might, in the near future, make it possible to recognize previously unreferenced content which might portray child abuse, thus tackling one of the most problematic aspects of fighting child abuse: keeping up with the constant upload of “new” child abuse material.
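The matching step behind such systems can be illustrated with a small sketch. Note that SHA-256 below is only a stand-in for a robust perceptual hash such as PhotoDNA, which still matches an image after resizing or re-encoding; a cryptographic hash does not, so this is a structural illustration rather than a working detection system.

```python
import hashlib

# Database of fingerprints of known abuse material (illustrative only)
known_hashes = set()

def fingerprint(data: bytes) -> str:
    # Stand-in for a robust perceptual hash such as PhotoDNA.
    # A real perceptual hash tolerates re-encoding; SHA-256 matches
    # only byte-identical files.
    return hashlib.sha256(data).hexdigest()

def register_known(data: bytes) -> None:
    """Add a known item's fingerprint to the reference database."""
    known_hashes.add(fingerprint(data))

def should_block(upload: bytes) -> bool:
    """Check an upload against the database of known fingerprints."""
    return fingerprint(upload) in known_hashes
```

The AI advances mentioned above would complement this lookup by scoring content that matches no known fingerprint at all.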
But AI could also have many applications outside the scope of child abuse and copyright infringement. Terrorism, hate speech, cyberbullying, even suicidal thoughts could be picked up and flagged by AI. One interesting application would be to display a message to potential victims of cyberbullying or hate speech, prompting them to report the content or asking whether they need help. At this stage, there are many challenges for this to happen:
– The priority given to the development of such an AI vs. all the other potential applications (like identifying the contents of pictures…).
– Legal barriers to keeping sufficient amounts of data (such as millions of messages flagged as being cyberbullying or hate speech) to “train” the AI. Under current regulations, companies like Facebook or Google are not allowed to keep any data from users that has been flagged and deleted because it infringed upon their community guidelines (save for very sensitive data such as elements of proof for a criminal investigation).
– The complexity of training such an AI since it has to rely on a very wide number of parameters and learn to differentiate, based on the context, whether a message is meant as a joke or as a real threat or insult.
Nevertheless, given the effort to develop algorithms which display “context appropriate” ads, investing in the development of an AI which could pro-actively prompt victims of hate speech or cyberbullying for help or flag content early for moderators to review could be a game changer in the fight against online harassment, cyberbullying, hate speech etc.
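As a minimal illustration of the idea, flagged messages could be routed to human moderators while potential victims are prompted for help. The marker list and function names below are invented for this example; as the text notes, a real system would rely on a trained model and contextual signals rather than keyword matching, which cannot tell a joke from a threat.

```python
# Hypothetical marker list for the sketch; a production system would use
# a model trained on large amounts of labelled data, not keywords.
ABUSIVE_MARKERS = {"loser", "kill yourself", "nobody likes you"}

def flag_for_review(message: str) -> bool:
    """Crude first-pass filter: does the message contain a known marker?"""
    text = message.lower()
    return any(marker in text for marker in ABUSIVE_MARKERS)

def triage(messages):
    """Collect suspect messages so moderators can review them early
    and recipients can be prompted to report or seek help."""
    return [m for m in messages if flag_for_review(m)]
```

Even this toy version shows the intended workflow: pro-active flagging for moderators, paired with a prompt to the potential victim.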
More services designed for kids
As more and more actors, including NGOs and policy makers, worry about the exposure of children to inappropriate content, commercial exploitation, online abuse and other dangers, especially on large online communities designed for adults, services designed for children are starting to emerge.
YouTube Kids is a good example of such a trend. COFACE has been in favor of such a development, especially for very young children, making sure that they have access to quality, positive content which is age appropriate in a safe setting.
However, there are still a number of issues which need to be resolved:
– The choice of the business model behind such services is even more critical and sensitive than for “regular” services. While making parents pay a monthly fee bears the risk of excluding the most deprived and vulnerable, a model based on advertising has to abide by very strict rules to ensure that the balance between commercial content/advertising and “regular” content is fair and proportional. At the time of writing, YouTube Kids is still struggling with “grey” zones in terms of content. While “formal” advertisements have to comply with national legislation on what can be advertised to children (no food advertising, for instance), the user-generated videos themselves feature many unboxing videos of toys or stories/cartoons based on commercial content (for instance, Hotwheels cars). One potential solution is to “flag” such content as “commercial” so that children understand there may be a commercial objective behind it. Creating a separate category for such “commercial” content should also be envisaged, as having a “Hotwheels” video in the “learning” category is greatly misleading!
– The “sorting” algorithm which decides what is featured on the home page and in search results needs to be tweaked to give more visibility to “positive” content based on a number of criteria (a possible link could be made with the POSCON project). At the moment, a search for “puppies”, for instance, returns many videos featuring unboxings of puppy-themed products and toys. Although these videos do correspond to the search term, they should not appear in the first few pages of results.
– Parental controls on YouTube Kids could be enhanced further, giving parents the possibility to select which “types” of videos should be more visible on the YouTube Kids homepage. For instance, some parents would prefer “learning” videos to appear more prominently than other categories.
Users in control
Once again, the danger of algorithms restricting user experiences was raised. This is especially important for children, as it may hamper their development: children benefit from content that “challenges” their views rather than being shown only what they already like, which creates a “filter bubble” and, in the longer term, a “filter tunnel”.
User control over algorithms is therefore of utmost importance: users should be able to decide whether they prefer an “unfiltered” news feed which sorts content by date, or a “neutral” search result which does not rely on their previous searches.
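The choice between an unfiltered and a personalised feed could be exposed to users along these lines. The field names and the affinity score below are illustrative assumptions, not any network’s actual API; the point is simply that the sorting criterion becomes a user-facing switch rather than a hidden default.

```python
def build_feed(posts, mode="chronological", affinity=None):
    """Sort a feed either by date (unfiltered) or by a per-author
    affinity score (a stand-in for engagement-driven personalisation)."""
    if mode == "chronological":
        # Newest first: no profiling, no previous-behaviour signal
        return sorted(posts, key=lambda p: p["posted_at"], reverse=True)
    # "personalised": rank by how much the user engages with each author
    affinity = affinity or {}
    return sorted(posts, key=lambda p: affinity.get(p["author"], 0), reverse=True)
```

Exposing `mode` as a setting is exactly the kind of user control over algorithms argued for above.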
User control has already been enhanced in many ways, by boosting control over privacy settings or providing more granularity for parents in setting up parental controls. In that area as well, more can be done. For instance, parents could benefit from parental control restrictions which only display apps without advertising or with a transparent pricing policy. While Google has added a feature which blocks in-app purchases and requires a parental PIN, the poor cost transparency of games based on in-app purchases may push parents to filter them out completely (COFACE has been advocating for more pricing transparency, for instance by displaying the “average” cost of a game based on players’ spending patterns and time spent playing).
Finally, user control depends greatly on other things, such as the quality of content ratings and classification. One participant rightfully pointed out that the “Movie Star Planet” app is rated 3 and above on the Google Play Store, even though it allows interactions between users (children) with much “girlfriend” and “boyfriend” talk (bordering on sexual talk) and a potential grooming issue. Google sets ratings based on a publisher’s responses to a series of questions. For instance, if the publishers of “Movie Star Planet” state that there is user interaction but that their app includes human monitoring and moderation, the system automatically assumes the app is safe for kids. But this relies on publishers answering the questions correctly and in good faith. Further reflection is thus needed on how to ensure that apps, and content in general, are classified correctly.
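The questionnaire-driven rating logic described above can be sketched as a simple set of rules. The question keys and age thresholds here are hypothetical, not Google Play’s actual criteria, but they show how a self-declared “human moderation” answer can lower the suggested rating, which is precisely the trust problem raised by the participant.

```python
def suggested_min_age(answers: dict) -> int:
    """Derive a minimum age from a publisher's self-declared answers.
    Keys and thresholds are invented for illustration."""
    age = 3
    if answers.get("user_interaction"):
        # The system trusts the publisher's claim of human moderation,
        # which keeps the rating low even for social apps.
        age = 3 if answers.get("human_moderation") else 13
    if answers.get("sexual_content"):
        age = max(age, 16)
    return age
```

Because the output hinges entirely on unverified self-declarations, a publisher ticking “human moderation” obtains a child-friendly rating regardless of actual practice.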
Education, not a silver bullet but…
While everyone agrees education is not a silver bullet, it certainly looks very much like one… The resounding quote from Jim Gamble (INEQE) still sticks in my mind: “it’s not about context, it’s about people”. In essence, if we “change” people, educate them and inform them, then no matter how much of a mess the Internet is, no harm could ever come to them. The “context” is like a huge mountain: it is pointless to try to dig a tunnel under it, so we should focus on learning how to climb it. Or should we?
It goes without saying that education is very important, but for every effort to educate children and adults alike, a similar effort needs to be made on safety by design, privacy by design, rethinking how online services operate.
There are also limits to what companies can educate children about. For instance, teaching cyberbullying prevention poses no issue at all, since it is in the interest of companies to minimize incidences of cyberbullying on their services. On the other hand, when advertisers teach media literacy and critical thinking, especially about advertising, there are strong reasons to question the quality and impact of the educational material. MediaSmart, for instance, was developed by the World Federation of Advertisers. It features many lesson plans centered on having children design and develop their own ads, an activity which shines a positive light on ad-making. Furthermore, the only “real” ads proposed for analysis are all very positive, while the “negative” examples of misleading ads are all fictional. COFACE has developed a media literacy tool for parents, Nutri-médias, which covers in much greater depth the different advertising techniques, real examples of advertising and the controversies surrounding them (gender stereotypes, healthy eating habits…).
All in all, for certain educational activities such as learning how to code or how to develop empathy, prevent cyberbullying, there are no issues in having civil society, NGOs, private companies and public authorities working together. On other topics such as Big Data and the potential impact on society or how online business models contribute to shaping the Internet, independence of NGOs and governments is key as private companies have a vested interest in presenting these topics in a certain light.
Self-regulation and co-regulation
To finish, the two-day event did convey the message that both Facebook and Google are open to suggestions and welcome criticism on how to make their online services safer for children. MEP Anna Maria Corazza Bildt underlined that working with private companies is key to ensuring that children stay safe online, especially on questions like the processing of children’s data for commercial purposes and children’s exposure to commercial content.
COFACE welcomes any such initiative but insists that a legal backstop and a proper, independent monitoring and evaluation of progress made are a necessity.
More information about the event is available here and on Facebook.
EU Survey on ‘Cyberbullying among young people’
The European Parliament has commissioned the law and policy consultancy Milieu Ltd to deliver the ‘Research Paper on cyberbullying among young people’. The aim of this paper is to provide information on the scale/nature of cyberbullying among young people in the EU and on Member States’ legislation and policies aimed at preventing and tackling this phenomenon as well as on good practices in this area.
In the framework of this research, Milieu Ltd has contacted us to help them spread the word and disseminate the EU Survey on ‘Cyberbullying among young people’. The purpose of the survey is to collect the views of young people (between 12 and 21 years old) on cyberbullying and to test the good practices and recommendations identified through research at national level.
If you are between 12 and 21 years old, we invite you to fill in the survey (available in 10 languages):
Greek : http://goo.gl/forms/7heEEFYzhD
Thanks a lot for your cooperation!
A Lawyer’s View of Cyberbullying
When Mike Misanelli, a Philadelphia radio host, Tweeted Sunday, “Hey Giants fans, Victor Cruz is over. Dance to that,” many Giants’ fans started calling for Misanelli’s job.
Heeeeelzfan Tweeted, “Hopefully, we’ll be hearing soon from management at 97.5, ‘Mike Misanelli is over.’”
Misanelli’s tweet, sent when Giants’ player Victor Cruz was injured, stirred a vigorous online debate that quickly turned into a discussion of cyber-bullying.
Misanelli’s tweet may not rise to the level of what is traditionally considered to be cyber-bullying, but it did create a conversation and an opportunity to look at the latest form of bullying made possible by technology.
Cyberbullying is the use of technology to harass, threaten, verbally abuse and/or humiliate another person. Often, it can rise to the level of cyber-stalking, a crime.
Over 15 percent of high school students report having been cyberbullied, and 6 percent of middle school students say they have been on the receiving end of cyber-bullying.
Some people find it humorous to post embarrassing images of a friend. The consequences are anything but laughable. From the victim’s perspective, it is impossible to remove the image, and if the content goes viral, no one has any control over it. Victims of cyber-bullying may suffer from low self-esteem, isolation and school or job problems.
The perpetrator can suffer consequences as well. In this digital age, that posted image may show up in a screening when they apply for college or a job. The bully may also be charged with a crime if sexual content was involved, and may have to register as a sex offender. Those are the kinds of things that don’t disappear when the laughter stops.
Traditionally, bullies tend to pick on socially isolated people with few friends. Cyber-bullies, by contrast, tend to attack close friends or people in a comparable social circle. As a result, cyber-bullying can make a person feel isolated from their friends and make it difficult to enter a romantic relationship.
Law enforcement officials frequently have a difficult time in determining their role in dealing with bullying. Social networking and advances in communication tools complicate the issue. Historically, bullying happened inside, or close to, a school or neighborhood. Technology allows today’s bullies to extend their reach and their threat.
Law enforcement officers assigned to schools will almost certainly encounter some form of cyber-bullying. A survey of law enforcement leaders attending the FBI National Academy (FBINA) in Quantico, Virginia showed that 94 percent of school resource officers (SROs) feel that cyber-bullying is a serious problem that calls for law enforcement intervention. Seventy-eight percent said they had conducted one or more cyber-bullying investigations during the previous school year.
In 1998, a middle school boy created a website that threatened his algebra teacher and his school principal. According to a white paper published by Bucknell University, the school permanently expelled the student over the threats and harassment.
Another case happened in 2003, when a 14-year-old boy received unwanted Internet attention after a video showing him dressed in a Jedi knight costume went viral. The boy was attacked by classmates who encouraged him to kill himself.
Suicide Resulting from Cyberbullying
In 2006, a girl named Megan Meier committed suicide after a classmate, and the classmate’s mother, created a fake online persona and used the account to send her hateful comments. Federal prosecutors took on the case and tried the mother-daughter team. A jury found the mother guilty of a single felony count of conspiracy and three misdemeanor counts of unauthorized computer use, but the convictions were later overturned on appeal.
Jessica Logan committed suicide in 2008 after nude images of her were circulated by students in Cincinnati. Logan’s family was awarded a settlement of over $150,000 in 2012. Ohio legislators later passed a law encouraging Ohio schools to increase teacher training in an effort to combat cyber-bullying. The law was called The Jessica Logan Act.
Prevention and Repercussions
Following the Columbine massacre in 1999, anti-bullying statutes became widespread. States have continued to pass laws requiring districts to establish, and follow, strict policies on cyber-bullying. According to the National Conference of State Legislatures, 34 states passed anti-bullying laws between 2005 and 2010. Now, when cyber-bullying includes a threat of violence, stalking, or sexually explicit photos or messages, it rises to criminal behavior. Victims should file a report with local law enforcement.
About the author
Arkady Bukh is a nationally recognized attorney from New York who represents clients in high-profile cases, like the Boston Marathon bombing or famous cybercrime cases. Arkady is also a published author and contributor to the NY/NJ Law Journal, LA Daily Journal, Law 360, Westlaw, Thomson Reuters, Nolo, and many other media. More
#DeleteCyberbullying App is available now!
Are you a worried parent, fearing your child may be cyberbullied or may be cyberbullying someone?
Or a teacher who wants to explore the topic of cyberbullying in class?
Are you a teenager who has received some nasty text messages or witnessed cyberbullying?
Download our free, interactive app, which contains:
– An interactive quiz for teenagers, parents and teachers that displays customized feedback based on the responses and redirects the user to the most relevant information sources, material or help in case he or she has experienced cyberbullying.
– A quiz to test your knowledge about cyberbullying and the internet in general, with the possibility to share your score on Facebook and get more information about cyberbullying.
– A “one touch” button for help in case the user is in need of direct assistance.
– An awareness-raising video embedded in the app (English) or on YouTube (multiple languages available).
– A survey for teachers to help better understand their experience and expectations regarding cyberbullying.
– A section with more information about the project and the app.
Read more: goo.gl/9dLqhL
#NOTagsWithoutPermission is a campaign asking social networks to modify their policy regarding tags on pictures.
This campaign is promoted by FriendlyScreens in response to a clear need affecting the privacy of social network users, particularly children and adolescents. It is also a natural step on the road to a safer “Internet environment” for children.
The campaign seeks public support to convince the companies responsible for social networks to request permission before users are tagged in pictures.
A #NoTagsWithoutPermission Story: Cyberbullying via tagging on social networks:
FriendlyScreens’ YouTube Channel
FriendlyScreens is the English version of PantallasAmigas, a Spanish initiative whose mission is to promote the safe and healthy use of new technologies and to foster responsible digital citizenship among children and teenagers.
More updates on : @dcyberbullying Scoop.it Facebook
On the 1st of March we had the first partners’ meeting of our Daphne project: Awareness raising project on Cyberbullying in Europe. As this name is quite difficult to remember, we have called the project “Delete Cyberbullying”, and will refer to it this way from now on.
The first milestone of the campaign is the European conference on the 28th of May. We are looking for participants from NGOs, municipalities or companies who have experience with this issue and would like to share it with us. The aim is to build on existing materials, guidelines and toolkits rather than replicate them.
In the second half of the project we will, in parallel, organise an online campaign as well as working group meetings to develop educational tools for parents, teachers and young people. Please follow this blog for further updates on the project.