On the 24th of November, the INSAFE/INHOPE/BIK network and the European Commission’s DG CNECT organised the 2016 edition of the Safer Internet Forum in Luxembourg under the theme “Be the change”.
The conference brought together a variety of stakeholders including young people, parent and teacher representatives, industry and government policy makers and civil society organisations to discuss the ongoing challenges of achieving a “Better Internet for Kids”. As one in three Internet users is a child, it is essential to come up with sustainable strategies to tackle issues such as harmful content, commercial exploitation and cyberbullying.
Javier Hernandez-Ros, acting Director of DG CNECT, emphasized the importance of following up on the challenges and ideas identified during the Safer Internet Forum through the Alliance to better protect minors online, a DG CNECT multistakeholder group which will start its work next year and aims to address the challenges children face online.
Mary Aiken, researcher at University College Dublin, followed with a keynote speech based on her book “The Cyber Effect”, which presents findings from cyber-psychology and from behavioural and child development research in relation to technology in an accessible way.
Some of her most powerful messages include:
- The necessity of informing policy making through quality peer-reviewed studies in the emerging fields of cyber-psychology and child development/behaviour, conducted by independent sources for the public good/general interest.
- Develop guidelines for the use of ICT and the Internet based on research. Examples include banning screens for babies aged 0 to 2 years old and creating a “safe space” for young children online.
- Add “cyber rights” to the United Nations Convention on the Rights of the Child.
- Review the Internet Governance process to ensure child protection is a priority.
- There can be no trade-offs between privacy, security and the vitality of the tech industry. All three issues need to be addressed equally, without one taking precedence over the others.
The commercialization of childhood
This panel session brought together academia, civil society and industry representatives to discuss the growing exposure of children to commercial content and commercial solicitations online. Two researchers from the University of Ghent underlined a number of important research findings, especially the fact that children have a hard time recognizing certain “new” types of online advertising techniques, and that “labels” like the “PP” label used to flag product placement are not very effective in signaling to children that certain content includes advertising.
John Carr from eNACSO stressed that while regulation has cracked down on many immoral advertising practices in the real world, such as paying children to talk to their peers about a product in real life, regulation has lagged behind on online advertising. While the EU Commission has relied on self-regulation, the results and impact of such self-regulation in limiting children’s exposure to advertising are less than convincing. Children should have a charter of economic rights and not be tagged simply as “vulnerable consumers”. This could potentially be achieved via a revision of the EU’s Unfair Commercial Practices Directive.
Martin Schmalzried from COFACE-Families Europe shared two recommendations on how to limit children’s exposure to advertising online:
- New “indicators” need to be developed to enable children and parents to choose services which adopt a fair and responsible advertising policy. One such indicator is the ratio between advertising and “native” content: how many posts out of 10 are advertising? What “surface” of the screen or webpage is covered by advertising? Users need indicators to compare how various online services, platforms and content providers fare in displaying advertising.
- Regulators should not rule out banning certain advertising techniques. As John Carr underlined, regulators have banned certain advertising techniques in the real world on several occasions on the grounds of unethical/unfair practices. There is no reason why online advertising should be exempt from such regulation.
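As an illustration, the advertising-prevalence indicator suggested above could be computed very simply. The sketch below is purely hypothetical: the post structure and the `is_ad` field are assumptions for illustration, not an existing standard or API.

```python
# Hypothetical sketch of an "advertising prevalence" indicator:
# the share of sponsored posts in a feed, comparable across services.

def ad_prevalence(posts):
    """Return the fraction of posts in a feed that are advertising."""
    if not posts:
        return 0.0
    ads = sum(1 for post in posts if post.get("is_ad", False))
    return ads / len(posts)

# Example: a feed where 3 posts out of 10 are sponsored.
feed = [{"is_ad": i in (2, 5, 9)} for i in range(10)]
print(ad_prevalence(feed))  # 0.3
```

A comparable metric published per service would let families see at a glance that one platform shows 3 advertisements out of every 10 posts while another shows only 1.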
BIK to the future
The final panel session of the Safer Internet Forum looked to the future and the challenges ahead. Our colleague Martin Schmalzried presented 7 key areas of focus that will need to be addressed in the future:
1-Virtual reality
Exposure to VR will increase substantially over the coming years as the cost of the technology drops. Some of the issues include:
- Harassment/cyberbullying: the first instances of “virtual groping” have surfaced on the Internet in recent weeks. The negative effects of cyberbullying, harassment and any form of harmful content/contact will be multiplied in VR settings due to the increased realism and immersion of VR. Studies have already shown that VR can be used successfully to treat post-traumatic stress disorder and to boost empathy. The opposite is therefore very likely true as well: it can deepen trauma and desensitization.
- Child pornography and child abuse may also move to VR as the combination of VR with connected sex toys and haptic feedback devices will greatly increase “realism”.
- The collection of data in VR will raise new questions about privacy. The data generated by users could be used for advertising in VR, for instance, as advertising has proven to be more effective in VR environments.
- Physical problems related to VR are also likely to emerge, such as eye strain, impaired depth perception (if used by young children), or injury from colliding with a “real world” object while immersed in VR.
2-Algorithms
Many controversies have surfaced about algorithms lately, notably the “filter bubble” effect and the viral nature of “fake news”. Algorithms can help in tackling several problems including cyberbullying, hate speech and identifying fake news, but this requires a willingness from companies to develop such solutions.
Algorithms will also require increased accountability mechanisms, such as independent audits, to prevent discrimination or unfair “humanless” decisions from being carried out. Without human judgment and interpretation, algorithms are useless and may create more problems than they solve. Take “predictive policing” algorithms as an example. While they may be successful in fighting crime, for instance by identifying the neighbourhoods where a crime is most likely to happen, the “lessons” learned from such an algorithm need human interpretation. Are “blacks and latinos” more likely to be criminals, or rather, are all humans struck by poverty, discrimination, desperation and exclusion more likely to commit crime? The implications of the interpretation are highly important: in the first case, one may decide that the solution is to build more prisons; in the second, that the solution is to fight inequalities and discrimination.
Finally, algorithms deserve their own “liberalization”, moving away from the “monopoly” and control of their current owners. The data held by Facebook and Google is simply a database of billions of rows and columns which could be “searched” and “ranked” by any algorithm, not just the “proprietary” algorithms of Facebook and Google. Allowing third parties to propose “custom” algorithms might help solve many of the issues discussed above, such as “filter bubbles” and “fake news”.
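The idea of “liberalizing” ranking algorithms can be sketched with a pluggable ranking function: the platform holds the data, but the user chooses which algorithm orders their feed. The example below is purely illustrative; the post fields and the three ranking strategies are assumptions, not any platform’s actual implementation.

```python
# Illustrative sketch: the same data ranked by interchangeable,
# user-chosen algorithms instead of a single proprietary one.

def rank_chronological(posts):
    """Newest first: a simple, transparent timeline."""
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

def rank_by_engagement(posts):
    """Most-liked first: roughly what proprietary feeds optimise for."""
    return sorted(posts, key=lambda p: p["likes"], reverse=True)

def rank_diverse(posts):
    """A "filter bubble"-breaking ranker could surface what the user
    sees rarely; least-liked first serves here as a crude stand-in."""
    return sorted(posts, key=lambda p: p["likes"])

posts = [
    {"id": 1, "timestamp": 400, "likes": 50},
    {"id": 2, "timestamp": 300, "likes": 5},
    {"id": 3, "timestamp": 200, "likes": 20},
]

# The user, not the platform, picks the ranking algorithm.
for ranker in (rank_chronological, rank_by_engagement, rank_diverse):
    print([p["id"] for p in ranker(posts)])
```

The same underlying posts come back in three different orders; opening that choice to third parties is what separates a “custom” feed from a “proprietary” one.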
3-Online business models
The current online business models also bear much of the blame for the “harmful” content and “fake news” available. Fake news sites rely heavily on advertising revenue, and the advertising often takes up more space than the “content” of the fake news article itself. Users do not understand “new” business models relying on user data, “in-app purchases” or “freemium” offers. In the past, economies of scale meant that the more users bought a good, the cheaper it was to produce and the cheaper it could be sold, greatly benefiting consumers and society as a whole. Online, this system is broken. Normally, as more and more users subscribe to Facebook, the prevalence of advertising should drop, since Facebook should be able to “sell” its services for less advertising. But the opposite has happened: there is more and more advertising on Facebook, YouTube and many other online platforms. Because users do not understand these business models, such services can get away with extracting ever more money from users’ time (since their revenue is generated by people’s time spent looking at advertising) instead of lowering their “price” (advertising prevalence), as would normally happen in a healthy competitive environment in the real world.
The same holds true for many other forms of digital services and content. Apps don’t get cheaper as more people buy them, even though the cost of developing them stays the same!
4-Digital citizenship
More and more, we hear the term “digital citizen”, a “sexy” way to describe contemporary Internet users. However, the words “citizenship” and “citizen” are ill-chosen; the term should rather be “digital subject”. Indeed, citizenship implies that a person has a right to vote on or influence the rules and laws by which he/she is governed. On the Internet, most if not all online service providers function not as democracies but as monarchies, with terms of service and community standards written by the owners and with little to no “rights” for their users, only obligations.
5-Artificial intelligence
Deep learning and machine learning have seen many breakthroughs in the last decade, and many more are coming. The impact on our societies should not be underestimated: some are already talking about “labour displacement” or even a permanent loss of available jobs. Humans generate more and more data through everyday mobile phone use, Internet surfing habits and emerging technologies such as VR and the Internet of Things. All this data, if structured properly, can be used to accelerate the development of AI and machine/deep learning. As the saying goes, “children are great imitators, so give them something great to imitate”; this is even truer for AI and machine/deep learning: AI is only as good as the data it works with!
6-Terrorism and radicalization
Terrorism, support of terrorism, radicalization and online recruitment have been high on the agenda of policy makers, especially since ISIS/ISIL emerged and social media were widely used to propagate its messages and rhetoric. The “easy” response has been to ask for increased filtering, take-down or outright censorship of any content promoting or supporting terrorism in one way or another.
Fighting such messages is difficult not only because new social media accounts from which they are shared are created every day, but also because terrorists are moving to communication technologies which are harder to trace, monitor or censor, such as the private messaging app Telegram.
Unfortunately, focusing on censorship is like sweeping dust under a rug: it might help in the short term, but in the long term it is counter-productive. ISIS/ISIL’s emergence is strongly linked to Europe’s colonial history and recent US imperialism. Their propaganda is successful because it builds on accurate historical facts which have been ignored, minimized or even denied in our societies. Terrorism is also linked to poverty, social exclusion and the desire for vengeance for the death of loved ones (as is often the case between Palestine and Israel). The priority should be preventing terrorism by addressing inequalities and social exclusion, and by bringing to justice those responsible for the death of innocents, often in the name of human rights and democracy but in reality serving other interests.
7-Cybersecurity
News about yet another data breach and theft of millions of credit card details, user account records and the like surfaces more and more often. For cybersecurity to be successful in the future, it will have to be treated as a public good. The open source movement is a model in this respect: with a strong community of voluntary and engaged security researchers, open source software such as GNU/Linux stays highly secure. Proprietary security solutions, by contrast, rely on hiding code and hoping that no one will find a vulnerability or security flaw; time and again, this has proven wildly ineffective. Even “new” technologies such as blockchain are based on crowdsourced security, as breaking one would require, among other things, controlling at least half of the computing power securing the blockchain.
For more information about the Safer Internet Forum, please visit the official website here: https://www.eiseverywhere.com/ehome/202903/456936/
For any questions, contact Martin Schmalzried (COFACE-Families Europe): firstname.lastname@example.org
On the 10th of October, marking World Mental Health Day, Mental Health Europe held a conference on mental health in the digital age. Experts from industry and representatives from civil society, including Youth Mental Health Ambassador Nikki Mattocks, gathered to share expertise and experience on how to protect and improve youth mental health online.
COFACE-Families Europe was represented on the panel by Martin Schmalzried, who presented the #DeleteCyberbullying project and lessons learned.
The #DeleteCyberbullying project ended in 2014 with key deliverables such as an Android app, an awareness-raising video, an online virtual march and the outcomes of a European conference on cyberbullying. Besides the expertise gathered on how best to tackle cyberbullying, one very interesting lesson came from the comments left by users on the awareness-raising video, which reflected the many “myths” surrounding cyberbullying in the minds of ordinary users, showing that we are still a long way from ensuring that end users understand the phenomenon and are equipped to respond adequately.
Some of the most important “myths” surrounding cyberbullying include:
- The belief that you can simply turn off the technology on which you experience cyberbullying, or disconnect and close your online accounts. In reality, not only does the cyberbullying continue, but it is even worse, as you have no idea how many hateful messages or humiliating pictures of you are being circulated behind your back. Even a child who does not use technology at all can be a victim of cyberbullying, for example if a bully opens a “fake” account using humiliating photos of that child.
- Over-simplifying the solution to a single action like blocking the bully. While blocking is indeed part of the response to cyberbullying, it is by no means an all-encompassing solution; as explained above, cyberbullying can also happen behind a person’s back.
- “Everyone gets cyberbullied, don’t be such a pussy and toughen up”: the idea that cyberbullying, or bullying for that matter, is simply part of “life” and one has to toughen up. While it is true that the line between “teasing” and “cyberbullying” is subjective, this belief virtually legitimizes all forms of bullying/cyberbullying, including the most serious, even criminal, forms (like sharing sexual material of underage children to humiliate them). A healthy society shouldn’t be built on the premise that everyone will get bullied; it should rather strengthen social skills, including social and emotional learning and the development of empathy, to prevent such actions in the first place. Finally, it is always easy and convenient for the wolf to advise the sheep to “grow some teeth”.
- “Asking the bully to stop will only make things worse”. This may very well be the case. Nevertheless, if the bullying/cyberbullying becomes unbearable and the victim seeks external help from a higher authority like a teacher or the police, the very first thing they will be asked is whether they have told the perpetrators that their actions are hurtful and that they should stop. This step should therefore be seen as a precondition for seeking further help rather than an end in itself.
Finally, Martin Schmalzried underlined that, with cyberbullying getting worse according to the statistics from the latest LSE study, policy makers need to envisage broader measures than education alone. The online environment also plays a role in the uptake of cyberbullying. Online service providers treat their users like subjects rather than citizens, with no right to agency over the services they are using. Moderation is taken out of users’ hands and managed by an obscure cloud of professional moderators who cannot possibly respond to every cyberbullying situation in a timely fashion, busy as they are taking down the content which might create legal trouble (copyrighted material, child abuse/exploitation/pornography…).
COFACE-Families Europe has been calling for community-based moderation, where users themselves have a right to act on and shape the services they are using. This might not only help curb cyberbullying and hate speech; it is a fundamental necessity for cultivating values of democracy, deliberation, participation and compromise, as it requires a community to debate and agree on the rules by which it is governed. Successful examples of community-based moderation include Wikipedia, which has been built and populated by users themselves. As a final point, it should be stressed that community-based moderation is by no means the same as counter-speech, which simply consists of “support” messages to a victim without any right to agency or participation in governing online services.
More information about the event here.
It’s happening. COFACE has long been asking for new strategies to fight hate speech and cyberbullying, including the use of algorithms and Artificial Intelligence to assist human moderators, and recourse to community-based moderation where users can vote on taking content down.
Facebook has been steadily developing and training algorithms to help identify offensive photos; at this point, algorithms report more potentially offensive photos than humans do! Twitter, which has been attacked in the past over its lackluster ability to fight abuse, has also invested in Artificial Intelligence to help weed out abusive content. The help and assistance of AI is welcome, especially since relying on human moderation alone comes with many problems, such as slow reaction times and even negative psychological consequences for the human moderators who are forced to look at the worst content humanity has to offer.
Progress in the development of such algorithms could benefit all online service providers, as Facebook has vowed to share its findings more broadly.
On the other hand, Periscope (owned by Twitter) is rolling out another approach to moderation, adapted to live streaming with its instant feedback and user interaction: a form of live community-based moderation. Viewers will be able to immediately report a comment they deem abusive during a live stream. The app then randomly selects a “jury” from within the audience, who vote on whether the comment is abusive or inappropriate. Should the “jury” vote to censor the comment, its author will temporarily be unable to post. If the author repeats the offense, he/she will be blocked from commenting for the rest of the live stream.
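The jury mechanism described above can be sketched in a few lines. This is a simplified illustration of the flow as reported, not Periscope’s actual implementation: the jury size, the simple-majority rule and the two-strike penalty are assumptions for the sketch.

```python
import random

# Simplified sketch of live "jury" moderation: a reported comment is
# judged by a random sample of viewers; a majority verdict mutes the
# author temporarily, and a repeat offence blocks them for the stream.

def select_jury(viewers, jury_size=5):
    """Randomly sample a jury from the current audience."""
    return random.sample(viewers, min(jury_size, len(viewers)))

def handle_report(author, jury_votes, offences):
    """Apply the jury's verdict: first offence mutes, second blocks."""
    abusive = sum(jury_votes) > len(jury_votes) / 2  # simple majority
    if not abusive:
        return "no action"
    offences[author] = offences.get(author, 0) + 1
    return "temporarily muted" if offences[author] == 1 else "blocked for stream"

viewers = ["viewer%d" % i for i in range(20)]
jury = select_jury(viewers)          # 5 random audience members
offences = {}
print(handle_report("alice", [True, True, False, True, False], offences))
# first majority verdict: "temporarily muted"
print(handle_report("alice", [True, True, True, False, False], offences))
# repeat offence: "blocked for stream"
```

The appeal of this design is speed: the verdict comes from people already watching the stream, within seconds, rather than from a distant moderation queue.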
Such initiatives are long overdue; COFACE welcomes their introduction and will closely follow their development, hopefully contributing to a better online environment for children and families.
The European Parliament has commissioned the law and policy consultancy Milieu Ltd to deliver the ‘Research Paper on cyberbullying among young people’. The aim of this paper is to provide information on the scale/nature of cyberbullying among young people in the EU and on Member States’ legislation and policies aimed at preventing and tackling this phenomenon as well as on good practices in this area.
In the framework of this research, Milieu Ltd has contacted us to help them spread the word and disseminate the EU Survey on ‘Cyberbullying among young people’. The purpose of the survey is to collect the views of young people (between 12 and 21 years old) on cyberbullying and to test the good practices and recommendations identified through research at national level.
If you are between 12 and 21 years old, we invite you to fill in the survey (available in 10 languages):
Greek : http://goo.gl/forms/7heEEFYzhD
Thanks a lot for your cooperation!
On the 12th, 13th and 14th of October, the ENABLE project (European Network Against Bullying in Learning and Leisure Environments) held a Hackathon prize ceremony and an ambassador training in London.
Launched in June 2015, the ENABLE Hackathon called on young people to propose initiatives and ideas on how to facilitate conversations about cyberbullying and bullying between young people, help them understand the phenomenon and find creative solutions to deal with this problem.
The Hackathon ceremony brought together 6 winning teams from all over the world to present their initiatives and ideas such as mobile applications, peer support programmes, awareness raising campaigns and many more.
The ENABLE Ambassador Training took place on the 13th and 14th of October, following the ENABLE Hackathon, and was attended by teachers from Greece, the United Kingdom, Croatia and Denmark. During the two-day training, teachers were guided through the ENABLE project’s key deliverables and shown how these can be used directly in schools to help students develop key social and emotional skills such as empathy, increased self-awareness, communication skills, active listening and a sense of responsibility.
The Social and Emotional Learning lesson plans include 10 lessons aimed at combatting bullying in the school environment by increasing emotional intelligence in young people aged 11-14. They cover fundamental topics such as the construction of an individual’s identity, understanding bullying, and enhancing peer support and peer mentoring.
Martin Schmalzried from COFACE attended the training as a member of the ENABLE Think Tank, building on the lessons learned from the #DeleteCyberbullying project to provide advice on the resources developed within the ENABLE project.
For more information about the ENABLE project, visit the ENABLE website here
The rapid rise of bullying has been recognized as one of the most concerning phenomena of the last decade. In response to this, KMOP (Family and Childcare Centre) in Greece has developed and launched its new programme “Live Without Bullying”.
“Live Without Bullying” is a pioneering and innovative programme for Greece. At its heart lies an electronic platform where children and adolescents facing bullying problems may seek help and support from peers who receive special training to become online mentors. These peer mentors are constantly supervised by professional psychologists and chat administrators. Moreover, educators and parents have a separate forum on the platform for exchanging views and getting advice from psychologists. Rich informational and educational material on school- and cyber-bullying is always available to users in the form of multimedia content through an electronic library.
All the mentoring sessions, which take place online, are free and anonymous, thus encouraging children and adults to express their worries and seek support from the comfort of their own home.
The programme is being implemented in cooperation with the Adolescent Health Unit of the 2nd Paediatric Clinic of the University of Athens, “P. & A. Kyriakou” Children’s Hospital, and the University of Peloponnese. The programme is already being rolled out with success in a municipality of Attica, and it is hoped that in the following academic year it will expand to a large number of towns across Greece.
When Mike Misanelli, a Philadelphia radio host, Tweeted Sunday, “Hey Giants fans, Victor Cruz is over. Dance to that,” many Giants’ fans started calling for Misanelli’s job.
Heeeeelzfan Tweeted, “Hopefully, we’ll be hearing soon from management at 97.5, ‘Mike Misanelli is over.’”
Misanelli’s tweet, sent when Giants’ player Victor Cruz was injured, stirred a vigorous online debate that quickly turned into a discussion of cyber-bullying.
Misanelli’s tweet may not rise to the level of what is traditionally considered to be cyber-bullying, but it did create a conversation and an opportunity to look at the latest form of bullying made possible by technology.
Cyberbullying is the use of technology to harass, threaten, verbally abuse and/or humiliate another person. Often, it can rise to the level of cyber-stalking, a crime.
Over 15 percent of high school students claim to have been cyberbullied, and 6 percent of middle school students say they have been on the receiving end of cyber-bullying.
Some people find it humorous to post embarrassing images of a friend, but the consequences are anything but laughable. From the victim’s perspective, it is impossible to remove the image, and if the content goes viral, no one has any control over it. Victims of cyber-bullying may suffer from low self-esteem, isolation and school or job problems.
The perpetrator can suffer consequences as well. In this digital age, that posted image may well show up in a background screening when they apply for college or a job. The bully may also be charged with a crime if sexual content was involved, and may have to register as a sex offender. Those are the kinds of things that don’t disappear when the laughter stops.
Traditionally, bullies tend to pick on socially isolated people with few friends. Cyber-bullies, by contrast, tend to attack close friends or people in a comparable social network. As a result, cyber-bullying can make a person feel isolated from their friends and make it difficult to enter a romantic relationship.
Law enforcement officials frequently have a difficult time in determining their role in dealing with bullying. Social networking and advances in communication tools complicate the issue. Historically, bullying happened inside, or close to, a school or neighborhood. Technology allows today’s bullies to extend their reach and their threat.
Law enforcement officers assigned to schools will almost certainly encounter some form of cyber-bullying. A survey of law enforcement leaders attending the FBI National Academy (FBINA) in Quantico, Virginia, showed that 94 percent of school resource officers (SROs) feel that cyber-bullying is a serious problem that calls for law enforcement’s intervention. Seventy-eight percent said they had conducted one or more investigations into cyber-bullying in the previous school year.
In 1998, a middle school boy created a website that threatened his algebra teacher and his school principal. According to a white paper published by Bucknell University, the school permanently expelled the student over the threats and harassment.
Another case happened in 2003, when a 14-year-old boy received unwanted Internet attention after a video showing him dressed in a Jedi knight costume went viral. The boy was attacked by classmates, some of whom encouraged him to kill himself.
Suicide Resulting from Cyberbullying
In 2006, a girl named Megan Meier committed suicide after a classmate and the classmate’s mother created a fake online persona and used the account to send hateful comments to Meier. Federal prosecutors took on the case and tried the mother. A jury found her guilty of misdemeanor counts of unauthorized computer use, but a judge later overturned the convictions.
Jessica Logan committed suicide in 2008 after nude images of her were circulated by students in Cincinnati. Logan’s family was awarded a settlement of over $150,000 in 2012. Ohio legislators later passed a law encouraging Ohio schools to increase teacher training in an effort to combat cyber-bullying. The law was called The Jessica Logan Act.
Prevention and Repercussions
Following the Columbine massacre in 1999, anti-bullying statutes became widespread. States have continued to pass laws that require districts to establish, and follow, strict policies on cyber-bullying. According to the National Conference of State Legislatures, 34 states passed anti-bullying laws between 2005 and 2010. Now, when cyber-bullying includes a threat of violence, stalking, or sexually explicit photos or messages, it rises to criminal behavior. Victims should file a report with local law enforcement.
About the author
Arkady Bukh is a nationally recognized attorney from New York who represents clients in high-profile cases, such as the Boston Marathon bombing and famous cybercrime cases. Arkady is also a published author and contributor to the NY/NJ Law Journal, LA Daily Journal, Law 360, Westlaw, Thomson Reuters, Nolo, and many other media. More
The International Network Against CyberHate (INACH) recently held its annual conference which brought together key players from civil society, law enforcement and the industry to discuss how Cyberhate could best be tackled through partnerships. The presentations brought to light several interesting points on how to tackle cyberhate.
From law enforcement, the presentations showed that the process of fighting cyberhate is a very time-consuming and lengthy one. Websites that spread hate can take several months to several years to be taken down, especially international ones. Removing such content from the Internet often requires clearing many legal hurdles, such as a clear mandate from a court, or an official request from one law enforcement agency to another in the case of international websites. This means that for individuals seeking redress, or those directly targeted by a specific website, the process can be extremely long and complex.
From the industry, represented by Facebook and Twitter, most of the solutions involve reporting followed by moderation and take-down. More recently, many social networks including Facebook and Twitter have started partnering with local civil society organisations, giving them privileged access to reporting tools and to the moderation team to take down material faster. Another recent trend is the encouragement of counter-speech, namely individuals who respond to negative messages with a flood of positive ones.
Finally, the civil society organisations presented many activities and good practices from their respective networks, ranging from tools to be used in classrooms to coordination and cooperation work on an international level.
During the conference, we intervened to underline the specifics of cyberbullying. One of the most important problems in tackling cyberbullying online is the timing of an intervention: by the time a report has been filed and a moderation team has processed it, much of the damage to the victim has already been done and new offensive or hurtful material has been published.
The industry should take moderation a step further and involve users directly as volunteer moderators in order to speed up the moderation and review process. Many online services have already adopted such models, each tailored to the service’s needs: Wikipedia with its voluntary contributors/editors, or the “tribunal” in the online game League of Legends.
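One way such volunteer moderation could be organised is a two-tier queue: reports go first to trained community moderators, and only contested cases escalate to professional staff. The sketch below is hypothetical; the round-robin assignment, the unanimity rule and all names are assumptions for illustration.

```python
from collections import deque

# Hypothetical sketch of two-tier moderation: volunteer community
# moderators handle reports first; disagreements escalate to staff.

class ModerationQueue:
    def __init__(self, volunteers):
        self.volunteers = deque(volunteers)  # round-robin rotation
        self.escalated = []

    def assign(self, report):
        """Hand the report to the next volunteer in the rotation."""
        volunteer = self.volunteers[0]
        self.volunteers.rotate(-1)
        return volunteer

    def resolve(self, report, verdicts):
        """Unanimous volunteers act locally; disagreement escalates."""
        if len(set(verdicts)) == 1:
            return verdicts[0]
        self.escalated.append(report)
        return "escalated to professional moderators"

queue = ModerationQueue(["ana", "ben", "cai"])
print(queue.assign("report-1"))                        # ana
print(queue.assign("report-2"))                        # ben
print(queue.resolve("report-3", ["remove", "remove"])) # remove
print(queue.resolve("report-4", ["remove", "keep"]))
# "escalated to professional moderators"
```

The point of the design is that routine cases are resolved within the community, in minutes rather than days, while professional moderators keep the final word on contested or legally sensitive material.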
For more information about INACH, visit their website