Facebook and Periscope introduce new strategies to fight hate speech

by Martin Schmalzried

It’s happening. COFACE has long been calling for new strategies to fight hate speech and cyberbullying, including the use of algorithms and Artificial Intelligence to assist human moderators, and recourse to community-based moderation where users can vote on taking content down.

Facebook has been steadily developing and training algorithms to help identify offensive photos. At this point, algorithms report more potentially offensive photos than humans do! Twitter, which has been attacked in the past over its lackluster ability to fight abuse, has also invested in Artificial Intelligence to help weed out abusive content. The assistance of AI is welcome, especially since relying solely on human moderation comes with many problems, such as slow reaction times and negative psychological consequences for the moderators, who are forced to look at the worst content humanity has to offer.

Progress in the development of such algorithms could benefit all online service providers, as Facebook has vowed to share its findings more broadly [1].

On the other hand, Periscope (owned by Twitter) is rolling out another approach to moderation, adapted to live streaming and its instant feedback and user interaction: a form of live community-based moderation. Viewers will be able to immediately report a comment they deem abusive during a live stream. The app then randomly selects a “jury” from the audience, which votes on whether the comment is abusive/inappropriate. Should the “jury” vote to censor the comment, its author will temporarily be unable to post. If the author repeats the offense, he/she will be blocked from commenting for the rest of the live stream [2].
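
As a thought experiment, the flow described above could be sketched in a few lines of code. The jury size, majority threshold and penalty rules below are invented for illustration; Periscope has not published its actual parameters.

```python
import random
from dataclasses import dataclass

# Illustrative parameters only; Periscope's real values are not public.
JURY_SIZE = 5
MAJORITY = 3

@dataclass
class Commenter:
    name: str
    offenses: int = 0
    muted: bool = False      # temporary inability to comment
    blocked: bool = False    # blocked for the rest of the stream

def review_reported_comment(comment: str, author: Commenter,
                            audience: list, is_abusive_vote) -> None:
    """Select a random 'jury' from the audience and apply its verdict.

    `is_abusive_vote(juror, comment)` stands in for each juror's vote.
    """
    jury = random.sample(audience, min(JURY_SIZE, len(audience)))
    votes = sum(1 for juror in jury if is_abusive_vote(juror, comment))
    if votes >= MAJORITY:
        if author.offenses == 0:
            author.muted = True      # first offense: temporary mute
        else:
            author.blocked = True    # repeat offense: blocked for the stream
        author.offenses += 1
```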

Such initiatives are long overdue. COFACE welcomes their introduction and will closely follow their development, hoping to contribute to a better online environment for children and families.

Families and “screen time”: challenges of media self-regulation

by Martin Schmalzried

On the 10th of May, the London School of Economics (LSE) hosted a symposium on the topic of families and “screen time”. The event addressed the critical issue of successfully reaching parents with resources which will empower them to better accompany their children in navigating the Internet: taking full advantage of the opportunities, developing resilience and minimizing risks which could lead to harm. Much of the debate centered on a tricky contradiction: the necessity to present clear and simple messages and guidance while at the same time addressing the various parent “profiles” and their varying needs, based on their parenting styles, socio-economic status or education level.

One take-away is that even though messages like the 2×2 rule from the American Academy of Pediatrics are well known, such messages are also wildly simplistic and make no distinction between “screen time”, “screen context” or “screen content”. In essence, not all “screen time” is equal: playing a game with your family or doing homework is not the same as watching YouTube alone in your room.

Screen time: still a relevant metric?

As was pointed out by Angharad Rudkin from the University of Southampton, while we do not know a lot about the effects of online media and screens on children’s cognitive development, we do know about some of the “physical” effects, such as the risks of obesity or sleep disorders. Even if technologies such as Augmented/Virtual Reality may enhance physical activity, physical side effects may remain (exposure to “blue light”, eye problems, sleeping disorders…), which means that even if a distinction needs to be made between “screen time”, “screen context” and “screen content”, it will nevertheless be necessary to exert some control over screen time. In a not so distant future, when screens become wearables much along the lines of Google Glass, how will this affect recommendations on “screen time”?

What’s good for children?

Another issue of contention was how to distinguish between what is “good” or “bad” for children. Some participants raised the point that parents don’t want to be told what is “good” for their child, and that parents could rely on peer-to-peer support to sort out what is good and bad, looking at other parents’ recommendations, reviews etc. One of COFACE’s French members, UNAF, runs such a platform, which enables parents to review apps, websites or content for children (panelparents.fr). However, the platform is heavily filtered and monitored to ensure that there is no conflict of interest in the reviews and, especially, that parents use a consistent set of criteria to assess apps, websites or content.

As is customary on the Internet, any development which empowers users can also be used against them. User reviews and peer-to-peer support have proven to be very useful, but at the same time, business models around reputation management have emerged, allowing content providers or developers to pay for positive reviews. Peer-to-peer recommendations and support also raise issues of “trust”. How can you be sure that the “parent” on the other side of the screen isn’t working for the app or website he/she is openly promoting or recommending? Furthermore, what are the credentials of the parents posting recommendations? It could very well be that the most vocal parents online are not the best placed to objectively review online content/services. So while peer-to-peer networks will always be helpful, they are certainly not a panacea and need to be accompanied by measures to mitigate the problems raised above.

Classification systems, and the information they give to parents, are another way to sort content/services. There have been many advances in the standardization of classification, notably via the PEGI rating system and initiatives such as the MIRACLE project, which aims at making the existing rating systems and age labels interoperable. In this respect, the role of regulators is key: only with the right regulatory “push” will the industry come together and agree on a common standard for classification. But classification systems also have their limits. PEGI, for instance, provides age recommendations along with content-related pictograms alerting parents to such things as violent or sexually explicit content. In essence, this classification system warns only about risks but says nothing about opportunities.

One idea would be to further investigate the effects of online content/services on children’s cognitive development. How does a game affect “delayed gratification” or “locus of control”? While it may prove very challenging to come up with a scientifically sound and accurate answer to these questions, it is essential that we move forward, since the “serious game” industry is booming and video game developers and online service providers do not hesitate to prominently display (often unsubstantiated) claims about the educational value or benefits of their products.

While we may never come up with a “definite” answer on what’s good for children, what is clear is that the private sector doesn’t hesitate to bombard parents with its take on what’s good for their kids. Therefore, arguably, even a mediocre recommendation to parents from independent sources such as academia, civil society or NGOs will still be better than leaving the coast clear for claims driven by commercial interest.

The POSCON project and network is a good place to start.

Follow the money

The prevailing business model on the Internet now relies on users’ time. The more time a user spends on a service, a piece of content, a game or an app, the more revenue he/she generates: via exposure to advertising, via the exploitation/sale of the data he/she produces, or via in-app purchases of virtual goods.

At the same time, the Internet is seen by many as a providential tool, a great equalizer which allows for the realization of a number of core human rights, including freedom of speech. Many stakeholders, including governments and civil society, argue that we should simply apply a ‘laissez-faire’ philosophy to the Internet and it will become a space of freedom. It is certainly tempting to see it in such a light, as this allows all actors to sit back and watch and, especially, to shield themselves from advancing politically delicate recommendations.

Unfortunately, the Internet is being transformed by a combination of factors, including algorithms and online business models.

Algorithms increasingly shape what people see online, enhancing the visibility of certain content inside a social network’s newsfeed or a search engine’s result page, thus inevitably skewing a user’s online experience. While there is no way around sorting content given the sheer volume of the Internet, the methods for sorting it (how algorithms are designed) do raise many concerns. Which criteria are used to “boost” the visibility of content? Is there any consideration of “quality”, or of whether the content is “positive”? Such considerations are especially important for services which are heavily used by children, such as YouTube. The videos prominently displayed as recommendations on the app’s home screen do not land there “by accident”.
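
To make the concern concrete, here is a toy version of such a ranking function. The weights and fields are invented for illustration and do not reflect any platform’s actual formula; they simply show how easily “engagement” can crowd out any notion of quality.

```python
# Toy newsfeed ranking: invented weights, not any real platform's formula.
def visibility_score(item: dict, w_engagement: float = 0.9,
                     w_quality: float = 0.1) -> float:
    """Weighted mix of predicted engagement and an editorial 'quality' signal.

    With weights like these, a highly clickable but shallow item will
    almost always outrank a high-quality educational one.
    """
    return w_engagement * item["predicted_clicks"] + w_quality * item["quality"]

feed = [
    {"title": "funny cat compilation", "predicted_clicks": 0.95, "quality": 0.2},
    {"title": "nature documentary",    "predicted_clicks": 0.30, "quality": 0.9},
]
feed.sort(key=visibility_score, reverse=True)  # the cat video comes out on top
```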

Online business models which rely on users’ time to generate revenue also contribute to corrupting online content. Any content producer looking to make money will seek to increase the time users spend on their content/service. In extreme cases, this gives rise to clickbait techniques, which rely on catchy pictures/videos/headlines to entice users to click and be redirected to pages filled with advertising. More importantly, whenever a content/service provider has to make an editorial choice about content, optimizing viewer statistics, click-through rates, bounce rates or stickiness will be a top priority, often at the expense of “quality” considerations.

Some would argue that quality goes hand in hand with a website’s stickiness or viewer statistics, but this is highly unlikely. One study found, for instance, that brands talk like 10-year-olds on Facebook, mainly in order to maximize user interaction and reach. If a “funny cat” video will likely generate 5 million views and an educational video about nature 500,000, which one will end up being produced? How will such a logic impact creativity? How would a modern-day Shakespeare fare under a business model which seeks first and foremost to appeal to a mass audience, as opposed to pursuing art for art’s sake?
Again, when seeking to minimize risks and enhance opportunities for children online, one cannot ignore these realities.

If we want children to truly benefit from online opportunities, we need to take a closer look at who/what gets the spotlight on the Internet, or in other words, who is making “editorial decisions”. Many would chuckle at this very idea, since the Internet is now supposed to be a level playing field, with user-generated content taking over and “editorial decisions” limited to the censorship of content which violates terms of service. But as discussed above, algorithms and the prevailing online business model have a massive influence on who/what gets the spotlight.

Digital parenting or just parenting?

Is there such a thing as “digital” parenting? This is another question which was raised and discussed by a number of participants. Parenting does spill over into the “online” world, since social skills, sexuality education, a healthy balance in children’s activities, social and emotional learning, and values such as respect can all be transposed to online settings.

At the same time, the online world offers “new” sets of challenges. Children may encounter certain “risks” or “inappropriate content” at a much earlier age online than offline, pornography being the most obvious example, which means parenting clearly needs to “adapt” to this new reality. Also, not all “traditional” parenting can be transposed to online settings. For instance, bullying and cyberbullying are clearly different and require adapted responses: cyberbullying is 24/7, there is a greater chance that the perpetrator(s) remain anonymous, the “signs” are harder to spot (a black eye vs. a nasty message/comment), etc. So while a parent might know how to react to bullying (identifying the perpetrator, contacting the relevant authority such as a head teacher or the staff of a sports club), this does not necessarily apply to cyberbullying. If the perpetrator is identifiable and is a classmate, does the teacher/school have authority to intervene if the cyberbullying occurred outside of school premises/school hours?

The fox and the crow all over again

“If all parents had access to appropriate resources, advice and guidance about online risks and opportunities, then children’s online experiences would be optimal and all problems would be solved.” Of course, no one would dare voice such a claim, but some come close. Private companies have every reason to promote such an idea, as it is one of the most powerful arguments for delaying any policy or regulatory measures.

To provide a useful analogy: financial service providers argue that financial literacy should be the focus of efforts to prevent over-indebtedness, and agro-business argues that informing people about healthy eating habits should be the priority in tackling obesity and chronic disease… ignoring, of course, the fact that both industries engage in wildly counter-educational advertising campaigns enticing consumers to act impulsively (take out credit to get the vacation you rightfully deserve) or to fulfil their need for social recognition via food (drink a Fanta and you’ll be popular with your friends).

Private companies essentially resort to the tactic the fox employed to get the cheese out of the crow’s mouth. By flattering consumers/users as resourceful, smart and informed, they can better manipulate them into forfeiting control over their data, consenting to unfair terms of service or handing over large sums of money through unethical business models such as “free-to-play” or “in-app purchases”.

The same logic prevails in the issue of advertising to children. Advertisers happily provide “educational material” to children via programmes like MediaSmart, flattering children’s intelligence and resilience to advertising, only to overwhelm them with ever more insidious advertising techniques.

That being said, education is always a necessity in and of itself, regardless of any specific policy objectives, but a balance must be struck between the need to educate/inform/empower and the necessity to protect and shape the environment to be as conducive as possible to positive experiences for all.  Education should never be a substitute for necessary policy/regulatory measures.

A provocative metaphor would be the following: should we put more focus on training people how to avoid mines in a mine field or focus on demining the field?

The Internet of Penguins?

All of this boils down to a simple yet very complex question: what kind of Internet do we want? The Internet is said to belong to “no one”, and in the eyes of many it is the “ultimate” incarnation of a public good. At the same time, throughout the Internet’s history, there have been many threats to its openness. At one point, it wasn’t even certain that users would be able to use web browsers for free: before Microsoft released Internet Explorer at no cost, browsers were commercial products (see Netscape’s history). The Internet has been and still is at risk of government control, be it via censorship, blocking and filtering in countries like China or, to a lesser extent, Turkey, or via control of its governance (the US has had disproportionate control over Internet governance).

Nowadays, it is at risk of corporate capture: via the “de facto” monopolistic tendencies of online platforms like YouTube, Facebook and Amazon, via the selective display and filtering of information (control over discoverability) by search engines like Google, or via control over its core infrastructure (the Internet backbone) by telecommunications giants or even companies like Facebook (see their internet.org project).

As a society, we need to decide collectively how much of the Internet we want to “keep” as a public good, independent of commercial interest: websites like Wikipedia, Open Source software, not-for-profit initiatives like the popular coding platform for kids, Scratch, and many more. To make a comparison, we can think of the Internet as a planet which is slowly being populated. How much of the planet will be covered with shopping malls, and how much with public parks and nature reserves? Do we want public parks to be crowd-funded and maintained by the community as a whole (like Wikipedia), or do we want to privatize them, letting private companies manage these spaces while posting billboards on park benches and stamping adverts on trees?

Looking at piracy statistics and user behaviour online, the unformulated answer seems quite clear: users consider the Internet predominantly as a public good and expect to enjoy all of its content for “free”, a behaviour which has caused massive disruption to the business models of industries such as music labels. A debate is currently raging around access to knowledge and academic papers, sparked by the emergence of the search engine Sci-Hub. All of these developments point to various scenarios: a stubborn pursuit of current intellectual property and copyright law, with right holders engaged in an eternal witch hunt against piracy; the development of alternative business models, such as unlimited subscription services like Netflix; or even the generalization of concepts like a “universal basic income”, currently under experimentation in Finland, which would allow people to work for the community without having to worry about their pay check.

All of this seems far removed from the debate about “child safety”, but it will have a much deeper impact on children’s online experience than the addition of a new safety feature or reporting button on their favorite (unique) social network.

For more information about the event, visit the LSE blog.

Child Safety Summit in Dublin: Google and Facebook team up

by Martin Schmalzried, Senior Policy and Advocacy Officer @COFACE_EU

On the 14th and 15th of April, Google and Facebook jointly organized a Child Safety Summit, providing all participants with an insight into key developments in their respective policies and initiatives around child safety online. Both companies presented their community policies and the tools which help users stay safe by controlling their privacy settings, blocking, reporting and many other features. A notable development is the growing effort to make it as easy as possible for users to check the safety-related settings on their accounts: Facebook with its “privacy check-up” tool, which takes the user through an easy review of his/her safety settings, and Google with the “My Account” feature, which centralizes settings across multiple Google services.

Both Facebook and Google stressed that rather than more regulation, child protection stakeholders, including NGOs, should engage more with key industry players in a spirit of self/co-regulation. Google and Facebook underlined that they do not stop at “compliance” but constantly innovate and upgrade their safety features, fueled in part by feedback from NGOs (which was also one of the reasons for holding such a Child Safety Summit).

I attended both days and participated in the panel about the General Data Protection Regulation (GDPR) and the controversy surrounding the “age of consent” for data processing.

GDPR: teenagers’ worst nightmare?

During the panel, several speakers underlined their disappointment at the regulatory process through which the current age limit for data processing was set, at the negative impact it may have on teenagers (exclusion from social networks, being forced to lie about their age, or having to ask their parents for consent, violating their right to privacy) and at the likely end result, which will create a fragmented environment for any company or actor processing data in the EU.

From COFACE’s perspective, the issue of consent is only a very small part of the Regulation. In effect, the debate about the age at which one should be able to consent to data processing, below which parental consent is required, has been misrepresented and misunderstood. Furthermore, consent as such is controversial, since users typically never read Terms of Service (ToS) and click away at anything that pops up just to access the service; examining whether ToS are fair in the first place is therefore more important than debating the age at which one can give consent… Perhaps the answer is “at no age”, given that no one reads ToS!

Many actors said that a limit set at 16 amounts to forbidding teenagers from accessing social networks or the Internet without parental consent, or that teenagers and children would be more vulnerable online since they would use services anonymously. These are overly simplistic interpretations of the Regulation. Teenagers below the age of 16 would only require parental consent for services which process their data; the intention of the lawmakers, in this instance, was to protect teenagers and children from the commercial exploitation of their data and from being overly exposed to commercial messages/marketing, since online advertising now relies on processing a high degree of data to personalize advertising. Also, anonymity is an issue completely separate from the debate around data processing. A user can very well use his/her genuine name and post genuine pictures of him/herself without any “data processing”. Conversely, it is easy to pretend to be someone else and use a nickname on services which rely on heavy data processing (including Facebook); the proof is simply the number of children under the age of 13 currently on such services! Finally, the claim that one is “safer” in environments where data can be processed is also overly simplistic. It depends on the definition of “data processing”: does it include monitoring, reporting and online moderation?

The end result of the Regulation for teenagers’ access to online services can be illustrated by three scenarios:

In the first, online services refrain from processing teenagers’ data and thereby allow them access without parental consent (meaning that any algorithms processing personal data would be disabled and content would be sorted automatically, for instance by the date posted). This, unfortunately, is highly unlikely: targeted advertising is the business model on which most of these services rely, so interpreting and applying the Regulation in such a manner would effectively deprive them of around 20% of their revenue.

In the second, online services set the “cut-off” age for using their services at 16, pretend that no under-16s use their service, and engage in a selective “witch hunt” of underage accounts, closing them at random. This is possibly the worst outcome for both teenagers and online services: teenagers would need to lie about their age, would therefore not benefit from the “protection” from certain types of advertising or content, and would also be subject to having their account closed if identified as being under 16.

In the third, online services set up a “parental consent” mechanism, and teenagers would need to pester their parents to get access to such online services. This too is a rather negative outcome for teenagers and their rights to privacy, freedom…

In the end, the “blame game” has mostly pinned the failure on the Council, which should have known that online services would never refrain from processing the data of under-16s and thereby renounce around 20% of their revenue…

From COFACE’s side, we underline the necessity to reflect on a key question: how can we strike a better balance between the prevailing business model centered on advertising, data processing and profiling and the necessity to protect children and teenagers from the commercial use of their data and from advertising and marketing?

Besides, perhaps another wish of the regulators is to ensure that teenagers below the age of 16 experience an unfiltered Internet instead of the Internet “bubble” which displays only content that users already like. Services like Instagram used to sort the pictures displayed according to the time they were posted, not based on an algorithm. By the same token, teenagers below the age of 16 and children should have the right to access information with as little “algorithmic bias” as possible. Many users have expressed their discontent at Facebook’s decision to apply algorithmic sorting to Instagram feeds, so the question of displaying information in a neutral way goes far beyond the debate about teenagers.

Technology will save the World

With the immense success, influence and power of online services like Facebook and Google, it is no wonder that their actions can greatly affect the world we live in. Among these actions we find developments in Virtual and Augmented Reality, Artificial Intelligence and Machine Learning, and even the connection of the poorest regions of the world, such as parts of Africa, to the Internet.

While the benefits such innovations can provide are real and can make a difference for people, they also raise ethical and political questions about private companies impinging on the public interest and on the role of States.

To illustrate the issue at stake, we can mention the example of Bitcoin. The Bitcoin virtual currency has been identified, by many academics and technophiles, as a solution for storing value for people living in countries where the rule of law is weak and the local currency suffers from high volatility. At the same time, it can weaken the local currency even further and defuse much-needed pressure for strengthening the rule of law through political action.

In a similar way, private companies providing Internet access to people may delay investment in and development of public infrastructure, and put those companies in a monopolistic position in these countries, with the power to shape what people accessing their “version” of the Internet can see (such as access via the “internet.org” portal).

When governments set up infrastructure, the population sees it as the realization of their rights, the public good and the general interest, based on the social contract they have with the State; when a private company invests in infrastructure, it is seen as an act of charity for which people should be thankful. In essence, infrastructure moves from being a human right to a company-provided privilege.

Far from arguing that these projects do not make a difference or change people’s lives on the ground, there may be a better balance to strike between the private and the public interest: for instance, by aligning private investment with public investment, to encourage governments to develop IT and telecommunications infrastructure and prevent a private company from fully controlling Internet access; and by setting criteria for providing Internet access, for instance ensuring that users are not forced to access the Internet via a portal imposed by a private company.

Silence about VR

Although we are at the cusp of a VR revolution, with many implications for child safety, neither company mentioned VR or their initial reflections on child safety in VR spaces. When COFACE asked whether Facebook intended to consult with civil society and child protection NGOs in advance, the response was that it was “too early” for such discussions. Yet a day earlier, Facebook had hosted its F8 conference, presenting social VR, which enables two or more people to visit real places in VR as avatars, and hinting at “face scanning” technology to create realistic-looking avatars.

This is very surprising given that the first Oculus Rift devices are set to ship in the coming weeks and that children all over the world might start to experience VR in households equipped with an Oculus device, or even with a VR-capable smartphone coupled with a compatible Google Cardboard or Samsung Gear VR headset.

So far, VR has shown great promise in clinical and research settings, helping to develop empathy or fight addiction. VR is so immersive that it successfully tricks the brain into believing the experience is real, which is also why simulated experiences have a very “real” impact on users. Of course, no research or clinical trial will ever attempt to traumatize users as an experiment, but one can only assume that since VR has such a “positive” impact in experiences aimed at resolving issues like post-traumatic stress disorder, it can have a similarly powerful “negative” impact.

Artificial Intelligence to the rescue

The ongoing fight against child abuse has already received much help from technologies such as PhotoDNA and video hashing, which enable machines to identify and quickly take down known images or videos of child abuse. Advances in artificial intelligence (AI) might, in the near future, make it possible to recognize previously unreferenced content portraying child abuse, thus tackling one of the most problematic aspects of the fight against child abuse: keeping up with the constant upload of “new” material.
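
The matching principle behind these tools can be illustrated in a few lines. One simplification to note: PhotoDNA computes a robust perceptual hash that survives resizing and re-encoding, whereas the exact SHA-256 digest used below, chosen for brevity, matches only byte-identical files. The hash value shown is a placeholder.

```python
import hashlib

# Placeholder entry; a real database of known-abuse hashes is maintained
# with hotlines and law enforcement and is obviously not public.
KNOWN_ABUSE_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def is_known_abuse_image(image_bytes: bytes) -> bool:
    """Check an upload against the database of known-abuse image hashes."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_ABUSE_HASHES
```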

But AI could also have many other applications outside the scope of child abuse and copyright infringement. Terrorism, hate speech, cyberbullying and even suicidal thoughts could be picked up and flagged by AI. One interesting application would be to display a message to potential victims of cyberbullying or hate speech, prompting them to report the content or asking them whether they need help. At this stage, there are many challenges to overcome before this can happen:

– The priority given to the development of such an AI vs. all the other potential applications (like identifying the contents of pictures…).

– Legal barriers to keeping sufficient amounts of data (such as millions of messages flagged as cyberbullying or hate speech) to “train” the AI. Under current regulations, companies like Facebook or Google are not allowed to keep data from users that has been flagged and deleted for infringing their community guidelines (save for very sensitive data, such as evidence for a criminal investigation).

– The complexity of training such an AI, since it has to rely on a very wide range of parameters and learn to differentiate, based on context, whether a message is meant as a joke or as a real threat or insult.

Nevertheless, given the effort already invested in algorithms which display “context-appropriate” ads, investing in an AI which could proactively prompt victims of hate speech or cyberbullying for help, or flag content early for moderators to review, could be a game changer in the fight against online harassment, cyberbullying, hate speech etc.
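
A crude sketch of the “prompt the potential victim” idea follows. A real system would use a model trained on the millions of flagged messages mentioned above and would weigh context; the keyword list here is a naive, invented stand-in.

```python
from typing import Optional

# Naive stand-in for a trained classifier: a real AI would weigh context
# (sarcasm, relationships, history), not match keywords.
ABUSIVE_MARKERS = {"loser", "nobody likes you", "go away forever"}

def support_prompt(message: str) -> Optional[str]:
    """Return a help prompt for the recipient if a message looks abusive."""
    text = message.lower()
    if any(marker in text for marker in ABUSIVE_MARKERS):
        return ("This message may be hurtful. Would you like to report it, "
                "or talk to someone who can help?")
    return None  # nothing detected; no prompt shown
```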

More services designed for kids

As more and more actors, including NGOs and policy makers, worry about the exposure of children to inappropriate content, commercial exploitation, online abuse and other dangers, especially in large online communities designed for adults, services designed for children are starting to emerge.

YouTube Kids is a good example of this trend. COFACE has been in favor of such a development, especially for very young children, to make sure they have access to quality, positive, age-appropriate content in a safe setting.

However, there are still a number of issues which need to be resolved:

– The choice of business model behind such services is even more critical and sensitive than for “regular” services. While making parents pay a monthly fee bears the risk of excluding the most deprived and vulnerable, a model based on advertising has to abide by very strict rules to ensure that the balance between commercial content/advertising and “regular” content is fair and proportional. At the time of writing, YouTube Kids is still struggling with “grey” zones in terms of content. While “formal” advertisements have to comply with national legislation on what can be advertised to children (no food advertising, for instance), the user-generated videos themselves feature many unboxing videos of toys and stories/cartoons based on commercial content (for instance, Hot Wheels cars). A potential solution to this issue is to “flag” such content as “commercial”, to make sure that children understand that there may be a commercial objective behind it. Creating a separate category for such “commercial” content should also be envisaged, as having a Hot Wheels video in the “learning” category is greatly misleading!

– The “sorting” algorithm which decides what is featured on the home page and through the search feature needs to be tweaked to give more visibility to “positive” content based on a certain number of criteria (a possible link could be made with the POSCON project; a toy sketch follows this list). At the moment, a search for “puppies”, for instance, returns many videos featuring unboxings of puppy-themed products and toys. Although these videos do indeed correspond to the search term, they should not appear in the first few pages of the search results.

– Parental controls on YouTube Kids could be enhanced further, giving parents the possibility to select which “types” of videos should be made more visible on the YouTube Kids homepage. For instance, some parents would prefer “learning” videos to appear more prominently on the homepage than other categories.
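
Here is the toy re-ranking sketch mentioned above. The criteria, field names and weights are invented; real “positive content” signals would have to come from vetted criteria such as POSCON’s.

```python
# Invented re-ranking for a kids' video search: demote commercially driven
# results and boost content meeting vetted "positive" criteria.
def rerank(results: list) -> list:
    def score(video: dict) -> float:
        s = video["relevance"]
        if video.get("is_unboxing_or_promo"):
            s -= 0.5   # push commercial content far down the results
        if video.get("meets_positive_criteria"):
            s += 0.3   # reward vetted positive/educational content
        return s
    return sorted(results, key=score, reverse=True)

results = rerank([
    {"title": "Puppy toy unboxing", "relevance": 0.9, "is_unboxing_or_promo": True},
    {"title": "How puppies grow",   "relevance": 0.8, "meets_positive_criteria": True},
])  # the educational video now ranks first
```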

Users in control

Once again, the danger of algorithms restricting user experiences was raised. This is especially important for children, as it may hamper their development: instead of being fed content that “challenges” their views, they are shown only what they already like, thus creating a “filter bubble” and, in the longer term, a “filter tunnel”.

User control over algorithms is therefore of utmost importance: allowing users to decide whether they prefer an “unfiltered” news feed which sorts content by date, or a “neutral” search result which does not rely on their previous searches.

User control has already been enhanced in many ways, by boosting control over privacy settings or providing more granularity for parents in setting up parental controls. In that area as well, more can be done. For instance, parents could benefit from parental control restrictions which only display apps without advertising or with a transparent pricing policy. While Google has added a feature which blocks in-app purchases behind a parental PIN, the poor cost transparency of games based on in-app purchases may push parents to filter them out completely (COFACE has been advocating for more pricing transparency, for instance by displaying the “average” cost of a game based on players’ spending patterns and time spent playing).
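
The “average cost” figure COFACE advocates would be trivial to compute; the only assumption below is that stores disclose aggregate in-app revenue and player counts per game, and the figures used are hypothetical.

```python
# Sketch of the price-transparency metric COFACE advocates: the average
# real cost of a "free-to-play" game, derived from actual player spending.
def average_cost(total_in_app_revenue: float, player_count: int) -> float:
    """Average spend per player, displayable next to the 'Free' label."""
    return total_in_app_revenue / player_count if player_count else 0.0

# Hypothetical figures: a "free" game earning 5,000,000 EUR from 1,000,000 players.
print(f"Average cost: {average_cost(5_000_000, 1_000_000):.2f} EUR")  # 5.00 EUR
```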

Finally, user control depends greatly on other things, such as the quality of ratings and the classification of content. One participant rightfully pointed out that the “Movie Star Planet” app is rated 3 years and above on the Google Play Store, even though the app allows interactions between users (children) with much “girlfriend” and “boyfriend” talk (bordering on sexual talk) and a potential grooming issue. Google sets ratings based on a publisher’s responses to a series of questions. For instance, if the publishers of “Movie Star Planet” state that there is user interaction but that their app includes human monitoring and moderation, the system automatically assumes that the app is safe for kids. But this relies on the trust and good faith of publishers to answer the questions correctly and honestly. Further reflection is thus needed on how to ensure that apps, and content in general, are classified correctly.
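
The weakness described above can be made concrete with a sketch of such questionnaire logic. This is modeled on the panel discussion, not on Google’s actual rating rules; the questions and thresholds are assumptions.

```python
# Illustrative questionnaire-driven rating, modeled on the discussion above
# rather than Google's real system: a self-declared claim of moderation
# lowers the age rating with no independent verification.
def age_rating(has_user_interaction: bool, declares_moderation: bool) -> str:
    if has_user_interaction and not declares_moderation:
        return "12+"
    return "3+"  # the publisher's word is taken on trust

# A publisher simply ticking "we moderate" yields a 3+ rating.
print(age_rating(has_user_interaction=True, declares_moderation=True))  # 3+
```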

Education, not a silver bullet but…

While everyone agrees education is not a silver bullet, it certainly gets treated very much like one… The resounding quote from Jim Gamble (INEQE) still sticks in my mind: “it’s not about context, it’s about people”. In essence, if we “change” people, educate them, inform them, then no matter how much of a mess the Internet is, no harm could ever come to them. The “context” is like a huge mountain: it’s pointless to try and dig a tunnel under it, so we should focus on learning how to climb it. Or should we?

It goes without saying that education is very important, but for every effort to educate children and adults alike, a similar effort needs to be made on safety by design, privacy by design, rethinking how online services operate.

There are also limits to what companies can educate children about. For instance, teaching cyberbullying prevention poses no issue at all, since it is in companies’ interest to minimize the incidence of cyberbullying on their services. On the other hand, when advertisers teach media literacy and critical thinking, especially about advertising, there are strong reasons to question the quality and impact of the educational material. MediaSmart, for instance, was developed by the World Federation of Advertisers. It features many lesson plans centered on having children design and develop their own ads, an activity which shines a positive light on ad-making. Furthermore, the only “real” examples of ads proposed for analysis are all very positive, while the “negative” examples of misleading ads are all fictional. COFACE has developed a media literacy tool for parents, Nutri-médias, which covers to a much greater extent the different advertising techniques, with real examples of advertising and the controversies surrounding them (gender stereotypes, healthy eating habits…).

All in all, for certain educational activities, such as learning how to code, developing empathy or preventing cyberbullying, there is no issue in having civil society, NGOs, private companies and public authorities working together. On other topics, such as Big Data and its potential impact on society, or how online business models contribute to shaping the Internet, the independence of NGOs and governments is key, as private companies have a vested interest in presenting these topics in a certain light.

Self-regulation and co-regulation

To finish, the two-day event did convey the message that both Facebook and Google are open to suggestions and welcome criticism on how to make their online services safer for children. MEP Anna Maria Corazza Bildt underlined that working with private companies is key to ensuring that children stay safe online and, especially, to addressing questions like the processing of children’s data for commercial purposes and children’s exposure to commercial content.

COFACE welcomes any such initiative, but insists that a legal backstop and proper, independent monitoring and evaluation of the progress made are a necessity.

More information about the event is available here and on Facebook.


EU Survey on ‘Cyberbullying among young people’

The European Parliament has commissioned the law and policy consultancy Milieu Ltd to deliver a ‘Research Paper on cyberbullying among young people’. The aim of this paper is to provide information on the scale and nature of cyberbullying among young people in the EU, on Member States’ legislation and policies aimed at preventing and tackling this phenomenon, and on good practices in this area.

In the framework of this research, Milieu Ltd has contacted us to help spread the word and disseminate the EU Survey on ‘Cyberbullying among young people’. The purpose of the survey is to collect the views of young people (between 12 and 21 years old) on cyberbullying and to test the good practices and recommendations identified through research at national level.

If you are between 12 and 21 years old, we invite you to fill in the survey (available in 10 languages):

Bulgarian: http://goo.gl/forms/DwF5CoZ950
German: http://goo.gl/forms/GijeNhPfZN
Estonian: http://goo.gl/forms/AfA9fxxGOw
English: http://goo.gl/forms/LOrGZXWYtb
Spanish: http://goo.gl/forms/uanV6BUQap
French: http://goo.gl/forms/Bi3ujtJoaX
Italian: http://goo.gl/forms/oiCHyfB2e3
Polish: http://goo.gl/forms/JAsG0UqrMv
Romanian: http://goo.gl/forms/djrStLvIMv
Greek: http://goo.gl/forms/7heEEFYzhD

Thanks a lot for your cooperation!

Digital Values: Advancing Technology, Preserving Fundamental Rights

by Martin Schmalzried

On the 18th of January, Carnegie Europe, in partnership with Microsoft and in association with the Dutch Presidency, organized a conference on devising policies which help maximize the value of technology while preserving our core values. Several high-level keynote speakers took the floor, including Brad Smith, President and Chief Legal Officer of Microsoft, and Věra Jourová, European Commissioner for Justice, Consumers and Gender Equality.

The key topics discussed, from COFACE’s perspective, were balancing privacy with security/safety, the latest developments and implications of the Internet of Things, transparency and user trust, and the development of Big Data.

Balancing privacy with security/safety

Given the recent terrorist attacks in France and in many other parts of the world, the “mood” has shifted from a focus on privacy, following the Snowden revelations, to security and public safety. Privacy is often pitted against security and safety, in the sense that one cannot have both. To some extent, this is true: if States were allowed to limitlessly monitor and skim through all data, perhaps some terrorist attacks could be prevented. However, the assumption on which this trade-off is built is a gross over-simplification of the world we live in. Pitting privacy against security/safety fails to address more pressing issues, such as rethinking the foreign policies of our national governments and the actions at international level which may directly or indirectly encourage terrorism. Should we keep selling weapons all over the world, apply cynical “realpolitik” across the globe, fail to address the hardships that citizens in Iraq, Syria or Afghanistan are facing, fail to recognize our governments’ responsibility in creating the mess, and at the same time implement mass surveillance as a means to prevent terrorism? There may be no dilemma or trade-off between privacy and security/safety after all. Privacy simply needs to go along with measures to curb inequalities, bring about more responsible, humanist foreign policies, and eliminate the discrimination, exclusion and ghettoization of specific minorities.

The Internet of Things

While innovations in the field of the IoT (Internet of Things) seem very attractive to end users, especially in their potential to make life easier (automated heating, managing your fridge’s contents…), they also give rise to a number of challenges:

– A lack of standardization in terms of communication protocols, security, privacy protection etc. could break the very principle of openness that the Internet was built on, with each actor trying to impose its own “standard” for the IoT.

– How can users access the data generated by the IoT, and in which format should they receive it?
– How can users control the data generated/transmitted by the IoT? Will there be a way to switch connectivity off?

– How much of a device’s functionality should run through the Internet, as opposed to running either “offline” or only on the local “house” network? For instance, should a talking pet require permanent connectivity to interact with a child, it may be very restrictive in terms of use. Also, if a toy monitors a child, should the “recordings” be uploaded to the Internet or rather stored on a local computer on the “house” network? This latter point raises the question of privacy and the intrusion of the IoT into the privacy of minors. It is especially tricky from a legal point of view, as IoT devices may monitor and collect data about adults but also about children. This may give rise to a new form of privacy protection model, based on Privacy by Design or Privacy by Default.

– The IoT is extremely complex from a liability point of view and involves many layers of legal responsibility. Who is responsible if something goes wrong? Often, multiple companies are involved in making an IoT service work: in the case of an automated heating service, for instance, a car company (to send a signal from the vehicle when the user gets close to home), online cloud services, the IoT device itself and potentially more (if there is third-party software on the IoT device…).

– The combination of Big Data and the IoT may also usher in a “Premium” vs. “Freemium” era, where consumers are offered discounts based on their behaviour, at the expense of quality of service. Many examples illustrate how the IoT and Big Data can be used for better or for worse. Hotels, for instance, can gather data on a consumer’s habits (does he/she usually heat the room, how many towels does he/she use, etc.) to save energy and water, but this data could also be used to propose “Ryanair”-type discounts at the expense of quality (if a consumer agrees not to use the air conditioning or any towels, he/she gets a discount…) or even discrimination (a consumer who takes long showers or baths would systematically be offered higher hotel prices).

Transparency and user trust

User trust is absolutely essential if companies want users to embrace the current revolutions in the IoT, Big Data, Artificial Intelligence etc. However, this poses a serious challenge: how can a company be transparent about highly technical and complex issues such as Big Data? To give a concrete example, credit scores, which are automatically calculated using algorithms and Big Data, are anything but transparent to consumers.

In the end, while transparency is a necessity, ensuring that a quality regulatory framework is in place will also help secure users’ trust. As Brad Smith pointed out, regulation should be clear and predictable, people’s rights need to be respected, and those rights need to be backed by remedies.

Big Data

Big Data is a relatively new phenomenon, and users are only getting familiar with its implications and the possibilities the technology offers. There are many questions that users are struggling with:

– How much is my data worth? Am I getting my “data’s worth” when I’m using a service which relies on the use of personal data in its business model?
– Is there a trade-off between sharing my personal data and the service I am receiving? Can this data be used against me?
– How can I have more control over the data I am sharing and how the data I share influences my online experience?
All of these questions will need to be addressed if users are to entrust companies with their data.

To finish, COFACE fully agrees with the lead statement of the conference: advancing technology, preserving fundamental rights. This balance will certainly need to be struck in the years to come and COFACE will reflect on all the latest developments to ensure that users get the most out of technology.

ENABLE Hackathon and ambassador training

On the 12th, 13th and 14th of October, the ENABLE project (European Network Against Bullying in Learning and Leisure Environments) held a Hackathon prize ceremony and an ambassador training in London.

Launched in June 2015, the ENABLE Hackathon called on young people to propose initiatives and ideas on how to facilitate conversations about cyberbullying and bullying between young people, help them understand the phenomenon and find creative solutions to deal with this problem.

The Hackathon ceremony brought together 6 winning teams from all over the world to present their initiatives and ideas such as mobile applications, peer support programmes, awareness raising campaigns and many more.

The ENABLE Ambassador Training took place on the 13th and 14th of October, following the ENABLE Hackathon, and was attended by teachers from Greece, the United Kingdom, Croatia and Denmark. During the two day training, teachers were guided through the ENABLE project’s key deliverables and how they can be used directly in schools to help students develop key social and emotional skills such as empathy, increased self-awareness, communication skills, active listening and a sense of responsibility.

The Social and Emotional Learning lesson plans include 10 lessons aimed at combatting bullying in a school environment by increasing the emotional intelligence in young people aged 11-14. They cover fundamental questions such as the construction of an individual’s identity, understanding bullying, enhancing peer support and peer mentoring.

Martin Schmalzried from COFACE attended the training as a member of the ENABLE Think Tank, building on the lessons learned from the #DeleteCyberbullying project to provide advice on the resources developed within the ENABLE project.

For more information about the ENABLE project, visit the ENABLE website here

Keeping children safe from Cyberbullying


On the 22nd and 23rd of September, the German and Polish Safer Internet Centers jointly held another edition of their major Safer Internet Conference in Warsaw, Poland. The conference revolved around several key topics, such as privacy, sexuality, risky content, data ethics, cyberbullying and inappropriate online behaviour. Many key issues were touched upon, such as:

– The growing use and exploitation of private data as a business model, and the dawn of private data as a currency to pay for online content.

– The impact of sexting and of exposure to sexually explicit content on children’s and young people’s sexuality, which borders on cyberbullying as well.

– How to secure your right to privacy online, with some concrete tips to keep your data safe from unethical uses.

COFACE, represented by Martin Schmalzried, sat on a panel discussion dedicated to challenges and visions concerning child safety online in the present and in the future. The three main points addressed by COFACE were related to cyberbullying.

How to reach out to parents who are unaware of cyberbullying?

Some parents will always be left out and feel helpless when it comes to dealing with cyberbullying. As with other topics, such as sexuality, not all parents feel ready to discuss it, for a variety of reasons. Schools and teachers remain the best way to ensure common knowledge and awareness of issues such as cyberbullying; that is why a universal schooling system was set up in the first place: to level the playing field and give each child the same chances in life through education. However, this is no reason to give up on parents, and we should always try to reach out to them to make them feel more concerned and involved in issues such as cyberbullying. Examples include:
– Organising parents’ evenings in schools or through organisations such as family associations.
– Presenting them with easy tools and steps to protect their children online.
– Information campaigns via magazines and newsletters from family associations, or the provision of easy tools and multimedia resources such as those delivered by the #DeleteCyberbullying project.

How do you explain the difference in awareness about cyberbullying between EU countries?

It has a lot to do with cultural differences and the environment. In some countries, such as the Scandinavian countries, topics like sexuality, violence or gender roles are openly discussed by the wider public, while in other countries, such as the southern Member States, these topics remain much more “taboo”. Such cultural differences, among many other factors, may explain the differences in attitudes towards an issue like cyberbullying. For instance, COFACE’s awareness-raising video about cyberbullying has received many comments implying that cyberbullying is not such a tragic issue; after all, it’s “just” a few online words that you can easily ignore, especially if “you are a man”.

Parents often don’t come to parents’ evenings at school. How can they be encouraged to come?

There are many strategies for securing parents’ participation, but we would like to put the focus on work-life balance. Parents and teachers lead busy lives. With both parents working, there is little time left for parenting, personal activities, social activities and household responsibilities. A better work-life balance would give parents more time to attend parents’ evenings and get more involved in their parenting, including digital parenting. COFACE carried out a full campaign on work-life balance last year.

Connect with respect


Today, children connect to the internet with mobile devices at an ever earlier age. Fast evolution in the field of ICT creates new challenges and opportunities.

While some years ago parents could still monitor their children’s use of the internet on the home computer, access to the internet has become ever more mobile. Children have, at their fingertips, access to an unprecedented wealth of information and a way to interact with the whole world. At the same time, a certain set of skills is needed to make the most of the internet. Challenges such as cyberbullying, exposure to inappropriate or harmful content, exposure to advertising, and excessive use of/time spent on the internet are real and can have enduring negative effects on children’s development.

What can we do?

In essence, keeping children safe online is the responsibility of all actors: parents, teachers, service providers, hardware manufacturers, policy makers…

At the same time, parents are the primary educators of children and in the case of young children, parents are virtually the sole reference for establishing healthy habits and adhering to core values such as respect, be it online or offline.

Children need to learn as early as possible about their rights and responsibilities and parents are among the first to initiate this learning process.

Knowledge of these two dimensions can help children put issues such as cyberbullying into better perspective and react to them better: by knowing what rights they have should they be a victim, and by keeping in mind the consequences should they be a perpetrator.

But not all parents are IT-savvy, and not all feel comfortable or capable of discussing the online world with their children. To that end, COFACE has set up a resources page on its website to reference and share good practices and resources that can help parents in their essential parenting role.

For more information, please have a look at our resources page.

Find out more about COFACE on our website www.coface-eu.org and sign up to receive our news. Find us also on Facebook and Twitter: @COFACE_EU and @dcyberbullying