
Safer Internet Forum 2016


On the 24th of November, the INSAFE/INHOPE/BIK network and the European Commission’s DG CNECT organised the 2016 edition of the Safer Internet Forum in Luxembourg under the theme “Be the change”.

The conference brought together a variety of stakeholders, including young people, parent and teacher representatives, industry and government policy makers, and civil society organisations, to discuss the ongoing challenges of achieving a “Better Internet for Kids”. As one in three Internet users is a child, it is essential to come up with sustainable strategies to tackle issues such as harmful content, commercial exploitation and cyberbullying.

Javier Hernandez-Ros, acting Director at DG CNECT, emphasized the importance of following up on the challenges and ideas identified during the Safer Internet Forum through the Alliance to better protect minors online, a DG CNECT multistakeholder group which will start its work next year and aims to address the challenges children face online.

Mary Aiken, researcher at University College Dublin, followed with a keynote speech based on her book “The Cyber Effect”, which presents findings from cyberpsychology and from behavioural and child development research in relation to technology in an accessible way.

Some of her most powerful messages include:

  • Inform policy making with quality, peer-reviewed studies in the emerging fields of cyberpsychology and child development/behaviour, from independent sources conducting research for the public good/general interest.
  • Develop guidelines for the use of ICT and the Internet based on research. Examples include banning screens for babies aged 0 to 2 and creating a “safe space” for young children online.
  • Add “cyber rights” to the United Nations Convention on the Rights of the Child.
  • Review the Internet Governance process to ensure child protection is a priority.
  • Allow no trade-offs between privacy, security and the vitality of the tech industry: all three need to be addressed equally, without one taking precedence over the others.

The commercialization of childhood

This panel session brought together academia, civil society and industry representatives to discuss children’s growing exposure to commercial content and commercial solicitations online. Two researchers from the University of Ghent underlined a number of important research findings, especially the fact that children have a hard time recognising certain “new” types of online advertising techniques, and that “labels” such as the “PP” label for product placement are not very effective in signalling to children that certain content includes advertising.

John Carr from eNACSO stressed that while regulation has cracked down on many immoral advertising practices in the offline world, such as paying children to talk to their peers about a product, it has lagged behind on online advertising. While the European Commission has relied on self-regulation, the results and impact of such self-regulation in limiting children’s exposure to advertising are less than convincing. Children should have a charter of economic rights and not be tagged simply as “vulnerable consumers”. This could potentially be achieved via a revision of the EU’s Unfair Commercial Practices Directive.

Martin Schmalzried from COFACE-Families Europe shared two recommendations on how to limit children’s exposure to advertising online:

  • New “indicators” need to be developed to enable children and parents to choose services which adopt a fair and responsible advertising policy. One such indicator is the ratio between advertising and “native” content: how many posts out of ten are advertising? What share of the screen/webpage surface is covered by advertising? Users need such indicators to compare how various online services, platforms or content providers fare in displaying advertising (see the sketch after this list).
  • Regulators should not rule out banning certain advertising techniques. As John Carr underlined, regulators have banned certain advertising techniques in the offline world on several occasions, on the grounds of unethical/unfair practices. There is no reason why online advertising should be exempt from such regulation.
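
To make the first recommendation concrete, here is a minimal Python sketch of how such an advertising-prevalence indicator could be computed. The Post structure and the idea of sampling a feed are illustrative assumptions, not any existing platform API:

    from dataclasses import dataclass

    @dataclass
    class Post:
        is_ad: bool       # flagged as sponsored/advertising content
        screen_area: int  # approximate on-screen surface of the post, in pixels

    def ad_prevalence(posts):
        # Two indicators: share of posts that are ads, and share of
        # screen surface occupied by ads.
        ads = [p for p in posts if p.is_ad]
        post_ratio = len(ads) / len(posts)
        area_ratio = sum(p.screen_area for p in ads) / sum(p.screen_area for p in posts)
        return post_ratio, area_ratio

    # A sampled feed: 3 sponsored posts out of 10, each taking more space.
    feed = [Post(False, 400)] * 7 + [Post(True, 600)] * 3
    print(ad_prevalence(feed))  # -> (0.3, ~0.39)

Published regularly for each major platform, two numbers of this kind would let families compare services at a glance.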

BIK to the future

The final panel session of the Safer Internet Forum looked to the future and the challenges ahead. Our colleague Martin Schmalzried presented 7 key areas of focus that will need to be addressed in the future:

1-Virtual reality

Exposure to VR will increase substantially over the next years as the cost of the technology drops. Some of the issues include:

  • Harassment/cyberbullying: the first instances of “virtual groping” have surfaced on the Internet in the last few weeks. The negative effects of cyberbullying, harassment and any form of harmful content/contact will be multiplied in VR settings due to the increased realism and immersiveness of VR. Studies have already shown that VR can be used successfully to treat post-traumatic stress disorder and to boost empathy; the opposite is therefore very likely true as well (it can deepen trauma and desensitization).
  • Child pornography and child abuse may also move to VR, as the combination of VR with connected sex toys and haptic feedback devices will greatly increase “realism”.
  • The collection of data in VR will raise new questions about privacy. The data generated by users could be used for advertising in VR, for instance, as advertising has proven to be more effective in VR environments.
  • Physical problems related to VR are also likely to emerge, such as eye strain, impaired depth perception (if used by young children), or injury from colliding with a “real world” object while immersed in VR.

2-Algorithms

Many controversies have surfaced about algorithms lately, notably the “filter bubble” effect and the viral nature of “fake news”. Algorithms can help tackle several problems, including cyberbullying, hate speech and the identification of fake news, but this requires a willingness from companies to develop such solutions.

Algorithms will also require increased accountability mechanisms, such as independent audits, to prevent discrimination or unfair “humanless” decisions from being carried out. Without human judgment and interpretation, algorithms are useless and may create more problems than they solve. Take “predictive policing” algorithms: while they may be successful in fighting crime, for instance by identifying the neighbourhoods where a crime is most likely to happen, the “lessons” learned from such an algorithm need human interpretation. Are Black and Latino people more likely to be criminals, or rather, are all humans struck by poverty, discrimination, desperation and exclusion more likely to commit crime? The implications of the interpretation matter greatly: in the first case, one may decide that the solution is to build more prisons; in the second, that the solution is to fight inequalities and discrimination.

Finally, algorithms deserve their own “liberalization”, moving away from the “monopoly” and control of their current owners. The data kept by Facebook and Google is, in essence, a database of billions of rows and columns which could be “searched” and “ranked” by any algorithm, not just the “proprietary” algorithms of Facebook and Google. Allowing third parties to propose “custom” algorithms, as the sketch below illustrates, might help solve many of the issues discussed above, such as “filter bubbles” and “fake news”.
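
A minimal Python sketch of the idea, assuming a deliberately simplified feed item structure (real platform data is of course far richer): the same data can be ranked by interchangeable, user-chosen functions rather than by one opaque proprietary algorithm.

    from datetime import datetime

    # Hypothetical, simplified feed items; real platform data is far richer.
    items = [
        {"title": "Holiday photos", "posted": datetime(2016, 11, 20), "likes": 50},
        {"title": "Local news",     "posted": datetime(2016, 11, 24), "likes": 5},
        {"title": "Viral video",    "posted": datetime(2016, 11, 22), "likes": 900},
    ]

    # Interchangeable ranking functions that any third party could supply:
    def chronological(item):
        return item["posted"]

    def engagement(item):
        return item["likes"]

    # The same data, "searched" and "ranked" by different algorithms.
    feed_by_date = sorted(items, key=chronological, reverse=True)
    feed_by_buzz = sorted(items, key=engagement, reverse=True)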

3-Online business models

The current online business models also bear much of the blame for the “harmful” content and “fake news” available. Fake news sites rely heavily on advertising revenue, and the advertising often takes up more space than the “content” of the fake news article itself. Users do not understand the “new” business models relying on user data, “in-app purchases” or “freemium” schemes. In the past, economies of scale meant that the more users bought a good, the cheaper it was to produce and the cheaper it could be sold, greatly benefiting consumers and society as a whole. Online, this system is broken. Normally, as more and more users subscribe to Facebook, the prevalence of advertising should drop, since Facebook should be able to “sell” its services for less advertising. But the opposite has happened! There is more and more advertising on Facebook, YouTube and many other online platforms. Because users do not understand such business models, these services can get away with extracting ever more money from users’ time (their revenue is generated by people’s time spent looking at advertising) instead of lowering their “price” (advertising prevalence), as would normally happen in a healthy, competitive offline market.

The same holds true for many other forms of digital services and content: apps don’t get cheaper as more people buy them, even though the cost of developing them stays the same!

4-Digital citizenship

More and more, we hear the term “digital citizen” as a “sexy” way to describe contemporary Internet users. However, the words “citizenship” and “citizen” are ill-chosen; the term should rather be “digital subject”. Citizenship implies that a person has the right to vote on or influence the rules and laws by which he/she is governed. On the Internet, most if not all online service providers function not as democracies but as monarchies, with terms of service and community standards written by the owners, and with little to no “rights” for their users, only obligations.

5-Artificial Intelligence

Deep learning and machine learning have seen many breakthroughs in the last decade, and many more are coming. The impact on our societies should not be underestimated: some already talk of “labour displacement” or even a permanent loss of available jobs. Humans generate more and more data through everyday mobile phone use, Internet surfing habits and emerging technologies such as VR and the Internet of Things. All this data, if structured properly, can be used to accelerate the development of AI and machine/deep learning, and the implications should not be underestimated. As the saying goes, “children are great imitators, so give them something great to imitate”; this is even truer for AI and machine/deep learning: AI is only as good as the data it works with!

6-Terrorism and radicalization

Terrorism, support of terrorism, radicalization and online recruitment have been high on the agenda of policy makers, especially since ISIS/ISIL emerged and social media began to be widely used to propagate its messages and rhetoric. The “easy” response has been to ask for increased filtering, take-downs or the outright censorship of any content promoting or supporting terrorism in one way or another.

But fighting such messages is difficult, not only because new social media accounts spreading them are created every day, but also because terrorists move to communication technologies which are harder to trace, monitor or censor, such as the private messaging app Telegram.

Unfortunately, focusing on censorship is like sweeping dust under a rug: it might help in the short term but prove counter-productive in the long term. ISIS/ISIL’s emergence is strongly linked to Europe’s colonial history and to recent US imperialism. Their propaganda is successful because it builds on accurate historical facts which have been ignored, minimized or even denied in our societies. Terrorism is also linked to poverty, social exclusion and the desire to avenge the death of loved ones (as is often the case between Palestine and Israel). The priority should be preventing terrorism by addressing inequalities and social exclusion, and by bringing to justice those responsible for the death of innocents, often in the name of human rights/democracy but in reality serving other interests.

7-Cybersecurity

News of yet another data breach and theft of millions of credit card details, user account details and the like surfaces ever more often. To be successful in the future, cybersecurity will have to be treated as a public good. The open source movement is a model in this respect: with a strong community of voluntary, engaged security researchers, open source software such as GNU/Linux stays highly secure. Proprietary security solutions, by contrast, rely on hiding code and hoping that no one will find a vulnerability or security flaw; time and again, this has proven wildly ineffective. Even “new” technologies such as the blockchain are based on crowdsourced security, as breaking it would require, among other things, controlling a majority of the network’s computing power (see the sketch below).
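
The classic result from the Bitcoin whitepaper (section 11) makes this concrete: an attacker controlling a share q of the network’s hashing power has a vanishingly small chance of rewriting history unless q approaches a majority. A small Python sketch of that formula:

    import math

    def attacker_success(q, z):
        # Probability that an attacker controlling a share q of the hashing
        # power ever catches up from z blocks behind (Bitcoin whitepaper, s.11).
        p = 1.0 - q
        if q >= p:
            return 1.0  # a majority always catches up eventually
        lam = z * q / p
        prob = 1.0
        for k in range(z + 1):
            poisson = math.exp(-lam) * lam ** k / math.factorial(k)
            prob -= poisson * (1.0 - (q / p) ** (z - k))
        return prob

    for q in (0.10, 0.30, 0.45):
        print(q, attacker_success(q, z=6))

With 10% of the hashing power, the chance of catching up from six blocks behind is roughly 0.02%; with a majority, it is a certainty. The security thus rests on no single actor controlling half the network.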

For more information about the Safer Internet Forum, please visit the official website here: https://www.eiseverywhere.com/ehome/202903/456936/

For any questions, contact Martin Schmalzried (COFACE-Families Europe): mschmalzried@coface-eu.org


Annual Data Summit “Harnessing the Power of the Digital Revolution”

On the 13th of October, Politico Europe, in partnership with Telefonica, organized the second Annual Data Summit under the heading “Harnessing the Power of the Digital Revolution”. The focus was on the opportunities that processing data could bring, as opposed to the risks, which according to Telefonica’s representative get too much of the spotlight. While acknowledging the importance of privacy, he went on to stress the benefits that Big Data could bring, such as more efficient and smart transportation/logistics, healthcare, education and access to financial services, all of which would contribute to economic growth.

COFACE-Families Europe has been very skeptical about the so-called “benefits” of Big Data. In an article entitled “Fintechs: Milking the Poor”, Families Europe denounces the illusion of improving access to financial services via Big Data driven innovations, especially in Europe.

Commission Vice-President Andrus Ansip stressed in his keynote the importance of enabling free data flows across borders, especially within the EU, speaking against the “data localization” trend whereby, in the name of privacy, data about users would have to be hosted at national level. In the European Union, there are already 50 laws on data localization across 21 Member States. This could greatly increase the cost of hosting and processing data: setting up data centers in every Member State is inefficient, as some countries’ climate and environment are better suited to optimizing data centers’ energy use. EU startups would be the most penalized by data localization, as the “big” players such as Amazon, Google and Facebook have the means to comply with data localization measures.

While data localization laws aim at addressing national security concerns as well as privacy concerns, Andrus Ansip stressed that free data flows and privacy are not incompatible. He went on to underline the necessity for guaranteeing data portability as another precondition for enabling free data flows.

COFACE-Families Europe does not deny that data can be used to serve the common good: healthcare or transportation data could help prevent or fight a number of health risks and diseases more effectively, or prevent road accidents and enable smoother, smarter traffic management. As always, the devil is in the details. The same data sets can also be used to carry out individual risk assessments and price certain citizens out of the insurance market, and this is but one of the most evident dangers. In some countries, such as China, data has also been used to identify “good” and “bad” citizens. A number of conditions therefore need to be fulfilled to ensure that data serves the common good, including the possibility for independent bodies to audit the algorithms working with the data, to check whether they contain any form of human bias or have harmful consequences (social exclusion, discrimination…) for a part of society.

In the debate around the benefits and risks of big data, COFACE-Families Europe insists on one key aspect: there is, as of yet, no objective measurement for deciding whether benefits outweigh risks/harm. How many consumers need to suffer detriment before there is a need to become concerned and intervene? 10? 100? 1000? Unfortunately, if recent cases are any guide, (macro)economic considerations trump consumer protection. In the Volkswagen Dieselgate scandal, for instance, it seems the German authorities will not penalize or fine Volkswagen, and Volkswagen does not plan to compensate consumers, at least in Europe. In essence, defending a national “champion” is deemed more important than protecting consumers from fraud. This directly fuels moral hazard, encouraging companies to reach systemic importance so as to be relatively immune from the consequences of fraud. It is thus important to strongly regulate big data, and what can be done with consumer data, rather than wait for market players to reach systemic importance, at which point it will be too late.

With regard to data localization, COFACE-Families Europe agrees that such policies carry many risks, among them limits on freedom of expression, the possibility for governments to target dissidents more easily, and forced jurisdiction (by localizing your data in one country, you are forcibly subjected to the laws of that country, which may not be to your advantage). At the same time, the concentration of data in a select few countries also has disadvantages, such as the creation of monopolies, an unequal share of the economic benefits of data, and the same threats of spying and of targeting dissidents, simply concentrated in a few countries instead of spread across the Internet.

COFACE-Families Europe advocates a balanced approach where users have the right to choose whether their data is hosted inside their country or not. While it is true that setting up data centers in certain countries is less economically efficient than in others (mostly for reasons of energy efficiency), concentrating data centers in a select few countries is simply an extension of the “comparative advantage” economic theory, which has led to massive trade imbalances between highly industrialized nations and under-developed nations solely reliant on raw resource extraction. Data hosting and data centers should be part of public infrastructure and public services, much like telephone lines, electricity, water supply or roads. Each country should offer a minimum public data hosting service alongside private hosting solutions, and users could choose, inside the services they use, whether they wish their data to be hosted in their country or not. Such debates might, however, become obsolete anyway, since decentralized data hosting solutions are currently being tested which would enable users to host their data directly on their devices, bypassing the need for centralized data centers altogether.

Justin Antonipillai from the US Department of Commerce followed Andrus Ansip and stressed in his speech his support for the open source movement, which has created many different ways to share and analyze data in an ethical way, respectful of privacy. Families Europe fully agrees with this approach.

The first panel of the conference touched upon key topics such as consumer choice, interoperability, openness and data portability. MEP Julia Reda underlined several important points which Families Europe fully supports:

  • Consumers must have the possibility to access their Internet of Things devices. For instance, the owner of a pacemaker could not access his own device to diagnose it for bugs, even though he felt something was wrong.
  • Users should have the right to move their data, but at the same time the right to data portability shouldn’t be confused with the concept of “data ownership”, and especially with the danger of transforming data into a commodity or property that can be “transferred” or “sold” like any other good. This runs directly contrary to data protection rights, as some types of data, such as highly sensitive health-related data, should never be “sold” or treated like a commodity.

COFACE-Families Europe addressed the panel during the Q&A to insist on including mesh networking capabilities in IoT. Mesh networking would allow users of IoT devices to connect directly to those devices without going through the Internet and the servers of the companies selling them. Mesh networking also allows IoT devices to talk directly to each other, enabling interoperability. At the moment, the technologies which enable mesh networking include Wi-Fi and Bluetooth, but the upcoming 5G standards, which will equip most IoT devices, also need to allow mesh networking. The sketch below illustrates the underlying idea of direct, serverless device communication.
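
As a rough illustration of the principle (not of any real IoT standard; the port number and message format are invented for the example), here is a minimal Python sketch in which a device announces itself on the local network via UDP broadcast, so that a phone or another device can find it and talk to it directly, with no cloud server in between:

    import socket

    DISCOVERY_PORT = 50000  # invented for this example

    def announce(device_name):
        # A device broadcasts its presence on the local network.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(device_name.encode(), ("<broadcast>", DISCOVERY_PORT))
        s.close()

    def listen():
        # A phone (or another device) discovers it directly, no cloud involved.
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.bind(("", DISCOVERY_PORT))
        data, addr = s.recvfrom(1024)  # blocks until an announcement arrives
        print(f"Found {data.decode()} at {addr[0]}")
        s.close()

Real mesh networking goes further, with devices relaying traffic for one another, but the key property is the same: communication does not depend on the manufacturer’s servers.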

For more information about the event, please visit the Politico website here.

 

The digital age as an opportunity to improve youth mental health


On the 10th of October, marking World Mental Health Day, Mental Health Europe held a conference on mental health in the digital age. Experts from industry and representatives from civil society, including Youth Mental Health Ambassador Nikki Mattocks, gathered to share expertise and experience on how to prevent harm, protect young people and improve youth mental health online.

COFACE-Families Europe was represented on the panel by Martin Schmalzried, who presented the #DeleteCyberbullying project and lessons learned.

The #DeleteCyberbullying project ended in 2014 with key deliverables including an Android app, an awareness-raising video, an online virtual march and the outcomes of a European conference on the topic of cyberbullying. Besides the expertise gathered on how best to tackle cyberbullying, one very interesting lesson came from the comments users left on the awareness-raising video. These reflected the many “myths” surrounding cyberbullying in the minds of regular users, showing that we are still a long way from ensuring that end users understand the phenomenon and are equipped to respond adequately.

Some of the most important “myths” surrounding cyberbullying include:

  • The belief that you can simply turn off the technology on which you experience cyberbullying, or disconnect/close your online accounts. In reality, not only does the cyberbullying continue, it is even worse, as you have no idea how many hateful messages or humiliating pictures of you are circulating behind your back. Even a child who does not use technology at all can be a victim of cyberbullying, for example if a bully opens a “fake” account using humiliating photos of that child.
  • Reducing the solution to a single action like blocking the bully. While blocking is indeed part of the response to cyberbullying, it is by no means an all-encompassing solution. As explained above, cyberbullying can also happen behind a person’s back.
  • “Everyone gets cyberbullied, don’t be such a pussy and toughen up”: the idea that cyberbullying, or bullying for that matter, is simply part of “life” and one has to toughen up. While it is true that the line between “teasing” and “cyberbullying” is subjective, this belief virtually legitimizes all forms of bullying/cyberbullying, including the most serious, even criminal forms (like sharing sexual material of underage children to humiliate them). A healthy society shouldn’t be built on the premise that everyone will get bullied; it should rather strengthen social skills, including social and emotional learning and the development of empathy, to prevent such actions in the first place. Finally, it is always easy and convenient for the wolf to advise the sheep to “grow some teeth”.
  • “Asking the bully to stop will only make things worse”. This may very well be true. Unfortunately, if the bullying/cyberbullying becomes unbearable and the victim seeks external help from a higher authority such as a teacher or the police, the very first thing they will be asked is whether they have told the perpetrators that their actions are hurtful and should stop. This step is therefore a precondition for seeking further help rather than an end in itself.

Finally, Martin Schmalzried underlined that, as cyberbullying is getting worse according to the statistics from the latest LSE study, policy makers need to envisage broader measures than education. The online environment also plays a role in the uptake of cyberbullying. Online service providers treat their users like subjects rather than citizens, with no right to agency over the services they use. Moderation is taken out of users’ hands and managed by an obscure cloud of professional moderators who cannot possibly respond to every cyberbullying situation in a timely fashion, busy as they are taking down the content which might get them in legal trouble (copyrighted material, child abuse/exploitation/pornography…).

COFACE-Families Europe has been calling for community-based moderation, where users themselves have the right to act on and shape the services they use. This might not only help curb cyberbullying and hate speech; it is a fundamental necessity for cultivating the values of democracy, deliberation, participation and compromise, as it requires a community to debate and agree on the rules by which it is governed. Successful examples of community-based moderation include Wikipedia, which has been built and populated by users themselves. As a final point, it must be stressed that community-based moderation cannot, by any means, be equated with counter-speech, which simply sends “support” messages to a victim without granting any right to agency or participation in governing their online services.

More information about the event here.

Facebook and Periscope introduce new strategies to fight hate speech

by Martin Schmalzried

It’s happening. COFACE has long been asking for new strategies to fight hate speech and cyberbullying, including the use of algorithms and Artificial Intelligence to assist human moderators, and recourse to community-based moderation where users can vote on taking content down.

Facebook has been steadily developing and training algorithms to help identify offensive photos. At this point, algorithms report more potentially offensive photos than humans do! Twitter, which has been attacked in the past over its lackluster ability to fight abuse, has also invested in Artificial Intelligence to help weed out abusive content. The help and assistance of AI is welcome, especially since relying on human moderation alone comes with many problems, such as slow reaction times and even negative psychological consequences for the human moderators, who are forced to look at the worst content humanity has to offer.

Progress in the development of such algorithms could benefit all online service providers, as Facebook has vowed to share its findings more broadly [1].

On the other hand, Periscope (owned by Twitter) is rolling out another approach to moderation, adapted to live streaming with its instant feedback and user interaction: a form of live community-based moderation. Viewers can immediately report a comment they deem abusive during a live stream. The app then randomly selects a “jury” from within the audience, which votes on whether the comment is abusive/inappropriate. Should the “jury” vote to censor the comment, its author is temporarily prevented from posting. If the author repeats the offense, he/she is blocked from commenting for the rest of the live stream [2].
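
A minimal Python sketch of that flow as described above (the jury size, the majority rule and the judge method are illustrative assumptions, not Periscope’s actual implementation):

    import random

    class Viewer:
        def judge(self, comment):
            # Placeholder: in the real app each juror votes through the UI.
            return "abusive" in comment.lower()

    def moderate(comment, author, audience, offenders, jury_size=5):
        jury = random.sample(audience, min(jury_size, len(audience)))
        votes = sum(1 for juror in jury if juror.judge(comment))
        if votes > len(jury) / 2:        # majority finds it abusive
            if author in offenders:      # repeat offence
                return "blocked for the rest of the stream"
            offenders.add(author)
            return "temporarily muted"
        return "comment kept"

    audience = [Viewer() for _ in range(100)]
    offenders = set()
    print(moderate("you are abusive", "user42", audience, offenders))  # muted
    print(moderate("still abusive",   "user42", audience, offenders))  # blocked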

Although such initiatives are long overdue, COFACE welcomes their introduction and will closely follow their development, hopefully contributing to a better online environment for children and families.

Child Safety Summit in Dublin: Google and Facebook team up

by Martin Schmalzried, Senior Policy and Advocacy Officer @COFACE_EU

On the 14th and 15th of April, Google and Facebook jointly organized a Child Safety Summit, giving participants an insight into key developments in their respective policies and initiatives around child safety online. Both presented their community policies and the tools which help users stay safe by controlling their privacy settings, blocking, reporting and many other features. A notable development is the growing effort to make it as easy as possible for users to check the safety-related settings on their accounts: Facebook with its “privacy check-up” tool, which takes the user through an easy review of his/her safety settings, and Google with the “My Account” feature, which centralizes settings across multiple Google services.

Both Facebook and Google stressed that, rather than more regulation, child protection stakeholders including NGOs should engage more with key industry players in a spirit of self- and co-regulation. Google and Facebook underlined that they do not stop at “compliance” but constantly innovate and upgrade their safety features, fueled in part by feedback from NGOs (which was also one of the reasons for holding such a Child Safety Summit).

I attended both days and participated in the panel about the General Data Protection Regulation (GDPR) and the controversy surrounding the “age of consent” for data processing.

GDPR: teenagers’ worst nightmare?

During the panel, several speakers underlined their disappointment at the regulatory process through which the current age limit for data processing was set, at the negative impact it may have on teenagers (being excluded from social networks, forced to lie about their age, or having to pester their parents for consent, violating their right to privacy), and at the likely end result: a fragmented environment for any company or actor processing data in the EU.

From COFACE’s perspective, the issue of consent is only a very small part of the Regulation. In effect, the debate about which age should be the “limit” for consenting to data processing, and for requiring parental consent, has been misrepresented and misunderstood. Furthermore, consent as such is controversial: users typically never read Terms of Service (ToS) and click away at anything that pops up just to access the service. Looking at whether ToS are fair in the first place is therefore more important than debating the age at which one can give consent… Perhaps the answer is “at no age”, given that no one reads ToS!

Many actors claimed that a limit set at 16 is the same as forbidding teenagers from accessing social networks or the Internet without parental consent, or that teenagers and children would be more vulnerable online since they would use services anonymously. These are overly simplistic interpretations of the Regulation. Teenagers below the age of 16 would only require parental consent for services which process their data; the intention of the lawmakers in this instance was to protect teenagers and children from the commercial exploitation of their data and from over-exposure to commercial messages/marketing, since online advertising now relies on processing a high degree of data to personalize advertising. Also, anonymity is an issue completely separate from the debate around data processing. A user can very well use his/her genuine name and post genuine pictures of him/herself without any “data processing”. Conversely, it is easy to pretend to be someone else and use a nickname on services which rely on heavy data processing (including Facebook); the proof is simply the number of children under the age of 13 currently on such services! Finally, the claim that one is “safer” in environments where data can be processed is also overly simplistic: it depends on the definition of “data processing”. Does it include monitoring, reporting and online moderation?

The end result of the Regulation for teenagers’ access to online services can be illustrated by three scenarios:

In the first, online services refrain from processing the data of teenagers, thereby allowing them access without parental consent (meaning that any algorithms processing personal data would be disabled, and content would be sorted automatically, for instance by date posted). This, unfortunately, is highly unlikely: targeted advertising is the business model on which most of these services rely, so interpreting and applying the Regulation in such a manner would effectively deprive them of around 20% of their revenue.

In the second, online services set the “cut-off” age for using their services at 16, pretend that no one under 16 uses their service, and engage in a selective “witch hunt” of underage accounts, closing them at random. This is possibly the worst outcome for both teenagers and online services: teenagers would need to lie about their age, would therefore not benefit from the “protection” from certain types of advertising or content, and would risk having their account closed if identified as being under 16.

In the third scenario, online services set up a “parental consent” mechanism and teenagers need to pester their parents to get access to such online services. This, too, is a rather negative outcome for teenagers and their rights to privacy, freedom…

In the end, the “blame game” has mostly pinned the failure on the Council, which should have known that online services would never refrain from processing the data of under-16s and thereby renounce around 20% of their revenue…

From COFACE’s side, we underline the necessity to reflect on a key question: how can we strike a better balance between the prevailing business model centered on advertising, data processing and profiling and the necessity to protect children and teenagers from the commercial use of their data and from advertising and marketing?

Besides, perhaps another wish of the regulators is to ensure that teenagers below the age of 16 experience an unfiltered Internet instead of the Internet “bubble” which displays only content that users already like. Services like Instagram used to sort the pictures displayed according to the time they were posted, not based on an algorithm. By the same token, children and teenagers below the age of 16 should have the right to access information with as little “algorithmic bias” as possible. Many users have expressed their discontent at Facebook’s decision to apply algorithmic sorting to Instagram feeds, so the question of displaying information in a neutral way goes far beyond the debate about teenagers.

Technology will save the World

With the immense success, influence and power of online services like Facebook and Google, it is no wonder that their actions can greatly affect the world we live in. Among their projects are developments in Virtual and Augmented Reality, Artificial Intelligence and Machine Learning, and even connecting the poorest regions of the world, such as parts of Africa, to the Internet.

While the benefits such innovations can provide are real and can make a difference for people, they also raise ethical and political questions about private companies impinging on the public interest and the role of States.

To illustrate the issue at stake, consider the example of Bitcoin. The Bitcoin virtual currency has been identified by many academics and technophiles as a way to store value for people living in countries where the rule of law is weak and the local currency suffers from high volatility. At the same time, it can further weaken the local currency and delay much-needed pressure for strengthening the rule of law through political action.

In a similar way, private companies providing Internet access to people may delay investment in and development of public infrastructure, and put those companies in a monopolistic position in these countries, with the power to shape what the people accessing their “version” of the Internet can see (such as access via the “internet.org” portal).

When governments set up infrastructure, the population sees it as the realization of their rights and of the public good and general interest, based on the social contract they have with the State. When a private company invests in infrastructure, it is seen as an act of charity for which people should be thankful. In essence, infrastructure moves from being a human right to a company-provided privilege.

Far from arguing that these projects make no difference to people’s lives on the ground, there may be a better balance to strike between the private and public interest: for instance, aligning private investment with public investment to encourage governments to develop IT and telecommunication infrastructure and prevent a private company from fully controlling Internet access, and also setting criteria for providing Internet access, such as ensuring that users are not forced to access the Internet via a portal imposed by a private company.

Silence about VR

Although we are at the cusp of a VR revolution, with many implications for child safety, neither company mentioned VR or their initial reflections on child safety in VR spaces. When COFACE asked whether Facebook intended to consult with civil society and child protection NGOs in advance, the response was that it was “too early” for such discussions. A day earlier, Facebook had hosted its F8 conference, presenting social VR, which enables two or more people to visit real places in VR as avatars, and hinting at “face scanning” technology to create realistic-looking avatars.

This is very surprising given that the first Oculus Rift devices are set to ship in the coming weeks, and that children all over the world might start to experience VR in households equipped with an Oculus device, or even with a VR-capable smartphone coupled with a compatible Google Cardboard or Samsung Gear VR headset.

So far, VR has shown great promise in clinical and research settings, helping to develop empathy or fight addiction. VR is so immersive that it successfully tricks the brain into believing it is real, which is also why simulated experiences have a very “real” impact on users. Of course, no research or clinical trial will ever attempt to traumatize users as an experiment, but one can only assume that since VR has such a “positive” impact in experiments aimed at resolving issues like post-traumatic stress disorder, it can have a similarly powerful “negative” impact.

Artificial Intelligence to the rescue

The ongoing fight against child abuse has already received much help from technologies such as PhotoDNA or video hashing, enabling machines to identify and quickly take down known images or videos of child abuse. Advances in artificial intelligence (AI) might, in the near future, make it possible to recognize previously unreferenced content portraying child abuse, thus tackling one of the most problematic aspects of fighting child abuse: keeping up with the constant upload of “new” material. The sketch below illustrates the hashing idea on which tools like PhotoDNA build.
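
PhotoDNA itself is proprietary, but the general idea of robust (perceptual) image hashing can be illustrated with the classic “average hash” in a few lines of Python (using the Pillow imaging library). Unlike a cryptographic hash, it changes only slightly when an image is resized or recompressed, so known material can still be matched:

    from PIL import Image  # Pillow

    def average_hash(path, size=8):
        # Shrink to 8x8 greyscale and threshold on the mean brightness:
        # small edits (resizing, recompression) barely change the bits.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return sum(1 << i for i, px in enumerate(pixels) if px > mean)

    def distance(h1, h2):
        # Number of differing bits; a small distance suggests the same picture.
        return bin(h1 ^ h2).count("1")

    # Matching an upload against a database of hashes of known illegal material:
    # if distance(average_hash(upload), known_hash) is small, flag for takedown.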

But AI could also have many applications outside the scope of child abuse and copyright infringement. Terrorism, hate speech, cyberbullying and even suicidal thoughts could be picked up and flagged by AI. One interesting application would be to display a message to potential victims of cyberbullying or hate speech, prompting them to report the content or asking them if they need help. At this stage, there are many challenges for this to happen:

– The priority given to the development of such an AI vs. all the other potential applications (like identifying the contents of pictures…).

– Legal barriers to keeping sufficient amounts of data (such as millions of messages flagged as being cyberbullying or hate speech) to “train” the AI. Under current regulations, companies like Facebook or Google are not allowed to keep any data from users that has been flagged and deleted because it infringed upon their community guidelines (save for very sensitive data such as elements of proof for a criminal investigation).

– The complexity of training such an AI since it has to rely on a very wide number of parameters and learn to differentiate, based on the context, whether a message is meant as a joke or as a real threat or insult.

Nevertheless, given the effort already invested in algorithms which display “context-appropriate” ads, investing in the development of an AI which could proactively prompt victims of hate speech or cyberbullying for help, or flag content early for moderators to review, could be a game changer in the fight against online harassment, cyberbullying, hate speech, etc. A toy sketch of the flagging step follows.
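
To make the idea concrete, here is a deliberately tiny Python sketch of the classification-and-prompt step, using scikit-learn on a handful of invented messages; a real system would need millions of labelled examples and the contextual signals discussed above:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented toy data; a real system needs millions of labelled messages.
    messages = ["you are a great friend", "see you at practice tomorrow",
                "nobody likes you, just disappear", "everyone hates you, loser"]
    labels = [0, 0, 1, 1]  # 1 = harassment

    vectorizer = TfidfVectorizer()
    model = LogisticRegression().fit(vectorizer.fit_transform(messages), labels)

    def flag(text, threshold=0.7):
        # Above the threshold: prompt the recipient for help and/or queue the
        # message for human moderators to review.
        prob = model.predict_proba(vectorizer.transform([text]))[0, 1]
        return prob >= threshold, round(prob, 2)

    print(flag("nobody likes you, loser"))

The hard part is not this mechanical step but the context: the same words can be a joke between friends or a real threat, which is exactly the differentiation challenge listed above.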

More services designed for kids

As more and more actors including NGOs and policy makers worry about the exposure of children to inappropriate content, commercial exploitation, online abuse and other dangers, especially on large online communities which are designed for adults, services designed for children are starting to emerge.

YouTube Kids is a good example of such a trend. COFACE has been in favor of such a development, especially for very young children, making sure that they have access to quality, positive content which is age appropriate in a safe setting.

However, there are still a number of issues which need to be resolved:

– The choice of business model behind such services is even more critical and sensitive than for “regular” services. While making parents pay a monthly fee risks excluding the most deprived and vulnerable, a model based on advertising has to abide by very strict rules to ensure that the balance between commercial content/advertising and “regular” content is fair and proportionate. At the time of writing, YouTube Kids is still struggling with “grey” zones in terms of content. While “formal” advertisements have to comply with national legislation on what can be advertised to children (no food advertising, for instance), the user-generated videos themselves feature many unboxing videos of toys and stories/cartoons based on commercial content (for instance, Hot Wheels cars). A potential solution is to “flag” such content as “commercial”, to make sure children understand that there may be a commercial objective behind it. Creating a separate category for such “commercial” content should also be envisaged, as having a Hot Wheels video in the “learning” category is greatly misleading!

– The “sorting” algorithm which decides what is featured on the home page and in search results needs to be tweaked to give more visibility to “positive” content based on a certain number of criteria (a possible link could be made with the POSCON project). At the moment, a search for “puppies”, for instance, returns many videos featuring unboxings of products and toys with puppies on them. Although these videos do indeed correspond to the search term, they should not appear in the first few pages of the search results.

– Parental controls on YouTube Kids could be enhanced further, giving parents the possibility to select which “types” of videos should be made more visible on the YouTube Kids homepage. For instance, some parents would prefer the “learning” videos to appear more prominently on the homepage than other categories.

Users in control

Once again, the danger of algorithms restricting user experiences was raised. This is especially important for children, as displaying only what they already like rather than content that “challenges” their views may hamper their development, creating a “filter bubble” and, in the longer term, a “filter tunnel”.

User control over algorithms is therefore of utmost importance: users should be allowed to decide whether they prefer an “unfiltered” news feed which sorts content by date, or a “neutral” search result which does not rely on their previous searches.

User control has already been enhanced in many ways, by boosting control over privacy settings or providing more granularity for parents setting up parental controls. In that area too, more can be done. For instance, parents could benefit from parental control restrictions which only display apps without advertising or with a transparent pricing policy. While Google has added a feature which blocks in-app purchases and requires a parental PIN, the poor cost transparency of games based on in-app purchases may push parents to filter them out completely. COFACE has been advocating for more transparency in pricing, for instance by displaying the “average” cost of a game based on players’ spending patterns and time spent playing; the sketch below shows how simple such an indicator could be.
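
A minimal Python sketch of that proposed transparency indicator, assuming hypothetical per-player totals that only a publisher or app store would actually hold:

    def average_cost(spend, hours):
        # spend[i]: total in-app spending of player i; hours[i]: time played.
        # Returns the average total cost per player and the average cost
        # per hour of play.
        players = len(spend)
        avg_total = sum(spend) / players
        avg_hourly = sum(s / h for s, h in zip(spend, hours) if h > 0) / players
        return avg_total, avg_hourly

    # A "free" game where a few heavy spenders pull the real average up:
    spend = [0, 0, 0, 4.99, 120.00]
    hours = [2, 10, 1, 30, 200]
    print(average_cost(spend, hours))  # -> (~25.00, ~0.15)

Displayed next to the “Free” label on a store page, numbers like these would tell parents what a game really tends to cost.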

Finally, user control depends greatly on other things, such as the quality of ratings and content classification. One participant rightfully pointed out that the “Movie Star Planet” app is rated 3 years and above on the Google Play Store, even though the app allows interactions between users (children) with much “girlfriend” and “boyfriend” talk (bordering on sexual talk) and a potential grooming issue. Google sets ratings based on a publisher’s responses to a series of questions. If the publishers of “Movie Star Planet” state that there is user interaction but that their app includes human monitoring and moderation, the system will automatically assume that the app is safe for kids. But this relies on the trust and good faith of publishers to answer the questions correctly and honestly. Further reflection is thus needed on how to ensure that apps, and content in general, are classified correctly.

Education, not a silver bullet but…

While everyone agrees education is not a silver bullet, it certainly looks very much like one…  The resounding quote from Jim Gamble (INEQE) still sticks in my mind: “it’s not about context, it’s about people”.  In essence, if we “change” people, educate them, inform them, then no matter if the Internet is a right mess, no harm could ever come to them.  The “context” is like a huge mountain, it’s pointless to try and dig a tunnel under it, we should focus on learning how to climb it.  Or should we?

It goes without saying that education is very important, but for every effort to educate children and adults alike, a similar effort needs to be made on safety by design, privacy by design, rethinking how online services operate.

There are also limits to what companies can educate children about.  For instance, teaching prevention about cyberbullying poses no issue at all, since it is in the interest of companies to minimize incidences of cyberbullying on their services.  On the other hand, when advertisers teach about media literacy and critical thinking, especially about advertising, there are strong reasons to question the quality and impact of the educational material.  MediaSmart, for instance, was developed by the World Federation of Advertisers.  It features many lesson plans centered on making children design and develop their own ads, an activity which shines a positive light on ad-making.  Furthermore, the only “real” examples of ads proposed for analysis are all very positive and the “negative” examples of misleading ads are all fictional. COFACE has developed a media literacy education tool for parents, Nutri-médias, which covers to a much greater extent the different advertising techniques and real examples of advertising and the controversies surrounding them (gender stereotypes, healthy eating habits…).

All in all, for certain educational activities such as learning how to code or how to develop empathy, prevent cyberbullying, there are no issues in having civil society, NGOs, private companies and public authorities working together.  On other topics such as Big Data and the potential impact on society or how online business models contribute to shaping the Internet, independence of NGOs and governments is key as private companies have a vested interest in presenting these topics in a certain light.

Self-regulation and co-regulation

To finish, the two-day event did convey the message that both Facebook and Google are open to suggestions and welcome criticism on how to make their online services safer for children. MEP Anna Maria Corazza Bildt underlined that working with private companies is key to ensuring that children stay safe online, especially on questions like the processing of children’s data for commercial purposes and children’s exposure to commercial content.

COFACE welcomes any such initiative but insists that a legal backstop and a proper, independent monitoring and evaluation of progress made are a necessity.

More information about the event is available here and on Facebook.


Connect with respect


Today, children connect to the internet with mobile devices at an ever earlier age. Fast evolution in the field of ICT creates new challenges and opportunities.

While some years ago parents could still monitor their children’s use of the internet on the home computer, access to the internet has become ever more mobile. Children have at their fingertips access to an unprecedented wealth of information and a way to interact with the whole world. At the same time, a certain set of skills is needed to make the most of the internet. Challenges such as cyberbullying, exposure to inappropriate or harmful content, exposure to advertising and excessive time spent on the internet are real and can have enduring negative effects on children’s development.

What can we do?

In essence, keeping children safe online is the responsibility of all actors: parents, teachers, service providers, hardware manufacturers, policy makers…

At the same time, parents are the primary educators of children and in the case of young children, parents are virtually the sole reference for establishing healthy habits and adhering to core values such as respect, be it online or offline.

Children need to learn as early as possible about their rights and responsibilities and parents are among the first to initiate this learning process.

Knowledge of these two dimensions can help children put issues such as cyberbullying into better perspective and react to them better, by knowing what rights they have should they be a victim, and by keeping in mind the consequences should they be a perpetrator.

But not all parents are IT-savvy, and some do not feel comfortable or capable of discussing the online world with their children. To that end, COFACE has set up a resources page on its website to reference and share good practices and resources that can help parents in their essential parenting role.

For more information, please have a look at our resources page.

Find out more about COFACE on our website www.coface-eu.org and sign up to receive our news. Find us also on Facebook and Twitter: @COFACE_EU and @dcyberbullying