On 24 November, the INSAFE/INHOPE/BIK network and the European Commission's DG CNECT organised the 2016 edition of the Safer Internet Forum in Luxembourg under the theme "Be the change".
The conference brought together a variety of stakeholders, including young people, parent and teacher representatives, industry and government policy makers and civil society organisations, to discuss the ongoing challenges of achieving a "Better Internet for Kids". As one in three Internet users is a child, it is essential to come up with sustainable strategies to tackle issues such as harmful content, commercial exploitation and cyberbullying.
Javier Hernandez-Ros, acting Director of DG CNECT, emphasised the importance of following up on the challenges and ideas identified during the Safer Internet Forum through the Alliance to better protect minors online, a DG CNECT multistakeholder group which will start its work next year and aims to address the challenges children face online.
Mary Aiken, researcher at University College Dublin, followed with a keynote speech based on her book "The Cyber Effect", which presents findings from cyber-psychology and from behavioural and child development research in relation to technology in an accessible way.
Some of her most powerful messages include:
- Informing policy making with quality peer-reviewed studies in the emerging fields of cyber-psychology and child development/behaviour, from independent sources conducting research in the public/general interest.
- Developing research-based guidelines for the use of ICT and the Internet. Examples include banning screens for babies aged 0 to 2 and creating a "safe space" for young children online.
- Adding "cyber rights" to the United Nations Convention on the Rights of the Child.
- Reviewing the Internet governance process to ensure child protection is a priority.
- Ensuring there are no trade-offs between privacy, security and the vitality of the tech industry: all three need to be addressed equally, without one taking precedence over the others.
The commercialization of childhood
This panel session brought together academia, civil society and industry representatives to discuss children's growing exposure to commercial content and commercial solicitations online. Two researchers from the University of Ghent underlined a number of important research findings, in particular that children have a hard time recognising certain "new" types of online advertising techniques, and that "labels" such as the "PP" label used to flag product placement are not very effective in signalling to children that certain content includes advertising.
John Carr from eNACSO stressed that while regulation has cracked down on many immoral advertising practices in the offline world, such as paying children to talk to their peers about a product in real life, regulation has lagged behind on online advertising. While the European Commission has relied on self-regulation, the results and impact of such self-regulation in limiting children's exposure to advertising are less than convincing. Children should have a charter of economic rights and not simply be tagged as "vulnerable consumers". This could potentially be achieved via a revision of the EU's Unfair Commercial Practices Directive.
Martin Schmalzried from COFACE-Families Europe shared two recommendations on how to limit children’s exposure to advertising online:
- New "indicators" need to be developed to enable children and parents to choose services which adopt a fair and responsible advertising policy. One such indicator is the ratio between advertising and "native" content: how many posts out of 10 are advertising? What "surface" of the screen/webpage is covered by advertising? Users need indicators to compare how various online services, platforms or content providers fare in displaying advertising.
- Regulators should not rule out banning certain advertising techniques. As John Carr underlined, regulators have banned certain advertising techniques in the offline world on several occasions on the grounds of unethical/unfair practices. There is no reason why online advertising should be exempt from such regulation.
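The advertising-prevalence indicators suggested above could be computed quite simply. The sketch below (a hypothetical illustration; the field names `is_ad` and `pixel_area` are assumptions, not any platform's actual data model) derives both the posts-per-10 ratio and the screen-surface share from a sample of feed items:

```python
# Hypothetical sketch: computing simple "advertising prevalence" indicators
# for a feed. Field names (is_ad, pixel_area) are illustrative assumptions.

def ad_indicators(feed_items):
    """Return the share of ads by item count and by screen surface."""
    total = len(feed_items)
    ads = [item for item in feed_items if item["is_ad"]]
    count_ratio = len(ads) / total if total else 0.0
    total_area = sum(item["pixel_area"] for item in feed_items)
    ad_area = sum(item["pixel_area"] for item in ads)
    surface_ratio = ad_area / total_area if total_area else 0.0
    return {"ads_per_10_posts": round(10 * count_ratio, 1),
            "surface_share": round(surface_ratio, 2)}

# A made-up five-item feed: 2 of 5 posts are ads, covering 600 of 2000 pixels.
sample_feed = [
    {"is_ad": False, "pixel_area": 300},
    {"is_ad": True,  "pixel_area": 200},
    {"is_ad": False, "pixel_area": 500},
    {"is_ad": True,  "pixel_area": 400},
    {"is_ad": False, "pixel_area": 600},
]
print(ad_indicators(sample_feed))  # -> {'ads_per_10_posts': 4.0, 'surface_share': 0.3}
```

Published regularly, such figures would let users compare at a glance how ad-heavy competing services are.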
BIK to the future
The final panel session of the Safer Internet Forum looked to the future and the challenges ahead. Our colleague Martin Schmalzried presented 7 key areas of focus that will need to be addressed in the future:
1-Virtual Reality
Exposure to VR will increase substantially over the coming years as the cost of the technology drops. Some of the issues include:
- Harassment/cyberbullying: the first instances of "virtual groping" have surfaced on the Internet in recent weeks. The negative effects of cyberbullying, harassment and any form of harmful content/contact will be multiplied in VR settings due to the increased realism and immersiveness of VR. Studies have already shown that VR can be used successfully to treat post-traumatic stress disorder and to boost empathy; the opposite is therefore very likely true as well (it can deepen trauma and desensitisation).
- Child pornography and child abuse may also move to VR as the combination of VR with connected sex toys and haptic feedback devices will greatly increase “realism”.
- The collection of data in VR will raise new questions about privacy. The data generated by users could be used for advertising in VR, for instance, as advertising has proven to be more effective in VR environments.
- Physical problems related to VR are also likely to emerge, such as eye strain, impaired depth of vision (if used by young children), or injury from colliding with a "real world" object while immersed in VR.
2-Algorithms
Many controversies have surfaced about algorithms lately, notably the "filter bubble" effect and the viral nature of "fake news". Algorithms can help tackle several problems, including cyberbullying, hate speech and the identification of fake news, but this requires a willingness on the part of companies to develop such solutions.
Algorithms will also require increased accountability mechanisms, such as independent audits, to avoid discrimination or unfair "humanless" decisions being carried out. Without human judgment and interpretation, algorithms are useless and may create more problems than they solve. Take "predictive policing" algorithms: while they may be successful in fighting crime, for instance by identifying the neighbourhoods where a crime is most likely to happen, the "lessons" learned from such algorithms need human interpretation. Are all "blacks and latinos" more likely to be criminals, or rather, are all humans struck by poverty, discrimination, desperation and exclusion more likely to commit crime? The implications of the interpretation are highly important: in the first case, one may decide that the solution is to build more prisons; in the second, that the solution is to fight inequalities and discrimination.
Finally, algorithms deserve their own "liberalisation", moving away from the "monopoly" and control of their current owners. The data kept by Facebook and Google is essentially a billion-row-and-column database which could be "searched" and "ranked" by any algorithm, not just Facebook's or Google's "proprietary" ones. Allowing third parties to propose "custom" algorithms might help solve many of the issues discussed above, such as "filter bubbles" and "fake news".
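A minimal sketch of what such "algorithm liberalisation" could look like: the same pool of posts ranked by interchangeable, user-chosen scoring functions rather than a single proprietary one. All post data, field names and scoring criteria below are illustrative assumptions, not any platform's real API:

```python
# Illustrative sketch: one content pool, several pluggable ranking
# algorithms the user could choose between (all data is made up).

posts = [
    {"title": "Funny cat video", "clicks": 5000, "source_reputation": 0.2},
    {"title": "Nature documentary", "clicks": 500, "source_reputation": 0.9},
    {"title": "Local news report", "clicks": 1200, "source_reputation": 0.8},
]

def rank_by_engagement(post):
    # What an ad-driven platform typically optimises for.
    return post["clicks"]

def rank_by_reputation(post):
    # A hypothetical third-party alternative weighting source trustworthiness.
    return post["source_reputation"]

def build_feed(posts, scoring_fn):
    """Sort the same pool of posts with whichever algorithm the user picked."""
    return [p["title"] for p in sorted(posts, key=scoring_fn, reverse=True)]

print(build_feed(posts, rank_by_engagement))  # engagement-first feed
print(build_feed(posts, rank_by_reputation))  # reputation-first feed
```

The point of the sketch is that the feed-building step is trivially separable from the data: nothing technical forces a single proprietary ranking.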
3-Online business models
The current online business models also bear much of the blame for the "harmful" content and "fake news" available. Fake news relies heavily on advertising revenue; advertising often takes up more space than the "content" of the fake news article itself. Users do not understand "new" business models relying on user data, "in-app purchases" or "freemium" offers. In the past, economies of scale meant that the more users bought a good, the cheaper it was to produce and the cheaper it could be sold, greatly benefiting consumers and society as a whole. Online, this system is broken. Normally, with more and more users subscribing to Facebook, the prevalence of advertising should be dropping, since Facebook should be able to "sell" its services for less advertising. The opposite has happened: there is more and more advertising on Facebook, YouTube and many other online platforms. Because users do not understand such business models, these services can get away with extracting more and more money from users' time (since their revenue is generated by people's time spent looking at advertising) instead of lowering their "price" (advertising prevalence), as would normally happen in a healthy competitive environment in the offline world.
The same holds true for many other forms of digital services and content: apps don't get cheaper as more people buy them, even though the cost of developing them stays the same!
4-Digital citizenship
More and more, we hear the term "digital citizen", a "sexy" way to describe contemporary Internet users. However, the words "citizenship" and "citizen" are ill-chosen; the term should rather be "digital subject". Citizenship implies that a person has a right to vote or to influence the rules and laws by which he/she is governed. On the Internet, most if not all online service providers function not as democracies but as monarchies, with terms of service and community standards written by the owners and with little to no "rights" for their users, only obligations.
5-Artificial Intelligence
Deep learning and machine learning have seen many breakthroughs in the last decade, and many more are coming. The impact on our societies should not be underestimated: some are already talking about "labour displacement" or even a permanent loss of available jobs. Humans generate more and more data through everyday mobile phone use, Internet surfing habits and emerging technologies such as VR and the Internet of Things. All this data, if structured properly, can be used to accelerate the development of AI and machine/deep learning. As the saying goes, "children are great imitators, so give them something great to imitate"; this is even truer for AI and machine/deep learning: AI is only as good as the data it works with!
6-Terrorism and radicalization
Terrorism, support of terrorism, radicalisation and online recruitment have been high on the agenda of policy makers, especially since ISIS/ISIL emerged and social media began to be widely used to propagate its messages and rhetoric. The "easy" response has been to call for increased filtering, take-down or outright censorship of any content promoting or supporting terrorism in one way or another.
Fighting such messages is difficult not only because new social media accounts sharing them are created every day, but also because terrorists are moving to communication technologies which are harder to trace, monitor or censor, such as the private messaging app Telegram.
Unfortunately, focusing on censorship is like sweeping dust under a rug: it might help in the short term, but in the long term it is counter-productive. ISIS/ISIL's emergence is strongly linked to Europe's colonial history and recent US imperialism. Their propaganda is successful because it builds on accurate historical facts which have been ignored, minimised or even denied in our societies. Terrorism is also linked to poverty, social exclusion and the search for vengeance for the death of loved ones (as is often the case between Palestine and Israel). The priority should be to prevent terrorism by addressing inequalities and social exclusion, and by bringing to justice those responsible for the death of innocents, often in the name of human rights/democracy but in reality serving other interests.
7-Cybersecurity
News about yet another data breach and the theft of millions of credit card details, user account details and the like surfaces more and more often. To be successful in the future, cybersecurity will have to be treated as a public good. The open source movement is a model in this respect: with a strong community of voluntary, engaged security researchers, open source software such as GNU/Linux stays highly secure. Proprietary security solutions, by contrast, rely on hiding code and hoping that no one will find a vulnerability or security flaw; time and again, this has proven wildly ineffective. Even "new" technologies such as blockchain are based on crowdsourced security, as breaking one would require, among other things, taking control of the majority of the computing power on which the blockchain runs.
For more information about the Safer Internet Forum, please visit the official website here: https://www.eiseverywhere.com/ehome/202903/456936/
For any questions, contact Martin Schmalzried (COFACE-Families Europe): firstname.lastname@example.org
On the 21st and 22nd of September, the ENABLE (European Network Against Bullying in Learning and Leisure Environments) partnership held the final event of the European Project. COFACE Families Europe has been involved since the beginning of the project as a member of the Think Tank, bringing some of the expertise gathered from the #DeleteCyberbullying project and promoting the ENABLE deliverables via its website.
ENABLE aims to tackle bullying in a holistic way, helping young people exercise their fundamental rights in the home, school, class and community (i.e. peer group). The project aims to develop social and emotional learning (SEL) skills as a means of building resilience in young people, so that they can better understand their on- and offline social interactions and become more responsible and effective in them. ENABLE has been implemented in half a dozen countries across Europe, providing teachers with detailed SEL lesson plans in a number of languages, including Croatian. COFACE's Croatian member, Korak po Korak, which is also highly active in cyberbullying prevention, was present at the event.
The two day final event offered a mix of plenary sessions and workshops, including a brainstorming session during which participants were invited to develop a “roadmap” on key themes such as privacy and security, e-presence and communications, empathy and ethics, hate speech and extremism. Some noteworthy presentations from the conference included:
– The intervention of American expert Anne Collier, who stressed the importance of a whole-school approach to tackling cyberbullying and the failure of "zero tolerance" policies paired with punitive measures such as suspension from school.
– Innovative examples of interactive games presented by Urko Fernández from Pantallas Amigas (Spain), consisting of a sorting exercise of social networking comments/messages drafted by psychologists. Children were encouraged to discuss whether the messages were safe or insulting, and the video game setting sparked debate and exchange in a natural way.
– A demonstration by Mohamed Mustafa Saidalavi of a security risk in the WhatsApp application: a person can send and receive messages from another person's WhatsApp account by borrowing that person's phone for a short moment and linking the account to their own device via a QR code.
COFACE, represented by Martin Schmalzried, spoke in a workshop on bullying in the context of child’s play – toys, games, social media and gaming environments.
Playing is important. Children develop a great number of social and technical skills via play. Recently, national education systems have taken an interest in new methods of learning, including the "gamification" of learning and the integration of new technologies into their teaching methods. While there are benefits to these innovative methods, one must be careful, since the field of education is a lucrative market. For example, the UK-based company Pearson, specialised in education, has a turnover of over £4 billion. The more "gamification" and new technologies are integrated into schools, the more money some of these companies can make. To guard against such vested interests, integrating innovative teaching methods into schools should require careful consideration and be based on clear, proven added value. Besides, many of the video games cited as examples of educational potential were developed outside the education industry (Minecraft, Portal…).
As regards bullying in video games, it takes many different forms, such as flaming, trolling, griefing and identity theft, all of which may or may not fit the definition of bullying. Some examples include hacking into someone's account to destroy their creations or steal their virtual objects, and in-game sabotage, such as ruining a player's quest or the "unreasonable and repeated killing" of a player, often referred to as "griefing".
Bullying in video games is especially targeted at specific groups, particularly women, but also at LGBT people and people with disabilities. New technologies such as Virtual and Augmented Reality have the potential to exacerbate some of these issues:
– Identity theft, with technologies enabling realistic modelling of a person.
– Physical pain, as companies develop tools to increase the realism of VR.
– Increased trauma in cases of in-game extreme violence or harassment.
– Desensitisation which may contribute to lowering empathy.
Yet researchers have also found a number of benefits to VR which may, in certain simulated settings, increase empathy or help get past trauma. Monitoring developments in this field is therefore key in order to reap the benefits and minimize the risks.
For more information about the ENABLE project, visit the official website.
It's happening. COFACE has long been asking for new strategies to fight hate speech and cyberbullying, including the use of algorithms and Artificial Intelligence to assist human moderators, as well as recourse to community-based moderation where users can vote on taking content down.
Facebook has been steadily developing and training algorithms to help identify offensive photos. At this point, algorithms report more potentially offensive photos than humans do! Twitter, which has been attacked in the past over its lacklustre ability to fight abuse, has also invested in Artificial Intelligence to help weed out abusive content. The help and assistance of AI is welcome, especially since relying on human moderation alone comes with many problems, such as slow reaction times and even negative psychological consequences for the human moderators, who are forced to look at the worst content humanity has to offer.
Progress in the development of such algorithms could benefit all online service providers, as Facebook has vowed to share its findings more broadly.
On the other hand, Periscope (owned by Twitter) is rolling out another approach to moderation, adapted to live streaming, instant feedback and user interaction: a form of live community-based moderation. Viewers will be able to immediately report a comment they deem abusive during a live stream. The app then randomly selects a "jury" from within the audience, who vote on whether the comment is abusive or inappropriate. Should the "jury" vote to censor the comment, its author will temporarily be unable to post. If the author repeats the offense, he/she will be blocked from commenting for the rest of the live stream.
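The flow of this "flash jury" mechanism can be sketched roughly as follows. This is a simplified illustration only: the jury size, majority rule and penalty durations below are assumptions, not Periscope's actual parameters.

```python
import random

# Simplified sketch of community "flash jury" moderation during a live
# stream. Jury size, majority rule and penalties are illustrative
# assumptions, not Periscope's actual parameters.

def select_jury(audience, size=5, rng=random):
    """Randomly pick jurors from the current viewers."""
    return rng.sample(audience, k=min(size, len(audience)))

def apply_verdict(votes, prior_offenses):
    """votes: list of booleans, True meaning 'abusive/inappropriate'."""
    if sum(votes) <= len(votes) / 2:      # no majority -> comment stands
        return "no_action"
    if prior_offenses > 0:                # repeat offense during this stream
        return "blocked_for_stream"
    return "temporarily_muted"            # first offense

audience = ["ana", "ben", "carla", "dev", "emre", "fay", "gus"]
jury = select_jury(audience)
print(len(jury))  # 5 jurors drawn at random from the audience
print(apply_verdict([True, True, True, False, False], prior_offenses=0))
print(apply_verdict([True, True, True, False, False], prior_offenses=1))
```

Random selection is what makes the scheme hard to game: a would-be abuser cannot know in advance which viewers will sit on the jury.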
Such initiatives are long overdue; COFACE welcomes their introduction and will closely follow their development, hopefully contributing to a better online environment for children and families.
On the 10th of May, the London School of Economics (LSE) hosted a symposium on the topic of families and "screen time". The event addressed the critical issue of successfully reaching parents with resources which will empower them to better accompany their children in navigating the Internet: taking full advantage of the opportunities, developing resilience and minimising risks which could lead to harm. Much of the debate centred on the tricky contradiction between the necessity of presenting clear and simple messages and guidance while at the same time addressing the various parent "profiles" and their varying needs based on parenting style, socio-economic status or education level.
One take-away is that even if messages like the 2×2 rule from the American Academy of Pediatrics are well known, such guidance is also wildly simplistic and makes no distinction between "screen time", "screen context" and "screen content". In essence, not all "screen time" is equal: playing a game with your family or doing homework is not the same as watching YouTube alone in your room.
Screen time: still a relevant metric?
As Angharad Rudkin from the University of Southampton pointed out, while we do not know a lot about the effects of online media and screens on children's cognitive development, we do know about some of the "physical" effects, such as risks of obesity or sleep disorders. Even if technologies such as Augmented/Virtual Reality may enhance physical activity, physical side effects may remain (exposure to "blue light", eye problems, sleeping disorders…), which means that even if a distinction needs to be made between "screen time", "screen context" and "screen content", it will nevertheless be necessary to exert some control over screen time. In a not-so-distant future, when screens become wearables much along the lines of Google Glass, how will this affect recommendations on "screen time"?
What’s good for children?
Another issue of contention was how to distinguish between what is "good" and "bad" for children. Some participants raised the point that parents don't want to be told what is "good" for their child, and that parents could rely on peer-to-peer support to sort out what is good and bad, looking at other parents' recommendations, reviews, etc. One of COFACE's French members, UNAF, runs such a platform, which enables parents to review apps, websites and content for children (panelparents.fr). However, the platform is heavily filtered and monitored to ensure that there are no conflicts of interest in the reviews and, especially, that parents use a consistent set of criteria in their assessments.
As is customary on the Internet, any development which empowers users can also be used against them. User reviews and peer-to-peer support have proven very useful, but at the same time, business models around reputation management have emerged, allowing content providers or developers to pay for positive reviews. Peer-to-peer recommendations and support also raise issues of "trust". How can you be sure that the "parent" on the other side of the screen isn't working for the app or website they are openly promoting or recommending? Furthermore, what are the credentials of the parents posting recommendations? It could very well be that the most vocal parents online are not the best placed to objectively review online content and services. So while peer-to-peer networks will always be helpful, they are certainly not a panacea and need to be accompanied by measures to mitigate the problems raised above.
Classification systems and information given to parents are another way to sort content and services. There have been many advances in the standardisation of classification, notably via the PEGI rating system and initiatives such as the MIRACLE project, which aims at making existing rating systems and age labels interoperable. In this respect, the role of regulators is key: only with the right regulatory "push" will the industry get together and agree on a common standard for classification. But classification systems also have their limits. PEGI, for instance, provides age recommendations along with content-related pictograms alerting parents to such things as violent or sexually explicit content. In essence, this classification system warns only about risks and says nothing about opportunities.
One idea would be to further investigate the effects of online content and services on children's cognitive development. How does a game affect "delayed gratification" or "locus of control"? While it may prove very challenging to come up with a scientifically sound and accurate answer to these questions, it is essential to move forward, since the "serious game" industry is booming and video game developers and online service providers do not hesitate to prominently display (often unsubstantiated) claims about the educational value or benefits of their products.
While we may never come up with a "definite" answer on what is good for children, what is clear is that the private sector doesn't hesitate to bombard parents with its own take on what's good for their kids. Therefore, arguably, even a mediocre recommendation to parents from independent sources such as academia, civil society or NGOs will still be better than leaving the field clear for claims driven by commercial interest.
The POSCON project and network is a good place to start.
Follow the money
The prevailing business model on the Internet now relies on users' time. The more time a user spends on a service, a piece of content, a game or an app, the more revenue he/she generates via exposure to advertising, via the exploitation or sale of the data he/she generates, or via in-app purchases of virtual goods.
At the same time, the Internet is seen by many as a providential tool, a great equalizer which allows for the realization of a number of core human rights including freedom of speech. Many stakeholders, including governments and civil society argue that we should simply apply a ‘laissez-faire’ philosophy to the Internet and it will become a space of freedom. It is certainly tempting to see it in such a light as it allows for all actors to sit back and watch and especially, shield themselves from advancing politically delicate recommendations.
Unfortunately, the Internet is undergoing a transformation by a combination of factors including algorithms and online business models.
Algorithms increasingly shape what people see online, enhancing the visibility of certain content inside a social network's newsfeed or a search engine's results page, thus inevitably skewing a user's online experience. While there is no way around sorting content given the sheer volume of the Internet, the methods for sorting it (how algorithms are designed) do raise many concerns. Which criteria are used to "boost" the visibility of content? Is there any consideration of "quality", or of whether the content is "positive"? Such considerations are especially important for services "designed" for children, such as YouTube. The videos prominently displayed as recommendations on the app's home screen do not land there "by accident".
Online business models which rely on users’ time to generate revenue also contribute to corrupting online content. Any content producer looking to make money will seek to enhance the time users spend on their content/service. In extreme cases, this gives rise to click baiting techniques which rely on catchy pictures/videos/headlines to entice users to click and be redirected to pages filled with advertising. More importantly, whenever a content/service provider has to make an editorial choice about content, optimizing viewer statistics, click through rates, bounce rates or stickiness will be a top priority, often at the expense of “quality” considerations.
Some would argue that quality goes hand in hand with a website's stickiness or viewer statistics, but this is highly unlikely. One study found, for instance, that brands talk like 10-year-olds on Facebook, the main reason being to maximise user interaction and reach. If producing and posting a "funny cat" video will likely generate 5 million views and an educational video about nature will generate 500,000 views, which one will end up being produced? How will such a logic impact creativity? How would a modern-day Shakespeare fare under a business model which seeks first and foremost to appeal to a mass audience, as opposed to pursuing art for art's sake?
Again, when seeking to minimize risks and enhance opportunities for children online, one cannot ignore these realities.
If we want children to truly benefit from online opportunities, we need to take a closer look at who/what gets in the spotlight on the Internet or in other words, who’s making “editorial decisions”. Many would chuckle at this very idea since the Internet is now supposed to be a level playing field with user generated content taking over and “editorial decisions” being limited to censorship of content which violates terms of service. But as we have discussed above, algorithms and the prevailing online business model have a massive influence on who/what gets the spotlight.
Digital parenting or just parenting?
Is there such a thing as "digital" parenting? This is another question which was raised and discussed by a number of participants. Parenting does spill over into the "online" world, since social skills, sexuality education, a healthy balance in children's activities, social and emotional learning and values such as respect can all be transposed to online settings.
At the same time, the online world offers "new" sets of challenges. Children may encounter certain "risks" or "inappropriate content" at a much earlier age online than offline, pornography being the most obvious example, which means parenting clearly needs to "adapt" to this new reality. Moreover, not all "traditional" parenting can be transposed to online settings. For instance, bullying and cyberbullying are clearly different and require adapted responses: cyberbullying is 24/7, there is a greater chance that the perpetrator(s) remain anonymous, the "signs" are harder to spot (a black eye vs. a nasty message or comment), etc. So while a parent might know how to react to bullying (identifying the perpetrator, contacting the relevant authority such as a head teacher or sports club staff), this does not necessarily apply to cyberbullying. If the perpetrator is an identifiable classmate, does the teacher or school have the authority to intervene when the cyberbullying occurred outside school premises or school hours?
The fox and the crow all over again
“If all parents had access to appropriate resources, advice, guidance about the online risks and opportunities, then children’s online experiences would be optimal and all problems would be solved.” Of course, no one would dare to voice such a claim, but some would come close. Private companies have every reason to promote such an idea as it is one of the most powerful arguments for delaying any policy or regulatory measures.
To provide a useful analogy, financial service providers would argue that financial literacy should be the focus of efforts to prevent over-indebtedness, and agro-business would argue that informing people about healthy eating habits should be the priority in tackling obesity and chronic disease… ignoring, of course, the fact that both industries engage in wildly counter-educational advertising campaigns enticing consumers to act impulsively (take out credit to get the vacation you rightfully deserve) or to fulfil their need for social recognition via food (drink a Fanta and you'll be popular with your friends).
Private companies essentially resort to the tactic the fox employed to get the cheese out of the crow's mouth. By pretending that consumers/users are resourceful, smart and informed, companies can better manipulate them into forfeiting control over their data, consenting to unfair terms of service or handing over large sums of money through unethical business models such as "free to play" or "in-app purchases".
The same logic prevails in the issue of advertising to children. Advertisers happily provide “educational material” via programmes like MediaSmart to children and flatter children’s intelligence and resilience to advertising only to overwhelm them with more insidious advertising techniques.
That being said, education is always a necessity in and of itself, regardless of any specific policy objectives, but a balance must be struck between the need to educate/inform/empower and the necessity to protect and shape the environment to be as conducive as possible to positive experiences for all. Education should never be a substitute for necessary policy/regulatory measures.
A provocative metaphor would be the following: should we put more focus on training people how to avoid mines in a mine field or focus on demining the field?
The Internet of Penguins?
All of this boils down to a simple yet very complex question: what kind of Internet do we want? The Internet is said to belong to “no one”, and in the eyes of many it is the “ultimate” incarnation of a public good. At the same time, throughout the Internet’s history, there have been many threats to its openness. At one point, it wasn’t certain whether users would be able to use Internet browsers for free! Before Microsoft released Internet Explorer for free, Internet Browsers were commercial (see Netscape’s history). The Internet has been and still is at risk of government control be it via censorship, blocking and filtering in countries like China or to a lesser extent Turkey, or via control of its governance (the US has had disproportionate control over Internet governance).
Nowadays, it is at risk of corporate capture via “de facto” monopolistic tendencies of online platforms like YouTube, Facebook and Amazon, selective display and filtering of information (control over discoverability) with search engines like Google or control over its core infrastructure (the Internet backbone) by private Telecommunication giants or even companies like Facebook (see their internet.org project).
As a society, we need to decide collectively how much of the Internet do we want to “keep” as a public good, independent of commercial interest: websites like Wikipedia, Open Source software, not for profit initiatives like the popular coding platform for kids, Scratch and many more. To make a comparison, we can think about the Internet as a planet which is slowly being populated. How much of the planet will be covered with shopping malls and how much with public parks and natural reserves? Do we want public parks to be crowd-funded and maintained by the community as a whole (like a Wikipedia) or do we want to privatize them, letting private companies manage these spaces while posting billboards on park benches and stamping adverts on trees?
Looking at piracy statistics and user behaviour online, it would seem that the unformulated answer is quite clear: users do consider the Internet predominantly as a public good, expecting to enjoy all of its content for “free”, a behaviour which has caused massive disruptions in the business models of industries such as music labels. A recent debate is raging around the access to knowledge and academic papers, sparked by the emergence of the search engine Sci-hub. All of these developments are directly linked to various scenarios: a stubborn pursuit of the current intellectual property and copyright laws where right holders will engage in an eternal witch hunt against piracy, the development of alternative business models such as unlimited subscription based models such as Netflix, or even the generalization of concepts like the “universal salary”, currently under experimentation in Finland, which would allow people to work for the community without having to worry about their pay check.
All of this seems far from the debate about “child safety”, but it will have a much deeper impact on children’s online experience than the addition of a new safety feature or reporting button on their favorite (unique) social network.
For more information about the event, visit the LSE blog