On the 10th of May, the London School of Economics (LSE) hosted a symposium on families and “screen time”. The event addressed the challenge of reaching parents with resources that empower them to better accompany their children in navigating the Internet: taking full advantage of its opportunities, developing resilience and minimizing risks that could lead to harm. Much of the debate centered on the tricky tension between the need to present clear, simple messages and guidance while at the same time addressing the various parent “profiles” and their differing needs, shaped by parenting style, socio-economic status or education level.
One take-away is that even if messages like the “2×2” rule from the American Academy of Pediatrics are well known, such guidance is also wildly simplistic: it makes no distinction between “screen time”, “screen context” and “screen content”. In essence, not all “screen time” is equal; playing a game with your family or doing homework is not the same as watching YouTube alone in your room.
Screen time: still a relevant metric?
As Angharad Rudkin from the University of Southampton pointed out, while we do not know much about the effects of online media and screens on children’s cognitive development, we do know about some of the “physical” effects, such as risks of obesity or sleep disorders. Even if technologies such as Augmented/Virtual Reality may enhance physical activity, physical side effects may remain (exposure to “blue light”, eye problems, sleep disorders…), which means that even if a distinction needs to be made between “screen time”, “screen context” and “screen content”, it will still be necessary to exert some control over screen time. In a not-so-distant future, when screens become wearables much along the lines of Google Glass, how will this affect recommendations on “screen time”?
What’s good for children?
Another point of contention was how to distinguish between what is “good” or “bad” for children. Some participants noted that parents don’t want to be told what is “good” for their child, and that parents could instead rely on peer-to-peer support to sort out what is good and bad, looking at other parents’ recommendations, reviews etc. One of COFACE’s French members, UNAF, runs such a platform, which enables parents to review apps, websites or content for children (panelparents.fr). However, the platform is heavily filtered and monitored to ensure that there is no conflict of interest in the reviews and, especially, that parents use a consistent set of criteria to assess apps, websites or content.
As is customary on the Internet, any development which empowers users can also be used against them. User reviews and peer-to-peer support have proven very useful, but at the same time business models around reputation management have emerged, allowing content providers or developers to pay for positive reviews. Peer-to-peer recommendations and support also raise issues of “trust”. How can you be sure that the “parent” on the other side of the screen isn’t working for the app or website he or she is openly promoting or recommending? Furthermore, what are the credentials of the parents posting recommendations? Very vocal parents online are not necessarily the best placed to objectively review online content and services. So while peer-to-peer networks will always be helpful, they are certainly not a panacea and need to be accompanied by measures to mitigate the problems raised above.
Classification systems, and the information they give to parents, are another way to sort content and services. There have been many advances in the standardization of classification, notably via the PEGI rating system and initiatives such as the MIRACLE project, which aims at making existing rating systems and age labels interoperable. In this respect, the role of regulators is key: only with the right regulatory “push” will the industry come together and agree on a common standard for classification. But classification systems also have their limits. PEGI, for instance, provides age recommendations along with content-related pictograms alerting parents to such things as violent or sexually explicit content. In essence, this classification system warns only about risks but says nothing about opportunities.
One idea would be to further investigate the effect of online content and services on children’s cognitive development. How does a game affect “delayed gratification” or “locus of control”? While it may prove very challenging to come up with scientifically sound and accurate answers to these questions, it is essential that we move forward, since the “serious game” industry is booming and video game developers and online service providers do not hesitate to prominently display (often unsubstantiated) claims about the educational value or benefits of their products.
While we may never come up with a “definitive” answer on what’s good for children, what is clear is that the private sector doesn’t hesitate to bombard parents with its own take on what’s good for their kids. Arguably, then, even a mediocre recommendation from independent sources such as academia, civil society or NGOs will still be better than leaving the coast clear for claims driven by commercial interest.
The POSCON project and network is a good place to start.
Follow the money
The prevailing business model on the Internet now relies on users’ time. The more time a user spends on a service, a piece of content, a game or an app, the more revenue he or she generates: via exposure to advertising, via the exploitation or sale of the data the user produces, or via in-app purchases of virtual goods.
At the same time, the Internet is seen by many as a providential tool, a great equalizer which allows for the realization of a number of core human rights, including freedom of speech. Many stakeholders, including governments and civil society, argue that we should simply apply a “laissez-faire” philosophy to the Internet and it will become a space of freedom. It is certainly tempting to see it in such a light, as this allows all actors to sit back and watch and, especially, to shield themselves from advancing politically delicate recommendations.
Unfortunately, the Internet is being transformed by a combination of factors, including algorithms and online business models.
Algorithms increasingly shape what people see online, boosting the visibility of certain content in a social network’s newsfeed or a search engine’s results page, and thus inevitably skewing a user’s online experience. While there is no way around sorting content given the sheer volume of the Internet, the methods for sorting it (how the algorithms are designed) raise many concerns. Which criteria are used to “boost” the visibility of content? Is there any consideration of “quality”, or of whether the content is “positive”? Such considerations are especially important for services “designed” for children, such as YouTube. The videos prominently displayed as recommendations on the app’s home screen do not land there “by accident”.
Online business models which rely on users’ time to generate revenue also contribute to corrupting online content. Any content producer looking to make money will seek to maximize the time users spend on their content or service. In extreme cases, this gives rise to clickbait techniques which rely on catchy pictures, videos or headlines to entice users to click and be redirected to pages filled with advertising. More importantly, whenever a content or service provider has to make an editorial choice, optimizing viewer statistics, click-through rates, bounce rates or stickiness will be a top priority, often at the expense of “quality” considerations.
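To make the logic concrete, here is a toy sketch of engagement-first ranking. It is purely illustrative, not any platform’s actual algorithm, and all titles and numbers are invented; the point is simply that when the sort key is expected attention, quality or educational value never enters the calculation.

```python
# Toy illustration of engagement-first ranking (not any real platform's
# algorithm; titles and figures are invented for the example).

def rank_by_engagement(items):
    """Order content purely by the minutes of attention it is expected to capture."""
    return sorted(items, key=lambda item: item["expected_minutes"], reverse=True)

catalog = [
    {"title": "Funny cat compilation", "expected_minutes": 9.5, "educational": False},
    {"title": "Documentary on wetlands", "expected_minutes": 4.0, "educational": True},
    {"title": "Clickbait countdown", "expected_minutes": 8.0, "educational": False},
]

# Note that the "educational" flag plays no role in the ordering:
for item in rank_by_engagement(catalog):
    print(item["title"])
```

Under this sort, the educational documentary lands at the bottom of the list regardless of its merit, which is the editorial bias the paragraph above describes.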
Some would argue that quality goes hand in hand with a website’s stickiness or viewer statistics, but this is highly unlikely. One study found, for instance, that brands talk like 10-year-olds on Facebook, mainly to maximize user interaction and reach. If a “funny cat” video will likely generate 5 million views and an educational video about nature will generate 500,000, which video will end up being produced? How will such a logic impact creativity? How would a modern-day Shakespeare fare under a business model which seeks first and foremost to appeal to a mass audience, as opposed to pursuing art for art’s sake?
Again, when seeking to minimize risks and enhance opportunities for children online, one cannot ignore these realities.
If we want children to truly benefit from online opportunities, we need to take a closer look at who and what gets in the spotlight on the Internet, or in other words, who is making the “editorial decisions”. Many would chuckle at this very idea, since the Internet is now supposed to be a level playing field, with user-generated content taking over and “editorial decisions” limited to censorship of content which violates terms of service. But as we have discussed above, algorithms and the prevailing online business model have a massive influence on who and what gets the spotlight.
Digital parenting or just parenting?
Is there such a thing as “digital” parenting? This was another question raised and discussed by a number of participants. Parenting does spill over into the online world, since social skills, sexuality education, a healthy balance in children’s activities, social and emotional learning, and values such as respect can all be transposed to online settings.
At the same time, the online world presents new challenges. Children may encounter certain “risks” or “inappropriate content” online at a much earlier age than they would offline, pornography being the most obvious example, which means parenting clearly needs to adapt to this new reality. Also, not all “traditional” parenting can be transposed to online settings. For instance, bullying and cyberbullying are clearly different and require adapted responses: cyberbullying is 24/7, there is a greater chance that the perpetrator(s) remain anonymous, the “signs” are harder to spot (a black eye vs. a nasty message or comment), etc. So while a parent might know how to react to bullying (identifying the perpetrator, contacting the relevant authority such as a head teacher or the staff of a sports club), this does not necessarily apply to cyberbullying. If the perpetrator is an identifiable classmate, does the teacher or school have the authority to intervene when the cyberbullying occurred outside school premises and school hours?
The fox and the crow all over again
“If all parents had access to appropriate resources, advice, guidance about the online risks and opportunities, then children’s online experiences would be optimal and all problems would be solved.” Of course, no one would dare to voice such a claim, but some would come close. Private companies have every reason to promote such an idea as it is one of the most powerful arguments for delaying any policy or regulatory measures.
To provide a useful analogy: financial service providers argue that financial literacy should be the focus for preventing over-indebtedness, and agro-business argues that informing people about healthy eating habits should be the priority for tackling obesity and reducing chronic disease… ignoring, of course, the fact that both industries run wildly counter-educational advertising campaigns enticing consumers to act impulsively (take out credit to get the vacation you rightfully deserve) or to fulfil their need for social recognition via food (drink a Fanta and you’ll be popular with your friends).
Private companies essentially resort to the tactic the fox employed to get the cheese out of the crow’s mouth. By flattering consumers and users as resourceful, smart and informed, companies can more easily manipulate them into forfeiting control over their data, consenting to unfair terms of service or handing over large sums of money through unethical business models such as “free to play” and in-app purchases.
The same logic prevails in the issue of advertising to children. Advertisers happily provide “educational material” to children via programmes like MediaSmart, flattering children’s intelligence and resilience to advertising, only to overwhelm them with ever more insidious advertising techniques.
That being said, education is always a necessity in and of itself, regardless of any specific policy objectives, but a balance must be struck between the need to educate/inform/empower and the necessity to protect and shape the environment to be as conducive as possible to positive experiences for all. Education should never be a substitute for necessary policy/regulatory measures.
A provocative metaphor would be the following: should we put more focus on training people to avoid the mines in a minefield, or on demining the field?
The Internet of Penguins?
All of this boils down to a simple yet very complex question: what kind of Internet do we want? The Internet is said to belong to “no one”, and in the eyes of many it is the “ultimate” incarnation of a public good. At the same time, throughout the Internet’s history there have been many threats to its openness. At one point, it wasn’t even certain whether users would be able to use Internet browsers for free: before Microsoft released Internet Explorer at no cost, browsers were commercial products (see Netscape’s history). The Internet has been and still is at risk of government control, be it via censorship, blocking and filtering in countries like China or, to a lesser extent, Turkey, or via control of its governance (the US has had disproportionate control over Internet governance).
Nowadays, it is at risk of corporate capture: via the “de facto” monopolistic tendencies of online platforms like YouTube, Facebook and Amazon, via selective display and filtering of information (control over discoverability) by search engines like Google, or via control over its core infrastructure (the Internet backbone) by private telecommunications giants or even companies like Facebook (see their internet.org project).
As a society, we need to decide collectively how much of the Internet we want to “keep” as a public good, independent of commercial interest: websites like Wikipedia, Open Source software, not-for-profit initiatives like the popular coding platform for kids, Scratch, and many more. To make a comparison, we can think of the Internet as a planet which is slowly being populated. How much of the planet will be covered with shopping malls, and how much with public parks and nature reserves? Do we want public parks to be crowd-funded and maintained by the community as a whole (like Wikipedia), or do we want to privatize them, letting private companies manage these spaces while posting billboards on park benches and stamping adverts on trees?
Looking at piracy statistics and user behaviour online, the unformulated answer seems quite clear: users consider the Internet predominantly a public good and expect to enjoy all of its content for “free”, a behaviour which has caused massive disruption to the business models of industries such as music labels. A debate is now raging around access to knowledge and academic papers, sparked by the emergence of the search engine Sci-Hub. These developments could play out in various scenarios: a stubborn pursuit of current intellectual property and copyright law, with right holders engaged in an eternal witch hunt against piracy; the development of alternative business models, such as unlimited subscription services like Netflix; or even the generalization of concepts like universal basic income, currently under experimentation in Finland, which would allow people to work for the community without having to worry about their pay check.
All of this seems far from the debate about “child safety”, but it will have a much deeper impact on children’s online experience than the addition of a new safety feature or reporting button on their favorite (and often only) social network.
For more information about the event, visit the LSE blog
Cyberbullying is not about technology, but about the way technology is used. Just as a baseball bat’s main purpose is sport, it can cause serious damage if someone uses it to hit another person.
Cyberbullying is not such a new phenomenon, since it is linked to bullying in general. There have always been bullies who thrive on the mockery and humiliation of others, and there always will be. What makes cyberbullying unique in its viciousness is that, compared to school-yard (offline) bullying, the target has no way to get a break or get away from it. Cyberbullying is open for business 24/7. Nasty text messages, ridiculing e-mails, fake websites or troll Facebook accounts enable the bully to pursue their victim after school hours, especially since text messages and other forms of messages can spread like wildfire.
To make it more specific, imagine an awkward teenager standing in front of his/her class, reciting a lesson which he/she may not have fully prepared. A pretty humiliating experience in itself, and one that I believe only a few of us have not been through. Now imagine a classmate filming this on his/her smartphone and promptly posting it on YouTube, Facebook, Twitter and/or the other social media sites teens are using these days.
This is why there is still a massive need for awareness and education, of parents, of teachers and of children themselves. Even if a large portion of cyberbullying perhaps starts out, from the bully’s point of view, as casual joking and just having fun, bullying is never OK, and children and young people need to understand its consequences…