In an era when IT giants rule the world, when even the most advanced users can barely keep up with new technologies, and when an outage of a major global service is perceived as a civilization-wide catastrophe, protecting users has become the main global trend. Many users still have no way to counter a huge range of threats, from theft of personal and payment data to a hostile and unfriendly online environment. In recent years, unsafe and destructive content has stood out among the global problems of the internet: both adults and children can run into bullying and slander, displays of hatred, and disinformation, which ultimately wears down even the most resilient people. Lenta.ru looked into how the world is fighting dangerous online content and how it is trying to protect users of all ages.
The world in which everyone lives
The protection of users, especially minors, has been pursued actively around the world in recent years. Many countries already have laws protecting children from dangerous content, and new initiatives on the subject continue to appear. Last September, for example, the British authorities imposed restrictions on gaming platforms, social networks, and streaming services. Soon, IT companies will no longer be able to use “enticing” algorithms and features that push children to consume more content. Automatic video playback, for instance, has been banned: lawmakers believe it distorts perception and encourages young users to spend their free time online. A service caught violating the rule will face a heavy fine of up to four percent of its total global turnover. Some large, popular social networks such as TikTok, Instagram, and YouTube have already introduced restrictions. TikTok, popular with young people, will stop sending notifications after 9 p.m. for teens under 16; for users aged 16-17, this “curfew” starts at 10 p.m. Instagram has limited contact between children and strangers: an adult can no longer message a minor who does not follow them. YouTube has turned off targeted ads and autoplay for users under 18.
Photo: Ethan Miller / Getty Images
The UK's new rules continue the course the state set several years ago, when the government resolved to make the country's virtual space “the safest place for online communication in the world.” Responsibility for the quality of content on their sites and for protecting users was placed on the services themselves.
Platforms were asked to adopt a Safety by Design approach, that is, to control malicious content at the level of the algorithms that run their sites and apps, while users were urged to improve their digital literacy. At the same time, the government published the Online Harms White Paper, which described the types of dangerous content in detail: information harmful by legal definition (for example, terrorism), content without a defined legal status (intimidation, trolling, glorification of self-harm, and so on), and legal content not intended for children. Theresa May, then the country's prime minister, criticized the IT giants for their irresponsibility toward users: the platforms would now have to work hard to earn back their trust.
European countries have been grappling with social media issues for a long time. In Italy, for example, residents aged 14 and older have the right to personally demand that a platform remove destructive content, and the platform has 48 hours to comply. The measure was introduced in 2017, the same year the Network Enforcement Act, Germany's law on social networks, came into force.
Photo: Drew Angerer / Getty Images
This model of working with services is often called voluntary-compulsory: the categories of harmful and prohibited information are not formally defined in the law, yet the platforms themselves must bear responsibility for safety. Dangerous content can be blocked or deleted by the platforms, public organizations, or government officials (in difficult cases, additional proceedings are possible). Violators face large fines of up to 500 thousand euros, and for some categories of violations up to 5 million, so the services diligently enforce the rules. “Today many countries are taking the necessary measures to protect children online,” confirms Alexander Zhuravlev, chairman of the commission on legal support of the digital economy of the Moscow branch of the Russian Bar Association.
In some Asian countries regulation is even stricter: since May 2021, Indian providers have been required to filter content manually, and any site hosting 18+ content must additionally verify users' ages. The idea is reminiscent of a British proposal for verification by documents: a few years ago, local authorities suggested a kind of “pass” system for adult sites. UK citizens were expected to undergo an identity check at the local post office, after which they would be granted access to such portals. It later turned out that building the infrastructure for such a system would be prohibitively expensive.
A few weeks ago, the Chinese government imposed restrictions on video games for minors as part of its fight against gaming addiction. Children may now play only on holidays, Fridays, and weekends, and for no more than an hour a day. Under the current British Online Harms White Paper, excessive use of gadgets by minors also falls under regulation, in the last category: legal but undesirable. Meanwhile, since the beginning of 2021, Chinese users can demand the immediate blocking of a site if the data it publishes may be harmful to children.
Way to yourself
World experience shows that dangerous content lurks on social networks not only for minors but also for adults. Five years ago, the European Commission, together with major IT corporations (Microsoft, Facebook, Twitter, and others), developed a code of conduct listing obligations to combat online hate. Public incitement to violence or hatred against a group of people, or against a member of such a group, on any basis, including ethnic or national origin, religion, race, or skin color, was criminalized. The companies that signed the code pledged to remove such illegal content promptly, within 24 hours. Representatives of the European Union stressed at the time that the internet should be a space not of hatred but of freedom of speech.
Photo: Alexey Danichev / RIA Novosti
In Russia, a law on the self-regulation of social networks has been in effect since February 1, 2021. It requires site owners to moderate published content and block prohibited information: for example, posts that defame people on the basis of nationality, race, gender, age, profession, or place of residence, insults and derogatory expressions, calls for extremism or riots, as well as fake news and other such information. The law applies to platforms with more than half a million users per day. Beyond content control, they must open a representative office in Russia and maintain a register of complaints received from users.
However, experts believe that not all platforms are successful in fulfilling their responsibilities.
The alliance mentioned by the head of ROCIT, Rustam Sagdatulin, was created only last September. It is an association of telecom operators, internet companies, and media holdings (its members include Vimpelcom, Megafon, Rostelecom, Mail.ru Group, Yandex, Kaspersky Lab, and others) whose goal is to create a secure digital environment and counter emerging threats. Similar voluntary agreements have been concluded on the market before: a few years ago, the so-called anti-piracy memorandum became a positive example, driving significant changes in the consumption of legal content. Members of the new child-protection organization expressed the hope that virtual space will cease to be a source of threats to children; in a world where the line between the physical and the virtual is blurred, this problem is now especially acute. The newly established Alliance promises to help with self-regulation, raise digital literacy, spread the principles of media hygiene, and develop guidelines for children's responsible behavior online.
Hidden threat
The new organization is probably meant to supply the missing pieces of a mosaic that should together form a picture of a more or less healthy virtual environment. The new law on social networks was expected to make platforms bear a degree of responsibility to their audience, lifting it from the users onto whom many services had previously shifted it. Administrators and moderators often refused to mediate conflicts between users or come to the rescue in difficult situations, even in cases of outright bullying or the spread of false information. No drastic changes could be expected, of course, but user complaints have gained more weight with network owners.
Photo: Vladimir Zivojinovic / Getty Images
At the same time, each user's security is still in their own hands and depends directly on their level of digital literacy: in the end, it is not so much about blocking and bans as about the awareness of people who do not publish or share fakes, do not join in bullying, and try to behave correctly in the virtual world.
Meanwhile, the authorities estimate that the largest number of materials containing dangerous information is posted on Facebook. YouTube took second place in this negative rating, and Twitter came third. The study counted items of destructive content that courts had recognized as prohibited: fakes, calls to suicide, and extremist information. According to the published data, Twitter failed to remove 192 items of illegal content and YouTube 4,624; however, the number of such “non-removals” is clearly declining, especially for materials related to drug trafficking and child pornography.
At the end of September 2021, it became known that the Federal Service for Supervision of Communications, Information Technology and Mass Media will compile a register of social networks in order to monitor their compliance with the law. The agency stressed that the legislative innovations will enable the prompt removal of dangerous information, such as trash streams. Facebook, Instagram, Twitter, YouTube, VKontakte and Odnoklassniki, as well as TikTok and Likee, are already included in the register.