Taranis project scientific report
Camille Baulant and
Guillaume Sylvestre
Additional contact information
Camille Baulant: GRANEM - Groupe de Recherche Angevin en Economie et Management - UA - Université d'Angers - Institut Agro Rennes Angers - Institut Agro - Institut national d'enseignement supérieur pour l'agriculture, l'alimentation et l'environnement
Guillaume Sylvestre: ADIT, LRENSP - Laboratoire de Recherche de l'Ecole Nationale Supérieure de la Police
Working Papers from HAL
Abstract:
TikTok, the fastest-growing social network and one massively used by young people in France and Europe, is also the most dangerous online platform for our society. Its algorithm alone decides the flow of videos users see; users are hardly ever confronted with choosing their subscriptions anymore. Although its operation remains opaque, the scientific literature and user testimonials show that TikTok primarily favors virality. Its algorithm thus promotes problematic posts: dangerous challenges, hateful and discriminatory speech, racist and sexist insults, disinformation, and more. Worse, the algorithm recommends increasingly violent or abusive content even to users who watch non-problematic content, based on the preferences of other users with similar profiles. TikTok's algorithm fuels abusive behavior rather than preventing it. Moreover, TikTok's moderation is often found lacking: massive user reports of problematic content, calls for violence, and the dissemination of pornographic images are often acted on very late. The resources deployed are not commensurate with the challenges, nor with the profits of TikTok's parent company, ByteDance, which is thriving. TikTok is also a platform under influence. The ambiguous discourse of its owners about the network's supposed independence does not stand up to the facts: content about the Uyghurs and critical of China is censored, and more recently, following Beijing's directives, TikTok has favored the Russian view of the conflict in Ukraine, manipulating video feeds in Russia. What, then, is to be done?
Our research concludes by outlining several avenues, drawn from the scientific literature, expert testimony, and our own analyses: raise awareness among young users of the risks, particularly cyberbullying, and direct them to specialized associations that work with social networks to have their reports taken into account; require TikTok to give researchers access to its data, under the transparency obligations for digital platforms in the Digital Services Act (DSA), so that the drifts of its algorithm can be audited and monitored; and open an investigation into Chinese interference via TikTok to strengthen the network's obligations. TikTok should certainly not become a scapegoat for all social networks. Nevertheless, its algorithm has become a model for Facebook and Twitter and, if copied, will exacerbate problems on those platforms. The risks posed by a Chinese platform can turn this threat into an opportunity to legislate this sector before our democracies are too fractured by the spread of hate, harassment, and disinformation.
Keywords: TikTok; Big Data; Disinformation Campaigns; Social Network Analysis; Disinformation; Social Networks
Date: 2023-09-14
Note: View the original document on HAL open archive server: https://hal.science/hal-04879008v1
Published in Laboratoire de Recherche de l'Ecole Nationale Supérieure de la Police, 2023, 67 pp.
Downloads: https://hal.science/hal-04879008v1/document (application/pdf)
Persistent link: https://EconPapers.repec.org/RePEc:hal:wpaper:hal-04879008
Bibliographic data for this series maintained by CCSD.