The way we seek, receive and process information has changed dramatically in the last decade. Not only has the amount of information significantly increased, but its sources have diversified – from traditional media outlets to online platforms and individuals sharing news and updates in real time on their social media profiles. All of this requires a more skilled and dedicated information consumer, who needs to double-check almost everything she reads. But let’s be honest – almost none of us do so. We are either too busy, or too biased, to find additional sources of information, compare them and perform an in-depth analysis of the collected material. We simply trust some sources more than others, so the question is whether the principle of diversity in the media landscape is still present in modern life.
The use of artificial intelligence (hereinafter: AI) systems and tools by media platforms and organizations seems to amplify this trend. But instead of an individual choosing a specific source based on a history of use and the trust built over time, when AI is used, an algorithm is set, based on one’s previous behavior, to select, recommend and sometimes even create personalized content.
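To make the mechanism concrete, the selection described above can be sketched in a few lines of code. This is a deliberately minimal, hypothetical illustration (real recommender systems are far more complex): it assumes a reading history and a catalog of candidate articles, each tagged with a topic, and simply ranks candidates by how often the user has engaged with each topic before.

```python
from collections import Counter

def recommend(history, catalog, k=2):
    """Rank catalog articles by how often the user has read each topic.

    A toy stand-in for behavior-based content selection: past reading
    behavior (history) determines which new articles are surfaced.
    """
    topic_counts = Counter(article["topic"] for article in history)
    return sorted(
        catalog,
        key=lambda article: topic_counts[article["topic"]],
        reverse=True,
    )[:k]

# Hypothetical user who mostly reads sports coverage
history = [
    {"title": "Match report", "topic": "sports"},
    {"title": "Transfer news", "topic": "sports"},
    {"title": "Election recap", "topic": "politics"},
]
catalog = [
    {"title": "Cup final preview", "topic": "sports"},
    {"title": "Budget analysis", "topic": "politics"},
    {"title": "New exhibition", "topic": "culture"},
]
print([a["title"] for a in recommend(history, catalog)])
# The sports article ranks first; the culture article is never shown.
```

Even this toy version exhibits the dynamic the text describes: content the user has not engaged with before is pushed out of view, narrowing exposure without any explicit choice by the user.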
There is no single definition of AI in general, or of AI as used in the media and information sphere. According to the Council of Europe Commissioner for Human Rights Recommendation, “AI is used as an umbrella term to refer generally to a set of sciences, theories and techniques dedicated to improving the ability of machines to do things requiring intelligence.”
An AI system is a machine-based system relying on big data, but it comprises more than machine-learning capability alone – it has human-like abilities to make recommendations, predictions or decisions for a given set of objectives. Given its emerging capacity to influence emotions and human actions, it is no wonder that the debate on the threats the use of AI poses to human rights is emerging as well. In the context of freedom of expression, the use of AI can affect both the right to receive and the right to impart information.
To start with the latter – even though AI tools are developing quickly and becoming accessible to a wide range of stakeholders, smaller media outlets still lack the resources and capacities to utilize them and, therefore, to monetize their use. This affects the diversity of the media landscape and further marginalizes smaller media organizations, since they lack both access to these tools and the material needed to further train the algorithms, i.e. big data. Ultimately, users who appreciate the benefits of personalized, tailored content based on automated selection may turn entirely to larger media outlets as their sources of information. Finally, this increases the imbalance of power between traditional media and social media platforms, which use algorithms not only to inform users in a tailored and personalized manner, but also to drive their engagement. This creates a specific type of “addiction” to social media, a battle that traditional media, which also have a democratic role to play, may lose.
On the other hand, personalized content may be beneficial for many users who are burdened by the volume of information available. However, even though many users tend to accept and appreciate tailored content, some surveys show that they also want more engagement and agency, and that they have concerns over privacy and diversity. The personalization and selection of content limit our choices and, as mentioned, may influence our further behavior, shape our opinions and even our actions.
The regulation of these media and social platform practices is still weak. Some therefore exploit these regulatory gaps to excessively process users’ data and use advanced tools to target them with personalized content. However, existing regulation, such as the GDPR, provides some binding rules that can improve these practices. First of all, there are the principles of data processing. In the media context, transparency seems to play the crucial role. It requires data processors to adequately inform users about the use of AI, the core logic of the algorithm and its consequences. Furthermore, public service media, whose role is to serve the public interest, must fulfill the highest transparency standards and carefully consider the use of AI. Until this field is regulated (by documents such as the new EU AI regulation), users, civil society, media self-regulatory bodies and others should insist that media and social platforms become more transparent and open about their AI practices, so that users are able to make an informed decision on whether to accept AI-generated content. Otherwise, the already weakened trust between citizens and the media will be endangered further.