A free press has an important role to play in a democracy: it gathers and distributes the information that citizens use to decide how to vote, and it carries the further responsibility of monitoring those in power and holding them accountable.
A free press also serves society by providing a platform for advocacy and the expression of opinion, and by helping society build a shared understanding of the important issues and the best ways to solve them.
However, the mind-boggling amount of information we now produce far exceeds any person's or organisation's ability to make sense of it or moderate it. This has necessitated automated tools that approximate human insight to produce, curate and moderate content at scale.
How can AI benefit press and consumers?
Artificial Intelligence (AI) can potentially aid a free press in fulfilling its role in society. It can assist journalists in researching and producing content: with AI, they can examine and make sense of vast amounts of unstructured data, quickly translate relevant texts, and transcribe and analyse video or audio content.
Using AI-generated content for routine, data-heavy stories can free up journalists' time to focus on more complex and impactful work. It can also speed up the distribution of news to consumers, especially time-sensitive news such as financial reporting.
AI can also help deliver curated content to each user based on what is likely to be most relevant and important to them. This personalisation can give users an improved news experience.
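As a toy illustration of this kind of personalised curation (the topic tags, profile and scoring rule below are my own assumptions, not how any real platform works), a feed could rank articles by how well their topics overlap with a user's stated interests:

```python
# Hypothetical sketch: rank articles for a user by topic overlap.
# All names and the scoring rule are illustrative assumptions.

def relevance(article_topics, user_interests):
    """Score an article by how many of its topics the user follows."""
    return len(set(article_topics) & set(user_interests))

def curate(articles, user_interests, top_n=2):
    """Return the top-N articles most relevant to this user."""
    return sorted(
        articles,
        key=lambda a: relevance(a["topics"], user_interests),
        reverse=True,
    )[:top_n]

articles = [
    {"title": "Markets close higher", "topics": ["finance"]},
    {"title": "Local election results", "topics": ["politics", "local"]},
    {"title": "New stadium opens", "topics": ["sport", "local"]},
]

# A user interested in politics and finance never sees the sport story.
feed = curate(articles, user_interests=["politics", "finance"])
```

Even this trivial ranker shows the double edge the article goes on to discuss: the same scoring that surfaces relevant stories also quietly filters out everything else.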
Finally, AI can assist in moderating the vast amounts of public content on social media. It can make these platforms more beneficial to society by identifying legitimate security risks such as fraud, fake news, extremism and terrorism, and by helping to flag instances of hate speech and harassment.
How does AI pose a risk to freedom of expression?
It is very clear that AI needs to form an integral part of the new media world. However, there are real risks in handing the responsibility for content curation and moderation over to machines.
The European Convention on Human Rights states that each person has the right ‘to hold opinions and to receive and impart information and ideas without interference by public authority and regardless of frontiers’, and AI is likely to affect both the press and individuals as they exercise this right responsibly.
First, let us look at content moderation. This is essentially an AI filter on what is allowed to be published. It would be great if we could prevent dangerous or hateful content from being published; the biggest problem, however, is that there seems to be no universal understanding of what qualifies as hate speech.
It is notoriously difficult to make judgements on this, even for human judges, and context and intent play a huge role.
If this moderation is not done correctly, we risk silencing legitimate speech and impacting freedom of expression. AI would need to balance limiting fake news against the freedom to discuss controversial ideas and to challenge long-held beliefs. Because of these fears, as far as I am aware, platforms have only been using AI as a tool to flag potentially problematic content, keeping humans in the loop to make the final decisions.
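The "AI flags, humans decide" pattern can be sketched in a few lines. This is purely a hypothetical illustration: the word-list "classifier" below is a stand-in for a real model, and the threshold is arbitrary:

```python
# Illustrative sketch of human-in-the-loop moderation: a toy toxicity
# score routes borderline posts to a human review queue rather than
# auto-removing them. The scoring function is a stand-in, not a real model.

def toxicity_score(text):
    """Stand-in classifier: fraction of words matching a toy block list."""
    toxic_words = {"hateword1", "hateword2"}
    words = text.lower().split()
    return sum(w in toxic_words for w in words) / max(len(words), 1)

def triage(post, flag_threshold=0.2):
    """Route a post: publish it, or flag it for human review.

    Note the system never auto-removes; a human makes the final call,
    which is where context and intent can be weighed.
    """
    if toxicity_score(post) >= flag_threshold:
        return "human_review"
    return "publish"
```

The design point is in `triage`: the model's output changes *who looks at* a post, not *whether it is allowed*, which is how the judgement calls about context and intent stay with people.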
There are also some interesting questions that can be raised about the moderation of AI-generated content and to what extent it is protected under the right to freedom of expression, but this is not quite relevant yet.
Next, let us look at content curation. Another way of thinking about it is that we are giving AI the ability to control what information citizens see.
Since we know that the information citizens are exposed to largely shapes their world view, their beliefs and how they participate in politics, it is critical that we prevent AI from curating biased content or creating so-called ‘filter bubbles’ in which citizens are never exposed to alternative views.
Media pluralism is an important component of a balanced and nuanced world view, and it is important to maintain a balanced curation strategy or risk fostering dogmatic or extremist world views.
Also, the data that is collected about citizens to facilitate this curation could be used for targeted manipulation, infringing on their right to receive information.
How do we make AI serve us?
We have already seen that these fears are not just theoretical, and we might ask ourselves what should be done to ensure that the media in the age of AI serves us well.
In March of 2019, the Institute for Information Law (IViR) published a report detailing the potential impact of AI on freedom of expression and how we can go about mitigating this.
Based on this report, first, we should invest in tools that enable journalists to produce high-quality content, fulfilling their role in society. Secondly, we need to start extending the journalistic code of ethics to cover how AI should be employed to the benefit of society.
Thirdly, where we use AI to curate content, we need to commit to media pluralism by actively measuring and improving pluralism metrics. Fourthly, we need to make sure that the targeted distribution of content does not exclude certain groups of people from receiving important information.
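To make "measuring media pluralism" concrete, one simple candidate metric (my own suggestion, not something drawn from the IViR report) is the Shannon entropy of the source distribution in a user's feed: higher entropy means the feed draws on a more diverse set of outlets.

```python
# Sketch of one possible pluralism metric: Shannon entropy over the
# news sources a user is shown. This is an illustrative assumption,
# not an established industry standard.
import math
from collections import Counter

def source_entropy(feed_sources):
    """Shannon entropy (in bits) of the source distribution in a feed."""
    counts = Counter(feed_sources)
    total = len(feed_sources)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

diverse = source_entropy(["A", "B", "C", "D"])  # four distinct outlets
bubble = source_entropy(["A", "A", "A", "A"])   # a single outlet
```

A curation system could track this number per user over time and nudge rankings when it drops, turning "commit to pluralism" into something that can actually be monitored.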
Finally, social media companies should give us, as consumers, the ability to review and control how our content is curated. We should then periodically review how the algorithms have profiled us and take an active role in trying to build a nuanced world view.
After a closer examination of the ways that AI could potentially impact the media, I think its impact will be significant but, as long as we are responsible with it, not nearly as big as that of the internet and social media.
It just occurred to me that the algorithms will likely decide whether you will read this article. May the algorithms be ever in your favour.