Artificial intelligence and the labelling obligation
More transparency soon?
When we look around the web, we increasingly find content generated by artificial intelligence, whether texts, animations or images. What's more, this AI-generated content often appears deceptively real, as if it had been created by a human being.
But this is precisely what many consider to be a problem. AI can be used to spread disinformation and misinformation, and sometimes this even happens unintentionally: even advanced text generators like ChatGPT can occasionally produce false information without users noticing. This raises the question: should such content be labelled?
Because of these challenges, both the European Commission and the US government have begun urging internet companies to label AI-generated content as such. The idea behind this: users should be able to identify immediately whether the content they are consuming was created by an AI. So far, implementation is voluntary in both the US and the EU. However, some large companies, including Google and its subsidiary YouTube, Facebook and TikTok, have already signed the EU's strengthened 2022 Code of Practice on Disinformation, which commits them to flagging AI content on their platforms.
But how exactly does this labelling of AI content work in practice?
Here are some examples:
Google reacted after political advertising videos containing AI manipulations appeared on YouTube in the run-up to the 2024 US presidential election. The company expanded its "Manipulated Media Policy": from November 2023, anyone placing political advertisements with AI-generated content must disclose this. The exact form of the label remains open, but Google requires it to be "clear and conspicuous" so it cannot be overlooked. Compliance with the labelling requirement will be checked through a combination of automated systems and human review.
TikTok, a platform where mainly short video content is shared, allows users to label their videos as "AI-generated". The corresponding note ("Marked as AI-generated by creator") is displayed directly below the video. This labelling is currently voluntary, and users who do not disclose the use of AI face no consequences. However, TikTok is exploring ways to automatically detect and flag AI-generated content.
OpenAI, together with internet giants such as Google, Meta and Amazon, has committed to developing effective methods for labelling AI content. One idea is to automatically embed a special watermark in content created by their in-house AI applications. However, these technical methods are still in the development phase.
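To make the watermarking idea concrete: one research approach (the article does not name a specific method) is a statistical watermark, where the generator is nudged to prefer a pseudo-randomly chosen "green" subset of words, and a detector later checks whether green words appear more often than chance. The sketch below is a toy illustration of the detection side only, with an invented hash-based green rule; real systems operate on model tokens and use proper significance tests.

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    """Toy rule: hash the word pair; by construction, about half
    of all possible pairs count as 'green'."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(text: str) -> float:
    """Fraction of adjacent word pairs in the text that are 'green'.

    Unwatermarked text should hover near 0.5; a generator that was
    biased toward green continuations would score noticeably higher,
    which is the statistical signal a detector looks for.
    """
    words = text.split()
    if len(words) < 2:
        return 0.0  # too short to measure anything
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)
```

The key design point is that the watermark lives in the statistics of word choices rather than in visible metadata, so it survives copy-and-paste, though it weakens under heavy paraphrasing.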
Is a legal labelling requirement for AI-generated content coming soon?
It seems to be only a matter of time before the labelling of AI content becomes a legal requirement. Plans for this are already part of the EU's "AI Act", which is currently being drafted and which aims to regulate the use of AI in order to protect the fundamental rights of the population.
Why do we need mandatory labelling?
The use of AI-generated content has, of course, huge benefits, especially in areas such as content marketing and e-commerce, where companies use it to reduce costs and increase efficiency. However, there is a risk that consumers could be deceived if they cannot tell that the content was created by a machine. A labelling requirement would ensure that readers know what they are consuming and would promote transparency and trust in artificial intelligence.
The use of AI-generated content will undoubtedly continue to grow. In this era of automation and AI, it is crucial that we think about the ethics and transparency of such content. The introduction of mandatory labelling could be an important step toward maintaining consumer trust and preserving the integrity of the digital space.