How AI is transforming communication

Prof. Dr Edda Humprecht talks about AI and social media.
Edda Humprecht is Professor of Digital Communication and the Public Sphere. Her research covers topics including how disinformation on social media is changing society.
Image: Jens Meyer (University of Jena)

Communication scientist Prof. Dr Edda Humprecht uses machine learning to analyse social media posts, content that AI tools are now also capable of producing. In this interview, she explains how AI is affecting journalistic reporting, how users engage with AI-generated social media content, and what influence AI-produced information may have.

Interview by Irena Walinda


Where do we encounter artificial intelligence in our communication processes?

Artificial intelligence is ubiquitous in communications: algorithms offer recommendations when shopping online and streaming music; news sites suggest articles that might interest us. Beyond that, there are translation services like DeepL and Google Translate, as well as chatbots, which companies deploy in customer support to solve problems and improve customer service.

How do you apply AI in your research and teaching?

AI plays an important role in our research. We work with machine learning: AI helps us understand the content of digital communication, and we train the algorithms with manually coded data. At the same time, artificial intelligence is itself a subject of our research, because we are interested in how the use of AI affects journalism and news consumption, and how AI should be regulated.
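To make the workflow of training on manually coded material more concrete, here is a minimal sketch, assuming a hypothetical CSV file of hand-labelled posts and a standard scikit-learn pipeline; it is an illustration only, not the research group's actual setup.

```python
# Minimal sketch: supervised classification of social media posts trained on
# manually coded data. "coded_posts.csv" and its columns are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

coded = pd.read_csv("coded_posts.csv")  # columns: "text", "label" (human-coded)

train_texts, test_texts, train_labels, test_labels = train_test_split(
    coded["text"], coded["label"], test_size=0.2, random_state=42
)

model = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=20000, ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(train_texts, train_labels)

# Check the model against held-out, manually coded posts before applying it
# to the full corpus of digital communication.
print(classification_report(test_labels, model.predict(test_texts)))
```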

I use ChatGPT to write letters of recommendation, which students can use in applications. I prompt ChatGPT to produce a draft and then edit it. It lightens my workload and speeds processes up.

Do news desks and media organizations harness AI?

AI is already a major topic in journalism and news production, and many editorial teams are experimenting with it. They are trying to work more efficiently, generating images with Midjourney or Stable Diffusion, or producing simple, formulaic articles such as stock market reports and recipes. So far these have mostly been trials, but many newsrooms are working on it.

What are the impacts of the deployment of AI-based technologies?

Communication is changing, but that has always been the case. Generative AI systems such as ChatGPT can provide significant advantages. They can assist in or take over certain tasks, accelerating processes and allowing journalists more time to focus on investigative research, for example. Additionally, machine learning and deep learning tools are used in journalism to analyse large amounts of data. In the case of leaks, for instance, research networks use these tools to decipher vast amounts of data.

Does the use of AI in journalism also present risks?

There are risks, of course, regarding misinformation and deepfakes, that is, manipulated videos, audio and images generated with AI tools and used to place information in a false context. This can lead to users developing a general scepticism towards online content. Although it is positive if people don’t believe everything they see, it can also result in a general loss of trust in online media. Some people then adopt the mindset that everything is relative, or that perhaps nothing is true anymore; they feel a sense of powerlessness.

We were already seeing this in our research before AI entered communication. Users are deliberately turning away and no longer want to read or watch the news. This is more prevalent in times of crisis, such as during the pandemic, and is associated with a certain overload from all the negative information we are exposed to. This trend could be further exacerbated if deepfakes become more widespread.

What new skills do people need to learn in dealing with AI?

People need to be able to critically examine AI output because, time and again, we see that this output is flawed. AI translations can be inaccurate, and generated images, texts and other content can contain errors.

We can currently observe this very clearly in the conflict in the Middle East. AI-generated images are being disseminated, spreading incorrect information or placing it in the wrong context. One example is an image of a young boy in Gaza looking towards the camera in search of help. If you examine the image closely, you notice several discrepancies: some of the shadows do not look right, the Palestinian flag is not accurately printed on the t-shirt, and he has six fingers on one hand. This example clearly shows how important it is for informed people to correct this kind of disinformation.

What can journalists do?

The work of journalists who research, analyse and report on issues, particularly from crisis zones that we only learn about through the media, is more important now than ever. It would be highly risky to publish unverified AI-generated information, as this would spread even more inaccurate information. News desks that focus on fact-checking are therefore essential. Our research indicates that correcting misinformation has a clear impact.

Are there any control mechanisms or models to highlight AI-generated content and expose manipulation?

It can be challenging to detect manipulated images since the technologies used for manipulation are advancing rapidly. One way to safeguard copyrights is by incorporating digital watermarks, especially when AI-generated images are reused.
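Purely as an illustration of what a digital watermark can look like in practice, and not a method described by the interviewee, the sketch below stamps a visible, semi-transparent label onto an image with Pillow. The file names and label text are hypothetical; robust invisible watermarking or provenance metadata requires specialised tooling.

```python
# Illustrative sketch: adding a visible, semi-transparent watermark to an image
# with Pillow. File names and the label text are hypothetical placeholders.
from PIL import Image, ImageDraw

image = Image.open("generated_image.png").convert("RGBA")
overlay = Image.new("RGBA", image.size, (0, 0, 0, 0))
draw = ImageDraw.Draw(overlay)

# Place a semi-transparent label in the lower-right corner.
label = "AI-generated"
draw.text((image.width - 160, image.height - 30), label, fill=(255, 255, 255, 128))

# Merge the overlay with the original image and save the result.
watermarked = Image.alpha_composite(image, overlay)
watermarked.convert("RGB").save("generated_image_watermarked.jpg")
```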

It is essential to recognize that the attempts at manipulation are made by humans rather than AI tools. Therefore, humans are accountable for the manipulation. At the same time, humans are the only ones capable of interpreting an image and identifying whether it was misused or published in an inappropriate context.

Don't politicians have a responsibility to act?

The European Union’s AI Act proposes a series of regulations. These rules will be extremely important, including for us as researchers, because access to data is a huge problem. Big tech companies such as Microsoft and Meta lack transparency about how they operate, making it difficult for us to understand how certain algorithms work or what data they are trained on. We also often struggle to obtain the social media platform data we need for analysis. The regulations in the AI Act would not only safeguard users from potential manipulation but also give us access to the data required for scientific research.