Picture: Freepik

“AI is here to stay, so let’s learn to use it responsibly”

AI in science communication has to be used the right way. How can we ensure ethical and transparent communication practices? Researcher Núria Saladié offers practical examples based on new guidelines that she and her team have developed.

Núria Saladié is a science communicator with experience in public engagement activities, educational materials, and science news writing. She is currently pursuing a PhD in science communication at Universitat Pompeu Fabra, Barcelona, where she also researches the use of AI among science communication professionals. Picture: Science, Communication and Society Studies Centre, Universitat Pompeu Fabra

As a researcher in science communication, how do you primarily use AI in your daily work?

I have tested many different platforms. I use ChatGPT of course, that’s a classic. I mainly use it for background information and when I need nuanced translations. I’ve also used Leonardo.ai for image generation. Another platform I’ve tried is Consensus, which helps with bibliographic references. You ask a question and it provides entries from scientific journals, so it’s useful for researching specific topics and gives good, evidence-based background information. I also use HappyScribe when I need to transcribe audio into text.

The “Good practice principles on science communication and artificial intelligence” emphasize responsible use of AI. Science communicators use many digital technologies in their work. Why does AI deserve special care and attention?

The document “Good practice principles on science communication and artificial intelligence” has been created by the Spanish-speaking professional community of science communication in an iterative co-creation process. It is an initiative of the Science, Communication and Society Studies Centre, Universitat Pompeu Fabra (CCS-UPF), with the collaboration of the Fundación Española para la Ciencia y la Tecnología – Ministerio de Ciencia e Innovación (FECYT). Picture: Good practice principles on science communication and artificial intelligence

This is a very interesting time for science communication. AI affects almost every profession. Given our focus at the Science, Communication and Society Studies Centre at Universitat Pompeu Fabra, we became very interested in both the benefits and risks of AI for science communicators. We thought it would be valuable to develop guidelines – principles – to guide science communicators in their use of AI. That’s how we started to develop these documents.

We looked at two different aspects: first, how science communicators use AI in their work, and second, how they communicate about AI advances, and whether they do so responsibly. Responsible communication means avoiding hype, not exaggerating risks or benefits, and considering society’s needs, not just commercial interests.

One of the principles is to avoid increasing biases or stereotypes when using AI. What strategies can science communicators employ to ensure they adhere to this principle?

As science communicators, we may not be able to change the algorithms, but we can be aware of these issues. Scientific evidence shows that AI tends to reproduce stereotypes because it learns from existing biased information. It’s also important to remember that AI is not the absolute truth. It can hallucinate and reproduce outdated or biased information. So we need to keep a critical eye and use our brains to verify and interpret AI-generated content.

We should focus on adding “human value” by making our content accessible, inclusive and relevant. As a collective of science communicators, we should ask ourselves: How can we use AI in a way that benefits our profession, benefits science, benefits communication, and benefits society at large? This is what we mean by using it responsibly.

"We need to keep a critical eye and use our brains to verify and interpret AI-generated content." Núria Saladié

Transparency in AI tools is another critical principle. Can you share examples of how transparency can be effectively incorporated into science communication?

First, it’s about being clear when we use AI in our work. For example, if we create an image or social media post using AI, we should say so. Even if we edit the AI-generated content, it’s important to say that AI was involved.

Secondly, it’s about the AI tools themselves. Some AI platforms are opaque, meaning we don’t know where their information comes from or how their algorithms work. Others are more open, with accessible code and the ability for users to suggest improvements. Where possible, we should prioritise the use of these more transparent tools. They allow us to better understand how they work and ensure that the information they provide is reliable.

So it’s twofold: we need to be transparent about our use of AI, and we need to choose AI tools that are transparent themselves. This approach will help build trust and ensure responsible science communication practices.

On the topic of trust: How does the integration of AI in science communication influence public perception of scientific research?

That’s a very good question. As science communicators, we have the power to influence and impact policies and regulations about AI and its uses in communication. And so, we can help prioritise transparent, open and ethical tools.

If we use AI responsibly, it shouldn’t have a negative impact on trust. By being transparent about the use of AI and clearly stating what tasks AI is being used for, we can help maintain trust. For example, if we say, “Yes, I’m using AI for this task, but I’m still relying on human creativity and judgement for other aspects”, it shows a balanced approach.

Our goal is to use AI to optimise our work, not to replace human input. We want AI to help us be more creative and effective, not turn us into robots producing cookie-cutter content. By following the principles of responsible AI use, trust should remain intact and even be strengthened by demonstrating our commitment to ethical practices.

"As science communicators, we have the power to influence and impact policies and regulations about AI and its uses in communication." Núria Saladié

What resources or training would you recommend for science communicators to improve their understanding and responsible application of AI?

Just as we had training on how to use Google a few years ago, we now need training on how to use AI. It’s important for science communicators to train themselves to use these tools correctly and responsibly. But it’s not just about specific tools like ChatGPT or DALL-E; it’s also about learning how to write effective prompts. AI can give great answers if we give it good prompts, so training ourselves in prompt engineering is crucial.

At the Science, Communication and Society Studies Centre, we’ve organised some training webinars on AI for science communicators. We’ve done sessions on responsible use of AI and specific tools recommended by different speakers who use AI in their practice. They shared what tools they use, how they use them and why they chose those tools.

There are also many other platforms offering AI training. JournalismAI, for example, is a global network of journalists using AI and promoting its good use, and they offer many courses.

We’ve talked about science communicators using AI as a tool in their own work. But the principles also call for objective reporting on AI developments. How can science communicators balance reporting potential benefits and risks without exaggeration?

As science communicators, talking about AI responsibly means avoiding extremes. It’s easy to fall into the trap of doom and gloom, like “robots will take our jobs”, which causes panic, or the other extreme of over-optimism, like “AI will make us extra creative and solve all our problems”.

Instead, we should base our information on evidence. The responsible approach is to maintain a critical mind and not be swayed by hype – whether it’s about the benefits or the potential risks.

"The responsible approach is to maintain a critical mind and not be swayed by hype - whether it is about the benefits or the potential risks." Núria Saladié

How did you go about creating the principles?

We used a participatory process, which means that the final result was created by a collective of science communicators, not just us. Every year we organise the “Campus Gutenberg”, a science communication conference, where communicators from Spain and Latin America meet to discuss new developments and best practices. Gema Revuelta, my boss, led the development of the guidelines.

Last year we dedicated a session to the development of these guidelines. We invited conference speakers who had submitted proposals on AI-related topics. Each speaker proposed a few statements that could serve as guidelines for science communicators.

We started with 26 statements and then narrowed them down by combining similar ideas and prioritising the most important ones. We shared these refined statements with a working group of volunteers who helped to further prioritise and reword them. We ended up with a final list of 10 principles.

What happened next?

To ensure broad input, we conducted an open call for feedback from science communicators across Spain and Latin America, receiving over 100 responses. We then formatted and translated the final document into several languages, including Spanish, English, Italian, Catalan and Basque, with translations into Dutch, French, Galician and Portuguese underway. The principles are available on Zenodo, an open science repository, to encourage sharing and use.

Are you open to feedback from the community?

Absolutely. We have a form where people can pledge to use the principles. In this way, individuals can show that they agree with the principles and consider them important for the responsible use of AI.

"For someone who has not jumped in yet and is a bit hesitant, I would say there is nothing wrong with using AI if you do it well." Núria Saladié

We can’t enforce compliance, but if people sign up, it signals that these principles are relevant and valuable for guiding responsible science communication practices. We hope that people will continue to use them and provide feedback.

If anyone suggests new principles, or points out any that are unclear, we would be happy to consider these suggestions and continue to develop the document. It’s a participatory document and we welcome ongoing community involvement to keep it alive and relevant.

Finally, for people who might be hesitant to engage with AI – what advice would you give to them?

For someone who hasn’t jumped in yet and is a bit hesitant, I would say there’s nothing wrong with using AI if you do it well. “Doing it well” can be a bit vague, so as a guide, think about the bigger picture. Ask yourself: How does this tool help you? Is it necessary? Think about the time it can save you, the benefits it can bring, and how these compare to the risks. Is it worth the risk and the time it takes to learn? AI is here to stay, so let’s learn to use it responsibly.