
A growing body of research indicates that between 33% and 50% of Americans struggle to distinguish between real and altered digital content. One leading researcher argues that this issue is undermining the credibility of scientific research.

Artificial intelligence (AI) has significantly advanced the creation of deepfakes: manipulated or wholly synthetic images, videos, and audio. These forgeries have become increasingly sophisticated, making it ever harder to distinguish what is real from what is fake.

This rise in deepfake media poses a serious threat to the integrity of the scientific research we depend on for technological advancements, decision-making, and understanding the world around us.

Sibel Erduran, Professor of Science Education and Director of Research at Oxford University, highlights these concerns in a recent paper.

"Science relies on trust: trust in the data, methods, and findings produced by the scientific community," says Erduran. "That trust is already strained at times, even without the influence of deepfakes."

Erduran points out that fake scientific papers are already flooding academic journals. In 2023 alone, more than 10,000 research articles were retracted due to fraudulent content.

While scientific integrity has long faced challenges, Erduran believes deepfakes add another level of complexity to the problem.

"Deepfakes further erode trust in science by allowing for the manipulation of data, particularly visual data, in ways that can be incredibly difficult to detect," she explains.

Erduran outlines how deepfake technology could be exploited to manipulate images or create fabricated data, leading to the propagation of false scientific conclusions. She also warns that videos of respected scientists could be altered to spread misinformation, amplifying public mistrust.

This manipulation is particularly concerning when applied to critical issues such as public health or climate change, where the stakes are high and accurate information is crucial.

Addressing the deepfake dilemma

Although deepfakes present new challenges, Erduran offers potential solutions. Reliably detecting deepfakes is the first step toward combating their influence.

"Developing sophisticated detection tools to spot the subtle inconsistencies produced by deepfake technology, such as irregularities in facial movements in videos, can help in the fight against this problem," says Erduran.

Another strategy is implementing stricter regulations. Establishing guidelines and best practices for the ethical use of AI in scientific research and other trust-dependent fields can also reduce the risk of manipulation.

Exploring the positive potential of deepfakes

While deepfakes pose a serious threat to scientific trust, Erduran also sees opportunities for positive applications. She suggests that AI-generated content could be harnessed to advance technology and innovation, particularly when used to detect and counter false data.

Deepfake techniques, she notes, could also power realistic simulations in educational settings, for example helping medical students develop critical skills without putting real patients at risk.

"Despite the risks, deepfakes can serve as a valuable learning tool, allowing scientists to explore how trust is built within the scientific community and how ethical principles should guide scientific communication," Erduran states.

Although the potential for harm is evident, deepfakes also offer opportunities to improve scientific understanding and education. According to Erduran, it will be up to the scientific and academic communities to rise to the challenge and find ways to turn these risks into strengths.

Public vulnerability to deepfakes

While deepfakes are alarming for the scientific community, the general public is also at risk. A survey conducted in the U.S. found that 33% to 50% of participants, including students, educators, and adults, could not reliably distinguish deepfake content from real media.

To address this, Erduran emphasizes the importance of "deep learning" in the educational sense: teaching that fosters critical thinking, analytical reasoning, and creative problem-solving skills.

"Learning to spot misinformation is an essential part of developing these skills," she says. "By incorporating effective detection tools, promoting strong ethical standards, and adopting research-based educational methods, we can ensure that deep learning in science thrives despite the rise of deepfakes."

Erduran concludes that while deepfakes may pose significant risks to the integrity of science, with the right tools and approaches, the scientific community can not only mitigate these threats but also benefit from the opportunities they present.