UK: Women Press Freedom Condemns Deepfake Attacks on Cathy Newman as Part of a Growing Trend Against Journalists

Newman uncovered her own AI-generated "deepfake" pornography during an investigation into the online abuse of women through manipulated images

Location: United Kingdom
Date: April 17, 2024

Women Press Freedom stands firmly against the disturbing and growing misuse of artificial intelligence to create deepfake pornography, the latest target being Channel 4 News presenter Cathy Newman. Deepfake technology has morphed into a weapon against women journalists, undermining their safety, dignity, and professional integrity. The widespread and unchecked proliferation of such content signals a crisis. The UK's steps to criminalize the creation of deepfake pornography are commendable, yet they are merely the beginning of what must be a robust, multi-faceted response. Gaps in legislation and the sluggish response of technology platforms in moderating such content have enabled the perpetuation of harm and the erosion of media credibility.

Women Press Freedom calls for a comprehensive strategy involving stricter content moderation policies, advanced detection technologies, and decisive legal action against the creation and distribution of manipulated content. Social media companies, in particular, must be held accountable for their role in facilitating the spread of harmful content and should invest significantly in technologies that can detect and prevent the dissemination of deepfakes.


“I’ve become accustomed to watching disturbing footage and, having been repeatedly trolled online, I consider myself pretty resilient,” Cathy Newman, a prominent presenter and investigations editor at Channel 4 News, writes about the experience of finding deepfake pornographic videos of herself online. “I therefore thought that I would be relatively untroubled by coming face to face with my own deepfake. The truth was somewhat different.”

The discovery came as her team delved into the burgeoning abuse of women online through artificially manipulated images. Newman was astounded to find herself among the millions globally who had been exploited without consent.

The Channel 4 News investigations team initially aimed to explore the scale and consequences of deepfake abuse and potential countermeasures. Their research highlighted the extent of the issue, identifying nearly 100 million views of such content on just five popular sites over three months. Of approximately 4,000 public figures featured in these manipulated videos, at least 250 were British, including Newman herself.

Despite reaching out to multiple celebrities similarly affected, none consented to participate in the project, likely due to the fear of further exposure. Newman, demonstrating considerable bravery, chose to serve as her own case study, consenting to be filmed as she viewed the offensive content for the first time. 

“The longer I watched, the more disturbed I became. I felt utterly dehumanized,” Newman writes. 

The UK government has responded by making the creation of deepfake pornography a criminal offense. However, legislation and technological responses are still struggling to keep pace with the rapid proliferation of the technology, even as the emergence of deepfake videos introduces a formidable new threat to the credibility and safety of women reporters worldwide.

Women Press Freedom has documented a surge in these incidents, highlighting that deepfake videos are not only targeting individual journalists but are also being used to promote scams and other malicious activities. Such attacks pose a direct threat to the professional integrity of the journalists involved and contribute to the broader erosion of trust in the media.

The situation is exacerbated by the inadequate responses from major social media platforms. YouTube's policy, which puts the onus on uploaders to disclose AI alterations, and Facebook's lack of specific guidelines for AI-generated audio, demonstrate a significant gap in the regulatory framework necessary to combat this issue. These gaps allow harmful content to spread virtually unchecked, leaving journalists to deal with the consequences to their reputations and personal safety.


In response, some journalists have taken proactive steps, such as disavowing manipulated content through their own channels or collaborating with their newsrooms to produce segments that clarify and refute the disinformation. However, these efforts are often insufficient to fully counteract the speed and reach of AI-fueled disinformation campaigns.

 

Selection of Deepfake Cases Documented by Women Press Freedom in 2023 and 2024

  • Jomayvit Gálaga and Verónica Linares: In March 2024, the media channel Perú21TV revealed that an AI-manipulated video promoting an investment scheme, featuring its reporter Jomayvit Gálaga and other Peruvian celebrities, including América TV’s Verónica Linares, was being shared online.

  • Susanne Daubner: In February 2024, fake AI-generated audio clips from the Tagesschau news program were played at a demonstration on Wilsdruffer Street in Dresden, where thousands had gathered for the so-called Monday Demonstrations against government policies. The recordings contained false apologies purportedly from Tagesschau presenters for alleged “lies” and “deliberate manipulations” in their reports for the ARD broadcasting network, addressing topics including the Ukraine conflict, the Covid-19 pandemic, and the protests themselves.

  • Sian Norris: In January 2024, the renowned investigative journalist and advocate for women's rights was subjected to a malicious online attack through the creation of false pornographic profiles and the potential use of deepfake technology.

  • Colette Fitzpatrick: A deepfake Instagram ad, discovered in January 2024, manipulated footage of Virgin Media news anchor Colette Fitzpatrick and Prime Minister Leo Varadkar to promote a fraudulent investment scheme.

  • Nair Aliaga: An AI-manipulated photograph depicting Golperú journalist Nair Aliaga without clothing was shared online in December 2023. 

  • Bongiwe Zwane and Francis Herd: In November 2023, the South African Broadcasting Corporation (SABC) was compelled to clarify that its anchors Bongiwe Zwane and Francis Herd had been impersonated in deepfake videos circulating online. These videos, promoting a fraudulent investment scheme, attracted significant attention, with one featuring Herd garnering over 123,000 views on YouTube.

  • Gayle King: In October 2023, CBS This Morning co-host Gayle King revealed she was the subject of an AI-generated clip that featured her promoting a product she had never used.

  • Anne-Marie Green: In October 2023, Forbes reported on a popular TikTok account that creates and spreads fake news segments featuring AI-generated appearances from renowned American journalists, including a deepfake video of CBS News anchor Anne-Marie Green discussing a school shooting.

  • Ksenia Turkova: In October 2023, VOA’s Russian Service discovered a deepfake video featuring its journalist Ksenia Turkova, seemingly presenting a news segment on cryptocurrency. The video used AI-generated content, imitating Turkova's voice and appearance convincingly.

  • Monika Todova: The Slovak journalist was the victim of a deepfake audio clip that circulated online in September 2023, during a critical pre-election period. The incident underscores a disturbing trend in which AI technology is weaponized to undermine public trust in journalism, distorting the truth and spreading falsehoods through seemingly credible yet entirely fabricated audio and visual content.

 

The rise of AI-generated content targeting female journalists demands urgent and coordinated action. This includes more robust content moderation policies from social media companies, improved AI detection tools, and greater collaboration between the tech industry and journalism professionals. Without these measures, the proliferation of deepfakes could have devastating effects on journalists' credibility and the press's fundamental role in a democratic society.

As AI technology continues to evolve, the challenge of distinguishing between real and manipulated content will only grow, making it imperative for all stakeholders involved — journalists, tech companies, and policymakers — to work together to safeguard the truth and protect those who report it. The battle for press freedom in the era of digital journalism is not just about protecting individual journalists but preserving the integrity of our information ecosystem.

Lawmakers and regulators lag behind, and big tech companies falter in their response, making the need for more stringent controls ever more urgent. The Online Safety Act has made sharing such content illegal, but the broader measures necessary to combat the creation and dissemination of deepfake pornography are yet to be fully implemented.

Deepfake Threats Globally - 2023/24

Newman's ordeal highlights the broader societal implications of AI misuse and emphasizes the need for immediate and effective action to safeguard individuals' dignity and privacy. The delay in addressing these concerns means more victims suffer each day, underscoring the necessity for swift legislative and technological interventions.

Women Press Freedom stands in solidarity with Newman and countless other victims, affirming the urgent need for comprehensive action against this malicious use of artificial intelligence. Deepfake technology, while innovative, has been weaponized to undermine the safety, privacy, and credibility of women across various professions, notably in journalism. The cases of Newman and others, such as Slovak journalist Monika Todova, represent not mere isolated incidents but a concerning trend in which deepfakes are used to distort truths, manipulate public perception, and inflict profound personal harm.

The emotional and professional toll on those affected by deepfakes is immeasurable. Journalists, tasked with upholding truth and integrity, find their own images being used against them, creating a chilling effect on free speech and reporting. The profound impact of witnessing oneself manipulated in deeply disturbing ways cannot be overstated — it is an act of psychological violence that leaves indelible scars.

Current legislative and technological responses to this crisis are woefully inadequate. While the UK's Online Safety Act marks a step forward, much remains to be done. We advocate for stronger legal penalties for the creation and distribution of deepfake content and improved regulatory frameworks for online platforms where such content proliferates.

Moreover, social media giants must be held accountable for their role in the dissemination of these harmful materials. The policies of platforms like YouTube, which currently place undue burden on content creators to flag AI alterations, and Facebook, which lacks stringent guidelines for AI-generated audio, are insufficient. We demand comprehensive content moderation policies that can quickly identify and mitigate the spread of deepfake content.

In tandem with policy reform, technological advancements in detecting and countering deepfakes must be prioritized. Collaboration between the tech industry and journalistic entities is crucial to developing effective solutions that protect individuals and uphold the integrity of journalism.

Women Press Freedom is committed to working tirelessly to ensure that the rights and safety of women journalists are not compromised in our increasingly digital world. We stand for stringent controls, proactive measures, and a collective resolve to combat the misuse of AI, protecting the cornerstone of democratic society — press freedom. The time for action is now to ensure our information ecosystem's safety and safeguard individuals' dignity and privacy against the relentless tide of technological abuse.

 
 

Women Press Freedom is an initiative by The Coalition For Women In Journalism

The Coalition For Women In Journalism is a global organization of support for women journalists. The CFWIJ pioneered mentorship for mid-career women journalists across several countries and is the first organization to focus on the status of free press for women journalists. We thoroughly document cases of any form of abuse against women journalists in any part of the globe. Our network of individuals and organizations brings together the experience and mentorship necessary to help women journalists navigate the industry. Our goal is to help develop a strong mechanism where women journalists can work safely and thrive.

If you have been harassed or abused in any way, please report the incident using the following form.
