Introduction
In 2023 Australia's policymakers sounded regulatory alarm bells about the growing sophistication of deepfake videos and the need for controls on the generative artificial intelligence (AI) tools used to create them. Policy discussion papers highlighted the risks posed by deepfakes (AHRC, 2023; Bell et al., 2023; DISR, 2023a), but said little about their creative and commercial advantages. With STEM disciplines leading the AI policy charge, it is vital that Canberra engage more closely with the screen and media industries to develop an approach to regulating AI that supports innovation without curtailing its benefits.
Internationally, screen content producers are deeply invested in synthetic media technology, including the Generative Adversarial Network (GAN)-created audio-visual simulations colloquially known as ‘deepfakes’ (Murphy et al., 2023). Film studies scholars position deepfakes as novel and persuasive special video effects (VFX) (Bode, 2021; Holliday, 2021), with deepfake applications vastly reducing screen production costs (Gosier, 2022) and driving innovations in drama, documentary and creative online content that extend to commerce, health and education.
Despite these benefits, deepfakes are most often associated with their destructive uses. The European Union (EU) positions deepfake-producing GANs as a risk ‘subject to transparency obligations’ (European Parliament, 2023: 5), while the UK Bletchley Declaration on AI omits creative screen industry deepfake applications from the positive use-cases it highlights (UK Government, 2023b). In its June 2023 AI discussion paper, the Australian Department of Industry, Science and Resources (DISR) listed deepfake video's potential to ‘influence democratic processes or cause other deceit’ as a leading AI risk (DISR, 2023a: 7).
The Guardian interpreted this ranking as a proposal to ban deepfakes (Karp, 2023), while a computer science researcher subsequently remarked: ‘Is there ever a good reason to create a deepfake video? I couldn't really think of one’ (ABC, 2023).
Our concern is that while the DISR is seeking external feedback on the proposal to ban certain AI technologies due to their ‘implications for Australia's domestic tech sector and… export activities’ (DISR, 2023a: 26), nowhere in the government's interim response does the future of the screen industries explicitly figure. AI screen applications, aside from those used in game production, are not on DISR's list of ‘Critical Technologies in the National Interest’ (DISR, 2023b) – and are only briefly mentioned in the Australian Communications and Media Authority's AI consultation paper (ACMA, 2020).
With the government now consulting more widely on the future of AI, we seek to move the policy debate beyond the deepfake ‘problem’ (Gosse and Burkell, 2020) to consider deepfakes from two creative screen industry perspectives. First, we identify the diverse risks and benefits of deepfake applications in film, media, politics, commerce and education; and second, we compare current and emerging international deepfake regulations with their Australian equivalents to conceptualise the potential impact of a deepfake ban on domestic screen content producers. In doing so, we aim to demonstrate that a STEM-focused approach to deepfake regulation is insufficiently attuned to the cultural, industrial and educational benefits of synthetic media in the AI communications economy.
The deepfake ‘problem’
Since deepfakes emerged as a pornographic novelty on Reddit in 2017 (Hao, 2021), they have been variously situated as an epistemological threat to governing systems (Fallis, 2020); a political threat to democratic processes (Agarwal et al., 2020; Diakopoulos and Johnson, 2021; Vaccari and Chadwick, 2020); a reputational threat to consumers and businesses (Westerlund, 2019); and as harbingers of a collective ‘media nihilism’ (Vincent, 2018) that will destroy civic trust, leading to the ‘end of reality as we know it’ (Leary, 2017).
Cinema is replete with simulations of people doing and saying things they never said or did (Bode, 2017), but deepfake technology differs from its precursors in democratising the tools of screen deception. Unlike the expensive VFX of 1990s–2000s franchises The Lord of the Rings and Harry Potter, deepfakes can be made by anyone with access to multiplying, open-source AI applications (Hwung, 2024), for a fraction of the cost and time of conventional, studio-produced screen illusions (Gosier, 2022). The ease and speed with which users can extort, harm and manipulate others, and the proliferating criminal, human rights and political violations being perpetrated by malign deepfakes, are key drivers of global initiatives to police them (Kop, 2021; Langa, 2021).
It is important to delineate deepfakes from another popular screen deception, ‘cheap fakes’ (Paris and Donovan, 2019). While the former harnesses AI's superior data-processing capabilities (Kaynak, 2021), the latter relies on basic editing tools. Two viral examples are the Acosta/Trump video (Jimmy Kimmel Live, 2018), in which footage of Jim Acosta was sped up to make him appear to shove a White House aide, and the Pelosi ‘drunk’ video (O'Connell, 2019), in which US Speaker Nancy Pelosi was subtly slowed to make her seem inebriated. These fabrications are also easily detectable, with viewers uploading split-screen videos revealing their construction (Harwell, 2018; Sadiq, 2019).
In contrast, even experienced filmmakers find skilfully executed deepfakes difficult to identify (Broinowski, 2023; Dagar and Vishwakarma, 2022). GANs produce deepfakes by operating two networks in a mutually critical relationship similar to that of a ‘forger’ and ‘detective’ (Giles, 2018). The first scrapes audio-visual data from the web to generate counterfeits; the second compares these with their authentic equivalents, providing iterative feedback until the fake is indistinguishable from the real (Langa, 2021). As deepfake technology improves, it overcomes the flaws of its earlier forgeries, including inconsistent movement, lack of blinking and variations in physiological signals (Masood et al., 2023). Deepfake simulations are also strengthened when viewed on hand-held devices rather than the big screen: the low resolution of small-screen playback helps to conceal the illusion.
It is no accident, then, that deepfake-assisted crime is on the rise. The World Economic Forum estimates that deepfake technology was employed in 66% of cybercrimes committed during 2022 (Bueermann and Perucica, 2023). In 2021, criminals used deepfake audio of a company CEO to convince a branch manager to transfer US$35 million to their accounts (Brewster, 2021). In 2022, the FBI warned that cyberhackers were wearing deepfake masks in online tech-security job interviews to gain access to sensitive systems (Dobberstein, 2022). In 2023, rogue advertisers deployed a slew of celebrity deepfakes: a Tom Hanks forgery sold dental plans without his consent (Guardian, 2023); Scarlett Johansson sued a start-up for deepfaking her into an AI app promotion (Roth, 2023); and an Australian infomercial used a badly dubbed deepfake of federal treasurer Jim Chalmers to endorse a bogus trading platform (Edberg, 2023).
In the political sphere, deepfakes are increasingly being harnessed to spread disinformation by both extremist (Busch and Ware, 2023) and mainstream actors. In 2020, a Zionist Facebook page uploaded video ‘sock puppets’ (synthetic humans generated from deepfake photos) who claimed to be left-wingers now united behind conservative Israeli Prime Minister Netanyahu (Benzaquen, 2020), and progressive lobby group RepresentUS produced a Kim Jong Un deepfake gloating about the demise of American democracy (Elliott, 2020). In 2021, Dutch politicians conducted video calls with what they believed was Russian dissident Alexei Navalny's chief-of-staff, later realising they had spoken with an AI forgery (NL Times, 2021). In 2022, deepfakes were weaponised for war propaganda when a clumsily doctored President Zelensky urged Ukrainians to surrender one month after the Russian invasion (Telegraph, 2022). In 2023, deepfakes escalated the polarisation of political discourse in the United States, and are predicted to influence the 2024 election (Painter, 2023). A deepfake of a Cable News Network (CNN) anchor announced that Republican candidate Donald Trump ‘ripp[ed] us a new asshole’ on Truth Social (Mastrangelo, 2023); Governor Ron DeSantis broadcast compromising deepfake audio of Trump (Isenstadt, 2023); a Bellingcat investigator deepfaked photos of Trump being arrested (Stanley-Becker and Nix, 2023); and the Republican Party released a campaign video showing a deepfake President Biden celebrating his 2024 win, followed by its imagined consequences: the invasion of Taiwan, the closure of banks, and San Francisco destroyed by crime (Binder, 2023).
Pernicious, unethical uses of deepfake technology are expanding in the commercial, political, media and entertainment sectors, with a 900% rise in circulating deepfakes between 2019 and 2022 (Bueermann and Perucica, 2023). However, perhaps the most egregious and under-researched use of deepfake technology can be found where it began: in the sexual exploitation of women and girls (Maddocks, 2020). AI researchers Sensity (2020) have found that malicious deepfakes double every six months, and that of the 85,047 deepfakes online in 2020, 90–95% were non-consensual porn, 90% of which targeted women (Hao, 2021). By 2023, AI abuse of women and girls had increased: one US identity-theft review found that 98% of the 95,820 deepfake videos circulating that year were pornographic, with 99% of these targeting females (HSH, 2023).
Deepfake porn depicting UK broadcaster Helen Mort (Hao, 2021), US politician Alexandria Ocasio-Cortez (Helmore, 2024), Australian lawyer Noelle Martin (Melville, 2019) and, most famously, Taylor Swift (Mahdawi, 2024), is explicit, convincing and not created solely for private gratification. It is also shared to humiliate, disempower and silence the women it targets, particularly those in influential positions (Posetti and Shabeer, 2022). In 2018, Indian journalist Rana Ayyub was the subject of a widely circulated deepfake porn video produced by Indian nationalists to discredit her coverage of a child rape case (Chatterjee, 2018). Ayyub was hospitalised and stopped writing to avoid further, similar campaigns (Ayyub, 2018). Such attacks have long-lasting psychological, physiological, professional and socio-economic effects on their victims, deterring women from pursuing certain professions and from speaking out on issues of public importance (Laffier and Rehman, 2023). In these respects, deepfake porn is a form of serious gender discrimination, with disturbing consequences for media freedom and diversity globally.
The deepfake advantage
Given the above cases, the introduction of some form of deepfake ban might seem justified. However, malicious deepfakes are not proof that the technology used to create them is inherently malign. Generative AI remains in what Natale and Balbi (2014) dub the ‘interpretative flexibility’ phase, with different groups interpreting and using it for competing purposes. Deepfakes, when separated from the moral panic historically accompanying mass-media innovation (Cohen, 1972) – from cinema (Biltereyst and Winkel, 2013), radio (Yadlin-Segal and Oppenheim, 2021) and television (Lowery and De Fleur, 1995) to the internet (Orben, 2020) – can also be viewed pragmatically: as a persuasive VFX with significant social, educational and commercial benefits. Mental health researchers envisage using deepfakes to de-age the relatives of Alzheimer's sufferers to strengthen emotional bonds; to help gender reassignment patients visualise their future selves; and to aid bereaved families by digitally resurrecting their deceased (Westerlund, 2019). In commerce, customisable deepfake ‘assistants’ are evolving as the audio-visual successors of Siri (Seymour et al., 2021), along with multilingual ‘role models’ to enhance online communication (Hancock and Bailenson, 2021); interactive ‘mannequins’ to model virtual outfits (Murphy and Flynn, 2021); and synthetic ‘influencers’ such as lilmiquela, an AI ‘robot’ with 2.6 million followers (Instagram, 2024), and China's AI live-streamers, who sell one billion dollars’ worth of products daily on marketing channels Douyin and Kuaishou (Yang, 2023).
On social media, news and entertainment platforms, synthetic media is booming. Deepfake applications are now used to insert virtual commentators into newscasts (Tait, 2023), live TV debates (Den Blanken, 2020) and TV competitions (Marr, 2022); to face-swap stage actors with deepfake masks (Harmon, 2021); to replace online influencers with deepfake ‘twins’ (Cheong, 2023); and to de-age performers in music videos (Travis, 2021), commercials (Hamilton et al., 2022) and concerts – such as Abba's ‘Voyage’ spectacular, in which the band performed as their youthful ‘ABBAtars’ (Kaufman, 2021). On YouTube and TikTok, a flourishing community of VFX enthusiasts (Bode, 2021) has deepfaked celebrities into comedy shows (Ctrl Shift Face, 2019), awards ceremonies (birbfakes, 2019), hoax shorts (Corridor, 2019; Fisher and Ume, 2021) and movies (Shamook, 2020), while user-friendly deepfake programmes like Zao, Wombo and DeepFaceLab enable users to ‘star’ in their favourite movies (Gilbey, 2019).
The persuasive power of deepfakes is also being harnessed for education and advocacy: in Parkland Victim (Diaz, 2020), a high-school shooting victim appears as a posthumous deepfake, urging viewers to act against gun violence; in Malaria Must Die (Zero Malaria, 2019), deepfake dubbing techniques enable UK soccer star David Beckham to communicate his malaria-awareness campaign in nine languages; in You Won't Believe What Obama Says (BuzzFeed, 2018), comedian Jordan Peele puppeteers a synthetic Obama to warn against the deceptive dangers of deepfakes; in Bill x Socrates (ChatGPT ai, 2023), deepfakes of Microsoft CEO Bill Gates and Socrates discuss the educational possibilities of AI; and in la Compil des Bleues (Marcel, 2023), the faces of male soccer players are deepfaked onto, then dissolved off, the bodies of France's female World Cup team to highlight the women's equal athleticism and skill.
Museum curators and screen artists have proved early deepfake adopters, extending cinema's long-standing obsession with human simulacra (Manovich, 2016) into the realm of digital art. The Samsung/Skolkovo Institute's Mona Lisa ‘talking portrait’ and the Florida Dali Museum's Deepfake Dali (Mihailova, 2021) are deepfake automata which can interact with visitors. Vaccari and Chadwick's malevolent ‘political deepfake’ (2020) – in which the simulated words or actions of a political leader threaten to erode civic stability and trust – is radically reframed by satirical deepfakes which critique the machinations of power. Bill Posters's Spectre (2019) features a deepfake of Facebook CEO Mark Zuckerberg boasting about his technological supremacy; Stephanie Lepp's Deepfake Reckonings (Synthesis Media, 2020) reimagines US Supreme Court Justice Brett Kavanaugh and other controversial figures as their more ‘morally courageous selves’. James Coupe's gallery installation Warriors utilised deepfake and facial recognition technology to classify and insert viewers into the 1979 cult film The Warriors, raising crucial questions about AI systems that, Coupe stated, ‘reproduce historical biases on a mass scale’ (Mihailova, 2021).
Notwithstanding these innovative and, arguably, culturally enriching applications of deepfake technology, its advantages are perhaps most evident in the mainstream screen industries, where AI tools are vastly reducing production and post-production costs. Deepfake applications have expedited dubbing and VFX processes, replacing deceased actors with believable simulations (Velasquez, 2023); transforming actors into their younger selves (Gosier, 2022); and expanding the creative possibilities of the documentary form. Welcome to Chechnya (Welcome to Chechnya, 2020) eschews conventional pixelation disguises for deepfake masks to protect the identities of gay Chechen refugees (Hight, 2022). Gerry Anderson: A Life Uncharted (GAALU, 2022) conducts intimate interviews with a deepfake of its deceased, eponymous subject (Jeffery, 2021). In Event of Moon Disaster (Burgund and Panetta, 2020) features a deepfake of US President Nixon delivering a speech that was written in case the 1969 Apollo space mission failed. On TV and streaming platforms, deepfakes are being used both as VFX spectacle and as narrative plot-drivers: ITVX's Deepfake Neighbour Wars (2023) relocates Kim Kardashian to unglamorous British suburbia; Netflix reality series Deepfake Love (2023) shows its cast deepfakes of their partners being unfaithful; crime series The Capture (The Capture, 2019) and Fool Me Once (Fool Me Once, 2024) revolve around incriminating deepfakes; and Netflix's dystopian drama Joan is Awful (Joan is Awful, 2023) is set in a Matryoshka-like future, where characters are deepfakes of deepfakes.
The deepfake ‘advantage’ in mainstream film production is not without ethical and economic risk: Roadrunner (RaFAAB, 2021) used deepfake audio of chef Anthony Bourdain speaking words he had never said, while Netflix's 2024 true-crime documentary What Jennifer Did inserted deepfake photos of its jailed subject, raising concerns about the legitimacy of AI manipulation in the nonfiction canon (Rosner, 2022; Tangermann, 2024). In 2023, the Hollywood writers’ and actors’ strikes signposted the job losses and erosion of creative agency being caused by generative AI technologies (Lawler, 2024). These cases contribute to the problem-framing of deepfakes, adding political pressure to protect individuals’ privacy and economic interests, as well as to mitigate harms.
International deepfake regulation
Over their six-year lifespan, deepfakes have triggered global, well-resourced initiatives to detect, prevent and control them: some technological, such as Facebook's Deepfake Detection Challenge (Dolhansky et al., 2020); some regulatory, such as the UK's 2023 AI safety summit, which invited world leaders to formulate unified policies on frontier AI models, including synthetic media (Browne, 2023); and some legislative, such as China's synthetic content laws, the EU's AI Act and the UK's Online Safety Act. In the United States, legal moves to govern deepfakes have focused largely on protecting privacy, national security and election integrity (Chesney and Citron, 2019; Langa, 2021); preventing fraud (van der Sloot and Wagensveld, 2022); and protecting citizens from intimate image abuse (Umbach et al., 2024).
Outside these state-backed initiatives, media literacy programmes like the Google-sponsored Witness Media Lab (2012) are also gaining traction, partly because deepfake detection technologies remain unreliable and difficult to scale (Masood et al., 2023), with most models only able to identify forgeries with 65.18% accuracy (Tech Desk, 2020; Yu et al., 2021). The conundrum facing digital forensics researchers is that the GANs used to spot deepfakes can also produce more convincing deepfakes, in an accelerating machine-learning loop that sees detectors repeatedly ‘outgunned’ by new deepfake applications (Farid, 2018; Harwell, 2019). Two recent deepfake detection models are Intel's ‘fake catcher’, which analyses blood flow, a characteristic deepfake faces do not exhibit (Clayton, 2023), and Pindrop, which uses voice-authentication software to identify deepfake audio (Hutton, 2023). Given the speed with which GANs learn from, then overcome, their mistakes, both models are likely to be outsmarted by AI. Research with professional fact-checkers indicates that consumer-focused approaches to deepfake identification are, in fact, more effective than expensive detection software (Weikmann and Lecheler, 2023), while Ahmed's deepfake viewer studies (2021, 2023) suggest that more comprehensive literacy and source verification strategies are needed to improve everyday users’ awareness of deepfake manipulation.
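By way of illustration, many detection models share a simple frame-level structure: score each frame with a classifier, then aggregate across the clip. The sketch below uses a toy smoothness heuristic as a stand-in for a trained model; it is not how FakeCatcher, Pindrop or any production detector works, and, as the text notes, a generator could quickly learn to defeat it.

```python
# Minimal sketch of frame-level deepfake scoring with score averaging.
# `toy_classifier` is a placeholder heuristic, not a real trained detector.
import numpy as np

def toy_classifier(frame: np.ndarray) -> float:
    """Toy stand-in: unusually smooth frames score as more 'fake'."""
    high_freq = np.abs(np.diff(frame.astype(float), axis=0)).mean()
    return 1.0 / (1.0 + high_freq)  # less texture -> score closer to 1

def clip_score(frames: list) -> float:
    """Average per-frame scores; real pipelines also weight temporal cues."""
    return float(np.mean([toy_classifier(f) for f in frames]))

# Synthetic stand-in clips: noisy 'real' frames vs overly smooth 'fake' ones.
rng = np.random.default_rng(0)
real_clip = [rng.integers(0, 256, (64, 64)) for _ in range(30)]
fake_clip = [np.full((64, 64), 128) + rng.integers(0, 4, (64, 64)) for _ in range(30)]

print(f"real: {clip_score(real_clip):.3f}, fake: {clip_score(fake_clip):.3f}")
# The text's caveat applies: a generator trained against this score would
# simply learn to add matching texture, 'outgunning' the detector.
```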
As technological innovation continues to outpace legislation in the AI communications economy, big tech platforms are opting to self-regulate deepfake deception. These measures enjoy institutional support, despite corporations’ dubious track record of self-governance (Broinowski, 2022). In 2023, US President Biden reached an agreement with Meta, Microsoft and other leading AI players that they would ‘watermark’ synthetic media on their platforms (Straub, 2023). In January 2024, Meta announced that users uploading political advertisements must ‘self-disclose’ if they were made with AI, and restricted the use of its own AI tools, with Yahoo and Google adopting similar policies (Hays, 2023). Google has launched SynthID, an AI identifier which works with Google's text-to-image generator, Imagen, to embed digital watermarks in synthetic content (Guinness, 2023). The US Government Accountability Office (2023) promotes blockchain as another useful authentication technology. To protect creatives’ copyright, the University of Chicago has released Nightshade, which ‘poisons’ pixels in digital art, corrupting the ability of GANs to scrape them for data (Ortiz, 2023) – one possible approach to preventing deepfakes from cannibalising audio-visual material in the future. In China, the Douyin platform requires users to label AI-generated content; prohibits deepfakes that violate intellectual property rights; and requires the registration of virtual human constructs (Zhang, 2023).
Yet these labelling and metadata measures may be quickly circumvented, given the rapid evolution and accessibility of AI software. Saberi et al. (2023) found that AI can easily be deployed to remove watermarks, and can also watermark authentic images, ‘triggering false positives’ (Knibbs, 2023). The Global Internet Forum to Counter Terrorism, a tech platform-driven initiative to curtail extremist online content, recognises the limitations of synthetic media detection as a ‘cat and mouse’ game between IT developers and bad actors (GIFCT, 2023: 9). It promotes an AI safety-by-design approach, but also notes that open-source software-sharing communities have allowed extremist groups to harness these tools with damaging effects. Problematically, once a deepfake is released, the Streisand effect (Jansen and Martin, 2015) ensures that targeted individuals can do little to stop it being shared, minimising the chance of effective redress.
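The appeal, and fragility, of watermarking can be seen in a deliberately naive sketch: an imperceptible bit-pattern written into an image's least significant bits. This is an illustrative assumption only – SynthID's actual technique is proprietary and far more robust – but it shows both the principle and the removal and false-positive problems identified by Saberi et al.

```python
# Naive least-significant-bit watermark, for illustration only.
# Not SynthID or any production scheme; the 8-bit tag is hypothetical.
import numpy as np

MARK = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)  # hypothetical tag

def embed(image: np.ndarray) -> np.ndarray:
    """Write the tag into the lowest bit of the first eight pixels."""
    out = image.copy()
    flat = out.reshape(-1)
    flat[:8] = (flat[:8] & 0xFE) | MARK  # replace each pixel's lowest bit
    return out

def detect(image: np.ndarray) -> bool:
    """Report whether the tag is present."""
    return bool(np.array_equal(image.reshape(-1)[:8] & 1, MARK))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
marked = embed(img)
assert detect(marked)

# The circumvention problem: trivial re-encoding erases the mark, and the
# same trick can stamp the tag onto an authentic image (a false positive).
scrubbed = (marked // 2) * 2        # zero every low bit
assert not detect(scrubbed)
assert detect(embed(img))           # any image can be made to 'carry' the mark
```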
Against these market-led responses to deepfake control, government regulation is more directly focused on harm mitigation. China, the world's earliest mover on deepfake legislation, has introduced a suite of AI controls, including the ‘Administration of Deep Synthesis Internet Information Services’, which makes synthetic media developers responsible for the safety of their products, subjecting them to state security assessments before release (Zhang, 2023). The UK's Online Safety Act (UK Government, 2023a) introduces criminal charges for non-consensual deepfake porn, deepfake video scams and other synthetic media designed to cause humiliation, alarm or distress. In the United States, several state laws prohibit deepfake porn and political advertising (Holliday, 2021), with a federal prohibition against the use of ‘false impersonation’ to obtain something of value (Delfino, 2023). Eight proposed federal US bills seek to regulate synthetic media use (Brennan Center for Justice, 2024), including the Deepfake Accountability Act (DAA) (Congress, 2023), which criminalises any failure to identify and remove malicious deepfakes involving non-consensual sexual content, foreign election interference, criminal conduct, fraud or incitement to violence; the Defiance Act (Congress, 2024a), which provides redress for non-consensual intimate forgeries; and the No AI Fraud Act 2024 (Congress, 2024b), which creates property rights in individuals’ likeness and voice.
The Bletchley Declaration, a UK-brokered agreement between 28 countries to address AI risks, promotes collaboration on AI safety research and ‘context-appropriate’ transparency about AI's ‘potentially harmful’ capabilities (UK Government, 2023b). However, while the declaration champions a pro-innovation, ‘proportionate’ regulatory approach that ‘maximises the benefits’ of AI, screen industry deepfake applications are not amongst the use-cases it highlights. In contrast, the US and EU legal reform frameworks contain significant provisions for the use of generative AI by screen content creators. The DAA 2023 contains exemptions for deepfakes based on recordings of real persons that have not been substantially altered; deepfakes created with consent for legitimate film, TV, music and other production uses; and synthetic media produced for parodies, satire, historical re-enactments and fictionalised radio or screen media (Congress, 2023).
The EU's 2024 Artificial Intelligence Act (European Parliament, 2024), the most systematic Western AI regulation to date, seeks to prevent AI harms by regulating the behaviours that may cause them (van der Heijden, 2021), subjecting deepfake producers to existing restrictions on the collection of personal data for ‘lawful, fair and transparent’ purposes (European Parliament, 2016), and introducing new biometric data-collection prohibitions. While the Act requires users to employ watermarking, metadata identification and cryptographic verification to demonstrate their work ‘has been generated or manipulated by an AI system and not a human’ (European Parliament, 2024: 133), it also stresses that these requirements should not impede creators’ rights to freedom of expression or artistic/scientific licence (p. 134). Indeed, subject to safeguards for the rights of any third parties portrayed, the Act allows the makers of ‘evidently creative, satirical, artistic or fictional’ works to disclose their use of AI ‘in an appropriate manner that does not hamper the display or enjoyment of the work, including its normal exploitation and use’ (p. 134).
It is likely that the EU's AI Act, which mandates transparent, accountable codes of practice and obliges tech platforms to mitigate deepfake risks to democracy, will become a benchmark for synthetic media legislation – just as the EU's General Data Protection Regulation influenced business practices across the globe (Ryngaert and Taylor, 2020). Importantly for the screen and media industries, the EU Act, along with the US DAA, provides a supportive framework for creative AI use based on human rights protections and an understanding that freedom of expression ‘includes the right to shock, offend and disturb’, and to create fictitious interview material to convey an opinion (van der Sloot and Wagensveld, 2022: 10).
The screen industries’ expanding use of deepfake tools has catalysed a parallel legal debate about how to shield actors, celebrities and citizens from exploitation or harm caused by their AI-generated personas. US and EU personality and publicity rights protect public figures from the unlicensed use of their name or likeness in commercial pursuits (Heugas, 2021), while performers’ rights protect a recorded performance and its reproduction (Pavis, 2021). The US actors’ union SAG-AFTRA has rejected the term deepfake in favour of ‘AI generated’ or ‘digital double’ (Bedingfield, 2022), with some performers now negotiating ‘simulation rights’ for their avatars (Schomer, 2023). However, publicity rights do not prevent ordinary people from having their digital personas manipulated by non-commercial deepfakes (Boyd, 2022). For this reason, legal scholars argue for the inclusion of synthetic media in existing legislation and a reframing of publicity rights as a form of intellectual property – provided creators’ freedom of expression rights remain enshrined (Farish, 2020; Preminger and Kugler, 2023).
These international responses to generative AI use in the creative screen industries offer a useful framework for gauging how deepfake regulation is evolving in Australia, and for identifying the tensions between innovation and risk that policymakers will need to address to support AI adoption by domestic screen practitioners.
Australia's deepfake response
In mid-2024 Australia had no laws specifically addressing deepfakes, although a federal ban on non-consensual deepfake pornography had been proposed. Privacy laws prohibited the collection and processing of sensitive personal information, and the Online Safety Act regulated harmful intimate depictions. Defamation law provided remedies for reputational damage, and trade practices law protected consumers from investment scams (Paterson, 2024; Rad and Christie, 2024), although these mechanisms depended on the perpetrator being identified and held to account.
The difficulty is that the majority of deepfakes are produced and distributed anonymously, with no standard of proof for what makes them fake, nor an established evidentiary process to trace deepfake content to its original source (Delfino, 2023). Further, with no personality rights in Australian law, citizens have no explicit protections against the use of their digital personas in deepfake video.
The Australian Human Rights Commission's submission to the DISR AI inquiry (AHRC, 2023) demonstrates how comprehensively the deepfake problem is driving regulatory concerns. It recommends that the federal government fund deepfake detection and digital literacy programmes, and assess whether existing laws ‘are capable of effectively combatting harmful deepfake content, and … [introduce] specific laws if regulatory gaps are identified’ (p. 30). The submission acknowledges the importance of addressing AI risks in neurotechnology, the metaverse and other contexts, while also protecting expression rights. However, it fails to discuss the professional contexts where these rights might be most significant, such as media, film and art.
In this respect, Australian screen creators using generative AI to impersonate powerful public figures for satirical, artistic or political purposes may become low-hanging fruit for those intent on establishing a legal precedent for a ban on deepfakes. The fact that such a ban was mooted in the DISR's 2023 AI briefing paper puts Australia at odds with the EU and US, where filmmakers and artists using synthetic media in political and creative critiques are protected by resilient freedom of speech and human rights laws. Controls on deepfake production by domestic screen practitioners, who do not enjoy constitutional free speech rights but rather an implied freedom of political communication, could narrow artistic opportunities already limited by Australia's stringent defamation laws.
An outright ban on deepfakes would make little economic, legal or political sense. Digital media manipulation and VFX are fundamental filmmaking processes (Broinowski, 2022), and the domestic screen industry, which contributes over $6 billion to the economy annually (Screen Australia, 2023), is already benefitting from the creative and commercial advantages of generative AI. Global platforms hosting deepfakes operate beyond the purview of Australian law, and the threats posed by abusive deepfakes also focus media and political attention on the need for greater AI accountability and transparency (Langa, 2021). Should legislators succeed in obligating tech companies to geo-block deepfakes on Australian platforms, legitimate artistic, educational and political deepfakes could also be censored, chilling freedom of expression (Ray, 2021): previous legal attempts to force platform removal of hate speech, such as Germany's Network Enforcement Act, resulted in the censorship of art and satire (Goggin et al., 2017).
At the same time, as van der Sloot and Wagensveld (2022) note, questions about the legality of deepfakes address the tension between individual privacy and freedom of expression rights, raising two ongoing concerns: whether public figures should be protected from deepfake forgeries, and whether the harms caused by malign deepfakes can be causally established or prosecuted. Australian filmmakers using deepfakes for political purposes will, like their EU and US counterparts, be on the frontline of these, and future, negotiations around creative uses of AI.
In the year since deepfakes generated Australian media alarm, prompting calls for their removal (ABC, 2023; Karp, 2023), policy discussions have followed the UK's risk-based approach to the deepfake problem. Legal controls are set to be imposed on deepfake use in political advertising and on intimate image-based abuse. While Paech et al. (2021) argue these controls should be extended to cover violent political and gender-based deepfake harassment, it is more likely that existing hate speech rules will be seen as sufficient deterrents for extremist political attacks (Barker and Jurasz, 2019), leaving much AI misogyny unrestricted. The Federal Communications Minister has indicated the Online Safety Act (Australian Government, 2021) – which prevents damage caused by bad actors, offers victim redress and promotes safety by design to minimise harm – will be updated to cover generative AI communications, requiring social media platforms to limit the amplification of hate speech and extremist material and to restrict ‘unlawful or harmful’ user activity (Butler, 2023). These mechanisms are poised to follow the UK Online Safety Act's ‘duty of care’ obligations on big tech companies (Woods and Perrin, 2019), which mandate product development safeguards, the protection of children's interests and the prohibition of user-to-user services hosting illegal and harmful content (UK Government, 2023a).
The effect that such moves will have on platforms’ handling of legitimate deepfake uses in Australia remains unclear. What is critical is that policymakers now consult extensively with the domestic screen practitioners who depend on, and work with, generative AI, to ensure that regulation supports innovation rather than limiting beneficial applications. In March 2024, the Federal Senate established a Select Committee to ‘report on the opportunities and impacts for Australia arising from the uptake of AI technologies’. Its terms of reference include the ‘potential threats to democracy and trust in institutions’ posed by generative AI, as well as its potential benefits to ‘citizens, the environment and/or economic growth’. Concerningly for screen and media practitioners, the only benefits listed are in ‘health and climate management’ (Australian Government, 2024).
Conclusion
For now, deepfakes circulate within a patchy regulatory framework of inconsistent detection models, unverifiable disclosure rules, evadable labelling tools, unenforceable self-governance and policymaking that struggles to keep abreast of generative AI's escalating capabilities.
Australia's deepfake regulation proposals have, to date, been driven by a risk-based approach to preventing their mendacious, unethical and criminal uses. However, the expanding body of creative, educational and commercial deepfake uses, and the economic and artistic benefits AI tools are bringing to the screen production sector, are significant.
Concentrating solely on the deepfake problem misses the opportunity to address the more novel regulatory challenges posed by AI communications: the escalation of misogyny as a hate crime; the growing importance of individual digital persona rights; the need to support new AI-generated forms of artistic expression and innovation; the difficulty of distinguishing fact from forgery now that established audio-visual evidentiary standards have eroded (Rini, 2020); and the possible emergence of a truth-agnostic viewer in the post-truth AI economy.
Our survey of the deepfake advantage establishes the importance and value of generative AI technologies for the future of the screen and media industries. It also highlights the positive uses of AI screen applications in educational, social and cultural contexts, exposing a lack of policy awareness about legitimate deepfake uses and the inadequacy of STEM-dominated responses to their regulation.
To ensure the commercial and cultural potential of current and emerging AI screen technologies is protected, legislators need to formulate policies that both shield citizens from deepfake abuse and support innovation. If we consider deepfake technology as a new special effect in film's history of audio-visual manipulation, it is vital to take a ‘human’ rather than ‘tech-centred’ approach to its regulation (Bode, 2021), one based on a nuanced and informed understanding of the diversity of deepfake users and their intentions.