Where do we draw the line on using AI in TV and film?

Kirsten Dunst in Civil War, posters for which have been created using AI. Photograph: Murray Close/AP

Though last year’s writers’ and actors’ strikes in Hollywood were about myriad factors, fair compensation and residual payments among them, one concern rose far above the others: the encroachment of generative AI – the type that can produce text, images and video – on people’s livelihoods. The use of generative AI in the content we watch, from film to television to large swaths of internet garbage, was a foregone conclusion; Pandora’s box has been opened. But the rallying cry, at the time, was that any protection secured against companies using AI to cut corners was a win, even if only for a three-year contract, given how swiftly this technology would be developed, deployed and adopted.

That was no bluster. In the mere months since the writers’ and actors’ guilds made historic deals with the Alliance of Motion Picture and Television Producers (AMPTP), the average social media user has almost certainly encountered AI-generated material, whether they realized it or not. Efforts to curb pornographic AI deepfakes of celebrities have reached the notoriously recalcitrant and obtuse US Congress. The internet is now so rife with misinformation and conspiracies, and the existence of generative AI has so shredded what remained of shared reality, that a Kate Middleton AI deepfake video seemed, to many, a not unreasonable conclusion. (For the record, it was real.) Hollywood executives have already tested OpenAI’s forthcoming text-to-video program Sora, which caused the producer Tyler Perry to halt an $800m expansion of his studios in Atlanta because “jobs are going to be lost”.

In short, a lot of people are scared or at best wary, and for good reason. Which is all the more reason to pay attention to the little battles over AI, and not through a doomsday lens. For amid all the big stories on Taylor Swift deepfakes and a potential job apocalypse, generative AI has crept into film and television in smaller ways – some potentially creative, some potentially ominous. In just the past few weeks, numerous instances of AI legally used in and around creative projects have tested the waters for what audiences will notice or tolerate, probing what is ethically passable.

There was a small social media flare-up over AI-generated band posters in the new season of True Detective, following some viewer concern over similarly small AI-generated interstitials in the indie horror film Late Night with the Devil. (“The idea is that it’s so sad up there that some kid with AI made the posters for a loser Metal festival for boomers,” the True Detective showrunner, Issa López, said on X. “It was discussed. Ad nauseam.”) Both instances have that uncanny lacquer look of AI, as in the AI-generated credits of the 2023 Marvel show Secret Invasion. The same goes for promotional posters for A24’s new film Civil War, which depict American landmarks destroyed by a fictional domestic conflict – a bombed-out Sphere in Las Vegas, the Marina Towers in Chicago – with trademark AI inaccuracies (cars with three doors, etc).

There’s been blowback from cinephiles over the use of AI enhancement (different from generative AI) to sharpen – or, depending on your view, oversaturate and ruin – existing films such as James Cameron’s True Lies for new DVD and Blu-ray releases. An obviously and openly marked AI trailer for a fake James Bond movie starring Henry Cavill and Margot Robbie – neither of whom is part of the franchise – has, as of this writing, over 2.6m views on YouTube.

And, arguably most concerning, the website Futurism reported on what appear to be AI-generated or AI-enhanced “photos” of Jennifer Pan, a woman convicted over the 2010 murder-for-hire attack on her parents, in the new Netflix true-crime documentary What Jennifer Did. The photos, which appear around the film’s 28-minute mark, are used to illustrate Pan’s high school friend Nam Nguyen’s description of her “bubbly, happy, confident, and very genuine” personality. Pan is laughing, throwing up the peace sign, smiling widely – with a noticeably too-long front tooth, oddly spaced fingers, misshapen objects and, again, that weird, too-bright sheen. The film-maker Jeremy Grimaldi neither confirmed nor denied the use of AI in an interview with the Toronto Star: “Any film-maker will use different tools, like Photoshop, in films,” he said. “The photos of Jennifer are real photos of her. The foreground is exactly her. The background has been anonymized to protect the source.” Netflix did not respond to a request for comment.

Grimaldi does not explain which tools were used to “anonymize” the background, or why certain features of Pan look distorted (her teeth, her fingers). But even if generative AI was not used, the disclosure is still troubling, in that it muddles the truth: the film presents these as old photos of Pan, implying a visual archive that does not exist as such. If it is generative AI, that would tip into a straight-up archival lie. Such use would go directly against a suite of best-practice guidelines just put forth by a group of documentary producers called the Archival Producers Alliance, which endorses using AI to lightly touch up or restore an image but advises against creating new material, altering a primary source, or doing anything that would “change their meaning in ways that could mislead the audience.”

It’s this final point – misleading the audience – that, I think, marks the growing consensus on which applications of AI are or are not acceptable in TV and film. The “photos” in What Jennifer Did – absent a clearer response, it’s unclear with what tools they were altered – recall the controversy over bits of Anthony Bourdain’s AI-generated voice in the 2021 documentary Roadrunner, a dispute over disclosure, or the lack thereof, that overshadowed a nuanced exploration of a complicated figure. The actual use of AI in that film was uncanny, but it revivified existing evidence rather than creating it; the issue was how we found out about it, after the fact.

And so here we are again, litigating small details whose creation feels of utmost importance to consider, because it is. An openly AI-generated trailer for a fake James Bond movie is strange and, in my opinion, a waste of time, but at least it is clear in its intent. AI-generated posters in shows where an artist could have been hired feel like a corner cut, an inch given away, depressingly expected. AI used to fabricate a historical record would be ethically dubious at best, truly manipulative at worst. Individually, these are all small instances of the line we’re all trying to identify, in real time. Collectively, they make finding it seem more urgent than ever.