Wednesday, April 1st, 2026

AI Transparency in IMDb Contributions

Dear IMDb user community,

I hope this message finds you well.

I am writing to suggest a potential improvement to IMDb’s contribution system that reflects the evolving landscape of film and media production. As artificial intelligence becomes increasingly integrated into the creative process (whether in screenwriting, visual effects, voice generation or even fully AI-generated films) it may be valuable for IMDb to introduce a way to identify and document this involvement.

Currently, there is no clear or standardized option within contribution forms to indicate whether a project has been partially or significantly created using AI technologies. Adding such a feature (perhaps in the form of a tag, checkbox or dedicated field) could provide greater transparency for users, researchers and industry professionals alike.

This addition could:

- Help audiences better understand how films are made.

- Support academic and industry analysis of emerging creative technologies.

- Ensure IMDb remains a comprehensive and forward-looking database.

I understand that defining the level of AI involvement may present challenges, but even a flexible or optional field would be a meaningful first step.

Thank you for your time and for the continued work you do in maintaining such a valuable resource for the global film community.

Kind regards,

John French



29 days ago

Define AI?

Where do you propose drawing the line?

A brief history of AI (in the loosest possible sense) in cinema:

Early 2000s — procedural intelligence: MASSIVE (Multiple Agent Simulation System in Virtual Environment) debuted in The Lord of the Rings trilogy (2001-2003). Each digital agent in the battle scenes had its own decision tree — they could "see" nearby agents, evaluate threats, and choose to fight or flee. Peter Jackson's team found some agents running away from battle on their own. That's AI by any reasonable definition, just not ML-based: procedural, rule-based, but autonomous decision-making.
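To make the "procedural, rule-based, but autonomous" point concrete, here is a minimal, hypothetical Python sketch of that kind of agent logic — not MASSIVE's actual implementation, just an illustration of how hand-authored rules can still produce unscripted crowd behaviour (the thresholds and action names are invented):

```python
import random

def decide(agent_health, nearby_allies, nearby_enemies):
    """Pick 'fight', 'flee', or 'regroup' from hand-written rules.

    The rules are fully authored by a human; only the per-agent
    outcome, driven by each agent's local situation, is autonomous.
    """
    if nearby_enemies == 0:
        return "regroup"                       # no threat in view
    if agent_health < 30 and nearby_enemies > nearby_allies:
        return "flee"                          # wounded and outnumbered
    if nearby_allies >= nearby_enemies:
        return "fight"                         # local advantage
    # marginal cases: a coin flip stands in for per-agent variation
    return random.choice(["fight", "flee"])

# One simulation tick for a small crowd: identical rules, varied outcomes
agents = [{"health": random.randint(10, 100)} for _ in range(5)]
for a in agents:
    a["action"] = decide(a["health"], nearby_allies=2, nearby_enemies=3)
```

Run over thousands of agents with slightly different local inputs, rules like these yield emergent behaviour no one scripted frame by frame — which is exactly why some of Jackson's digital soldiers "fled" on their own.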

Mid 2000s — digital face territory: The Curious Case of Benjamin Button (2008) used early facial performance transfer — capturing Brad Pitt's performance and mapping it onto a digitally aged/de-aged face. Not ML, but the computational decision-making about how skin folds, how light interacts with facial geometry — that's automated inference, not hand-animation.

2009-2015 — the physics simulation era: Films like Gravity (2013) and Interstellar (2014) used increasingly sophisticated physics simulations. The black hole in Interstellar was generated by feeding actual relativistic equations into rendering software — the computer "decided" what a black hole looks like based on the physics. Kip Thorne published papers from the results. The tool produced novel scientific output. Where's the line?

Avatar (2009) pushed performance capture further — the virtual camera system was making real-time decisions about how to translate physical performance into digital space.

2016-2019 — the deepfake precursor era: Rogue One (2016) resurrected Peter Cushing as Grand Moff Tarkin and de-aged Carrie Fisher. Still primarily traditional VFX with some early ML components for facial mapping. The uncanny valley was visible.

First major ML-based AI moment in production: The Irishman (2019) — Scorsese de-aged De Niro, Pesci, and Pacino throughout the entire film using ILM's proprietary system. This was a hybrid — ML trained on younger footage of the actors to inform the de-aging, combined with traditional VFX. The ML component was doing something a human artist couldn't do manually at that scale: learning what "young De Niro" looked like from thousands of frames and applying that understanding consistently across a three-and-a-half hour film.

2020-2022 — ML enters the pipeline quietly: Most audiences didn't notice, but ML-based tools started handling rotoscoping (separating foreground from background), frame interpolation, noise reduction, and upscaling. Mundane work, but work that previously required hundreds of artist-hours. The ML wasn't creative — it was replacing the most tedious human labour.

Deepfake technology entered mainstream awareness. The Mandalorian (2020) used a combination of traditional VFX and ML-assisted de-aging for Luke Skywalker's appearance; ILM then notably hired the YouTuber Shamook, whose deepfake version looked better than their original work. That's a landmark — an amateur with ML tools outperforming a professional VFX house.

2022-2023 — generative AI arrives: Everything Everywhere All At Once (2022) used some AI-assisted VFX to stretch its indie budget. The film won Best Picture.

Text-to-image models (Midjourney, Stable Diffusion, DALL-E) started being used for concept art, storyboarding, and pre-visualization. This is where the IMDb poster's question actually becomes relevant — because now the AI isn't automating human decisions, it's generating creative output from prompts.

Marvel's Secret Invasion (2023) used AI-generated art for its opening title sequence and got significant backlash. First major controversy about ML-generated creative content displacing human artists in a major production.

2024-2025 — the contested territory: The SAG-AFTRA and WGA strikes of 2023 were substantially about AI — protecting actors' likenesses and writers' roles from ML displacement. The resulting contracts drew legal lines.

De-aging became routine and nearly invisible. Harrison Ford in Indiana Jones 5 (2023), Tom Hanks in Here (2024). The technology crossed the uncanny valley for most audiences.

Generative AI for VFX backgrounds, crowd generation, and environment design became standard in mid-budget productions — often without disclosure. This is happening now and nobody's labelling it.

The distinction that matters:

Pre-ML AI in film was deterministic — rule-based agents, physics simulations, procedural generation. The human defined the rules, the computer executed them. Creative authorship was never in question.

ML-based AI in film is probabilistic — trained on existing data, generating outputs that no human specifically designed. When Midjourney produces concept art from a prompt, who's the artist? When an ML model de-ages an actor based on training data from their younger performances, who made the creative decision about what "young" looks like?



Hi.

Thank you for your thoughtful reply, which provides valuable context about the evolution of technology in cinema.

I would like to clarify, however, that my proposal is not intended as a criticism of AI, nor as a challenge to its use. Cinema has always evolved through new tools, continuously enhancing what could be described as an “augmented illusion” of reality.

My point is more specific: I am not suggesting that all AI-assisted or computational tools should be labeled, but rather that we begin to consider a distinct form of creation. This concerns generative tools, particularly prompt-based workflows, where a significant part (or even the entirety) of the content is produced by models.

In this sense, it may be more relevant to think not in terms of a general “AI” label, but of a category or set of fields dedicated to these emerging practices. This would allow contributors, and ideally creators themselves, to transparently indicate whether a work relies partially or predominantly on generative processes.

I am not referring to AI-assisted visual effects or technical tools within the production pipeline, but to a shift in the creative act itself.

The goal is not to judge, but to document — in order to better reflect the ongoing transformations in artistic practices.


A sound argument.

I would suggest "generative AI" or similar be added as a keyword to such titles. This would follow the vein of the "cgi" keyword attached to titles that use computer-generated imagery.