Imagine an actor who never ages, never walks off set or demands a higher salary.
That’s the promise behind Tilly Norwood, a fully AI-generated “actress” currently being courted by Hollywood’s top talent agencies. Her synthetic presence has ignited a media firestorm: some denounce her as an existential threat to human performers, while others hail her as a breakthrough in digital creativity.
But beneath the headlines lies a deeper tension. The binaries used to debate Norwood — human versus machine, threat versus opportunity, good versus bad — flatten complex questions of art, justice and creative power into soundbites.
The question isn’t whether the future will be synthetic; it already is. Our challenge now is to ensure that it is also meaningfully human.
All agree Tilly isn’t human
Ironically, at the centre of this polarizing debate is a rare moment of agreement: all sides acknowledge that Tilly is not human.
Her creator, Eline Van der Velden, the CEO of AI production company Particle6, insists that Norwood was never meant to replace a real actor. Critics agree, albeit in protest. SAG-AFTRA, the union representing actors in the U.S., responded with a statement:
“It’s a character generated by a computer program that was trained on the work of countless professional performers — without permission or compensation. It has no life experience to draw from, no emotion, and from what we’ve seen, audiences aren’t interested in watching computer-generated content untethered from the human experience.”
Their position is rooted in recent history: in 2023, actors went on strike, with the use of AI among their central grievances. The resulting agreement secured protections around consent and compensation.
So if both sides insist Tilly isn’t human, the controversy isn’t just about what Tilly is; it’s about what she represents.
What the models can actually do

I would be shocked if any diffusion model could deliver a performance from a text description. Most can’t overfill a wine glass.
Rendering over footage of someone demonstrating the movement, as video-to-video, is obviously easier than firing up Blender. But that’s still a long way from any dream of treating the program like an actress: each model’s understanding is shallow and opinionated, and you can’t rely on text instructions alone.
The practical magic of video models, for the immediate future, is that your video input can be real half-assed. Two stand-ins can play a whole cast, one interaction at a time, or a blurry pre-vis in Blender can go straight to a finished shot. At no point do current technologies offer more than loose control of a cartoon character, because to these models, everything is a cartoon character. The model doesn’t know the difference between an actor and a render; it just knows that shinier examples with pinchier proportions move faster.
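To make the half-assed-input point concrete, here is a minimal sketch, not anyone’s production pipeline: naive per-frame img2img with Hugging Face’s diffusers library, where the rough pre-vis frames supply the motion and framing and a text prompt supplies the look. The model ID, file names, prompt and strength value are placeholder assumptions; real video-to-video tools layer temporal consistency on top of the same basic trade.

```python
# A deliberately naive sketch: treat "video-to-video" as per-frame img2img.
# Model ID, file names, prompt and strength are placeholder assumptions.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One frame of a blurry Blender pre-vis, or a stand-in filmed on a phone.
rough_frame = load_image("previz_frame_0001.png")

styled = pipe(
    prompt="tired detective in a rain-soaked trench coat, cinematic lighting",
    image=rough_frame,
    strength=0.55,       # low: hug the rough input; high: let the model take over
    guidance_scale=7.5,
).images[0]

styled.save("finished_frame_0001.png")
```

The strength knob is the argument in miniature: turn it down and you keep the stand-in’s motion but the shot stays rough; turn it up and the frames get prettier while the performance drifts, because the model only ever has loose control of the cartoon character it thinks it is drawing.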