Runway AI is changing animation. How to use it to tell untold stories

This week, AI video generator Runway launched Act-One, heralding a new era for animation. The feature addresses two key challenges in AI video generation: creating nuanced facial expressions and maintaining character consistency.

All of this can be achieved without using complex facial motion capture software to map expressions to digital avatars.

How does Runway Act-One work?

Runway Research explains the advances in Act-One as lowering the barriers to creating animated content: “Traditional facial animation pipelines often involve complex multi-step workflows. These can include motion capture devices, multiple footage references, manual facial manipulation, among other techniques… Our approach uses a completely different pipeline, driven directly and solely by an actor’s performance, requiring no additional equipment.”

Runway’s introductory video featured guests sharing everyday moments. They talked about breaking a phone, cheering on a team project, choosing groceries, meeting a boss, or telling a funny story.

Their faces spoke louder: frowns of disappointment, gasps of shock, grunts of doubt, bursts of laughter.

Act-One then transferred these expressions to animated characters, cartoonish or realistic, including witches, kings, foxes and even a Roman sculpture, whose faces registered confusion, joy, surprise, anger, sadness or indifference.

The aspiration is to allow users to animate faces that respond with lifelike expressions, making characters more believable and stories more engaging.

How does it change animation?

Disney recently formed a new business unit to evaluate and develop the use of artificial intelligence and mixed reality across its film, television and theme park divisions.

To see why AI technologies matter, look at the evolution of animation.

In 1937, Disney’s Snow White and the Seven Dwarfs wowed audiences as the first full-length animated film. Each frame was hand-drawn and painted on transparent sheets to coherently capture facial movements and expressions. The film set the bar high for decades.

Then, in 1995, Pixar’s Toy Story marked a milestone as the first feature film made entirely with computer-generated imagery (CGI). Animators built 3D characters that moved with depth and detail.

Avatar (2009) marked another milestone in animation. By combining CGI with performance capture, Avatar brought hyperrealistic characters to life. A great example is when Jake Sully, played by Sam Worthington, tames his banshee. The scene reveals every flicker of ambivalent emotion—courage and determination mingled with caution and secret fear.

This blend of actor-driven motion capture and CGI set a new standard for animated films.

Runway’s Act-One opens the door to a new era of animation by democratizing creation, removing the need for complex equipment or extensive expertise.

In many AI-generated videos, hallucination makes characters morph unpredictably. Viewers lose conviction when a face changes in the middle of the story.

Act-One fixes this by letting users film themselves or actors and map those movements directly to a character.

In this way, the character remains intact, anchoring the story.

Use Runway AI for immersive storytelling

Runway Act-One could impact a variety of industries, from e-commerce and entertainment to social media content and corporate training.

Advertising and marketing can become more compelling and games can become more interactive.

But should every technological advance be turned to profit?

Instead, creators should take advantage of Act-One to tell unheard stories. It is a chance to advocate for social and racial justice, as well as sustainable practices, with dynamic visuals at low financial cost.

We can interview people around us whose life experience is deeply moving.

Imagine capturing the stories of your grandmother from a small village, recounting her life, the social changes she witnessed and the struggles she overcame.

Document a friend’s account of efforts to combat racial violence – stories of resilience, perseverance, community support and hope.

Imagine interviewing a member of the local community who has dedicated his life to protecting the nearby forests.

Or consider visualizing the story of a volunteer helping to rebuild homes in a hurricane-stricken area.

Expression needs rich meaning and lived experience to move people’s hearts.

The real value of apps like Act-One lies in the freedom to choose your characters based on real-life experience.

YouTubers can use Act-One to illuminate life in their communities, stories filled with twists, perseverance and strength.

Non-profit organizations can document the oral histories of marginalized groups and the toll of environmental degradation in animated educational videos.

The animation and entertainment industry is already shifting. Filmmakers such as James Cameron and Jia Zhangke are planning to use AI in their films.

But a wider range of creators outside the filmmaking profession will help create more inclusive and diverse animation, where voices and stories from all walks of life can shine.

Limitations and potential risks of AI animation

Despite their promise, Act-One and other AI video generators have limitations.

  1. Head movements are mostly limited to a single plane, so characters can’t easily turn to talk to another character or move dynamically while talking.
  2. Spatial and narrative contexts and interactions between characters are difficult to create with AI.
  3. Creating complex scenes requires multiple images and frames of reference. More guidance and examples from maker communities can help.
  4. When one can construct an identity, the line between authentic expression and constructed personas blurs.

Young people in particular may find themselves navigating a world filled with AI-generated faces and voices.

As these tools become part of our daily lives, we must ask: How do we separate fiction from reality? What stories will we choose to tell? And what does that mean for who we are?

Full-length, detailed AI animations may still be a long way off. But we can actively shape the development of AI video generators by creating content that serves the public good and evokes genuine human emotion.

It’s about reshaping how we see ourselves and promoting stories that create positive social change.
