On Friday, Meta announced Movie Gen, a new AI model designed to generate realistic video and audio clips from user prompts.
The company claims that Movie Gen is poised to compete with industry leaders such as OpenAI and ElevenLabs.
In demonstrations shared by Meta, Movie Gen created videos of animals swimming and surfing, as well as clips that used people's real photographs to depict them in activities such as painting.
According to a blog post from Meta, the model not only generates visual content but also produces background music and sound effects synced to the video, and it can edit existing footage. In one demonstration, Movie Gen added pom-poms to a runner's hands in a desert scene; in another, it transformed a dry parking lot into a splashing puddle beneath a skateboarder.
Meta said videos created by Movie Gen can run up to 16 seconds, with accompanying audio of up to 45 seconds. In blind tests, Movie Gen reportedly outperformed several competing models from startups including Runway and ElevenLabs.
This announcement comes amid ongoing discussions in Hollywood about the potential of generative AI technologies in filmmaking, sparked by OpenAI’s earlier demonstration of its product Sora, which can create feature film-like videos from text inputs.
While some in the entertainment industry are eager to adopt such tools to streamline production, there are significant concerns about the implications of using AI systems that may have been trained on copyrighted material without permission. Lawmakers have also raised alarms regarding the use of AI-generated deepfakes in global elections.
Meta representatives indicated that the company is unlikely to release Movie Gen for open developer use in the way it has its Llama series of language models, saying it weighs the risks of each model individually. Instead, Meta plans to work directly with content creators and the entertainment industry on applications for Movie Gen, with integration into Meta's own products anticipated next year.
According to Meta's blog post and accompanying research paper, the model was developed using a combination of licensed and publicly available datasets. Meanwhile, OpenAI has met with Hollywood executives about potential partnerships involving Sora, although no agreements have been finalized. Concerns about the company's practices escalated earlier this year when actress Scarlett Johansson accused OpenAI of imitating her voice without her consent for its chatbot.
In a related development, Lions Gate Entertainment, the studio behind franchises such as “The Hunger Games” and “Twilight,” recently announced a partnership with AI startup Runway, giving the company access to its film and television library to train AI models; in return, the studio's filmmakers will be able to use those tools in their work.