OpenAI is launching Sora, an advanced AI system capable of generating realistic 20-second videos from text prompts.
Announced during a livestream, the tool will be available to paid ChatGPT users in the U.S. and other markets starting Monday.
Users can create multiple variations of each clip, a capability that underscores OpenAI’s continued push into generative video technology.
With Sora, OpenAI aims to compete with a growing number of startups whose tools generate short video clips quickly and affordably, challenging traditional video production and potentially disrupting careers in Hollywood.
Rivals such as Pika, Haiper, and Runway are also advancing in this space; Runway has partnered with a studio to develop AI models for film production.
After previewing Sora in February, OpenAI held a series of meetings with film studios, media executives, and talent agencies. The company also gave several artists, along with notable actors and directors, access to experiment with the service.
Sora initially faced limitations: early testers reported that it struggled with issues such as misplaced body parts and a poor grasp of physics.
The tool also took noticeably longer and required more computing power to generate videos than the image generators and chatbots users are more accustomed to.
During Monday’s demonstration, OpenAI showcased new features of Sora.
The tool can now generate videos not only from text prompts but also from still images and existing video clips. For example, a user can upload a video, and Sora will extend it by generating additional footage.
Sora also features a tool called Storyboard, which lets users specify actions or events at different points in a video to guide the sequence of clips.
Finally, users can browse a feed of videos created by others for exploration and inspiration.