Meta unveiled its new AI model, Meta Motivo, which aims to control the movements of human-like digital agents, potentially enhancing the Metaverse experience.
The model is part of Meta’s broader push into AI, augmented reality, and Metaverse technologies, into which the company is investing billions of dollars, according to Reuters.
As a result, the company has raised its capital expenditure forecast for 2024 to a record $37 billion to $40 billion, reflecting its ongoing commitment to these advanced technologies.
Meta has been releasing many of its AI models for free to developers, believing that an open approach will foster the creation of better tools for its services, ultimately benefiting its business.
“We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences,” the company said in a statement.
Meta Motivo aims to solve body control issues often seen in digital avatars, allowing them to perform more realistic, human-like movements.
The company also introduced a new approach to language modeling called the Large Concept Model (LCM), designed to “decouple reasoning from language representation.”
“The LCM is a significant departure from a typical LLM. Rather than predicting the next token, the LCM is trained to predict the next concept or high-level idea, represented by a full sentence in a multimodal and multilingual embedding space,” Meta said.
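In rough terms, the quoted idea amounts to a model that regresses the embedding of the next sentence rather than producing a probability distribution over next tokens. The PyTorch sketch below is purely illustrative and is not Meta's published architecture: the NextConceptModel class, the 1,024-dimensional embeddings, and the mean-squared-error objective are all assumptions made for this example, and real sentence embeddings would come from a multilingual sentence encoder rather than random tensors.

```python
# Illustrative sketch of "next concept" prediction (not Meta's LCM code).
# Assumption: sentences are already mapped to fixed-size embeddings by
# some multilingual sentence encoder; random tensors stand in for them here.
import torch
import torch.nn as nn

EMB_DIM = 1024   # assumed sentence-embedding width
SEQ_LEN = 16     # sentences per training sequence

class NextConceptModel(nn.Module):
    """Causal transformer over a sequence of sentence embeddings that
    predicts the *next* sentence embedding, instead of the next token
    as in a typical LLM."""
    def __init__(self, dim=EMB_DIM, layers=4, heads=8):
        super().__init__()
        block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Linear(dim, dim)

    def forward(self, x):  # x: (batch, seq, dim)
        # Causal mask so position t only attends to positions <= t.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        h = self.encoder(x, mask=mask)
        return self.head(h)  # predicted embedding of the following sentence

model = NextConceptModel()
sentences = torch.randn(2, SEQ_LEN, EMB_DIM)  # stand-in sentence embeddings
pred = model(sentences[:, :-1])               # predict sentence t+1 from 1..t
loss = nn.functional.mse_loss(pred, sentences[:, 1:])
loss.backward()
```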
Meta has also released Video Seal, an AI tool that embeds a watermark into videos that is invisible to the naked eye yet remains traceable by software.
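Video Seal itself relies on a learned neural watermark designed to survive common video edits; purely as a toy illustration of the general principle of a mark that is imperceptible to the eye but recoverable by software, the sketch below hides and recovers a bit string in the least significant bits of a frame's pixels. The embed and extract helpers are hypothetical and have no relation to Video Seal's actual interface.

```python
# Toy illustration only: Video Seal uses a learned, robust neural watermark,
# not this scheme. This shows the bare idea of an invisible-but-recoverable
# mark via least-significant-bit (LSB) embedding.
import numpy as np

def embed(frame: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide a bit string in the LSBs of the first len(bits) pixel values."""
    marked = frame.copy()
    flat = marked.reshape(-1)                # view into the copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b       # overwrite least significant bit
    return marked

def extract(frame: np.ndarray, n: int) -> list[int]:
    """Read the hidden bit string back out of the first n pixel values."""
    return [int(v & 1) for v in frame.reshape(-1)[:n]]

frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)  # stand-in frame
payload = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed(frame, payload)
assert extract(marked, len(payload)) == payload
# Each pixel value changes by at most 1, i.e. invisibly to the naked eye.
assert np.max(np.abs(marked.astype(int) - frame.astype(int))) <= 1
```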