At the Meta Connect 2024 developer conference in Menlo Park on Wednesday morning, Meta announced an upgrade to its AI capabilities.
Meta AI can now respond verbally to user questions across Instagram, Messenger, WhatsApp, and Facebook. Users can choose from several voices, including AI clones of the voices of celebrities such as Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key, and Kristen Bell.
The new voice feature enhances user interaction, but it differs from OpenAI’s Advanced Voice Mode for ChatGPT, which is known for its expressive, emotive delivery. Meta’s offering is closer to Google’s Gemini Live, which transcribes a user’s speech and then reads its responses aloud in a synthetic voice.
Meta has invested heavily in securing these celebrity voices, reportedly paying millions of dollars for their use. Some industry observers remain skeptical that the strategy will pay off, and many are waiting to see the feature in action before drawing conclusions.
In addition to the voice updates, Meta AI can now analyze images, allowing users to share photos and receive information about them.
Meta is also piloting a translation tool that automatically dubs voices in Instagram Reels. The tool translates a creator’s speech into another language and synchronizes their lip movements to the new audio, making Reels easier to follow for multilingual audiences. It is currently being tested on a small scale with videos from select creators in Latin America, translating between English and Spanish.
As Meta continues to enhance its AI offerings, the potential impact on user engagement and content creation remains to be seen.