Vimeo has followed TikTok, YouTube, and Meta in introducing measures requiring creators to disclose AI-generated content.
In an announcement on Wednesday, the video hosting service updated its terms of service and community guidelines to mandate that creators indicate when realistic content is produced using AI tools.
These changes aim to prevent viewers from mistaking AI-generated or synthetically manipulated videos for genuine footage of people, places, or events. Vimeo exempts clearly unrealistic content, such as animations or videos with obvious visual effects, from this requirement.
However, any content depicting altered footage of real events or individuals, or portraying celebrities in scenarios they didn’t actually participate in, must be labeled as AI-generated.
Vimeo’s AI content labels will be prominently displayed at the bottom of videos where creators have disclosed the use of AI. When uploading or editing videos, creators can now select a checkbox to indicate whether AI was used for audio, visuals, or both.
For now, labeling relies on creators to self-report their use of AI, though Vimeo is developing an automated system to detect and label such content. CEO Philip Moyer pointed to that effort, stating, “Our goal is to implement automated labeling systems that can reliably detect AI-generated content, enhancing transparency and easing the burden on creators.”
Since assuming his role in April, Moyer has underscored Vimeo’s commitment to protecting user-generated content from misuse by AI technologies. He has stated that Vimeo prohibits generative AI models from being trained on videos hosted on its platform, aligning with similar policies enacted by other major platforms like YouTube.