YouTube is cracking down on deepfakes and AI use in videos
YouTube is doubling down in its fight against AI-generated videos, with a new set of rules requiring creators to tag any video containing AI content as "altered content".
The unique and potentially harmful nature of artificial intelligence has prompted YouTube to take new measures against the proliferation of fake news and deepfake content. The Google-owned company has stepped up its efforts to fight disinformation on its platform.
YouTube sees dangers in the uncontrolled use of AI and reinforces its policies
On March 19, 2024, YouTube updated its policies as part of its continued fight against misinformation on its platform. From now on, creators must label all "altered content" as such, meaning any modified video that could be mistaken for something real. YouTube's website gives several examples, such as making a real person appear to say or do something they didn't, altering footage of real events or places, or generating realistic-looking scenes that never happened.
However, certain uses don't need to be disclosed by creators, such as beauty filters, enhancement of previously recorded audio, or AI-generated animation within clearly creative videos.
Failure to comply with these new regulations may result in penalties on the violator's YouTube account, ranging from losing the ability to monetize content to having the account closed altogether. With this measure, YouTube aims to combat misinformation as much as possible, a very real and growing danger in the digital age, as artificial intelligence makes it ever easier for users to create modified content.
Curiously, these new policies come with one striking exemption: animated content aimed at minors, one of YouTube's main pillars, is exempt from the new regulations. In other words, children's cartoons may use AI-generated content without being marked as "altered content."