YouTube Updates Policies to Prohibit AI Videos of Deceased Minors

In a disturbing turn of events, YouTube has been forced to update its policies to explicitly state that creating AI videos of dead children for true crime content is not allowed. The change comes after a series of videos surfaced on social media platforms, including TikTok, featuring simulated voices of real child murder victims narrating their own gruesome deaths.

The videos, which gained millions of views, sparked widespread outrage. YouTube’s updated policy, set to take effect on January 16, will result in strikes against content that realistically simulates deceased minors or victims of well-known violent events describing their death or the violence they experienced.

TikTok already has policies addressing this type of content: it requires labels on AI-created videos and prohibits deepfakes of individuals under 18 or of non-public figures. YouTube, by contrast, has faced criticism for allowing such disturbing content to circulate on its platform.

Not only are these videos deeply disturbing for viewers, but they also cause immense pain for the victims’ families. Denise Fergus, whose son James Bulger was abducted and killed in 1993, expressed her disgust in an interview, describing the AI videos featuring her child as “beyond sick” and “bringing a dead child back to life.”

The rise of these unsettling AI videos is not only a reflection of the dark side of the internet, but it also highlights the inherent flaws in a system that incentivizes content creators to exploit our attention, no matter how depraved or disturbing the content may be.

As platforms like YouTube continue to grapple with the challenges of moderating content and ensuring the safety and well-being of their users, it remains to be seen if these policy updates will effectively combat the proliferation of such disturbing videos or if more stringent measures will be necessary in the future.

Source: be3.sk