YouTube announced a new tool requiring creators to disclose when realistic content was made with altered or synthetic media, including generative AI, and could be mistaken for a real person or event. The goal is to prevent viewers from being deceived as generative AI tools make it increasingly difficult to distinguish real content from fake. The initiative addresses the risks posed by AI and deepfakes, particularly ahead of the upcoming U.S. presidential election. While the policy doesn't apply to clearly unrealistic or animated content, creators must disclose digitally altered faces, synthetically generated voices, and altered footage of real events or places. The labels will soon roll out across all YouTube formats, with enforcement measures for creators who fail to comply.