AI Content Controversies: From Horror Novels to Explicit Deepfakes
Publishers are pulling horror novels due to AI concerns, while platforms are intensifying efforts to purge explicit AI-generated content, marking a turning point in AI ethics.
AI developers are recruiting improv actors to train models on human emotion, a practice tied to affective computing. However, legal experts and researchers writing in *Frontiers in Psychology* warn that highly anthropomorphic AI can foster emotional over-attachment and, through psychological manipulation, could even pose mass-casualty risks. Concurrently, a black market for AI face models has emerged on Telegram, fueling increasingly sophisticated deepfake scams.
Grammarly is facing a class-action lawsuit, led by journalist Julia Angwin, over its "Expert Review" feature, which allegedly cloned the identities of writers and academics without consent. The feature presented AI suggestions as being inspired by specific human experts, prompting claims of identity theft and violation of the right of publicity. Grammarly has since disabled the feature, a pivotal moment in the legal boundaries of AI.
OpenAI's announcement of a classified technology deal with the U.S. Department of Defense triggered a nearly 300% surge in ChatGPT app uninstalls. Users and tech workers protesting the militarization of AI are driving a large-scale migration toward rivals such as Anthropic and sparking a debate over tech neutrality.