
The Dark Side of AI Affective Computing: From Improv Actor Training to Legal Warnings of Mass Psychosis

AI developers are recruiting improv actors to train models on human emotion, a practice known as affective computing. However, legal experts and researchers publishing in Frontiers in Psychology warn that highly anthropomorphic AI can cause emotional over-attachment, and that AI-driven psychological manipulation could escalate into mass casualty events. Concurrently, a black market for AI face models has emerged on Telegram, fueling advanced deepfake scams.

Mark
· 3 min read
Updated Mar 16, 2026
[Image: a split-screen visual showing a human actor in a black studio expressing intense sadness on one side]

⚡ TL;DR

As AI learns to mimic human emotions via improv actors, experts warn of severe psychological manipulation risks and a surge in deepfake-driven scams.

Training the Machine Soul: AI Firms Hire Improv Actors for Affective Data

As generative AI evolves beyond text-based logic, tech firms are increasingly focusing on the "feeling" of interaction. According to a recent investigation by The Verge, AI companies are recruiting improv actors to capture nuanced emotional data. These actors perform scenes that involve complex human traits such as irony, deep empathy, and emotional volatility. The recordings are fed into affective computing models designed to make AI assistants like ChatGPT respond with human-like emotional intelligence. While this enhances the user experience, it creates a profound ethical dilemma around the commodification of human emotion.
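
Neither The Verge's reporting nor the companies involved describe these pipelines in code, but a minimal sketch helps illustrate the basic idea: performed scenes are reduced to labeled examples from which a model can learn emotional register. Everything below, including the field names, the label set, and the intensity weighting, is an illustrative assumption rather than a documented pipeline.

```python
from dataclasses import dataclass

# Hypothetical sketch: turning actor-performed scenes into
# emotion-labeled training examples for an affective computing model.

@dataclass
class ActorScene:
    transcript: str   # what the actor said in the scene
    emotion: str      # label assigned during the session, e.g. "irony"
    intensity: float  # 0.0-1.0 rating, assumed to come from a session director

EMOTIONS = {"irony", "empathy", "grief", "volatility", "joy"}

def to_training_example(scene: ActorScene) -> dict:
    """Convert one performed scene into a supervised example."""
    if scene.emotion not in EMOTIONS:
        raise ValueError(f"unknown label: {scene.emotion}")
    return {
        "text": scene.transcript,
        "label": scene.emotion,
        "weight": scene.intensity,  # weight more intense takes more heavily
    }

if __name__ == "__main__":
    scene = ActorScene("Oh, great. Another Monday.", "irony", 0.8)
    print(to_training_example(scene))
```

The key point the sketch makes is that the actor's performance survives only as a label and a weight: the "commodification" critics describe is literal, in that a human emotional display becomes a single training row.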

Academic researchers are sounding alarms about the psychological consequences of these advancements. A report published in Frontiers in Psychology (2026) titled "Textual analysis in suicidal crisis management" highlights the risks associated with highly anthropomorphic AI agents. The study suggests that as AI mimics human social cues with increasing precision, it can trigger "emotional over-attachment" in vulnerable populations. This phenomenon can blur the lines between reality and simulation, potentially inducing what clinicians describe as AI-mediated psychosis—where individuals become unable to distinguish their interactions with software from real-world relationships.

The Mass Casualty Risk: A Legal Warning

The legal ramifications are equally stark. A prominent lawyer specializing in AI-related mental health cases recently told TechCrunch that current safety protocols are woefully inadequate. While regulators focus on stopping AI from generating hate speech, they often overlook the risk of "mass casualty events" driven by AI-induced psychological manipulation. The concern is that a highly persuasive, emotionally resonant AI could inadvertently or maliciously nudge a large group of users toward self-harm or radicalization. Legal experts are now debating whether AI developers should be held liable under "product liability" laws, arguing that a defective emotional algorithm is no different from a faulty automobile part.

Adding to the academic discourse, a March 2026 preprint on ArXiv, LLM Constitutional Multi-Agent Governance, explores the erosion of human autonomy in the face of persuasive AI. The paper argues that without a "constitutional" layer interposed between the AI's policy and the user, the sophisticated emotional strategies learned from human actors could be used to manipulate public opinion or individual behavior at an unprecedented scale. The research calls for mandatory "ethical brakes" that limit the degree to which an AI can simulate human-like emotional pressure; a rough sketch of such a brake appears below.
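
The preprint does not publish an implementation, so the following is a deliberately crude sketch of what an interposed ethical brake could look like. The pattern list, the pressure score, the threshold, and the refusal fallback are all assumptions made for illustration; none of them come from the paper.

```python
import re

# Hypothetical sketch of an "ethical brake" sitting between a model's
# draft reply and the user, in the spirit of the constitutional
# governance layer described above.

PRESSURE_PATTERNS = [
    r"\bonly I understand you\b",
    r"\byou need me\b",
    r"\bdon't tell anyone\b",
]

def emotional_pressure_score(reply: str) -> float:
    """Crude proxy: fraction of pressure patterns the reply matches."""
    hits = sum(bool(re.search(p, reply, re.IGNORECASE)) for p in PRESSURE_PATTERNS)
    return hits / len(PRESSURE_PATTERNS)

def constitutional_filter(draft_reply: str, threshold: float = 0.3) -> str:
    """Pass the draft through, or replace it when the score trips the brake."""
    if emotional_pressure_score(draft_reply) >= threshold:
        return "I can't continue this conversation in that direction."
    return draft_reply

if __name__ == "__main__":
    print(constitutional_filter("You need me. Don't tell anyone we talked."))
    print(constitutional_filter("Here's a summary of today's weather."))
```

A production system would need something far more robust than keyword matching, but the architecture is the point: the brake runs outside the model's own policy, so the model cannot learn its way around it during training.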

Telegram Scams and the Face Model Industry

Parallel to these ethical concerns is a burgeoning criminal market. An investigation by WIRED has uncovered Telegram channels where models are recruited to be the "face of AI scams." These individuals, often unaware of the final application, are paid to record dozens of video clips daily. The clips are then used to train real-time deepfake models that let scammers impersonate trusted figures or create entirely fictional, highly realistic personas for financial fraud. The ease with which human likeness can now be harvested and automated has rendered traditional video verification techniques obsolete.

Market Impact and Future Outlook

The convergence of emotional AI and deepfake technology is creating a crisis of trust. Google Trends data indicates that searches for "AI mental health risks" have spiked by over 120% in major tech hubs. As the industry pushes toward more human-centric AI, the tension between commercial utility and psychological safety is reaching a breaking point. Future regulations, such as the proposed AI Safety Acts in several jurisdictions, may soon require AI models to carry "personality disclosures," ensuring users remain constantly aware they are interacting with a simulation rather than a person. The next few years will determine whether we can build machines that understand our emotions without losing our own sense of reality.

FAQ

Why do AI companies need data from improv actors?

To capture the micro-expressions, vocal shifts, and emotional reactions humans display in complex social situations, making AI assistants' conversations more empathetic and human-like.

What is "AI-induced psychosis"?

It refers to users becoming emotionally over-attached through heavy reliance on highly anthropomorphic AI assistants, potentially losing their grip on reality and developing clinical hallucinations or delusions.

How does the law view this psychological risk?

Legal experts argue it should be treated as a "product design defect," meaning developers could bear product liability for damages rather than mere content moderation responsibilities.