Tech Frontline

Anthropic Expands Claude AI: Gaining Control Over User Desktops

Anthropic has released a new research preview that allows its Claude AI to control Mac computers, marking a major step toward autonomous AI agents. Concurrently, the company is facing legal challenges regarding a Department of Defense 'supply-chain risk' designation.

Jason
· 1 min read
Updated Mar 25, 2026
[Illustration: a human-like cursor interacting with a complex digital workspace]

⚡ TL;DR

Anthropic has enabled its Claude AI to control computers to perform tasks, while it simultaneously navigates a legal dispute regarding a Pentagon 'supply-chain risk' label.

Scaling Autonomy: Claude’s New Operating Capability

Anthropic has taken a significant leap forward in the race to build truly capable AI agents with the launch of its latest research preview for Claude. The AI can now interface directly with macOS, allowing it to perform tasks such as clicking buttons, opening applications, entering data into fields, and navigating through complex software environments—all without requiring constant human supervision. By transitioning from a chatbot to a remote digital operator, Claude is being positioned as a powerful, autonomous assistant capable of executing multi-step workflows.

Safety and Legal Scrutiny

With this increased agency comes a heightened risk profile. Anthropic's engineers have implemented safeguards, but the company has been careful to label this update a "research preview," openly acknowledging that its safety measures "aren't absolute." This admission underscores the ongoing tension between rapidly deploying advanced agentic features and ensuring the security of user systems.

Simultaneously, Anthropic is navigating a complex legal environment. The company is currently embroiled in a dispute with the U.S. Department of Defense (DoD), which has designated Anthropic as a "supply-chain risk." This classification threatens the company’s ability to secure federal contracts. During recent hearings, a district court judge voiced concerns over the transparency of the Pentagon's motivations, suggesting that the logic behind the designation may be problematic. This legal uncertainty could prove pivotal as Anthropic attempts to solidify its standing as a premier provider for both the private and public sectors.

The Road Ahead

As Anthropic and its peers race to imbue AI with the power to control user computers, the industry faces a critical crossroads. The promise of productivity—automating complex, tedious tasks—must be weighed against the genuine risks of unauthorized actions and data misuse. How Anthropic handles the delicate balance between expanding Claude’s operational scope and ensuring ironclad security will likely determine the success of its agent-based model. Observers will be closely watching both the outcome of the Pentagon litigation and user feedback on the reliability of Claude’s newly empowered autonomy.

FAQ

What can Claude do now?

Claude can directly simulate human operation of a computer, such as opening applications, clicking buttons, and entering text, which allows it to automatically execute complex, cross-application workflows.

What is Anthropic's dispute with the Department of Defense?

The DoD has designated Anthropic a "supply-chain risk" company, which limits its ability to secure federal contracts. A judge has expressed skepticism about the transparency of that decision.

Is it safe to let an AI take over a computer?

Anthropic currently positions this capability as a "research preview" and acknowledges that its safety protections are not perfect, advising users to exercise caution when using it.