Federal Court Rules Trump Administration's ICE-Tracking Ban Unconstitutional
A federal court ruled that the Trump administration's pressure on private tech companies to remove ICE-tracking apps and groups violated the First Amendment.
Federal surveillance programs face intense pushback as courts rule against government interference with protected speech and Congress rejects extensions of warrantless surveillance authorities.
A federal jury in Manhattan has found Live Nation and Ticketmaster liable for illegal monopolization, concluding that the companies systematically overcharged consumers through unfair business practices. The case was brought by a coalition of 33 states after federal authorities opted out of the trial. The verdict may pave the way for a potential breakup, despite earlier tentative settlement efforts with the DOJ, though the company is appealing, signaling a long and complicated legal battle ahead.
IBM has agreed to pay $17 million to the U.S. DOJ to settle allegations regarding its DEI programs. While IBM admits no wrongdoing, the settlement is the first penalty under the administration's "Civil Rights Fraud Initiative" and sets a significant precedent for tech industry DEI policies.
Daniel Moreno-Gama is facing federal felony charges, including attempted murder, following a violent attack on OpenAI CEO Sam Altman's home and an attempted breach of the company's headquarters.
OpenAI is under mounting pressure from physical threats to its leadership, a stalking victim's lawsuit alleging AI-fueled harassment, and a state investigation into potential links to a shooting, raising urgent questions about AI safety and legal liability.
John Deere has settled a class-action lawsuit for $99 million over allegations that it monopolized repairs by locking farmers and independent mechanics out of essential equipment tools. The company agreed to provide more repair resources and tools to farmers, marking a win for the Right-to-Repair movement.
OpenAI faces a multi-front struggle. Florida Attorney General James Uthmeier has opened an investigation into the company following reports that ChatGPT was used to plan a shooting at Florida State University. The company is also lobbying for an Illinois bill that would shield AI developers from liability for critical harm, sparking intense debate over public and national security. Meanwhile, a major UK data center project has been suspended over energy and regulatory hurdles.
Anthropic faces legal uncertainty from conflicting federal court rulings over military use of its Claude models, a "supply-chain risk" designation that complicates its federal government work. Despite these challenges, the company is continuing its enterprise expansion, launching new managed AI agents and a restricted-access cybersecurity model called Mythos.
Generative AI is mired in controversies over copyright and trust, highlighted by legal challenges against Suno and by Microsoft Copilot's liability disclaimers. Tech giants' "entertainment-only" stance leaves users and creators exposed as courts weigh whether AI training on existing works constitutes infringement.
A federal judge has granted a preliminary injunction blocking the Trump administration's attempt to label Anthropic a supply-chain risk, ruling that the administration lacked legal authority to impose restrictions based on that designation. The order temporarily halts the restrictions while Anthropic's lawsuit against the DoD proceeds.
A New Mexico jury has ordered Meta to pay $375 million in penalties, finding the company liable for misleading users about child safety on its platforms. As the first jury verdict of its kind against Meta over harm to young users, the case sets a major legal precedent for social media platforms.
A jury has found Elon Musk liable for securities fraud over his tweets about platform bots during the Twitter acquisition. The verdict could expose him to multi-billion-dollar damages owed to investors and sets a new precedent for executive accountability on social media.
Anthropic has filed a lawsuit against the U.S. DoD challenging its 'supply-chain risk' designation. Court filings suggest the Pentagon had recently indicated alignment on security compliance before abruptly blacklisting the company, which Anthropic claims is based on technical misunderstandings.
The US Supreme Court has declined to hear an appeal regarding AI copyright, effectively confirming that works created solely by AI cannot be copyrighted. The decision reinforces human authorship as a mandatory requirement for legal protection.