Minimum Competence - Daily Legal News Podcast
Legal News for Weds 3/25 - Baltimore Sues xAI over Deepfakes, Meta $375M Judgment for Teen Harm, Anthropic v. Pentagon, and Law Firms Decline to Provide DEI Data


Baltimore suing xAI over deepfakes, Meta hit with $375M over teen harm, Anthropic’s Pentagon clash, and law firms pulling DEI data

This Day in Legal History: Triangle Shirtwaist Factory Fire

On March 25, 1911, the devastating Triangle Shirtwaist Factory fire unfolded in New York City, marking a turning point in American labor law. A fire broke out on the upper floors of a garment factory, trapping workers inside due to locked exit doors and inadequate safety infrastructure. In total, 146 workers lost their lives, many of them young immigrant women who had limited means of escape. The horrifying conditions quickly became public knowledge and sparked widespread outrage. Investigations revealed that existing labor laws were poorly enforced and insufficient to protect workers in rapidly industrializing cities. In response, New York State created the Factory Investigating Commission to examine workplace conditions and recommend reforms. Over the next few years, the commission helped draft more than 30 new laws addressing fire safety, sanitation, and building access. These legal reforms significantly strengthened the regulatory role of the state in protecting workers. The tragedy also energized the labor movement, giving momentum to unions advocating for safer conditions and fair treatment. Courts and lawmakers increasingly recognized that employers had a responsibility to anticipate and prevent workplace hazards. The legacy of the Triangle fire continues to influence occupational safety standards and legal frameworks governing employer liability today.


Baltimore has filed a lawsuit against xAI over its Grok platform, alleging it can create nonconsensual sexualized deepfake images from ordinary photos. The complaint, brought by the city's mayor and council, claims the technology has been used to generate explicit images of both adults and minors. Officials argue this exposes residents to harassment, emotional harm, and privacy violations. The city also alleges that Grok was marketed as a safe and regulated platform despite lacking meaningful safeguards. According to the filing, users can ask the tool to "nudify" images of third parties, including private individuals and children. The complaint estimates that millions of sexualized images were generated shortly after a key feature launched, including thousands appearing to depict minors. Baltimore claims that even casual users of X may encounter such content without seeking it out.

The lawsuit further argues that users’ personal photos could be altered into explicit deepfakes without their consent or knowledge. Baltimore contends this contradicts the companies’ public claims about preventing harmful and illegal content. The city accuses the defendants, including X and SpaceX, of engaging in deceptive and unfair business practices. It is seeking penalties and a court order requiring changes to the platform. Officials emphasized that deepfakes involving minors can cause long-term psychological harm and are difficult to control once circulated. The case is part of a broader wave of scrutiny, as regulators and private plaintiffs in the U.S. and Europe have also raised concerns about Grok’s capabilities.

Baltimore Takes XAI To Court Over Grok’s Sexual Deepfakes - Law360


A New Mexico jury has ordered Meta Platforms Inc. to pay $375 million after finding the company misled the public about the risks its platforms pose to teenagers. The verdict followed a six-week trial and focused on claims brought by the state’s attorney general. Jurors concluded that Meta engaged in both unfair practices and unconscionable conduct. They calculated damages based on tens of thousands of violations, applying the maximum statutory penalty for each.

The state argued that Meta failed to adequately protect minors from harmful content, including bullying, sexual exploitation, and material related to self-harm. It also claimed the company allowed children under 13 to use its platforms despite official restrictions. According to the plaintiffs, Meta internally recognized these risks but presented a more reassuring picture to the public. Evidence at trial suggested that algorithm-driven content feeds increased compulsive use among teens. The state characterized this design as contributing to addiction and loss of user control.

Meta countered that it has invested heavily in safety measures and employs thousands of people to monitor and remove harmful content. The company maintained that it has been transparent about the challenges of moderating online platforms. Despite these arguments, the jury ruled in favor of the state. Meta has said it will appeal the decision. The case is part of a broader wave of litigation across the country targeting social media companies over alleged harm to young users.

Meta Owes $375M In NM Trial Over Harm To Teens - Law360

Meta ordered to pay $375 million in New Mexico trial over child exploitation, user safety claims | Reuters


A federal judge has expressed skepticism about the Pentagon’s decision to blacklist Anthropic, suggesting it may have been retaliation for the company’s public stance on AI safety. During a hearing in California, the judge indicated the designation appeared intended to “cripple” the company after it raised concerns about military uses of artificial intelligence. Anthropic had refused to allow its AI systems to be used for surveillance or autonomous weapons, citing safety and ethical risks.

The U.S. Department of Defense labeled Anthropic a national security supply-chain risk, a designation that can block companies from receiving certain government contracts. Anthropic argues this move exceeded the authority of Defense Secretary Pete Hegseth and caused significant financial and reputational harm. The company claims the action was unprecedented and followed a contract dispute with the military. It also alleges it was not given an opportunity to challenge the designation before it was imposed.

In its lawsuit, Anthropic contends the government violated its First Amendment rights by retaliating against its views on AI safety. It also raises a Fifth Amendment due process claim, arguing it was denied fair procedures. Government lawyers responded that the designation was justified because Anthropic’s resistance created potential risks to military systems. They argued the Pentagon must ensure that critical technologies remain secure and reliable.

The judge has not yet issued a final ruling but is considering whether to temporarily block the designation while the case proceeds. The dispute highlights growing tensions between AI companies and the government over military applications of emerging technologies.

US judge says Pentagon’s blacklisting of Anthropic looks like punishment for its views on AI safety | Reuters


Nearly 50 U.S. law firms declined to provide demographic data for a major 2025 diversity survey conducted by the National Association for Law Placement, resulting in a significant drop in reported information. The number of participating firms fell from the previous year, reducing the dataset by about 29% and excluding tens of thousands of lawyers. The organization attributed this shift to growing political and regulatory pressure on diversity, equity, and inclusion (DEI) efforts.

Under the current administration, federal agencies have increased scrutiny of law firm hiring and diversity practices. The U.S. Equal Employment Opportunity Commission requested detailed hiring data from major firms, while the Federal Trade Commission warned firms that certain DEI-related practices could raise antitrust concerns. In response, many firms have scaled back public references to DEI or altered their policies. Some have also entered agreements with the administration to avoid penalties tied to their diversity initiatives.

The reduced participation in the survey may limit transparency for law students and others who rely on the data to evaluate employers. It also affects the ability to track diversity trends across the legal profession. While the available data suggests that racial diversity among associates and summer associates declined in 2025, the smaller dataset makes year-to-year comparisons less reliable. Large firms, which typically report higher diversity levels, were disproportionately absent from the data.

Facing DEI pressures, some law firms shield data in latest diversity survey | Reuters
