U.S. AI Regulations in 2026: What Laws Are Changing
Federal AI regulation in the United States remains unresolved in early 2026. The most consequential changes to AI governance are coming from state-level laws now in force in California and New York. According to legal experts and safety researchers, these first-of-their-kind regulations signal growing momentum for AI oversight, even as the Trump administration renews efforts to curb state-led initiatives.
California’s SB-53 and New York’s RAISE Act are currently the most binding AI safety laws in the U.S. Both require AI developers to disclose how they mitigate major risks and to report safety incidents, introducing enforcement mechanisms into an otherwise lightly regulated landscape. SB-53, effective January 1, mandates incident reporting within 15 days and allows fines of up to $1 million, while the RAISE Act requires notification within 72 hours and raises potential penalties to $3 million for repeat violations. Both laws apply primarily to companies with more than $500 million in annual revenue, exempting many smaller startups.
Legal experts note that these laws emphasize transparency and documentation over prescriptive technical controls. Lily Li, founder of Metaverse Law, told ZDNET that the revenue threshold reflects political compromise rather than differences in AI risk, particularly as smaller firms can now deploy powerful models. Compared with the earlier, failed SB-1047—which would have required costly safety testing and kill switches—SB-53 takes a lighter approach focused on reporting catastrophic risks such as cyber, biological, or radiological harm.
At the federal level, President Trump signed a December executive order reaffirming opposition to what the administration calls “excessive State regulation.” The order established an AI Litigation Task Force to challenge state laws deemed unconstitutional. However, Li argues that, absent federal AI legislation, states retain authority under the Tenth Amendment.
Key takeaways for AI developers and policymakers:
- California’s SB-53 and New York’s RAISE Act are now the strongest AI safety laws in the U.S.
- Both prioritize transparency, incident reporting, and governance over direct model controls
- Federal efforts may challenge state laws, but preemption remains uncertain
- Researchers see progress, but warn current rules fall short of addressing long-term AI risks
Experts agree these laws are an initial step rather than a final solution. While governance and transparency demands are already reshaping enterprise expectations, both lawyers and researchers emphasize that far more robust AI safety measures will be needed as advanced and potentially high-risk systems continue to scale.
Source:
https://www.zdnet.com/article/state-ai-safety-laws-california-new-york/