Balancing Innovation and Human Rights: Federal Preemption and the Future of AI Regulation
The One Big Beautiful Bill Act (OBBBA), signed into law in July 2025, introduced sweeping policy changes aimed at strengthening domestic industries and reducing federal regulation. The OBBBA has significant implications for AI governance because its deregulatory provisions constrain agency enforcement powers, discourage state-level limits on emerging technologies, and condition federal funding on states’ adoption of “innovation-friendly” regulatory approaches. During Senate floor consideration of the bill, Senator Marsha Blackburn introduced an amendment to strike a proposed ten-year moratorium on state AI laws, allowing states to continue developing their own rules without federal obstruction. Notably, Senator Blackburn had initially supported the moratorium but later reversed her position. The amendment’s adoption highlights the growing tension between innovation-driven deregulation and the urgent need for oversight, particularly in safeguarding fundamental human rights in AI governance.
While debates over AI regulation have focused largely on economic competitiveness, the more substantial concern should be the protection of human rights amid rapid technological advancement. Algorithmic systems already power deepfakes, voice clones, and discriminatory hiring tools. Critically, the fragmentation of state AI regulation makes it increasingly difficult to build clear and consistent baseline safeguards nationwide: divergent definitions, audit requirements, and disclosure obligations force companies to devote significant resources to navigating compliance rather than preventing AI harms. Although some variation in state enforcement and innovation is inevitable and appropriate, these disparities exist at the threshold level, where no shared federal minimum standards guide state regulation. As a result, an individual’s protection against algorithmic discrimination, data misuse, and opaque decision-making depends largely on the state in which they live.
A coordinated federal-state framework for AI regulation could address this problem by establishing uniform procedural baselines through congressional action. These rules would ensure that companies and developers comply with clear transparency requirements, risk assessments, and appeal mechanisms, while preserving space for states to experiment with stronger protections where needed. A federal baseline would also set clear and accessible standards governing how automated decisions are explained, how information is handled, and what risks must be reviewed. Unlike proposals that respond to the rise of state AI regulation by advocating temporary federal preemption or a national standard without enforceable safeguards, this Legal Impression argues for a framework that establishes binding federal baselines to prevent algorithmic harms before they occur rather than deferring protection to post hoc remedies.
The Current Push to Accelerate AI Through Federal Deregulation
Within the same month as the OBBBA’s passage, the White House doubled down on its deregulatory approach through the AI Action Plan (“Action Plan”), a set of executive orders and policy directives aimed at accelerating U.S. dominance in AI. The Action Plan scales back oversight to prioritize industry growth, even where doing so weakens technical safeguards against bias, privacy violations, and misinformation. Specifically, the Action Plan builds on the Administration’s earlier rescission of the Biden Administration’s 2023 Executive Order 14110, which had established a federal framework for AI oversight, risk assessment, and equity-based safeguards. The Trump Administration characterized the Biden order as an onerous regulation that discouraged innovation.
The Trump Administration’s deregulatory agenda gained further momentum in December 2025 with the signing of an executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which directed the Justice Department to establish an AI Litigation Task Force to challenge state AI laws and conditioned federal funding on states’ non-enforcement of AI regulations. Crucially, while the executive order acknowledges the patchwork of fragmented state regulations, it responds by using federal authority to dismantle state protections rather than to establish baseline safeguards. The order thus imposes stricter limits on regulation itself rather than refocusing federal power on protecting human rights.
Meanwhile, in the first four months of 2025 alone, states introduced more than one thousand AI-related bills, illustrating how the absence of a federal framework has accelerated state-level responses, even as many proposals remain in preliminary stages. Some proposals aim to study the use of AI or promote innovation, while others adopt precautionary regulatory schemes. For example, New York’s RAISE Act and Colorado’s SB 24-205 would impose different audit requirements, definitions, and compliance standards across state lines, fragmenting compliance for developers operating nationwide. This patchwork thus fails on both fronts: it neither protects consumers from algorithmic harms nor provides developers with coherent compliance standards.
Recommendations for a Unified Federal-State Approach to AI Regulation
To address these issues, Congress should enact a coordinated federal-state framework that harmonizes existing and emerging state initiatives by incorporating guidance from three established governance tools: the U.S. Department of State’s Risk Management Profile for Artificial Intelligence and Human Rights (“State Department Profile”), the National Institute of Standards and Technology’s AI Risk Management Framework (“NIST Framework”), and the UN Guiding Principles on Business and Human Rights (“UN Guiding Principles”). Rather than replacing state action, such a framework would set a floor of minimum safeguards enforced evenly across jurisdictions. States could then tailor enforcement to regional demands or industry challenges, retaining meaningful regulatory flexibility to implement stronger protections in high-impact areas, while the federal baseline would ensure that fundamental human rights protections are upheld consistently nationwide.
The NIST Framework provides a foundational structure for AI governance by organizing oversight around four core functions: Govern, Map, Measure, and Manage. Through this lens, agencies and developers are encouraged to establish internal accountability mechanisms (Govern), assess context-specific risks (Map), apply measurable evaluation criteria (Measure), and implement ongoing mitigation strategies (Manage). Building on the NIST Framework’s structure, the State Department Profile applies a risk-based approach through an explicit human-rights lens, highlighting AI’s role in reproducing bias and the misuse of AI systems for surveillance, censorship, and discrimination. Together, the two frameworks yield operational standards oriented toward precaution, prioritizing oversight that anticipates and addresses risks before harm occurs.
In sectors like law enforcement, employment, and healthcare, heightened scrutiny is crucial because algorithmic systems in these domains directly shape lives: AI systems can determine employment outcomes, medical eligibility, and criminal justice interventions, yet they often operate without transparency, and most individuals lack access to the information necessary to review, appeal, or otherwise challenge such decisions. Uniform federal standards would anchor accountability across these systems. Integrating the UN Guiding Principles, which emphasize due diligence in preventing human rights abuses and ensuring remedies for harm, would strengthen the ethical foundations of AI systems and promote more equitable outcomes. Oversight is most effective when national guidance sets minimum expectations while leaving room for local flexibility where regional context matters.
Conclusion
Without a cohesive federal framework for AI regulation, uneven enforcement will fail to protect basic civil liberties. Although existing frameworks such as the State Department Profile, the NIST Framework, and the UN Guiding Principles lack enforceability, they provide critical direction for congressional action to establish indispensable protections. When these efforts align, human rights protections will no longer depend on geography; instead, consistent safeguards will emerge across regions, encouraging ethical development without sacrificing oversight.
Pilar Julia Pascual
GLTR Legal Impressions Editor; Georgetown University Law Center, J.D. expected 2027; University of California, Berkeley, B.A. 2024.