The trajectory of artificial intelligence policy in 2023 revealed a set of strategies that are likely to persist and expand in the coming years. In the United States, the federal government began to articulate best practices for AI development following President Biden’s executive order, while signaling that AI companies would retain significant responsibility for self-policing. Across Europe, regulators and industry prepared to implement the AI Act’s risk-based framework, a system that categorizes applications by potential harm and imposes corresponding obligations. Early indications suggest that applying these frameworks will be complex, and that debates over enforcement and interpretation are all but inevitable.

The pace of generative AI’s integration into daily life surprised even seasoned observers. Historically, emerging technologies—from blockchain to autonomous vehicles—have advanced more rapidly than the regulatory systems meant to govern them. This dynamic forces lawmakers to adapt quickly, balancing the need for oversight with the risk of stifling innovation. In 2023, AI policy in the United States was characterized by concentrated influence from major technology firms, bipartisan legislative engagement, geopolitical competition, and the rapid deployment of systems still in their formative stages.
One of the most visible developments occurred in May, when OpenAI’s CEO Sam Altman began meeting with lawmakers, just six months after the release of ChatGPT. Altman’s testimony underscored the potential existential risks posed by his own technology, helping frame the national conversation around AI’s societal impact. These engagements were followed by President Biden’s public remarks on AI, the convening of congressional AI insight forums designed to accelerate lawmakers’ technical literacy, and the introduction of additional large language models into the market. Notably, the guest lists for these forums leaned heavily toward industry representatives, reinforcing the close relationship between policymakers and corporate stakeholders.
As legislative interest grew, AI became a rare point of bipartisan focus on Capitol Hill. Lawmakers from both parties expressed support for establishing guardrails, even as they approached regulation with caution. Parallel activity at the state level and in the judiciary addressed narrower concerns, such as age verification for online platforms and content moderation standards. This distributed policymaking environment began to coalesce into what some described as a distinctly American approach to AI regulation: industry-friendly, reliant on best practices, and tailored to individual economic sectors rather than a single overarching statute.
The October executive order formalized this approach. It tasked individual federal agencies with crafting rules specific to their domains, creating a patchwork regulatory architecture that depends heavily on cooperation from AI developers and operators. This model mirrors earlier regulatory responses to complex technologies, in which sector-specific expertise is considered essential for effective oversight.
From an engineering and innovation perspective, the implications are significant. In aerospace, automotive, robotics, and advanced materials, AI systems are increasingly embedded in design, manufacturing, and operational processes. A regulatory environment that emphasizes collaboration over coercion may encourage rapid adoption of AI-driven optimization, predictive maintenance, and autonomous control systems. However, it also places responsibility on engineers and companies to ensure safety, transparency, and ethical compliance without waiting for prescriptive mandates.
The European AI Act’s risk-based methodology offers a contrasting model, one that could influence global standards. High-risk applications—such as AI in critical infrastructure or medical devices—face stringent requirements for data governance, explainability, and human oversight. Lower-risk uses encounter fewer constraints, allowing for experimentation and iterative development. For industries where AI intersects with safety-critical systems, understanding and aligning with these requirements will be essential for market access.
The events of 2023 underscored that AI regulation is not a monolithic process but a negotiation between technological capability, political will, and societal values. The coming phase will test whether distributed, industry-engaged policymaking can keep pace with the accelerating integration of AI into engineering disciplines and everyday life.
