Global AI Rules Tighten as Legal and Ethical Stakes Rise

The accelerating integration of artificial intelligence into sectors as varied as aerospace engineering, automotive design, robotics, and advanced manufacturing is reshaping not only technical capabilities but also the legal frameworks that govern them. By late 2023, multiple jurisdictions had advanced regulatory agendas that will shape how AI is developed, deployed, and maintained in 2024 and beyond.

In the European Union, policymakers reached a political agreement on the EU Artificial Intelligence Act after intensive negotiations. The legislation adopts a risk-based framework, imposing stricter obligations on high-impact foundation models and introducing transparency requirements for general-purpose AI systems. Certain applications are deemed unacceptable outright, including scraping facial images from the internet to build biometric databases, emotion recognition in workplaces and schools, cognitive behavioral manipulation, social scoring, and some predictive policing practices. The act exempts open-source models and research-focused AI, and explicitly avoids encroaching on national security matters. In parallel, the proposed AI Liability Directive aims to harmonize civil liability rules, introducing a rebuttable presumption of causality linking a provider's fault to harmful AI outputs.

The United Kingdom has opted for a sector-led approach, as outlined in its March 2023 white paper. Regulatory bodies in finance, healthcare, competition, and employment will issue tailored guidance, with the government evaluating whether dedicated AI legislation or a central regulator is necessary.

In the United States, no comprehensive AI statute exists, but the October 2023 AI Executive Order directs agencies to assess safety, security, and risk. It sets deadlines for testing and reporting obligations, complementing earlier White House AI policy initiatives. The California Privacy Protection Agency has drafted rules for automated decision-making technology under the California Consumer Privacy Act, granting individuals rights to pre-use notice and opt-out options. The U.S. Copyright Office and Patent and Trademark Office maintain that human authorship is required for copyright and patent eligibility, a stance upheld in litigation. The Securities and Exchange Commission has proposed rules to address conflicts of interest from AI use in financial services.

Internationally, more than 37 countries have proposed AI-related laws. The United Nations has convened an AI advisory board tasked with producing governance recommendations by mid-2024, and the Bletchley Declaration, signed by 28 nations, calls for trustworthy AI and sustained global cooperation.

Litigation over AI’s use of copyrighted material is intensifying. Cases such as Getty Images v. Stability AI and Authors Guild v. OpenAI Inc. challenge the ingestion of protected works into training datasets. Defendants often invoke “fair use,” but these disputes are only beginning to move through the courts.

Cybersecurity and privacy risks are also mounting. AI can be weaponized for advanced phishing, malware deployment, and model poisoning attacks. U.S. policy now calls for rigorous testing and reporting of certain AI tools. Privacy laws like the EU’s GDPR and the CCPA impose constraints on data collection, purpose limitation, and transparency, which can conflict with large-scale model training. Biometric privacy suits, such as those against Clearview AI, highlight the legal hazards of scraping personal images. The Federal Trade Commission has exercised its authority to mandate deletion of models trained on improperly obtained data, a measure termed “algorithmic disgorgement.”

Labor and employment law intersects with AI where automated tools influence hiring, promotion, and termination decisions. Anti-discrimination statutes, including Title VII of the Civil Rights Act, apply to AI-driven HR processes. Local measures, such as New York City’s Local Law 144, require bias audits for automated employment decision tools. Illinois’ Artificial Intelligence Video Interview Act and similar proposals in other states further regulate AI in recruitment.

As these regulatory landscapes evolve, internal governance becomes critical. Organizations integrating AI into products or workflows are advised to align public terms of use with internal policies, restrict unauthorized use of proprietary data in AI systems, and ensure vendor contracts address AI-specific risks. Incorporating AI risk assessments into compliance programs, conducting training, and running tabletop exercises can prepare teams to navigate the operational and ethical challenges posed by rapidly advancing AI capabilities.
