EU AI Act Sets Risk-Based Rules for Emerging Technologies

The European Union’s AI Act marks the first comprehensive regulatory framework for artificial intelligence within its jurisdiction, setting obligations for providers and users according to clearly defined risk categories. While many AI systems present minimal risk, all must undergo assessment to determine their classification.

The legislation draws sharp boundaries around certain applications deemed unacceptable, prohibiting cognitive behavioural manipulation of individuals or vulnerable groups—such as voice-activated toys that encourage dangerous behaviour in children. It also bans social scoring systems that classify people based on behaviour, socio-economic status, or personal characteristics, along with biometric categorisation systems that infer sensitive characteristics. Real-time remote biometric identification, including facial recognition in public spaces, is prohibited except under narrow law enforcement exceptions. “Real-time” systems may be deployed only in a limited number of serious cases, while “post” remote biometric identification—conducted after a significant delay—will be permitted solely for prosecuting serious crimes and only with court approval.


AI systems that could negatively affect safety or fundamental rights fall into the high-risk category, divided into two distinct groups. The first encompasses AI embedded in products covered by EU product safety legislation, such as toys, aviation systems, automobiles, medical devices, and lifts. The second includes AI used in specific domains that must be registered in an EU database: management and operation of critical infrastructure; education and vocational training; employment, worker management, and access to self-employment; access to essential private and public services and benefits; migration, asylum, and border control; and assistance in legal interpretation and application of the law. All high-risk systems will undergo assessment prior to market entry and throughout their operational lifecycle. Citizens will have the right to file complaints with designated national authorities regarding such systems.

Generative AI platforms, including conversational models like ChatGPT, are not classified as high-risk under the Act. However, they must adhere to transparency rules and comply with EU copyright law. Providers must disclose when content is AI-generated, design models to prevent the creation of illegal material, and publish summaries of copyrighted data used in training. High-impact general-purpose AI models with potential systemic risk—such as advanced iterations like GPT-4—will be subject to rigorous evaluation, with any serious incidents reported to the European Commission.

The Act also addresses synthetic media. Any content generated or modified using AI—whether images, audio, or video—must be clearly labelled to inform users of its origin. This requirement directly targets phenomena such as deepfakes, ensuring that consumers, regulators, and downstream systems can distinguish between authentic and artificially produced material.
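In practice, labelling means that generated media should carry a machine-readable disclosure that downstream systems can check before display or further processing. The sketch below illustrates the idea with hypothetical field names (`ai_generated`, `generator`)—the Act does not prescribe a specific metadata schema, so this is an assumption-laden illustration, not a compliance recipe.

```python
# Illustrative sketch only: attaching an AI-disclosure label to generated
# media metadata. Field names are hypothetical, not mandated by the AI Act.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class MediaAsset:
    path: str
    media_type: str                      # "image", "audio", or "video"
    metadata: dict = field(default_factory=dict)


def label_ai_generated(asset: MediaAsset, model_name: str) -> MediaAsset:
    """Record that the asset was AI-generated or AI-modified."""
    asset.metadata["ai_generated"] = True
    asset.metadata["generator"] = model_name
    return asset


def is_disclosed(asset: MediaAsset) -> bool:
    """Downstream check: can a consumer or regulator tell the origin?"""
    return asset.metadata.get("ai_generated", False)


clip = label_ai_generated(
    MediaAsset("promo.mp4", "video"), model_name="example-video-model"
)
print(json.dumps(asdict(clip)["metadata"]))
```

Real deployments would more likely embed such provenance data inside the media file itself (for example via an industry provenance standard) rather than in a side-car record, but the check-before-use pattern is the same.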

For engineers and technologists working in sectors like aerospace, automotive, robotics, and critical infrastructure, the implications are significant. AI integrated into flight control systems, autonomous driving platforms, or industrial robotics will face stringent conformity assessments before deployment. Lifecycle monitoring will demand robust documentation, testing protocols, and incident reporting mechanisms. In aerospace, for example, AI-based predictive maintenance tools or autonomous navigation algorithms will need to meet both aviation safety standards and the AI Act’s high-risk provisions. Similarly, automotive manufacturers incorporating AI into driver assistance or vehicle-to-infrastructure communication systems must ensure compliance with both product safety directives and AI-specific transparency obligations.

The emphasis on transparency in generative AI also intersects with engineering disciplines that rely on simulation and modelling. When AI assists in producing design schematics, performance predictions, or visualisations, clear disclosure of AI involvement becomes a regulatory requirement. This not only affects public-facing applications but also internal workflows where AI outputs inform critical design decisions.

By structuring obligations around risk levels, the AI Act creates a tiered compliance landscape. Low-risk applications benefit from lighter oversight, while high-risk systems face rigorous scrutiny. For developers, the framework demands early integration of compliance considerations into design and deployment strategies. For users, particularly in regulated industries, it offers clearer guidance on acceptable AI use and mechanisms for redress when systems fail or infringe on rights.
