Systemic Governance for Human-Centered AI

Human-centricity has become a prominent goal in artificial intelligence governance, yet its current application in policy frameworks risks diluting its transformative potential. While the term draws from human-centered design (HCD), its adaptation to public governance often lacks the conceptual rework needed to address the broader socio-political and ethical dimensions of AI deployment. Policy discourse frequently equates human-centric AI (HCAI) with safeguarding fundamental rights—a vital foundation, but one insufficient for fostering technological emancipation and human flourishing.


AI’s promise lies in its capacity to process vast datasets, enhance public services, and augment human performance. However, governance must grapple with who benefits, who bears the risks, and how competing values are balanced. As one definition puts it, governance “includes various frameworks, processes, and tools designed to maintain and promote cooperative possibilities to formulate shared values for AI, as well as to make and implement decisions regarding desirable directions in the development and use of AI.” Such governance must earn legitimacy while navigating diverse national, corporate, and societal interests.

Inclusiveness in governance—engaging stakeholders and the wider public—improves decision quality, legitimacy, and trust. It also enriches the informational basis for policy, reduces information asymmetries, and enhances flexibility. Multidisciplinary AI research and education further integrate community expectations into design, helping to identify societal challenges early.

The EU’s Ethics Guidelines for Trustworthy AI state that systems “need to be human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom.” Yet, without operational clarity, such ambitions risk becoming symbolic. Narrowing human-centricity to user experience overlooks broader systemic impacts—digital economy power structures, environmental costs, and effects on mental health and democracy.

A robust approach integrates three perspectives: user-centeredness, community-centeredness, and society-centeredness. This reframing positions citizens not just as users but as active participants in collective decisions on AI’s role in public life. Achieving this requires governance grounded in social sustainability—addressing societal, economic, and environmental impacts alongside diverse human values.

Trust is a cornerstone. As Sutrop distinguishes, trust placed in developers fosters social trust, while trust in reliable processes and shared values builds systemic trust. Technical robustness, explainability, transparency, and accountability are essential for trustworthy AI. Humans must understand, at least in part, how systems function and why decisions are made, and mechanisms must exist to detect and prevent undesirable outcomes.

Mutuality—reciprocity and interdependence among stakeholders—supports interdisciplinary communication and collective decision-making. It demands equity, autonomy, solidarity, and participation, ensuring all stakeholder needs are considered. Effective communication must bridge technical and ethical domains, enabling shared language and joint goals.

Collaborative tools, particularly civic tech, can operationalize inclusive governance. Platforms like Taiwan’s vTaiwan project demonstrate how open consultation, crowdsourcing, and deliberative tools can align citizen input with legislative processes. Such tools can aggregate preferences and foster large-scale engagement, though alignment with institutional decision-making is critical to avoid stagnation.
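For concreteness, the sketch below illustrates the kind of preference aggregation such deliberative platforms perform. It assumes a Pol.is-style vote matrix (vTaiwan uses Pol.is in its consultation stage): participants vote agree, disagree, or pass on short statements, participants are grouped into opinion clusters, and statements that attract agreement across all clusters are flagged as consensus candidates. The data, cluster count, and sign-based threshold here are illustrative assumptions, not the actual Pol.is implementation.

```python
# Minimal sketch of opinion clustering and consensus detection on a
# Pol.is-style vote matrix. Data and parameters are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows = participants, columns = statements;
# 1 = agree, -1 = disagree, 0 = pass/unseen.
votes = np.array([
    [ 1, 1, -1,  0],
    [ 1, 1, -1, -1],
    [-1, 1,  1,  1],
    [-1, 1,  1,  1],
    [ 1, 1,  0, -1],
])

# Reduce dimensionality, then group participants into opinion clusters.
coords = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# Flag a statement as a consensus candidate if every opinion cluster
# leans the same way on it (all cluster mean votes share one sign).
for s in range(votes.shape[1]):
    means = [votes[labels == c, s].mean() for c in set(labels)]
    if all(m > 0 for m in means) or all(m < 0 for m in means):
        print(f"Statement {s}: consensus candidate (cluster means: {means})")
```

In vTaiwan's process, consensus items surfaced this way feed into facilitated stakeholder deliberation and, ultimately, the legislative agenda; the clustering itself only structures the input rather than deciding the outcome.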

A systemic approach views AI governance as a socio-technical assemblage, where actors and technologies interact to produce societal impacts. Balancing diverse perspectives with long- and short-term consequences strengthens trust and decision-making. Transparency, communication, and mutuality between authorities and ecosystem agents underpin this model.

Novel participation technologies can enhance deliberation and collaboration, reshaping expectations of democratic processes. Yet governance must evolve from hierarchical structures toward inclusive modalities that subject rationale, accountability, and transparency to scrutiny. Without considering AI’s wider socio-technical and political context, governance risks superficiality, undermining its emancipatory potential.

Human-centricity, if narrowly focused on individuals, misses the interconnected nature of societal and environmental systems. Extending the concept to community and society—and potentially to planet-centricity—aligns AI with sustainable development goals. Empowering citizens as co-developers marks a paradigm shift, requiring flexible, adaptive governance that integrates ethical, sustainable, and trustworthy AI principles through enlightened communication and inclusive design.
