“AI Governance”: A Black Gen Z-er’s Two Cents on The Conversation

Jun 30, 2021

The emerging position is that governments should sit at the apex of the governance regime for AI, as signaled by initiatives such as the recently announced Artificial Intelligence Act proposed by the European Commission (EC). However, a truly sustainable framework in the digital age requires an omni-stakeholder approach to questions of AI governance and policy. It is critically important that a voice be given to the most vulnerable in such a process. It is also critical that the interests of “big tech” be aligned with the human interest, particularly in ensuring that AI does not undermine human rights. This would lead to a better society: one built on trust and receptive to innovation, and one that advances the welfare of both the technology companies and the society they serve. Below are three recommendations for how this might be advanced.

Firstly, to align the interests of tech companies with the desire for a fair and just society, we must draw on the foundational principle of supply and demand. We saw an example of this in the aftermath of the murder of George Floyd, when 21st-century “cancel culture” swept in demands that the human rights of Black and brown people be valued, backed by the threat of severe economic punishment if those demands were not met. Major corporations awoke to how risky it was to be unresponsive to matters of social justice and ineffective public policy. That alignment of people’s awakening to the deprivation of their liberties with the economic cost of injustice produced a rapid supply of corporate attention to governance through the lenses of accountability, responsive authority, and inclusivity. It is still early days, but there is a sense that these changes will be a permanent centerpiece of corporate strategy as well as of future social activism. It is therefore imperative that people recognize how their human rights, such as the right to privacy, are being threatened more than ever in the digital space, and that they respond with the same level of outrage they would direct at physical-world privacy violations such as voyeurism (“peeping toms”) and stalking.

Secondly, the users of digital technology must take responsibility for how their own biases feed the very inequalities they complain about. Our biases are reflected in our searches, the terms we use, and the clicks we make. These choices train algorithms to infer what we value, and the algorithms then perpetuate our biases in ways that work against our own interests in everyday use of the technology, as the sketch below illustrates.
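To make that feedback loop concrete, here is a minimal, purely illustrative Python sketch, not any real platform’s recommender: the item names, click rates, and score-update rule are all invented for the example. It shows a ranker that treats clicks as a training signal and, over many rounds, amplifies a mild user bias into a dominant recommendation pattern.

```python
# Toy sketch of a click-trained recommender feedback loop.
# All names and numbers below are hypothetical, for illustration only.
import random

random.seed(0)

ITEMS = ["balanced_story", "sensational_story"]
scores = {item: 1.0 for item in ITEMS}  # learned "relevance" per item


def user_clicks(item):
    """Simulated user with a mild bias: 60% click rate on sensational
    content vs. 40% on balanced content (invented figures)."""
    p = 0.6 if item == "sensational_story" else 0.4
    return random.random() < p


for _ in range(5000):
    # Recommend in proportion to learned scores: items that scored well
    # in the past get more exposure (a rich-get-richer dynamic).
    recommended = random.choices(ITEMS, weights=[scores[i] for i in ITEMS])[0]
    # Every click feeds back into the scores as a training signal.
    if user_clicks(recommended):
        scores[recommended] += 1.0

total = sum(scores.values())
for item in ITEMS:
    print(f"{item}: {scores[item] / total:.0%} of learned relevance")
# In a typical run, the mild 60/40 preference compounds and the
# sensational item ends up with most of the learned relevance.
```

The design choice doing the work here is proportional exposure: because recommendations are sampled in proportion to past scores, each biased click both rewards an item and increases its future visibility, so a small initial preference compounds rather than averages out.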

Thirdly, and in furtherance of the other two points, digital literacy must be a priority in the formal education system and form a part of lifelong learning. People must be able to enjoy the benefits of AI-driven technologies while detecting and mitigating harms such as algorithmic bias and manipulation. Digital literacy must also be seen as a pillar of inclusive AI governance. Otherwise, the involvement of uninformed and vulnerable people at the governance table would be mere tokenism, of no greater value than if they were not involved in governance at all.

Pia-Milan Green is a research fellow for Carnegie Council’s Artificial Intelligence and Equality Initiative.
