
May 24, 2023 Article

Silicon Valley is knowingly violating ethical AI principles. Society can’t respond if we let disagreements poison the debate.

This article originally appeared on Fortune.com.

With criticism of ChatGPT much in the news, we are also increasingly hearing about disagreements among thinkers who are critical of AI. While debate about such an important issue is natural and expected, we can’t allow differences to paralyze our very ability to make progress on AI ethics at this pivotal time. Today, I fear that those who should be natural allies across the tech/business, policy, and academic communities are instead increasingly at each other’s throats. When the field of AI ethics appears divided, it becomes easier for vested interests to brush aside ethical considerations altogether.

Such disagreements need to be understood in the context of how we reached the current moment of excitement around the rapid advances in large language models and other forms of generative AI.

OpenAI, the company behind ChatGPT, was initially set up as a non-profit amid much fanfare about a mission to solve the AI safety problem. However, as it became clear that OpenAI's work on large language models was lucrative, OpenAI pivoted to a capped-profit structure. It deployed ChatGPT and partnered with Microsoft, which has consistently sought to depict itself as the tech corporation most concerned about ethics.

Both companies knew that ChatGPT violates, for example, the globally endorsed UNESCO AI ethical principles. OpenAI even refused to publicly release a previous version of GPT, citing worry about much the same kinds of potential for misuse we are now witnessing. But for OpenAI and Microsoft, the temptation to win the corporate race trumped ethical considerations. This has nurtured a degree of cynicism about relying on corporate self-governance or even governments to put in place necessary safeguards.

We should not be too cynical about the leadership of these two companies, who are trapped between their fiduciary responsibility to shareholders and a genuine desire to do the right thing. They remain people of good intent, as are all raising concerns about the trajectory of AI.

This tension is perhaps best exemplified in a recent tweet by U.S. Senator Chris Murphy (D-CT) and the response by the AI community. In discussing ChatGPT, Murphy tweeted: “Something is coming. We aren’t ready.” And that’s when the AI researchers and ethicists piled on. They proceeded to criticize the Senator for not understanding the technology and for indulging futuristic hype and focusing attention on the wrong issues. Murphy hit back at one critic: “I think the effect of her comments is very clear, to try to stop people like me from engaging in conversation, because she’s smarter and people like her are smarter than the rest of us.”

I am saddened by disputes such as these. The concerns that Murphy raised are valid, and we need political leaders engaged in developing legal safeguards. His critic, however, is not wrong in questioning whether we are focusing attention on the right issues.

To help us understand the different priorities of the various critics and, hopefully, move beyond these potentially damaging divisions, I want to propose a taxonomy for the plethora of ethical concerns raised about the development of AI. I see three main baskets:

The first basket has to do with social justice, fairness, and human rights. For example, it is now well understood that algorithms can exacerbate racial, gender, and other forms of bias when they are trained on data that embodies those biases.

The second basket is existential: Some in the AI development community are concerned that they are creating a technology that might threaten human existence. A 2022 poll of AI experts found that half expect AI to grow exponentially smarter than humans by 2059, and recent advances have prompted some to bring their estimates forward.

The third basket relates to concerns about placing AI models in decision-making roles. Two technologies have provided focal points for this discussion: self-driving vehicles and lethal autonomous weapons systems. However, similar concerns arise as AI software modules become increasingly embedded in control systems in every facet of human life.

Cutting across all these baskets is misuse of AI, such as spreading disinformation for political and economic gain, and the two-century-old concern about technological unemployment. While the history of economic progress has primarily involved machines replacing physical labor, AI applications can replace intellectual labor.

I am sympathetic to all these concerns, though I have tended to be a friendly skeptic towards the more futuristic worries in the second basket. As with the above example of Senator Murphy’s tweet, disagreements among AI critics are often rooted in fear that existential arguments will distract from addressing pressing issues about social justice and control.

Moving forward, individuals will need to judge for themselves who they believe to be genuinely invested in addressing the ethical concerns of AI. However, we cannot allow healthy skepticism and debate to devolve into a witch-hunt among would-be allies and partners.

Those within the AI community need to remember that what brings us together is more important than differences in emphasis that set us apart.

This moment is far too important.

Wendell Wallach is a Carnegie-Uehiro Fellow at Carnegie Council for Ethics in International Affairs, where he co-directs the Artificial Intelligence & Equality Initiative (AIEI). He is emeritus chair of the Technology and Ethics Study Group at the Yale University Interdisciplinary Center for Bioethics.

Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the author and do not necessarily reflect the position of Carnegie Council.
