“The study of thinking machines teaches us more about the brain than we can learn by introspective methods. Western man is externalizing himself in the form of gadgets.” ― William S. Burroughs, Naked Lunch
We are experiencing one of the biggest refugee crises since World War II. Within weeks of the beginning of the Russian invasion of Ukraine, more than 4 million people fled the country, according to the UN Refugee Agency (UNHCR). Although international media attention is focused on Eastern Europe, this comes on top of an already-desperate situation in the Global South: Countries in that region are estimated to have absorbed two-thirds of the world's roughly 82.4 million refugees.
The scenario in Eastern Europe and the Global South is one of a massive number of people who are crossing borders in a hurry, searching for shelter and protection, and needing basic conditions for a dignified stay in the country of destination. This is an enormously challenging situation, but new technology has helped both those who arrive and those who receive them. Technology can help host countries manage large amounts of information to make vital decisions for their own security and for the security of those crossing their borders. It has also helped migrants in many ways, such as providing real-time geolocation, free stays arranged through online real estate rental companies, and cryptocurrency donations, as well as helping them counter Internet blackouts and fake news.
Artificial intelligence (AI) technologies can potentially mitigate situations such as the one we have observed in Ukraine in early 2022 and for years in the Global South. Using deontological and utilitarian perspectives to think through the pros and cons of using AI in the migratory context is a helpful exercise as this situation continues to evolve.
AI, Ethics, & Migration
AI has been increasingly utilized in the migratory cycle in recent years. Since 2020, the COVID-19 pandemic has accelerated the adoption of AI in migration management, as countries monitored their borders to, ostensibly, help stem the spread of the virus. AI technologies have also been used for identity verification, deliberation on visa applications and other administrative decisions, border management, and algorithmic assessment of a migrant's likelihood of causing problems. Still, its indiscriminate use creates challenges for both public policy managers and migrants, including possible adverse effects on human rights, such as discrimination and violations of privacy.
The history of migration crises reveals that the reality for a large portion of migrants is one of vulnerability, as is the case for asylum seekers and refugees; displaced people; children, especially those who are separated or unaccompanied; women; minorities persecuted on the grounds of race and ethnicity; and LGBTQI+ people. Their vulnerabilities can be mitigated or aggravated depending on how AI is used during migration.
Finally, we consider that migratory decisions exist on a spectrum between two extremes. On one side are the human rights of migrants; on the other is the national security of the receiving state. We recognize that states have the right to decide how best to manage migratory flows within their territories. However, we also emphasize that this power is not without constraints: It is limited by international human rights norms voluntarily adopted by the Member States party to the international human mobility regime.
Examples of Using AI in Migration
The use of AI in migration is an increasingly frequent and all-encompassing reality. In the United States, the Extreme Vetting Initiative was a project that monitored the social media activities of visa applicants and visa holders to assess whether they would contribute positively to society. The U.S. discontinued the machine learning aspect of this project after criticism that the program could produce unreliable information and violate the right to free speech online. In New Zealand, algorithms have been used in the migratory context to flag potentially unwanted people based on age, gender, and ethnicity. In the European Union, the AI-based lie detector iBorderCtrl has already been used to screen passengers at airports. Greece is testing the Centaur system for monitoring refugee camps, which uses algorithms to predict and report potential security threats. In Malaysia, drones were used to verify compliance with entry restriction measures. Monitoring technologies used at the U.S.-Mexico border have been associated with an increase in migrant deaths, as migrants try to evade surveillance and end up traveling more dangerous routes. In 2020, the European Union's Horizon 2020 research and innovation program funded the Roborder project. The project is not yet operational, but it aims to develop a fully autonomous border monitoring system capable of identifying criminal activities at the border.
Deontology vs. Utilitarianism
To better reflect on migration and AI, we return to the deontological and utilitarian philosophical perspectives on the use of artificial intelligence.
Utilitarians argue that if AI can produce better results than humans, it should be used to maximize gains. For example, an automated system deciding on migratory status could generate greater security for states. According to this perspective, technology enables fast, unbiased, accurate, and data-backed decision-making, and would therefore reduce the chances of human biases and preconceptions entering the decision-making process. Even if a minority were harmed by decisions that lack adequate evaluative judgment, those decisions would still be justified in a broader context because they benefit a greater number of people. From the states' point of view, quick decisions based on large amounts of data would translate into a better defense of national security interests. The use of AI would thus make the process fairer, safer, and more objective for a greater number of people. In short, the arguments in favor of using AI for migration management are predominantly utilitarian.
Deontologists hold that even if AI can deliver better results than humans, it is morally inadmissible to employ it to perform inherently human tasks. For example, using an algorithm to decide who can enter and remain in a country would be wrong regardless of its consequences. At the root of this argument is the notion that human dignity results from the interaction between two individuals who recognize each other as free and as bearers of rights. Thus, AI should not be used, at least in decisions that require value judgments, because it lacks the ability to understand the entire personal and migratory context and to measure the impact of such a decision on the migrant's life project.
Even if well-intentioned, the use of AI can become an instrument of surveillance and oppression. Its use may lead to discriminatory practices and violate the rights to privacy, mobility, and association. Because these systems often replicate biases in their underlying databases that reflect unfair social practices such as racism and xenophobia, these and other forms of discrimination are likely to be reproduced by AI systems. People could thus be categorized, in a Lombrosian fashion, as having a delinquent personality.
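To make this mechanism concrete, consider a minimal, purely illustrative sketch in Python. The groups, records, and "risk" logic below are entirely hypothetical and synthetic, not drawn from any real system; the point is only that a model trained on historically discriminatory decisions reproduces that discrimination in its predictions.

import random

random.seed(0)

def make_record():
    # Hypothetical groups and features; "has_sponsor" stands in for a legitimate criterion.
    nationality = random.choice(["A", "B"])
    has_sponsor = random.random() < 0.5
    # Biased historical labels: group "B" was refused far more often,
    # independently of any legitimate criterion such as having a sponsor.
    refused = (random.random() < 0.7) if nationality == "B" else (random.random() < 0.2)
    return {"nationality": nationality, "has_sponsor": has_sponsor, "refused": refused}

history = [make_record() for _ in range(10_000)]

def refusal_rate(group):
    # A "model" that simply learns the historical refusal rate per group.
    # Real systems are more complex, but they inherit the same label bias.
    rows = [r for r in history if r["nationality"] == group]
    return sum(r["refused"] for r in rows) / len(rows)

for group in ("A", "B"):
    print(f"Predicted refusal risk for group {group}: {refusal_rate(group):.2f}")
    # Group B is flagged as "high risk" roughly 3.5 times more often,
    # purely because past decisions were discriminatory.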
Furthermore, the teams that develop algorithms often lack diversity of gender, social class, and ethnic origin, which increases the chances of veiled discrimination. Privacy and the right of association are violated when a migrant's social networks, friends, and communities, among other data, are accessed and weighed in the decision. In summary, AI is not able to understand the context and nuances of different migratory realities and thus should not have the final word on this topic. In short, the deontological perspective gravitates toward the human rights of migrants.
Final Considerations
When thinking about migration issues, we have, on the one hand, national security considerations and, on the other, the human rights of migrants. So far, the only argument in favor of using AI in these situations that appears to benefit migrants is the possibility of speeding up migration decisions. Even if AI can maximize well-being by offering speed and national security, its use by governments in the migratory context must find its limits in the principle of human dignity.
Governments, academia, and civil society must discuss the use of AI in the migratory context to avoid excesses and ensure its efficient use. We advocate that a clear notion of human rights be incorporated into the development of algorithms, as well as into their application and evaluation. It is essential that migratory algorithms be developed by teams with a diversity of gender, class, ethnicity, and national origin. Mechanisms for reviewing the algorithms before and during their use, as well as accountability procedures, must be centerpieces in the use of this technology; one simple example of such a review check is sketched below.
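As one concrete, hedged illustration of what a review mechanism could look like, the following Python sketch computes a disparate impact ratio (the "80% rule" commonly used in fairness auditing) over a sample of automated decisions. The group names, decision records, and threshold are hypothetical placeholders under our assumptions, not a prescription for any real system.

from collections import defaultdict

def disparate_impact(decisions, protected_key="group", outcome_key="approved"):
    """Return the ratio of the lowest to the highest approval rate across groups,
    along with the per-group approval rates."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d[protected_key]] += 1
        approvals[d[protected_key]] += int(d[outcome_key])
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical audit sample of automated decisions.
sample = [
    {"group": "A", "approved": True}, {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

ratio, rates = disparate_impact(sample)
print(rates)                                    # approval rate per group
print(f"Disparate impact ratio: {ratio:.2f}")   # values below ~0.8 would warrant human review

A check of this kind is only one piece of the review and accountability apparatus we call for; it flags statistical disparities for human scrutiny rather than certifying a system as fair.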