As part of the 17th annual international Computers, Privacy and Data Protection (CPDP) conference held in Brussels, Belgium, the Ditchley Foundation and its director James Arroyo convened a workshop on the risks and opportunities that artificial intelligence (AI) and space-based data present to individual privacy. Carnegie Council Visiting Fellow Zhanna Malekos Smith was selected to present her research on space-based data and ethics. The session highlighted the ways in which AI is advancing humanity’s ability to collect, monitor, and analyze data from outer space for multiple purposes. This article summarizes the key takeaways on the potential uses, as well as misuses, of AI-enabled earth observation data, also known as space-based data.
Space-Based Data
Space-based data refers to information obtained from instruments like satellites, telescopes, and space probes. Generally, this category includes remote sensing data, space weather data, satellite imagery, and navigation and communications data. According to Space Data Ethics, a 2023 white paper released by the National Space Council’s Users’ Advisory Group, this data is focused on understanding phenomena at the macro level. Examples include observations of forest fires, animal species migration patterns, urban traffic conditions, water levels—even the movement of military troops. Space-based data can also include information collected about an individual’s movements, which raises privacy concerns.
Privacy and Surveillance Issues
On the one hand, AI-enabled collection and processing of space-based data could enhance the ability of organizations to coordinate and deliver humanitarian relief to refugee populations. On the other hand, the white paper warns that “mislabeling Earth observations can similarly create unwarranted bias to harmful effects, e.g., mislabeling a populated area as a ‘slum’ or ‘refugee camp’ could cause it to be unduly stigmatized and even targeted for eradication.” With the aid of machine learning and predictive analytics to study refugee populations, there are concerns that authoritarian regimes could weaponize this information to harm those fleeing from persecution. According to a Chatham House report on refugee protection in the age of AI, “asylum and refugee protection will form one of the test cases for global and national governance of AI, and for whether human rights-compliant AI can be achieved.”
Apart from these concerns, the National Space Council white paper explains that satellite surveillance can be leveraged to monitor a suspected terrorist and intercept their space-enabled communications, or, more innocently, to help farmers manage tracts of land with precision agriculture. The paper cautions, however, that it is generally more cost-efficient and technologically viable to use terrestrial technologies, rather than space-based ones, to monitor individuals.
To promote fairness and autonomy, the same white paper also champions the idea of the United States advancing a new set of ethics to safeguard individual privacy rights and reduce the risk of discrimination. Specifically, the authors recommend: “To handle and share space data responsibly, work is urgently needed to anticipate the possible harms—which may be different from ordinary data-ethics failures—and develop a new ethics framework specifically for space data.”
Data Ethics in Space
During the international CPDP workshop, I polled the audience, asking whether they agreed with the proposition that a wholly new ethical framework should be developed to address space-based data risks. No one agreed that space data merited a new ethical framework. Rather, the attendees questioned the practical value of a “new model” and instead encouraged using existing frameworks to ensure human dignity, oversight, and fairness in the application of this tool.
Rather than reinvent the wheel, one example of a pre-existing ethics framework used in data science is the Belmont Report. Following the public exposure in 1972 of the U.S. Public Health Service’s Tuskegee syphilis study, in which researchers withheld treatment from African American men suffering from syphilis, Congress formed the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The commission, which drafted its report at the Belmont Conference Center in Elkridge, Maryland, promulgated three ethical principles for human subjects research: (1) respect for persons, (2) justice, and (3) beneficence, with informed consent as a key application. Later, in 2012, a group of information technology researchers produced the Menlo Report, named for working-group meetings held in Menlo Park, California. This report built upon the foundation of the Belmont Report’s three principles but added a fourth, respect for law and public interest, along with guidance on mitigating harms, considering stakeholder perspectives, and accountability.
Before the Users’ Advisory Group subcommittee on climate and societal benefits advocates for a “new” set of data ethics principles at the next National Space Council meeting, it could be more helpful to examine the Belmont and Menlo models to see what is lacking. How could these models be applied in the space domain? With the upcoming 75th International Astronautical Congress in Milan, Italy, U.S. space leaders could convene a side workshop at the event to advance a discussion on ethical data science principles with a greater cross section of the international space community. In turn, this could shine a light on how to safeguard refugees from persecution under this AI-augmented capability.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this article are those of the authors and do not necessarily reflect the position of Carnegie Council.