In this "Artificial Intelligence & Equality" podcast, Senior Fellow Anja Kaspersen talks with Dr. Ricardo Chavarriaga about the promise and peril of brain-machine interfaces and cognitive neural prosthetics. What are the ethical considerations and governance challenges in using computational tools to create models or enhance our brains?
ANJA KASPERSEN: Today's podcast will focus on artificial intelligence (AI), neuroscience, and neurotechnologies.
My guest today is Ricardo Chavarriaga. Ricardo is an electrical engineer and a doctor of computational neuroscience. He is currently the head of the Swiss office of the Confederation of Laboratories for AI Research in Europe (CLAIRE) and a senior researcher at Zurich University of Applied Sciences.
Ricardo, it is an honor and a delight to share the virtual stage with you today.
RICARDO CHAVARRIAGA: Thanks a lot, Anja, for having me. I am really happy and looking forward to a nice discussion today.
ANJA KASPERSEN: Neuroscience is a vast and fast-developing field. Maybe you could start by providing our listeners with some background.
RICARDO CHAVARRIAGA: Certainly. The brain is something that has fascinated humanity for a long time. The question of how this organ inside our heads can rule our behavior and can store and develop knowledge has indeed been one of the enduring questions of science for many, many years. Neurotechnologies, computational neuroscience, and brain-machine interfaces are tools that we have developed to approach the understanding of this fabulous organ.
When we talk about computational neuroscience, it is the use of computational tools to create models of the brain. These can be mathematical models, or algorithms that try to reproduce our observations about the brain. It can involve experiments on humans and on animals: these experiments can be behavioral, they can involve measurements of brain activity, and by looking at how the brains of organisms react and how their activity changes, we then try to apply our knowledge to create models of that.
These models can have different flavors. We can for instance have very detailed models of the electrochemical processes inside a neuron, where we are looking at just a small part of the brain. We can have large-scale models, with fewer details, of how different brain structures interact among themselves, or even less-detailed models that try to reproduce behavior that we observe in animals and in humans as a result of certain mental disorders. We can even test these models using probes to tap into how our brain can construct representations of the world based on visual, tactile, and auditory information. So in computational neuroscience we are combining the knowledge that we get from cognitive and clinical neuroscience with mathematical tools and knowledge from statistics and computer science as a way to better understand the brain.
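As one concrete illustration of the most fine-grained end of that spectrum, here is a minimal "leaky integrate-and-fire" neuron, a classic simplified single-neuron model, in Python. All parameter values below are generic textbook-style placeholders, not fitted to any real cell, and this is a sketch rather than a research-grade simulation.

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential decays
# toward rest, integrates an input current, and emits a spike on threshold.
dt, T = 0.1e-3, 0.3                  # time step (s), total duration (s)
tau = 20e-3                          # membrane time constant (s)
v_rest, v_thresh, v_reset = -70e-3, -54e-3, -70e-3   # potentials (V)
R, I = 10e6, 1.8e-9                  # membrane resistance (ohm), input (A)

v, spikes = v_rest, []
for step in range(int(T / dt)):
    # leak toward rest plus integration of the injected current
    v += dt / tau * (v_rest - v + R * I)
    if v >= v_thresh:                # threshold crossing: record a spike
        spikes.append(step * dt)
        v = v_reset                  # reset after the spike
print(f"{len(spikes)} spikes in {T * 1000:.0f} ms")
```

Even this toy model captures the point made above: the level of detail chosen (here, a single equation per neuron) determines which questions the model can answer.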
As I said, some of these studies are based on measuring the brain or stimulating the brain or somehow manipulating the brain. It can be in vitro or it can be in vivo. For that we use technology, and this is what we broadly call neurotechnologies, technologies that can interact with the brain, sensing, recording, and stimulating.
Among these neurotechnologies we have brain-machine interfaces. These have a set of sensors, which can be implanted within the brain or placed on the scalp, and which provide us information about brain activity.
This activity is then processed using artificial intelligence and machine-learning methods to try to extract information from it. This information can be, for instance, a particular state of the person: is the person sleepy, is their attention low; or an intention: does the person want to move a hand or a foot? Once we have extracted this information, it can be used as a command to control a device.
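To make that pipeline concrete, here is a minimal sketch in Python of the sense-decode-act loop just described, using synthetic data and a simple scikit-learn classifier. Everything in it (the channel count, the crude band-power feature, the two decoded states) is an illustrative assumption rather than a real device interface; real systems add filtering, artifact rejection, spatial filtering, and extensive validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative sense -> decode -> act loop of a brain-machine interface.
# All names and numbers are hypothetical; this is not a real device API.

N_CHANNELS, N_SAMPLES = 8, 256        # e.g., 8 electrodes, 1 s at 256 Hz
rng = np.random.default_rng(42)

def band_power(window):
    """Crude per-channel spectral power from a (channels x samples) window."""
    return (np.abs(np.fft.rfft(window, axis=1)) ** 2).mean(axis=1)

# Calibration data: stand-in for recorded trials of two mental states
# (e.g., attentive vs. drowsy, or intended left- vs. right-hand movement).
trials = [rng.standard_normal((N_CHANNELS, N_SAMPLES)) + 0.5 * label
          for label in (0, 1) for _ in range(100)]
X = np.array([band_power(t) for t in trials])
y = np.repeat([0, 1], 100)
decoder = LogisticRegression().fit(X, y)     # the machine-learning step

# Online step: a fresh window of brain activity becomes a device command.
window = rng.standard_normal((N_CHANNELS, N_SAMPLES)) + 0.5
state = decoder.predict(band_power(window).reshape(1, -1))[0]
print("decoded state:", state, "-> command:", "B" if state else "A")
```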
Let's imagine the case of a person who is heavily paralyzed. There is a particular condition called locked-in syndrome, where people lose control of their muscles but retain their cognitive capabilities. These people can modulate their brain activity, they can hear, and they can formulate what they want to say, but they cannot speak because they cannot move their muscles.
If we can take information about these intentions using these neurotechnologies, we can then connect it to a communication device and provide them the capacity to communicate with others. There have been several studies along these lines. This is one example in the realm of assistive devices. We can have brain-computer interfaces used as prosthetic systems, a robotic arm, a robotic wheelchair.
There are also applications in the consumer-oriented domain—systems that can be used to control a video game, to measure the level of attention, or support for well-being applications, helping people to relax or to increase or decrease their levels of attention.
Last but not least, there is also interest in military applications. Some of these include trying to identify people who are more likely to develop post-traumatic stress disorder, technologies to diagnose brain injury on the battlefield, or cognitive enhancement to help soldiers attain or maintain higher levels of attention.
These are some of the current lines of research and potential applications for these brain-machine interfaces.
I have to say that most of these systems are currently at the research stage: most of them are still being developed and tested in research laboratories. Only a handful are now at or close to the commercialization stage, and in general much work is still needed to properly assess their safety, their efficacy, and their performance in real-life applications.
We are at the moment when the field is moving from this research stage, this early age of neurotechnologies, toward the technology-translation phase, trying to realize the potential of these technologies and bring them to users. That makes it quite exciting to work in this domain today.
ANJA KASPERSEN: I would like to follow up on the distinction you just made between in vitro and in vivo experiments. Can you elaborate a bit more on what the difference is and why it matters, also from an ethical point of view?
RICARDO CHAVARRIAGA: When we talk about in vitro experiments we are basically referring to experiments in cultures of cells. These are neural cells, neurons, that are extracted from a brain and are kept in a solution so that they remain alive for some time, but they are detached from the body and from other brain structures. This allows us to characterize the properties of a neuron or how a small population of neurons can interconnect among themselves and how they react to certain interventions, but they don't provide us information about how this activity is linked to behavior or to other characteristics of an entire entity.
When we talk about in vivo we are talking about settings where we can measure the activity of the brain in a living entity. It can be an animal model or it can be a human. These measurements can be done with implanted electrodes. For instance, people who undergo brain surgery for epilepsy sometimes remain in the hospital with electrodes implanted to measure the activity of the brain and to identify the areas that are linked to, or are the source of, the epileptic seizures. This information is used to better understand the processes that are going on.
This activity can also be measured with sensors placed on the scalp. One of the most common technologies is electroencephalography (EEG), which has been around for more than 100 years and is commonly used in clinical settings. It allows us to measure certain brain activity as well. These are the in vivo experiments.
When we talk about brain-machine interaction and the possibility of interacting with prosthetic or communication devices, we are talking about the second type of setting, where the entire living person is equipped with sensors.
ANJA KASPERSEN: Thank you, Ricardo. Listening to you talk and share your passion for this field brings me to my next question for you, and also to allow our listeners to get to know you a bit better: What, and if relevant who, sparked your interest in the brain and the vast field of computational neuroscience?
RICARDO CHAVARRIAGA: I think it is part of a long story of fascination with both biology and machines.
I have been strongly inspired by my parents. Although we were not a wealthy family, they really valued education and learning throughout my entire life. My father and mother were both the first generation in their families to go to university, and at the time, among the people I knew, we were one of only a handful of households where both parents worked. That somehow embedded in me this love for improving, for looking for new opportunities to learn about the environment and to apply that learning.
I also grew up in Colombia in the 1990s. Colombia is a country with a long history marked by violence. In the 1990s it was an environment where civil society was basically trapped between fights among the drug cartels, the right-wing paramilitaries, and the left-wing guerillas. This made me conscious of the impact of the power imbalance among these different entities and of the inequality, and at the same time of how privileged I was to have access to education and to have been largely spared from most of this violence, even though no one in that environment can say they were completely spared. It also made me think about the responsibility that we have to contribute our knowledge to make our society a little bit better.
It was also a booming era, with personal computers coming in, robotics being developed, and more exchange of knowledge and communication between people thanks to a recent innovation at the time called the Internet. It was just amazing how we could use these technologies to bring people together and to create machines that could support humans in their goals and dreams, and that led me to study engineering. I got hooked on the idea of putting biology and machines together and creating bio-inspired machines. That was my first stage. I was really interested in doing research.
At some point I wanted to look for opportunities to expand my knowledge about biology, about science, and about the world, so I looked for opportunities and got the chance to come to Switzerland with a scholarship. I came thinking about artificial neural networks. It was right before the "AI winter" started. This seemed like an extremely interesting idea for making machines that could adapt and could learn.
When I came here and started to look at these artificial neural networks, I saw how I could also study biological neural networks. That sounded even better: the possibility of taking all the ideas about machines and then seeing to what extent they could apply to modeling the brain.
That is how, 50 years after my grandfather left his hometown fleeing political violence, I left mine—not due to violence, and I am really grateful for that—to pursue this opportunity to combine my fascination with the brain, my passion for technology, and my aspiration to contribute to society through technology.
ANJA KASPERSEN: Thank you for sharing your story, Ricardo.
You and I met a few years ago in the context of your work on the governance of neuroscience and neurotechnologies, more specifically to address the lack of specific standards for developing and deploying neurotechnologies for brain-machine interfaces. Can you tell us a bit more about this work, as I think it is a helpful way for our listeners to grapple with some of the complexities entailed in providing sufficient and timely oversight and governance to the broader field of neuroscience?
RICARDO CHAVARRIAGA: I have been doing research in brain-computer interfacing for more than 15 years. In the early stages of my career I was mostly focused on the technological aspects of these systems, on building better algorithms, on maybe making better protocols for that. Over time I was seeing some advances in this area, but a very slow rollout of things that could be translated into realistic applications.
Of course, part of it lies in the science itself. The brain is an extremely complex system, and there are of course bureaucratic hurdles and studies required along the way. But I wanted to know a little bit more about the roadblocks that neurotechnologies were facing in this translation.
This led me to look in more detail at the level of standardization. This is work that I was able to do through the Institute of Electrical and Electronics Engineers (IEEE) Standards Association, where we started a gap analysis of standards for these technologies. This association develops standards through the participation not only of engineers but of people involved in technology development in academia and industry, and also through input from regulatory bodies.
When we started studying what the standards for brain-machine interfaces were, the first realization was that indeed a brain-machine interface is a combination of multiple technologies. We have sensors, we have data-processing units, we have robotic devices, we have other elements, and some of these elements are in themselves emerging technologies as well. If we want to look at standards for neurotechnologies and brain-machine interfaces, we need to look at standards for AI, for sensors, for robotics, and others.
What we saw was that there were established standards for some of the sensing technologies, like EEG and fMRI, but fewer for novel approaches involving nanomaterials and optogenetics. We have some standards for the use of AI algorithms in certain applications, though not many, and there are safety standards for robotic devices.
But if we want a system that measures brain activity, processes it, and executes actions, for instance through a robotic device, those components need to be interconnected. Strikingly, there was a lack of standards for this interoperability: for how to put these technologies together and ensure that they achieve their goal, that they are fit for purpose, and that they are efficient.
Not only were there no available standards for this interoperability for the specific use of brain-computer interfaces, but there was also a lack of standards to evaluate the performance of these systems. We have many metrics from a technical point of view to measure an algorithm and to measure a robotic device, but we don't have an integrated or an agreed framework to say if a brain-machine interface—for assistive communication, for instance—is actually fulfilling its purpose.
We can measure how many characters someone can write, for instance, but the link of this number to the actual purpose of the interface is not well established. Is that rate enough? How does it relate to the condition of the person? How does it relate to the human factors? There is still a lot of work to do.
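For context, one widely used technical metric in the brain-computer interface literature is the Wolpaw information transfer rate, which converts the number of possible selections, the decoding accuracy, and the selection speed into bits per minute. The short sketch below computes it; as this exchange stresses, such a number measures the communication channel, not whether the device fulfills its purpose for the user. The example figures are invented.

```python
import math

def wolpaw_itr(n_classes, accuracy, selections_per_min):
    """Wolpaw information transfer rate in bits per minute.

    Measures the decoding channel only; it says nothing about whether
    a communication aid actually serves its user in daily life."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Hypothetical example: a 4-choice speller, 80 % accuracy, 10 picks/minute.
print(f"{wolpaw_itr(4, 0.80, 10):.1f} bits/min")   # about 9.6 bits/min
```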
After this gap analysis, we identified interoperability and benchmarking as clear needs for developing brain-computer interfaces that are useful and impactful. We have started awareness projects and, in certain cases, standardization initiatives to address them.
But there is still a long way to go before we can say that we have covered all the potential applications and all the potential combinations of technologies that can be used.
ANJA KASPERSEN: If I understand correctly, whatever the application, if allowed to develop in isolation and without global standards put in place, we could potentially be faced with situations where neuromodulating methods and technologies as well as the components used to build these systems are simply not interoperable.
RICARDO CHAVARRIAGA: Within a single interface we are already facing issues for interoperability and for ensuring that this interoperability is safe, is efficacious, and is secure.
In particular, if we think about scenarios where some of these technologies come from a well-regulated domain—let's take implanted electrodes—they then will interact with technologies that have been developed for a consumer market—for instance, virtual reality headsets or home appliances that are connected through the Internet of Things.
We can see that as a single system, but its parts will have different requirements and different development cycles. How can we make this coherent in order to ensure that there are no threats to the mental privacy of the user, that is, access to information from the brain that the person does not want to reveal? How can we ensure that it is secure from cyberattacks, for instance?
ANJA KASPERSEN: So for a neural implant intended to help someone who is cognitively impaired, or who is seeking cognitive enhancement, and that relies on these other types of technologies you mentioned, you need to make sure that the two interact properly, both with regard to privacy and with regard to technical specifications and parameters?
RICARDO CHAVARRIAGA: Exactly. We are entering a more interconnected world where we can control the lamps in our home using our phone, for instance. We could envision in the future doing so through brain-computer interfaces, so how can we ensure that the system that has access to the command to turn on the light cannot access other information that comes from the brain? Here we are talking about hypothetical scenarios that are not a reality today, but I think it is important that we take this anticipatory stance when developing the governance mechanisms and standards for the new generation of these systems.
ANJA KASPERSEN: Let me shift to another issue, which is important to any discussion of the governance of technology: trust.
I have heard you say on a few occasions that you worry less about having technologies that can help us read minds or even accurately predict behavior, and worry more about the attribution of trust in machines and the very fact that we may actually believe that we can even read minds. Can you explain what you mean by this?
RICARDO CHAVARRIAGA: The possibility of getting access to information about the brain and using it for interaction can of course enable many applications.
We have talked about cognitive enhancement and cognitive training. However, these systems actually work in a closed loop with the person, with the brain, so the overall performance and effect of these systems depend on how the two entities, the artificial system and the brain, are coupled.
If we only test some parts of the system, like the algorithm in given conditions, we have only part of the story about their efficacy. We need to see what the effects are in real life and how they depend on the characteristics of the person or on the ways the system is used, and that is what we typically have in clinical trials for the development of medical devices, for instance.
When we have systems that are developed outside this sector, if we think about consumer-oriented devices, there is less information about what the actual efficacy of these systems is in real life.
For most of the systems we see today, the evidence comes from scientific studies in limited circumstances showing that we can use electrical stimulation to improve certain cognitive capabilities, normally in the short term. This has led to potential applications in cognitive enhancement and to the idea of using them in commercial applications.
Similarly, these systems can identify levels of attention. Again, this has led to products that are commercialized as support for mindfulness, for better attention, or even for gaming. But we still lack information about the net effect of these devices in real life, in other words, whether the claims these systems make about their benefits for users are real or not. If we don't have proper oversight mechanisms to evaluate these effects (and we have seen that they are difficult to set up because we lack standards for measuring this performance), then how can we ensure that users are getting a product that actually fulfills its purpose? That is a risk.
This attribution of trust, people believing that the technologies can already obtain that information, will open these markets and make it possible for some companies to exploit this belief and sell systems that are not fit for purpose, and more generally to divert our attention toward systems that are not really working. That can generate a future backlash in public trust in neurotechnologies, because we are not delivering, and hinder the possibility of developing really impactful applications.
ANJA KASPERSEN: I'm glad you brought up the ethical dimension here because in some of your research you look into how to improve human-computer interaction or brain-machine interfacing—I guess these words are being used somewhat interchangeably—and you argue that a key component is the system's ability to recognize errors in the human brain, which if I understand correctly is not an easy feat, especially not for humans themselves, and thus also prone to potentially erroneous applications. This brings us more into the ethical domains of research.
RICARDO CHAVARRIAGA: This is a particular line of research that grew out of several things I have done.
One of the current ways to develop a brain-machine interface is to train the models that will decode the brain activity on examples. If I want to decode whether the person wants to move their right hand, I will ask the person to try to move the right hand 100 times, 200 times, 300 times, and then do the same for the left hand, and then I use that data to train a machine-learning model.
But this has some limitations. First, of course, it requires the person to go through that lengthy calibration process. And we know that the signals generated are influenced by many factors: whether the person is tired, whether they are motivated, or whether they have practiced or exerted themselves too much; the signals may change accordingly.
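A minimal sketch of why that is limiting: below, a decoder is calibrated on one synthetic "session" and then tested on a later session whose signal statistics have drifted, a stand-in for fatigue or changed motivation. The drift model and all numbers are invented for illustration.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

def make_session(n_trials, drift=0.0):
    """Synthetic two-class feature data; `drift` mimics day-to-day or
    fatigue-related changes in the recorded signals (invented model)."""
    X = np.vstack([rng.normal(0.0 + drift, 1.0, (n_trials, 8)),
                   rng.normal(1.0 + 2 * drift, 1.0, (n_trials, 8))])
    y = np.repeat([0, 1], n_trials)
    return X, y

X_cal, y_cal = make_session(150)               # lengthy calibration session
decoder = LinearDiscriminantAnalysis().fit(X_cal, y_cal)

# Accuracy holds on same-day-like data but degrades once signals drift.
print("fresh same-day data:  ", decoder.score(*make_session(150)))
print("drifted later session:", decoder.score(*make_session(150, drift=0.6)))
```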
I think this way of building the models has in many cases reached its limits, and we need to move toward other paradigms where the interaction between the human and the machine is improved by actual mutual knowledge.
If I am using this device and I am interacting with a machine, we can collect information that helps us—in this case from the point of view of the machine—to adapt the machine to better suit the characteristics of the person.
One particular way of doing so is to look in the brain for responses that are generated whenever we make a mistake. I am talking about, for instance, typing on a keyboard and pressing the wrong button, so the wrong letter appears. Sometimes even before I see the letter appear I realize that I made a typing mistake, and we can identify a signal that is triggered in the brain when that happens.
We also found that a similar signal appears when someone is observing an agent, in this case a robot, doing the task. If a robot is pressing the buttons or showing the letters and it shows the wrong one, we see a very similar signal, and we can decode that signal using multiple sensors. The decoding is not perfect, but it gives us general information about when the person disagrees with the action the robot made.
If we can provide this information to the robot, it is as if we are telling the robot, "Don't do that again in this situation," and we can teach the robot how we want it to behave.
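As a toy illustration of that idea, the sketch below treats a simulated (and deliberately imperfect) error-signal detector as a negative reward for a simple action-value learner, so the agent drifts away from actions the "user" disagrees with. The detector is a hypothetical stand-in flag, not a real neural decoder.

```python
import random

random.seed(1)
ACTIONS = ["left", "right"]           # the robot's possible moves
value = {a: 0.0 for a in ACTIONS}     # learned preference per action
USER_PREFERS = "left"                 # hidden intent of the simulated user

def errp_detected(action):
    """Stand-in for an error-signal decoder: fires when the user
    disagrees with the action, with 15 % decoding noise (imperfect)."""
    disagrees = action != USER_PREFERS
    return disagrees if random.random() > 0.15 else not disagrees

for _ in range(300):
    # epsilon-greedy: mostly exploit the current preference, sometimes explore
    action = (random.choice(ACTIONS) if random.random() < 0.1
              else max(ACTIONS, key=value.get))
    reward = -1.0 if errp_detected(action) else 1.0   # the brain as critic
    value[action] += 0.1 * (reward - value[action])   # incremental update

print(value)   # the action the user agrees with ends up valued higher
```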
This can complement the models we built before from examples, so that they adjust over time, identifying when the interaction went wrong and when the person wants the system to behave differently. We have had the chance to test these types of paradigms in very simple scenarios (controlling a robot to pick up a cup from a table, for instance) but also in interactions with a semiautonomous car, where the car indicates which direction it thinks it should turn and the brain-machine interface tells the car whether the person agrees or disagrees with its suggestions.
They are still far from being applicable in real-life scenarios, but it shows that we could use this information from the brain not just as a simple replacement for a joystick—which in my opinion is a quite reductive way of using the brain—but really as a channel to create synergy between the behavior of the human and the behavior of the machine in order to achieve a certain goal.
ANJA KASPERSEN: Building on that, the issue of inclusivity is an important one in your work, making sure that the interaction with the intended user of brain-machine interfaces and prosthetics is built for purpose. However, fine-tuning the parameters is not always a straightforward endeavor, even for brain scientists. I was hoping you could elaborate on some of the challenges that you yourself have encountered in using these technologies.
RICARDO CHAVARRIAGA: Sometimes when you are doing research (and this was actually a learning experience for me) we are so focused on technical aspects, as I said before, trying to get a better algorithm or to reduce the power consumption of the electronic device, that we think that by doing so we will solve the main hurdles of getting information from the brain, and that other aspects, like usability, the actual interfacing, and the packaging, will just be the easy part of translating these technologies. When we do that we really neglect many, many aspects that will become barriers to adoption.
To give a personal example, I have dreadlocks, and even though I have been doing research in these technologies for more than 15 years as I said, I was unable to test most of these systems, in particular those that were using electroencephalography. Why? Because the standard sensors for this modality don't work well with dreadlocks, with this type of hairstyle.
At the time (and this was probably my own ignorance) I just put that aside as basically a funny story; it was not changing my life. But in hindsight, when we take that into account and say, "This is a system that is intended to support rehabilitation, or to be a helper for mindfulness or for well-being," by ignoring that fact we are basically preventing part of the population from getting access to these services.
If we enlarge the scope of the discussion and we see how hairstyle can be a factor for differential treatment for people of color, in particular for women, this starts having more implications than just being a funny story or the fact that I just need to find someone else to test my systems. That was a complete blind spot on my side early on in my career.
I think this is the crux of what we are trying to do here when we are discussing how the considerations about the ethical, legal, and social implications of technologies need to be integrated early on in the design.
The best way of doing so is by reaching out to potential users. I used the example of a hairstyle, but there are also examples with people with disabilities, where things like phone size or the type of stimuli that we present can become actual barriers to these technologies being used.
Integrating these stakeholders, having this communication, this dialogue, a continuous feedback channel, can help us get better science, but I think it can also help us accelerate technology development.
Right now there are many discussions, and people argue that these considerations, when they come too early, can hinder innovation. I don't think this is necessarily the case, because if we integrate these considerations early in the design, the step from research to product development becomes more straightforward.
Of course, this requires gradually increasing the level of detail of these considerations, and it requires better training and the involvement of different fields of knowledge, so that we don't impose extreme overheads early on, when we still don't know many things, but that doesn't mean we shouldn't start discussing it as early as possible.
As technology advances, we have this double-edged sword: it can provide ways for people to access services from which they were previously locked out, or it can create new hurdles. We see that in particular when we look at technologies aimed at the largest markets.
If you think about virtual reality and how rarely older adults are involved in the testing and development of these systems, it is something to consider; partly it is because they are not seen as the big market.
When we look at newer technologies, consider conditions like locked-in syndrome: these are the people who could probably get the most benefit from these technologies, but at the same time they are a rather small market, one that can be considered nonviable for technology development.
We need to consider how we can make the development of these technologies responsible and economically viable, and also how to create synergies between developments oriented to multiple populations, because that way we can probably have a larger reach than by trying to funnel all efforts into a single application or a single population.
ANJA KASPERSEN: It stands to reason that brain-computer interface research and development is, as you alluded to, a highly interdisciplinary endeavor.
RICARDO CHAVARRIAGA: Indeed, this is in essence an interdisciplinary field because we need knowledge about the brain, we need knowledge about how to process the signal, how to create sensors, and how to interact. We need also to know how the use of these technologies can influence people not only from a medical perspective but also from a social and a psychological perspective.
We have seen, for instance, with deep brain stimulation—electrodes implanted in areas of the brain to reduce symptoms of Parkinson's disease—how a side effect can be anxiety. We need to study how the use of these technologies can influence the way people live because we can imagine someone who needs to choose between having the symptoms of Parkinson's or having anxiety if we cannot properly control these side effects.
If I am going to have my decisions and actions mediated by an artificial system, let's say a brain implant controlling a robotic arm, I will integrate this robotic arm into my mental model of my body, so I will plan my movements according to the dimensions of this robotic arm. If it changes, I will need to change my mental model. This can be learned, of course, but will it have an influence on how I perceive myself?
What if it is a prosthetic for cognitive capabilities? If I have better memory using a given device and for some reason it needs to be explanted and I no longer have access to it, can I consider myself the same person?
There is a recent case of a company developing retinal implants, systems that take images and translate them into electrical activity that is injected into the brain to produce an impression of sight, visual stimuli for people with visual impairments. This company is right now basically in economic turmoil, and the people who have these implants are facing the possibility that they will no longer be able to use them. How is this going to impact them?
We need to take all this knowledge into account in order to build appropriate and responsible brain-machine interfaces.
This is a great opportunity because we can interact with people who have knowledge that we basically cannot dream of acquiring from our own perspective. It is also extremely challenging because each domain speaks its own language, and each community has its own incentives and its own programs to develop its technology, and they are not always compatible. This is in itself a factor that is somehow changing neurotechnology communities.
When we want to expand the development and use of these technologies, we really need to go beyond that and find better ways to communicate across the borders of these silos, not to mention people in political science and law, who are also stakeholders in making these technologies available to society.
ANJA KASPERSEN: In your view, what are some of the overlaps you see between the discussions taking place in the AI research community, especially related to the ethical considerations, and those taking place in the brain science community?
RICARDO CHAVARRIAGA: I think most AI challenges from an ethical perspective translate directly to neurotechnologies. We have technologies that will be based on personal data, so the inherent and unavoidable questions about bias and privacy apply to these neurotechnologies as well.
When we look at the trajectory of the field of AI in recent years, it has basically been driven by the availability of big data. We are not seeing that yet in neurotechnologies, in particular for brain-computer interfaces, and the reason is that the data available in neurotechnologies are an order of magnitude smaller than what we have for images or for natural language processing, for instance.
There are certain areas where we are using deep learning and other machine-learning technologies for studies in neuroscience to create some models of the brain, but when we look at brain-computer interfaces and build models to decode activity for interaction, in most of these cases we are still using machine-learning technologies that precede the deep-learning approaches.
The curve is somehow different for these technologies. There are nonetheless efforts to leverage the advances in deep learning for brain-computer interfaces, in particular by combining information from multiple users and then tuning the model for a specific user afterwards, but this is still a work in progress.
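A minimal sketch of that approach: pretrain one decoder on pooled data from several users, then briefly fine-tune it on a small calibration set from a new user. The example below uses scikit-learn's incremental SGDClassifier as a simple stand-in for the deep-learning variants mentioned here; all data, offsets, and set sizes are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(7)

def user_data(offset, n=200):
    """Synthetic two-class features; `offset` mimics user-specific signals."""
    X = np.vstack([rng.normal(offset, 1.0, (n, 16)),
                   rng.normal(offset + 1.0, 1.0, (n, 16))])
    y = np.repeat([0, 1], n)
    idx = rng.permutation(2 * n)
    return X[idx], y[idx]

# Pretrain one decoder on pooled data from several "source" users.
clf = SGDClassifier(loss="log_loss", random_state=0)
for offset in (0.0, 0.3, -0.2):
    X, y = user_data(offset)
    clf.partial_fit(X, y, classes=[0, 1])

# New user: score before and after a brief personal calibration.
X_new, y_new = user_data(offset=0.8)
X_cal, y_cal, X_test, y_test = X_new[:40], y_new[:40], X_new[40:], y_new[40:]
print("pooled model on new user:", clf.score(X_test, y_test))
for _ in range(5):                    # a few passes over 40 calibration trials
    clf.partial_fit(X_cal, y_cal)
print("after brief fine-tuning: ", clf.score(X_test, y_test))
```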
In general, from a personal perspective, I am more interested in a combination of data-driven and theory-driven approaches to decoding the brain. I think the possibility of measuring the brain in real life gives us an unprecedented window into how the brain actually works.
Most of the information that we have about brain activity comes from very constrained experiments. The advent of new technologies for measuring the brain in unconstrained environments, for making long-term recordings, and for linking that information to actions and effects on the environment allows us for the first time in our history to trace how the brain actually works in daily life, and not only when I go to a research lab and look at a screen for an hour, pressing a button whenever a red dot rather than a green dot appears, which is not really my natural behavior.
I think the combination of these data-driven approaches can help us generate hypotheses for how structures can be organized, how information can be processed, but the real power will be to link them with theory-driven approaches that can help us challenge these hypotheses and give us better knowledge about our brain.
ANJA KASPERSEN: I have heard you say that many of the AI systems developed today are only vaguely inspired by how our brain works, and you have opined that that is not a bad thing. I am wondering what you mean by that. Additionally, being at the frontline of neurotechnology development and applications, how far are we from having machines that are capable of reasoning like a human?
RICARDO CHAVARRIAGA: Predictions about the future are always difficult. I think to properly answer the question we have to agree on what is this intelligence that we want to reproduce and acknowledge that there are different dimensions or expressions of what an intelligent being does that I can or cannot reproduce with the technology today.
When we think about what these machines have been successful at, it is cases like image recognition, natural language processing, and certain games where we can identify the rules, and we keep expanding the set of such tasks on which these machines perform well.
But we still lack a uniformly agreed definition of intelligence, and there are different dimensions to it. I think the discussion sometimes gets muddled between trying to stick to a single definition of intelligence and trying to mold the definition to match our intent, so as to be able to say whether we are close to or far from that idea.
To be honest, I am sometimes fed up with this dilemma because it is often framed not as trying to identify what intelligence is and whether it should be our goal to replicate that intelligence, but rather to push a given agenda, either to say, "We have to go for artificial general intelligence or human-like intelligence" or to say we shouldn't.
I think the most interesting question, at least for me, is to identify which intelligent traits we want to reproduce in a machine. From a technical perspective, it can be because I want a better system that produces language, because that can help services in certain industries. Or it can be because I want to better understand how the representation of memories is built in the brain, and there I will go with different types of models and different decisions. Or I may want to go into the minefield of defining consciousness and trying to reproduce it in artificial systems.
That is for me a little more interesting (this is probably the engineer speaking) than asking, "What is this thing that we are missing if we want to have intelligent machines?" Do I want an intelligent machine just so I can have philosophical discussions with it, because I consider my fellow humans not worth my time for that, or because there are not enough humans to do it? I don't think so.
Do I want machines that can help me carry a table? Today we need to tell the machine exactly where I want to take it, to do this first here and that first there. I would rather approach intelligence from that perspective.
I am extremely interested in intelligent machines that are able to understand some of my needs and preferences when achieving a task. It can be locomotion, it can be completing a physical activity at work, or even completing a text that I am trying to write and pointing me to useful sources. That can be for me a goal more valuable than just blindly saying we need artificial general intelligence for the sake of it.
I think many people come to this as a way to better understand what intelligence is and what it is not. Other people come from a more utilitarian perspective and say: "If I can have a machine that does the things I don't want to do, without my needing to spend time explaining them, it will do them."
Other people are fascinated with the possibility of doing it and say: "I will be smart enough to create a system that will improve itself up to a point that it will become as intelligent as myself or even more."
I think there are different perspectives on that. I am definitely on the side of using it as a way to better understand the brain, not necessarily to exceed human capabilities, but more as a complement or a nice partner in things that I do not necessarily consider worth a human's time.
ANJA KASPERSEN: We tend to focus a lot on intelligence rather than on what makes us human, on distinct human features such as compassion, empathy, and humor.
RICARDO CHAVARRIAGA: Yes. A very important dimension here is that neurotechnologies allow us to measure the brain, to interact with the brain, and to influence the mechanistic processes that are happening in the brain.
When we see effects of these interventions on aspects that we traditionally link more to the mind (our personality, our empathy, our compassion), that challenges us to decide where these different phenomena and traits reside. Is this simply an emergent phenomenon of having these heavily interconnected processing units in our heads, interacting with the world through our body, or is it something else?
These are questions that come here and that we can start to address by using neurotechnologies. This is probably one of these endless quests where we will always find new questions as we try to uncover the realities of how the brain mediates behavior, personality, and identity.
But I think it also makes us realize the complexity of a human being and how careful we should be when developing a system that can generate unforeseen changes in human beings. When we see that the side effects of neurotechnologies can include alterations of personality, such as anxiety, as well as beneficial effects, such as improved attention, we should understand how this complex interaction can generate effects that are very difficult to imagine in the first place and also very difficult to control.
We are not yet at the stage where we can prevent some of these effects or steer these processes so that they don't become what we would call "maladaptive," changing the brain in a way that is not only unintended but has negative consequences for the user. I think that makes us aware of the extra onus this puts on our duty as engineers, developers, neuroscientists, ethicists, lawyers, and policymakers involved in these types of technologies.
ANJA KASPERSEN: There has been a lot of talk about quantum technologies and how they would allow us to break all kinds of protections that have been set up to secure data in other types of systems. You mentioned in your earlier comments this issue of privacy, and also cybersafety, cybersecurity, and cyberhygiene: what should one think about when implanting devices into one's brain?
RICARDO CHAVARRIAGA: I think there is always the temptation to say, "We can solve the limitations of technology with even more technology." We can of course address some of the drawbacks of existing technologies by bringing in new generations and substitutions, but there is no free lunch. When we are dealing with emerging technologies we know that each of them has its inherent uncertainties. There are many things we don't know about the brain itself, many things we don't know about these artificial intelligence systems themselves, and many things we don't know about quantum itself.
We cannot just blindly expect that the combination of them will reduce the uncertainty, so we have to be very careful in identifying the lines of action and the points of contact between these emerging technologies where we can actually reduce uncertainty, rather than introducing a nonlinear effect where these "unknown unknowns" get amplified by the introduction of a new solution.
I am generally cautious on these aspects. I think there is probably a need to slow things down somewhat. I have had several discussions about what the actual pace of technology development is. It is commonly said that it is going very fast, and in certain cases it is. When we consider our phone, which is no longer used as a phone but as a personal computer in our pockets, something that 20 years ago was basically nonexistent, this is going fast.
But when we think about treatments for mental diseases, where we have had basically nothing coming out of the pipeline for decades, or even antibiotics, we need to acknowledge that the pace of technological development is uneven, and that what we describe as a fast pace of development sometimes refers only to research and not so much to actual impact in the field.
I have seen in brain-computer interfaces, this field that I know best and love, how every two years there is a new article in the newspaper about how a paralyzed person is able to control a device by using one of these systems. Each of these studies actually improves on the previous one, but we are still not there yet.
This part of the story is sometimes not well discussed. How can we take these systems out of the research stage (and this involves not only the science but many other dimensions) so that we can say the field is actually moving fast? And here again we will have these differences between the multiple technologies.
In short, quantum can bring new dimensions to these technologies and to other areas that require computing power, but it still has challenges of its own. I am not holding my breath thinking that quantum can really unlock the barriers that we have right now, because there are other barriers that we can address today. I would rather personally invest my energy in removing those barriers today, and if quantum later appears as a safe alternative to make these technologies even better, then it will be welcome.
ANJA KASPERSEN: You mentioned mental health and that we have made little headway in this space for a very long time. It reminded me of a podcast I was listening to earlier today actually around what they called "spiritual technologies," that essentially are trying to replicate, or even gamify, spiritual experiences and a sense of mindfulness.
My question for you, also because we are looking at these types of models for treating mental health issues (you mentioned post-traumatic stress disorder earlier in the podcast): Do you think we can hack enlightenment, or rather the neurofeedback loop that allows us to stay in "the flow" that is so important for reaching a state of mindfulness?
RICARDO CHAVARRIAGA: I think it is one of the possible scenarios: we can find ways to stimulate the person, whether directly through the brain or through different types of feedback, to improve certain cognitive capabilities, to have experiences that are referred to as enlightenment, or to support practices such as mindfulness.
There is, though, a challenge of how we can ensure the added value of technologies for these particular practices: How can we identify the specific neural markers and biomarkers of these experiences? That will also require an economical way to identify when these experiences are happening.
There is also an issue that we have in certain health applications as well: How can we balance the information that we get from self-reports with the information that we get from the physiological signals that we can measure?
I think there is a possibility. We have a challenge in identifying exactly what these systems are doing and how they contribute to the purpose for which they are being promoted. This goes back to an earlier comment on how, in certain applications, we lack appropriate evidence to validate these claims.
I think these are interesting questions to pursue, because if we gain better knowledge about which of these phenomena are reproducible, measurable, and able to be potentiated by the use of these technologies, we can certainly create value for many users.
That is one side of these consumer-oriented technologies, and I am convinced that in a healthy innovation environment there is space for them, provided that we have the appropriate means to evaluate their efficacy.
ANJA KASPERSEN: You spoke about a healthy innovation environment. I have heard you express concerns about what you have referred to as "celebrity tech leadership" and the crowding of all communication channels, social media in particular, with headline-grabbing, sensationalist applications of neuroscientific innovations.
RICARDO CHAVARRIAGA: Yes, I think it is very important to be aware of the narratives. For the general public it is sometimes difficult to understand what these narratives are, what the roles of the different stakeholders in the development of these technologies are, and what the real state of the art is.
We have had for many years this figure of the "lonely genius," the person who drives the advancement of humanity. You can think of figures like Thomas Alva Edison, or of the fact that Nobel Prizes are attributed to a single person even though there is a whole team involved.
I think right now we have a bit of this in people who appear as disruptors, who have been successful in certain areas, and who have a platform to express their opinions and sometimes their interests on several topics.
We know that we humans are very prone to a cognitive bias referred to as the "halo effect": when we have a positive impression of people in one area, we tend to assume that this positive impression also translates to other areas, and the trust that we put in these organizations or individuals somehow gets generalized to all the information they provide.
Right now, with the Internet, social media, and other means that can ideally democratize the participation in public discourse, what we see is that attention can be focused on a limited number of individuals who already have a platform and can really have a strong influence on multiple areas.
Sometimes the opinions of these experts, coming from a single perspective, do not give the entire picture of how things have been developed or how far along we are. There is of course an interest in the innovation sector to attract investment, to show that we are moving ahead, and there is no lack of enthusiasm about the development of these products; that can drive these narratives.
Also, we have a simplification of very complex arguments. Complex processes that take time often get presented as if there were shortcuts, and this creates a risk, because people will develop unrealistic expectations about what these technologies can deliver, when they can deliver it, and to whom. That is what makes me a bit cautious about how the discourse is not including all the voices that we have.
There are voices, of course, that come from experts in one field that are not necessarily contributing to the entire discussion in other fields, and certainly sectors of the population do not have enough space in these discussions.
One last point that makes me worry about this kind of leadership is that it simplifies the message and is sometimes marked by a techno-optimism which holds that we can solve the problems of technology with even more technology.
On one hand, this message hides part of the complexity. We saw it with COVID-19, for instance: having the vaccine, which is a tremendous achievement for the scientific community, is not enough. We need the means of production. We need the distribution chain. We need public health campaigns to raise awareness and reduce reluctance in order for this to actually be effective.
If we only focus on one part of the story, we don't give attention to these other elements that are also necessary for achieving our purpose.
The other part is that the discourse around technology, as in other areas, tends to hide uncertainty. Arguments are presented in a factual manner, and the certainty of the delivery and the benefits of these applications are just taken for granted.
We know that science and the way society works is filled with uncertainties. If the public discourse is less and less used to talk about uncertainty, and if people who discuss different possibilities and alternative scenarios are not being trusted because it is considered that they are not confident enough or expert enough, then we are losing a way to communicate effectively what we know from science and how science and technology can actually contribute to address the societal challenges.
In a nutshell, we need a communication platform where all the voices can find their space, where we can talk about the things we don't know and find ways to gain that knowledge, where we don't oversimplify the message, and where we recognize that we are dealing with complex problems that we do not know everything about, but that we are aware of them, and that everything we do is intended to get better knowledge and better solutions to these challenges.
ANJA KASPERSEN: In responding to those uncertainties you just alluded to, I noted that the mission statement of CLAIRE, whose Swiss office you head, is "to provide a blueprint to steer AI as a transformative force for the benefit of all," in other words no small task.
I read somewhere that you aspire to become essentially a European Organization for Nuclear Research (CERN) for AI. Do you feel that we are making headway on this and are actually able to create that environment that you were referring to earlier where we address the uncertainties, the promise, and the peril, and also avoid tech solutionist approaches in responding to both the potential but also the challenges these technologies represent and will continue to represent?
RICARDO CHAVARRIAGA: I think there are promising signs that we are moving in the right direction. That doesn't mean that everything is perfect, but there is increasing awareness of the need for multistakeholder involvement in the development of these technologies.
I have heard all my life about the need for breaking silos, and the reality is that the silos are still there. We need to find better ways to move across these silos, to get out of our comfort zones, and have spaces where the researchers in academia sit with innovators, policymakers, ethicists, and others to identify how these technologies can really contribute and impact society.
There is of course a need for a very strong research environment and an efficient innovation landscape as well. That speaks to a need to engage with political actors to promote public investment in these technologies, so that we find the correct balance between the private and public sectors in their development. That balance is probably not quite right today, when we look at certain organizations that hold a significant amount of power in terms not only of their economic situation but also of their access to data and resources.
There is also a new generation of people looking at technology who are more open to addressing these ethical concerns and social implications, and who no longer look at these technologies as a magic wand that I can develop in my laboratory disconnected from everybody and, after my "Eureka!" moment, just publish and put on the Internet, and the world will be a better place at the snap of my fingers. I think there is more awareness of that.
There is still a lot of work to do in this respect, but I think this new generation is starting to look across these silos and considers this something worth spending their time and energy on, not only the pure technical aspects.
There is also a kind of realization of what these technologies can do, but without blind optimism. There have been all these ideas of technology addressing the main global challenges (what some people call "existential risks") and how technologies can contribute, but there is a more critical look at it than there was five or ten years ago. I think this is a good seed for steering efforts and working in a common way.
Another part that I think is interesting, and it relates to your question about what we can learn from CERN, is this idea of a need to be bold: to have large-scale projects that by definition require collaboration from multiple sectors, projects that have a vision of what is to be achieved but are also useful in producing derivative products that spill over to society. We had it with space exploration and with the accelerators at CERN (the usual example is the World Wide Web, but there are many others): I don't need to be interested in particle physics to benefit from them.
I think this realization is present again today, and we need to leverage this idea of being bold, sometimes driven by a sense of urgency that we need to use technology for the benefit of all. We need to leverage that while being cautious, because (and this is something I heard at one conference and liked a lot) it is not good to put technology on the critical path for survival. It is an element that contributes, but it is not technology that is going to save us. What saves us is how we integrate technology into all the economic, social, political, and ethical frameworks that we have.
We are at the right time for doing so. There is awareness. There is attention from a significant number of people. There is probably not yet general agreement on the right strategy, but that is something we can work on. We can definitely do it, and I am convinced that we have to do it because our future depends on it. Then there are the issues of reproducibility and of the approach that we have to science.
Somehow when we look at what the daily life of scientists is like and what the incentives are, I would say that some of those incentives really work against the purpose of a science that challenges itself. It is unrealistic to pretend that every scientist will have a breakthrough, that every discovery is new, and that every single paper that is published changes or advances knowledge. And yet, in that sense, we all carry high aspirations that we will somehow make some noise with the science that we produce.
One issue with having metrics is that once you know the metrics you can try to game them and rig the game. I think this sometimes shapes the type of science that we do: we focus on low-hanging fruit, devote all our attention to certain areas, and leave aside others that are riskier or less trendy. I think there is a need to clean up our work there.
I also think that there is sometimes a conflation between basic science and applied science, in particular when we talk about translational research. I have seen many, many works that are presented as translational but are developed with a basic-research mindset, in the sense that they don't necessarily consider the translational aspects of taking things beyond the original research setting or extending them to other potential types of users.
For instance, many times we have studies that are in the long run intended for assistive technologies but in the early stages are tested with control subjects, people who have no motor disability. The lessons we draw from these studies are valuable, but they are still not enough to carry out this translational phase.
We need to provide the means to do these translational studies, and this includes training, resources, and support. We also need to properly identify which studies are required.
Sometimes we have incentives for very early-stage research in these emerging technologies but not enough for development, because development is no longer considered a scientific endeavor yet is not mature enough to interest investors in building a product. So we create a gap between the two through which many promising ideas fall, and closing it is part of creating a better environment for promoting this science.
We need to put all this together. We need to identify who can support these different processes, what they will require, what is needed, and also how we can properly learn from others. When we have studies that are not reproducible, when we have information that is incomplete or not shared, we cannot take the collective knowledge and the collective intelligence of the scientific community a step forward. When everybody pushes in a single area or for their own agenda, we prevent this effect of the collective intelligence of the scientific community.
ANJA KASPERSEN: Actually, when we think about ways of controlling the mind and understanding our brains, there is one word we haven't discussed that comes to mind, which is "power." What are your thoughts? Do we need to rethink ethics in providing the right type of oversight of developments in this field, to make sure that power does not fall into the wrong hands and is not used in ways counter to our purpose?
RICARDO CHAVARRIAGA: Evidently the possibility of having functional brain-machine interfaces, whether for mental surveillance, for mental manipulation, or for something as simple as providing a very efficient way to interact or to support our well-being, will give a significant amount of power to those who develop them and those who commercialize them.
If the field of data science and artificial intelligence is anything to go by, we see how private actors can gain considerable power in discussions about the implementation, and the existence or not, of appropriate oversight.
Since neurotechnologies will be tightly linked with these data-driven industries, we can foresee a future in which private actors gain or increase their power through the possibility of using these devices. Of course, something we need to be careful about is what the uses and required characteristics of these systems are in terms of protecting human dignity, their provisions for equity, and how the channels of distribution will be managed.
This, of course, needs to be done at an international level in order to be useful. I don't see how we can expect a single national legislation or governance framework to have enough impact, because we are talking about a global market here.
I mentioned one aspect, which is the power imbalance between private and public organizations. There is another, which is the difference between the countries that develop the technologies today and those that are more likely to become consumers of them. There is, of course, economic power that comes to bear and will be a source of influence across the global technological landscape. It can lead us to a world where nations align with their technology providers, with cybersecurity threats a major driver of the cleavage between these different alignments, and this will affect the adoption of neurotechnologies.
Another aspect is that if we don't consider inclusivity when we develop these technologies of the brain, we can end up with systems based on knowledge about the brain drawn from a limited percentage of the global population. In other words, we will have systems, models, and treatments for mental health based on what we know about the brains of people in the United States, the European Union, or China, and much less from the Global South. How this will affect the efficacy and fitness for purpose of these devices is something we can and need to address today. This is one aspect that feeds this other dimension of power imbalance that we may have.
There can of course be other kinds of unintended uses of these technologies if they become easier to access, even do-it-yourself. Can this empower certain actors to use them for malicious purposes? This is something we also need to be cautious about.
I think many of the problems we are seeing right now with technology came from a wish for more and more technology that is frictionless. That makes us unaware of the costs and implications these technologies carry, and it makes us a little bit reckless in our use of them.
ANJA KASPERSEN: Thank you, Ricardo, for sharing your time and expertise with us, and your passion for ensuring that this pivotal area of science receives the attention it deserves. This has been a gripping and insightful conversation.
Thank you to our listeners for tuning in, and a special thanks to the team at the Carnegie Council for hosting and producing this podcast.
For the latest content on ethics and international affairs follow us on social media @carnegiecouncil.
My name is Anja Kaspersen, and I hope we earned the privilege of your time. Thank you.