💡 “I don’t necessarily see Artificial General Intelligence as something that will wipe out humanity”
⏳ “My reaction ten years ago, long before ChatGPT, was fear. I never thought we’d get this far”
⚖️ “There are people who want to soften the EU AI regulation — I would make it even stricter”
❓ “What kind of AI do we want? The fundamental question is what we want. Let’s make it beneficial to people”
This is issue #92 of the Transparent Algorithm newsletter, also available in Catalan, English, French, and Italian. Interview originally published in El Punt Avui, in Catalan.
Helena Matute Greño (Bilbao, 1960) is a psychologist and Professor of Experimental Psychology at the University of Deusto, where she directs the Experimental Psychology Laboratory. She is a member of Jakiunde, the Academy of Sciences, Arts and Letters of the Basque Country, and of the Spanish Academy of Psychology. She has studied and conducted research in Belgium, Australia, and the United States. She recently took part in the presentation at the University of Barcelona of Dr. Juli Ponce’s book The 2024 EU Artificial Intelligence Regulation, the Right to Good Digital Administration and Its Judicial Oversight in Spain, for which she wrote the foreword.
Are you one of the first psychologists to work on the psychology of new technologies?
In my field, there are very few people. In other areas there are more psychologists — for example, those studying bullying on social media or psychological problems related to the Internet — but in my specialty, we are few.
Is your specialty psychology and artificial intelligence?
I’d say my specialty is cognitive functioning: how the human mind works, cognitive biases, decision-making… I’ve spent years focused on the illusion of causality, a very common bias in which we perceive cause-effect relationships that aren’t there, and it causes many problems. The other major theme, closely related, is how AI and new technologies affect processes such as decision-making.
Had you studied other areas before?
Yes. I studied learning theories. Much of my work is related to how humans and animals learn. In the 1990s we also began studying how machines learn, by simulating natural processes.
Humanity’s technological progress has placed you in the perfect scenario. With the impact of AI, are you now at the peak of your career?
Honestly, yes — it’s amazing. On the one hand, it’s fascinating; on the other, quite scary. In the early '90s we wanted to develop a theory of natural learning good enough to be coded into a computer. Back then, people talked about expert systems, but we wanted natural foundations, even if that meant introducing biases and errors. Many told us: “That’s no good, businesses want something that always works.” We saw that we wouldn’t get very far.
Why did you decide you needed to learn more about the human mind?
Because human learning theories were (and still are) very limited. We don’t really know the human learning algorithm. We achieved some curiosities, like a machine that could play mus (a Basque card game), but when it came to the “órdago” (the all-in bet), it was impossible: the next day we would beat it again. We tried with animal learning and made progress simulating pigeons, but there were limits there too. We concluded that psychological research had to come first.
You’re talking about the late 1990s.
Yes, the papers we published range from 1990 to 1998.
How have you experienced the last 25 years of technological evolution, especially in AI?
Ten years ago I returned to the topic, and my main reaction — long before ChatGPT — was fear. I never imagined we’d get this far. I saw enormous possibilities. I started giving talks: some engineers told me I was exaggerating; others admitted I was right but didn’t dare say so publicly.
Are you still afraid?
Yes, perhaps even more. What can be done today is thrilling, but in the '90s I never considered risks. I understand engineers who don’t see them now either. But today, the risks are everywhere.
Are you more afraid of the arrival of Artificial General Intelligence or of current tools that affect learning, education, or human relationships?
I fear both. I don’t think AGI will necessarily wipe out humanity, but it can get out of hand if not aligned with our values. There’s now a crazy race to achieve it, and I ask myself: why? Meanwhile, everyday tools are already influencing our lives — education, health, decisions… everything.
And the AI tools themselves?
In medicine, industry, and other fields they’re very useful, but they must be tightly regulated. When AI interacts with people, we must be extremely cautious. Amazon already influences what we read; social media shapes the information we receive. The influence is huge.
Which population group do you see as more vulnerable: digital-native youth or analog seniors?
Young people trust too much: they use apps without questioning anything. Older people are more skeptical because we sense the corporate interests. In any case, the European regulation should be even stricter.
Some companies present their chatbots as “reasoners” with “consciousness” or “feelings.” What’s your opinion?
It’s pure marketing. Today, these systems only string together the words most likely to sound coherent. The problem is that they sound very convincing, and if you don’t know the topic, you believe them. That’s extremely dangerous.
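An editorial aside for readers: what “stringing together the most probable words” means can be shown with a toy sketch. Everything below is invented for illustration (the bigram_probs table and the generate function are hypothetical); real language models learn probabilities over billions of parameters, but the core move, picking a statistically likely next word rather than a true one, is the same.

```python
# Toy illustration (not any real chatbot's code) of next-word prediction:
# at each step, a word is chosen based only on learned probabilities,
# with no notion of truth or understanding behind it.
import random

# Hypothetical "model": probability of each next word given the previous
# one, standing in for what a large language model learns from text.
bigram_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "moon": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"ran": 0.7, "sat": 0.3},
    "moon": {"sat": 0.1, "ran": 0.9},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Sample a continuation word by word from the toy probabilities."""
    words = [start]
    for _ in range(max_words):
        options = bigram_probs.get(words[-1])
        if not options:  # no learned continuation; stop
            break
        nxt = random.choices(list(options), weights=list(options.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the moon ran away": fluent, not factual
```

Run it a few times and the output changes: fluent word chains, some sensible, some nonsense, with no claim to truth behind any of them.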
What’s the difference between talking to a chatbot that seems to know everything and chatting at a bar with people talking nonsense?
With friends, we know everyone is expressing opinions and emotions, and we choose what to believe. With AI, there’s an automation bias: we attribute objectivity and an absence of bias to the machine, as if it were an infallible calculator. We still haven’t accepted that it can make things up and be wrong.
Would you recommend staying away from AI, approaching it cautiously, or using it fearlessly?
We should approach it — but with great caution. We need mass awareness campaigns. The launch of ChatGPT was irresponsible: they released it without informing anyone, and serious cases soon followed. It suited the company: “Here it is, train it for us for free.”
You’ve done research in Belgium, Australia, and the U.S. What do you see in other countries?
All sorts of things. In the U.S., many envy European protection and are unhappy with the free rein given to big corporations.
Are you worried that Europe might backtrack on its regulation?
It would be a shame, after all the effort. The issue isn’t a choice between guarantees and lagging behind; it’s about what kind of AI we want. Just as we regulate traffic to save lives, we should regulate AI so that it benefits people. Europe could lead that league.
Is AI an opportunity for languages and cultures like Basque or Catalan?
Yes, it’s a big opportunity. Language is one of the problems AI handles best today, and the risks there are minimal. Combined with appropriate regulation and a cultural approach, it could be fantastic.