In this edition of The Big Interview: Renowned historian, philosopher, and futurist Yuval Noah Harari talks with WIRED Japan Editor-in-Chief Michiaki Matsushima about the nexus of artificial intelligence, information, and the human experience.
00:00Do you think you will be able to trust the super intelligent AIs that you're developing?
00:06And then they answer yes.
00:08And this is almost insane.
00:10Because the same people who cannot trust other people,
00:13for some reason, they think they could trust these alien AIs.
00:26Welcome to Big Interview.
00:28So welcome to the WIRED Big Interview.
00:32Thank you. It's good to be here.
00:46And that's, I believe, the main theme of your new book, Nexus.
00:49Yes.
00:50So we would like to know how we should live with
00:54AI, and especially super intelligence, in the society of the future.
01:00The first question: in the late 90s, there was an idea that if the internet spread
01:10globally, it would bring world peace.
01:15Because information would tell the truth, and every person could get access to all information.
01:23And maybe mutual understanding would grow.
01:27And finally, human beings would become wiser.
01:30However, you said that such a view of information is naive.
01:38Yeah.
01:39Could you explain why?
01:40Yes, because information is not truth.
01:43Most information is not about representing reality in a truthful way.
01:49The main function of information is to connect a lot of things together, to connect people together.
01:58And you sometimes can connect people with the truth,
02:01but it's often easier to do it with fiction and fantasy and so forth.
02:06Some of the most important texts in history, you know, like the Bible,
02:11they connect millions of people together, but not necessarily by telling them the truth.
02:18In a completely free market of information, most information will be fiction or fantasy or even lies,
02:27because the truth is costly, whereas fiction is cheap.
02:32To write a truthful account of anything, of history, of economics, of physics,
02:36you need to invest time and effort and money to gather evidence, to fact check.
02:42It's costly, whereas fiction can be made as simple as you would like it to be.
02:47And finally, the truth is often painful or unpleasant, whereas fiction can be made as pleasant and attractive as you would like it to be.
02:57So in a completely free market of information, truth will be flooded, overwhelmed, by enormous amounts of fictions and fantasies.
03:09This is what we saw with the internet, that it was a completely free market of information.
03:15And very quickly, the expectations that the internet will just spread facts and truth and agreement between people turned out to be completely naive.
03:25Recently, Bill Gates said in an interview with The New Yorker that he initially thought digital technology would empower people.
03:45But he finally realized that social networking is totally different from the previous digital technologies.
03:54And he said he realized it too late.
03:57He also said that AI is a totally different technology from the previous ones.
04:03And if AI is totally different from the technologies we had previously, is there anything we can learn from history if nothing is equivalent to AI?
04:17And the most important thing to understand is that AI is an agent and not a tool.
04:25I see.
04:26Previous information technologies... I mean, I hear many people say the AI revolution is like the print revolution.
04:32Or it's like the invention of writing.
04:34And this is a misunderstanding.
04:36Because all these previous information technologies, they were a tool in our hands.
04:42If you invent a printing press, you still need a human being to write all the texts.
04:49And you need a human being to decide what books to print.
04:54AI is fundamentally different.
04:56It is an agent.
04:58It can write books by itself.
05:01It can decide by itself to disseminate these ideas or those ideas.
05:06And it can also create entirely new ideas by itself.
05:10And this is something unprecedented in history.
05:13Because we never had to deal with a super intelligent agent.
05:19But there were of course other agents in the world, like animals.
05:23But we were more intelligent than the animals.
05:26We were especially better than the animals at connecting.
05:31Why do we control the planet?
05:33Because we can create networks of thousands, and then millions, and then billions of people.
05:41Who don't know each other personally, but can nevertheless cooperate effectively.
05:47Ten chimpanzees can cooperate.
05:49Because among chimpanzees, cooperation is based on intimate knowledge of one another.
05:55But a thousand chimpanzees cannot cooperate, because they don't know each other.
05:59A thousand humans can cooperate.
06:01Even a million humans, or a hundred million humans.
06:05Japan today, for example, has more than 100 million citizens.
06:09Most of them don't know each other, nevertheless you can cooperate with them.
06:14How come that humans manage to cooperate in such large numbers?
06:19Because they know how to invent stories, and shared stories.
06:23Religion is one obvious example.
06:26Money is probably the most successful story ever told.
06:30Again, it's just a story.
06:32I mean, you look at a piece of paper, you look at a coin.
06:35It has no objective value.
06:37It can nevertheless help people connect and cooperate.
06:41Because we all believe the same stories about money.
06:44And this is something that gave us an advantage over chimpanzees, and horses, and elephants.
06:51None of them can invent stories like money.
06:55But AI can.
06:57Which is why, again, the emphasis on intelligence may be misleading.
07:03Okay.
07:04The key point about AI is that it can invent new stories, like maybe new kinds of money.
07:11And it can create networks of cooperation better than us.
07:16So, you mentioned a lot about religion.
07:19The important thing is that you wrote in the book that a religion's vision of acceptance will affect the acceptance of AI itself.
07:32Yes.
07:33In the Japanese or Asian way, in the animist way, we more naturally accept something like an alien intelligence living together with us in the same environment.
07:44Or, I would say, a multi-species view.
07:47Yes.
07:48Maybe that makes us vulnerable to accepting whatever AI tells us.
07:53But could you also tell us the advantages of that attitude? What would you say?
07:58Well, I think that the basic attitude towards the AI revolution should be one that avoids two extremes: being terrified that AI is coming and will destroy all of us,
08:13but also being overconfident.
08:16I see.
08:17That, oh, AI will improve medicine, it will improve education, it will create a good world.
08:23We need a middle path of, first of all, simply understanding the magnitude of the change we are facing.
08:33That all the previous revolutions in history pale in comparison with this revolution.
08:41Because, again, throughout history, every time we invented something, we still had human beings making all the decisions.
08:49So, for instance, in the financial system, I just recently read an article in Wired about an AI that created a religion and wrote a holy book of the new religion and also created or helped to spread a new cryptocurrency.
09:08And it now has, in theory, 40 million dollars, this AI.
09:14Wow.
09:15Now, what happens if AIs start to have money of their own and the ability to make decisions about how to use it, if they start investing money in the stock exchange?
09:27So, suddenly, to understand what is happening in the financial system, you need to understand not just the ideas of human beings, you also need to understand the ideas of AI.
09:41And AI can create ideas which will be unintelligible to us.
09:48The horses could not understand the human ideas about money.
09:53So, I can sell you a horse for money.
09:55The horse doesn't understand what is happening because the horse doesn't understand money.
10:00The same thing might happen now, but we will be like the horses.
10:04The horses and elephants, they cannot understand the human political system or the human financial system that controls their destiny.
10:13The danger is that the decisions about our lives will be made by a network of highly intelligent AIs that we simply can't understand.
10:33So there will be a network of AIs that we can't understand.
10:36And sometimes we call those things not just the singularity, but hyperobjects.
10:43A hyperobject means something you can't fully understand.
10:46That term is often used in an environmental context, about the natural systems of the Earth.
10:55We can't understand them fully.
10:57So, you know, human beings are already struggling with how to deal with, and adapt to, the changes of the climate and the big
11:05Earth systems.
11:06And maybe AI is just rising to the top of the list of things we have to deal with.
11:12How can human beings be flexible, or simply deal with these hyperobjects, or with the singularity?
11:22How could we do that if we can't understand them fully?
11:26Ideally, we should be able to trust the AIs to help us deal with these hyperobjects or with these higher complex realities which are beyond our understanding.
11:38But the big paradox of the AI revolution, I think, is the paradox of trust.
11:45We are now in the midst of an accelerating AI race with different companies and different countries rushing as fast as possible to develop more and more powerful AIs.
12:00Now, when you talk with the people who lead the AI revolution, with the entrepreneurs, with the business people, with the heads of the governments, and you ask them, why are you moving so fast?
12:13They almost all say that we know it's risky. We understand it's dangerous. We understand it would be wiser to move more slowly and to invest more in safety.
12:26But the other company or the other country doesn't slow down. They will win the race.
12:32They will develop super intelligent AI first and they will then dominate the world. They will conquer the world.
12:39And we cannot trust them. This is why we must move as fast as possible.
12:44Now, you ask them a second question. You ask them, do you think you will be able to trust the super intelligent AIs that you're developing?
12:54And then they answer yes. And this is almost insane. Because the same people who cannot trust other people, for some reason, they think they could trust these alien AIs.
13:08Yes.
13:09You know, we have thousands of years of experience with human beings. We have some good understanding of human psychology, human politics.
13:19We understand the human craving for power, but we also understand how to check the pursuit of power and how to build trust between humans.
13:31With AIs, with super intelligent AIs, we have no experience at all.
13:37I see.
13:38So in this situation, the safest thing would be, first of all, to build more trust with other humans.
13:45Sure.
13:46So it's amazing that today we have these networks of trust in which hundreds of millions of people cooperate on a regular basis.
13:55And there is no such thing as a completely free market. Some things can be created successfully by competition in a free market.
14:04We know that. But there are certain services, goods, essentials that cannot be maintained just by competition in a free market.
14:15Justice is one example. Let's say it's a free market. I sign a business contract with you and then I break the contract.
14:24So we go to a judge. We go to a court. I bribe the judge. Suddenly you don't like the free market. You say, no, no, no, no, no.
14:32The court should not be a free market. It shouldn't be the case that the judge rules in favor of whoever gives the judge most money.
14:40Yes.
14:41In that situation, you don't like the free market so much. There is always some kind of substratum of trust which is essential for any competition.
15:03You wrote about negative scenarios of democracy becoming populism or authoritarianism.
15:08Yes.
15:09But would you also talk about the positive side of using AI to encourage more trust networks, more democracy?
15:19Yes.
15:20Is there any path where we could use these new technologies to enhance democracy?
15:27Absolutely. I mean, we've seen, for instance, that in social media, there are algorithms that deliberately spread fake news and misinformation
15:36and conspiracy theories and destroyed trust between people, which resulted in a crisis of democracy.
15:43But the algorithms don't have to spread fake news and conspiracy theories.
15:49They did it because they were designed in a certain way. The goal that was given to the algorithms of social media platforms like Facebook or YouTube or TikTok was to increase engagement, maximize engagement.
16:04Yeah.
16:05This was the goal of the algorithms. And the algorithms discovered by trial and error that the easiest way to maximize engagement is by spreading hate and anger and greed, because these are the things that make people very, very engaged.
16:22When you're angry about something, you want to read more about it and you tell it to other people. There is more engagement.
16:29If you give the algorithms a different goal, for instance, to increase trust or increase truthfulness, then they will not spread all this fake news.
16:41They can be helpful for building a good society, a good democratic society.
16:47Another very important thing is that democracy should be a conversation between human beings.
16:54I see.
16:55For that, you need to know, you need to trust that you are talking with another human being.
17:01Increasingly, on social media or generally on the internet, you don't know whether what you're reading is something that a human being has written and is spreading, or whether it is a bot.
17:16This destroys trust between humans and makes democracy much more difficult.
17:22But we can have a regulation, a law that bans bots and AIs from masquerading as human beings.
17:30If you see some story on Twitter, you need to know if this is being promoted by a human being or by a bot.
17:41And if people say, but what about freedom of speech? Well, bots don't have freedom of speech.
17:46I mean, we don't need to. I'm very much against censoring the expression of human beings.
17:54But this doesn't protect the expression of bots. Bots don't have freedom of speech.
18:00In that context, I remember that one of the big companies in Japan is trying to make an AI constellation.
18:10You know, connecting AIs to each other, and even connecting human beings with AIs.
18:15And it led them to discuss something important, like multi-stakeholder democracy.
18:22The AIs will declare that they are AIs, of course.
18:26And they have a really different intelligence, like an alien intelligence.
18:30And do you think that, in the near future, human beings having discussions with alien intelligences will make us wiser?
18:41Absolutely. Because yes, AIs on the one hand can be very creative and can come up with ideas that wouldn't occur to us.
18:49So talking with an AI can make us wiser. But AIs can also flood us with enormous amounts of junk and of misleading information.
19:02And they can manipulate us. And the thing about AIs is that, you know, as members of society, we are stakeholders.
19:10For instance, the sewage system. We need the sewage system because we have bodies. We can become sick.
19:17If the sewage system collapses, then diseases like dysentery and cholera spread.
19:23This is not a threat to AI. For the AI, it doesn't care if the sewage system collapses.
19:29It cannot become sick. It cannot die. It doesn't care about it.
19:32We need to remember it's not a human being. It's not even an organic being.
19:37Its interests, its worldview are alien to us.
19:42When you talk with people, you know, like we are now talking to each other.
19:47The fact that we are embodied beings is very, very clear.
19:53Ultimately, they also have a physical existence.
19:56Because AIs, they don't exist in some kind of mental field.
20:01They exist in a network of computers and servers and so forth.
20:06So they also have physical existence, but it's not organic.
20:21So what is the most important thing for you when you think about the future?
20:28I think the two key issues, one we've covered a lot, which is the issue of trust.
20:38If we can strengthen trust between humans, we will also be able to manage the AI revolution.
20:45The other thing is the fear, the threat.
20:49I mean, throughout history, people lived their lives inside, you can say, a cultural cocoon.
20:57Made of poems and legends and mythologies, ideologies, money.
21:04All of them came from the human mind.
21:06Now, increasingly, all these cultural products will come from a non-human intelligence.
21:15And we might find ourselves entrapped inside such an alien world and lose touch with reality.
21:25Because AI can flood us with all these new illusions that don't even come from human intelligence,
21:34from the human imagination.
21:36So it's very difficult for us to understand these illusions.
21:40I see.
21:41Thank you very much for the whole interview.
21:45It's a really inspiring and great message for the Japanese readership,
21:48and the WIRED Japan readership, too.