Transcript
00:00 I'll just let you respond to that before we move on.
00:04 So historically, many people made really bad predictions.
00:07 There are famous predictions about how, you know, nobody will need a computer in their house, and 640 kilobytes of memory is enough for everyone.
00:15 Certainly. But there are also great examples of people accurately identifying problems and doing something about them.
00:22 Maybe the Y2K problem was a great example.
00:25 For a decade, people saw it coming and did what needed to be done.
00:29 So everything went well. Nothing collapsed.
00:32 We handled that simple problem.
00:34 Right now, we are having this conversation as if it started today and we are making predictions about the future.
00:41 This conversation about AI safety is decades old.
00:45 And you can go back and see what predictions people made.
00:48 And we talked about all the red lines we are seeing crossed right now.
00:52 AI is going to start lying to us.
00:54 AI is going to start to protect its own existence, trying not to get modified or deleted.
01:00 We're seeing it in the latest evals.
01:02 It's happening exactly as we predicted, and the same goes for capabilities.
01:06 People said we're not going to have general AI for hundreds of years.
01:10 Now, I think prediction markets are saying we're two years away.
01:13 That's a big shift in support of every prediction made by people in the AI safety community.
