Transcript
00:00Dr. Yampolskiy, if this is, as Amjad says, commercial, why would a commercial entity create something that would damage its commercial viability?
00:10That's a great question. So all the heads of the top AI labs, whether it's OpenAI, Anthropic, Grok, or DeepMind, are on record saying they believe this technology is extremely dangerous. They think it could very well cause an existential catastrophe. For example, Elon Musk recently said it's only 20%.
00:33Okay, how does this help them promote the lab commercially? Is this statement bragging about how great their system is? No, this is what they said before they were CEOs of those labs. This is what they honestly believe.
00:47Right now, they are in a situation where they have commercial pressure to continue developing those systems. They cannot stop and tell investors, we're going to slow down and try to figure out safety first. They are in an arms race, basically a prisoner's dilemma. What benefits the group is not what benefits the individuals.
01:07As the individual head of a lab, you want your model to be the most advanced one, until the government steps in and says, this is becoming a little too much, so we need to slow down.
01:17So commercially, yeah, you want to be ahead of the competition. You want to be the first model to get to human-level performance and get all the benefits of creating free labor.