During a Senate Commerce Committee hearing last week, Sen. Amy Klobuchar (D-MN) spoke about regulation within the AI industry.
Transcript
00:00Senator Klobuchar. Thank you very much, Senator Cruz. A lot of exciting things with AI,
00:04especially from a state like mine that's home to the Mayo Clinic, with the potential to unleash
00:11scientific research. We've mapped the human genome, and we have rare diseases that can be
00:17solved. So there's a lot of positive, but we all know, as you've all expressed, there's challenges
00:22that we need to get at with permitting reform. I'm a big believer in that. Energy development.
00:27Thank you, Mr. Smith, for mentioning this with wind and solar and the potential for more fusion and
00:34nuclear, though the price of wind and solar has gone down dramatically in the last few years. And to get
00:42there, we're going to have to do a lot better. I think David Brooks put it the best when he said,
00:47I found it incredibly hard to write about AI because it is literally unknowable whether this
00:52technology is leading us to heaven or hell. We want it to lead us to heaven. And I think we do
00:58that by making sure we have some rules of the road in place so it doesn't get stymied or set backwards
01:03because of scams or because of use by people who want to do us harm. As mentioned by Senator Cantwell,
01:11Senator Thune and I have teamed up on legislation to set up basic guardrails for the riskiest non-defense
01:18applications of AI. Mr. Altman, do you agree that a risk-based approach to regulation is the best way
01:26to place necessary guardrails for AI without stifling innovation? I do. That makes a lot of
01:32sense to me. Okay, thanks. And did you figure that out in your attic? No, that was a more recent discovery.
01:37Thank you. Very good. Just want to make sure. Our bill directs Mr. Smith, the Commerce Department,
01:45to develop ways of educating consumers on how to safely use AI systems. Do you agree that consumers
01:51need to be more educated? This was one of your answers to your five words, so I assume you do.
01:57Yes, and I think it's incumbent upon us as companies and across the business community to contribute to
02:03that education as well. Okay, very good. Back to you, Mr. Altman. Americans, as we know, rely on AI
02:10increasingly for some high-impact problems. For them to be able to trust that, we need to make sure
02:17that we can trust the model outputs. The New York Times reported earlier this week that AI
02:22hallucinations, a new word to me, where models generate incorrect or misleading results, are getting worse.
02:30That's their words. What standards or metrics does OpenAI use to evaluate the quality of its
02:36training data and model outputs for correctness? On the whole, AI hallucinations are getting much
02:44better. We have not solved the problem entirely yet, but we've made pretty remarkable progress
02:47over the last few years. When we first launched ChatGPT, it would hallucinate things all the time.
02:53This idea of robustness, being sure you can trust the information. We've made huge progress there.
02:58We cite sources. The models have gotten much smarter. A lot of people use these systems all the time,
03:03and we were worried that if it was not 100.0% accurate, which is still a challenge with these
03:10systems, it would cause a bunch of problems. But users are smart. People understand what these
03:14systems are good at, when to use them, when not. And as that robustness increases, which it will
03:20continue to do, people will use it for more and more things. But as an industry, we've made pretty
03:26remarkable progress in that direction over the last couple of years.
03:28Well, I know we'll be watching that. Another challenge that has been, we've seen, and Senator
03:35Cruz and I worked on a bill together for quite a while, and that's the Take It Down Act. And that
03:41is that we're increasingly seeing internet activity where kids looking for a boyfriend or girlfriend,
03:49maybe they put out a real picture of themselves, it ends up being distributed at their school, or
03:53somehow someone tries to scam them for financial gain, or it's AI, as we've increasingly seen,
04:00where it's not even their own photo, but someone puts a fake body on there. And we've had
04:06over 20 suicides in one year of young people, because they felt like their life was ruined,
04:12because they were going to be exposed in this way. So this bill we passed, and through the Senate
04:18and the House, the First Lady supported it, and it's headed to the President's desk. Could you talk
04:24about how we can build models that can better detect harmful deepfakes? Mr. Smith?
04:31Yeah, I mean, we're doing that, OpenAI is doing that, a number of us are, and I think the goal is to
04:37first identify content that is generated by AI, and then often it is to identify what kind of content is harmful.
04:46And I think we've made a lot of strides in our ability to do both of those things.
04:51There's a lot of work that's going on across the private sector, and in partnership with groups
04:55like NCMEC, to then collaboratively identify that kind of content, so it can be taken down.
05:03We've been doing this, in some ways, for 25 years since the internet, and we're going to need to do more of it.
05:09And on the issue, last question, Mr. Chair, since the last one was about your bill, I figure it's okay.
05:15On newspapers: you testified before the Senate Judiciary Committee, Mr. Smith, about the bill.
05:23Senator Kennedy and I still think that there's an issue here about negotiating content rates.
05:29We've seen some action recently in Canada and other places.
05:33Can you talk about those evolving dynamics with AI developers and what's happening here
05:39to make sure that content providers and journalists get paid for their work?
05:44Yeah, it's a complicated topic, but I'll just say a couple of things.
05:47First, I think we should all want to see newspapers in some form flourish across the country,
05:53including, say, rural counties that increasingly have become news deserts.
05:58Newspapers have disappeared.
05:59Second, and it's been the issue that we discussed in the Judiciary Committee,
06:04there should be an opportunity for newspapers to get together and negotiate collectively.
06:09We've supported that.
06:10That will enable them to basically do better.
06:14Third, every time there's new technology, there is a new generation of a copyright debate.
06:20That is taking place now.
06:22Some of it will probably be decided by Congress, some by the courts.
06:25A lot of it is also being addressed through collaborative action,
06:29and we should hope for all of these things to, I'll just say, strike a balance.
06:33We want people to make a living creating content,
06:36and we want AI to advance by having access to data.
06:40Okay, thanks.
06:41I'll ask other questions on the record.
06:43Thank you, Mr. Chair.