Transcript
00:00Hoping to lay down rules for international controls on artificial intelligence while
00:15maintaining individual sovereignty over the technology, leaders of dozens of countries
00:19and hundreds of companies are gathering here in Paris for a two-day summit looking at the
00:24potentials and pitfalls of the tool, promising to make it less energy-hungry and more ethical.
00:29Ahead of the summit, French President Emmanuel Macron was touting the financial windfall
00:33AI could bring and listing investments being made in the field.
00:37Emelie Boyle tells us more.
00:41Ahead of the AI Action Summit starting on Monday in Paris, French President Emmanuel
00:48Macron gave a special interview.
00:50We're living through a technological revolution the likes of which we've rarely seen.
00:56This is a moment of opportunity for humanity.
00:59The summit will be attended by heads of state, industry leaders and experts who will try
01:04to hammer out agreements on guiding the development of the rapidly advancing technology.
01:09This has to be done through global regulation, through partnerships between private and public
01:18players so that the right behaviours emerge.
01:23What we want is for players to say when it's an artificial intelligence.
01:30AI has always been high on Macron's agenda.
01:34The president hopes to boost investment in the sector for France to stay in the game.
01:40The first battle for us Europeans is to invest, invest, invest.
01:45For us in France, tomorrow's summit will announce 109 billion euros of investment in artificial
01:50intelligence in the next few years.
01:52Geopolitics will be the major focus of the conference.
01:55Tensions are running high in the backdrop of DeepSeek's launch and Donald Trump's first
02:00few weeks in office.
02:01The American head of state has spoken of his desire to make the United States the world
02:06capital of AI.
02:09One thing that is sure when it comes to AI: it is big business.
02:12According to Goldman Sachs, within the next decade it could add some 7 trillion dollars
02:16to the world's GDP, and private investment in AI was already over 90 billion dollars two years
02:21ago.
02:22I want to speak now to the director of operations at the company LightOn, which wants to help
02:26businesses seize the opportunities of generative AI.
02:29Milo Rignell, thanks for being with us.
02:31Firstly, we've heard so much about how AI will take people's jobs, so explain
02:36to us how it's good for business.
02:39So I think, in the first instance, no one thinks that AI will be taking jobs in the next one
02:44to two years. On the contrary, companies that don't seize the AI opportunity will be
02:50the companies having to lay off workers, because their competitors will be growing
02:55much faster than them.
02:56So I think that is the key worry for the next two to three years.
03:02After that, who knows?
03:03And things become more uncertain.
03:05Indeed, it moves so rapidly.
03:07When you think back, we've only had the ChatGPT app for, I think, two years, and it's
03:11already gained so much in popularity and advanced in what it can do. The impressive
03:18thing about AI is that it can nearly teach itself things.
03:21And that's the danger we're hearing about more and more.
03:23Could AI become autonomous?
03:25Could it lead to a very grave danger?
03:27Yeah, so I think we're not yet at that point.
03:31And I think also we're not even yet at the point where companies are fully integrating
03:35AI into their base businesses.
03:39So the first step we need to work on in Europe, if we want AI to create jobs, is to increase
03:48the speed at which European companies are adopting AI in their processes.
03:52And on a side note, today, the AI systems that we are using aren't what we call
03:57self-learning.
03:59So they're trained in the first point, and then they're deployed and they kind of help
04:03businesses.
04:04I think one point that we can work on a lot is that so far we've mostly seen AI used in chatbot
04:09form.
04:10In Europe, we have a lot of industrial companies running safety-critical systems.
04:14And that's an area where AI still has a lot of technological barriers related, for
04:20instance, to reliability and safety.
04:22So that's something we can work on to increase its opportunity for European industrial
04:27companies.
04:28Can you give me a concrete example, we'll say, of a company that could really profit
04:32from AI that is available today?
04:35Yeah.
04:36So today, to be very clear, the kind of most obvious use cases are related to information
04:44retrieval and information synthesis.
04:46We talk about large language models.
04:48So all of these use cases around language are those that have been proliferating.
04:53And so this means that any company that has large amounts of textual knowledge lying
04:59around can benefit.
05:00So in particular, this can be, for instance, complex industrial companies that onboard
05:07new people coming into the company that have to learn about procedures to build certain
05:13car components, etc.
05:15Simply by being able to easily access that information, thanks to chatbots, to large databases
05:20and to what we call RAG, retrieval-augmented generation,
05:23you can upskill people faster and give them quicker access to information.
05:27So that's the kind of base use case, but it's the use case that's been deployed the fastest
05:31up until now.
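The retrieval workflow the guest describes can be sketched in a few lines of Python. This is a toy illustration, not LightOn's actual system: the keyword-overlap scoring and the document snippets are invented for the example, since real RAG pipelines use vector embeddings and an actual language model.

```python
# Toy sketch of retrieval-augmented generation (RAG): retrieve the
# most relevant document for a question, then hand it to a language
# model as context. Scoring here is naive keyword overlap; real
# systems use vector embeddings.

def retrieve(question: str, documents: list[str]) -> str:
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question: str, context: str) -> str:
    """Anchor the model in the retrieved company data."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

docs = [
    "Procedure 12: torque the brake caliper bolts to 30 Nm.",
    "Onboarding guide: new staff must complete safety training first.",
]
question = "What torque for the brake caliper bolts?"
context = retrieve(question, docs)
prompt = build_prompt(question, context)  # would be sent to the LLM
```

In the onboarding scenario mentioned above, the documents would be a company's procedure manuals, and a new employee's question would pull back the relevant procedure instead of requiring a manual search.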
05:32It almost sounds like a highly efficient research tool.
05:35But there's also the question then of the bias in the data being put into those systems.
05:41And you know, I was hearing from a woman on Sunday who said certain types of people are
05:47putting in the information, certain companies and certain countries are putting in information,
05:51and it will, as a result, lead to a lot of discrimination against many other countries.
05:56Yeah, that's right.
05:57And I think there are two ways to take that question.
06:02The first one is obviously we need to be working on building AI systems that, A, look into the
06:08extent to which they might be biased and, B, use varied data sources to
06:16counteract that bias.
06:17So for instance, up until now, these systems have been trained on much of the Internet,
06:22which happens to be very English language focused, very kind of Western centric.
06:27If you're in a country that is less so, then you want to be building up data sources to
06:32prevent that.
06:33That's the first element.
06:34The second element is that you can use these AI systems more as, in quotes, "reasoning machines",
06:43and you want to anchor these systems in your own reality.
06:48And so that's what I was describing with companies that have large corpuses of their
06:52own data.
06:54What we do, for instance, at Lighton is we take these systems, we anchor them in a company's
06:59data and it works the same for governments, etc.
07:02And when you ask a question, the language model will be acting as a reasoning machine,
07:09but going to retrieve factual information from your data sources and replying saying,
07:14look, we have the source to your answer.
07:17This is the answer and this is where you can check to make sure that it's correct.
07:21And there are a certain number of other ways in which you can kind of bound the system
07:24for it to retrieve information from the right points.
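The "answer with a source" pattern described here can be sketched as follows. The document IDs and contents are hypothetical, and a production system would generate a natural-language answer with an LLM rather than returning the raw context; the point is only that the response carries a pointer back to the data it came from.

```python
# Sketch of answering with a verifiable source: return both the
# retrieved context and the identifier of the document it came
# from, so a human can check that the answer is correct.

def answer_with_source(question: str, indexed_docs: dict[str, str]) -> dict:
    q_words = set(question.lower().split())
    # Pick the document with the largest word overlap with the question.
    best_id = max(
        indexed_docs,
        key=lambda doc_id: len(q_words & set(indexed_docs[doc_id].lower().split())),
    )
    return {"context": indexed_docs[best_id], "source": best_id}

result = answer_with_source(
    "What is the maximum operating temperature?",
    {
        "manual_p12": "Maximum operating temperature is 85 degrees C.",
        "hr_policy": "Vacation requests need two weeks notice.",
    },
)
# result["source"] tells the user where to go and verify the answer
```

This is the "check and balance" the presenter picks up on next: the human verifies against the cited source rather than trusting the model's output outright.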
07:26Okay, you have to have a check and balance on it.
07:29But currently, most AI is being generated by the United States or China.
07:36We're calling for global governance on the issue.
07:39We know in the past, the US has been quite lax when it comes to rules and regulations.
07:43A lot of innovators say, well, if you put in too many regulations, we'll stop innovating.
07:46So if the US continues in that vein and decides to leave the rules to one side and
07:51just advance on innovation, won't they ultimately have all the control?
07:56So I think that's a valid and tricky point. To perhaps push back on that,
08:02I think that, firstly, the US under the Biden administration actually was very much
08:08aligned with the direction that the EU was taking on AI governance, very much
08:13aligned with the overall global view that was being taken on AI governance, which is
08:18a risk-based approach, saying that as these systems become more capable, and therefore
08:25their potential impact grows larger, we need to put in place high levels of safeguards
08:31and obligations.
08:33And to be honest, the EU AI Act has a relatively high bar for this.
08:38When systems become extremely capable, there is a bit more kind of documentation that you need
08:44to be doing to make sure that your systems are safe.
08:47I think the flip side of that is that it kind of accelerates government capacity building.
08:54So it facilitates information sharing between companies and government, which can also have a
09:00good advantage for kind of educating regulators as to what can be relevant.
09:05But you're right that the very immediate drawback is that places where companies can go
09:11as fast as they want will be more competitive.
09:15It's a hard balance, isn't it?
09:16And finally, on the individual level (I know you work with companies on the
09:21issue), are there things that we should be aware of as we start to
09:25use AI? Because it does look like it's pretty much unavoidable.
09:29Yeah, that's right. I think so.
09:31I also worked at a think tank called Institut Montaigne.
09:34At Institut Montaigne, we published an online training course called Destination AI to precisely
09:41try and educate as many citizens as possible on how AI functions and how they should be
09:46integrating it into their daily lives.
09:48And I think that the first element is just for people to understand the broad principle of how
09:54these systems work based on data, based on, at the moment, predicting the next token or word in a
09:59sequence of elements, precisely so that they can be aware of, for instance, AI-generated content,
10:07whether it's textual, whether it's visual, also to be able to identify situations in which they
10:13might be confronted with these kinds of AI-generated content.
10:17That's the first element.
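The principle of "predicting the next token or word in a sequence" that the guest refers to can be illustrated with a toy bigram model. The training sentence is made up, and real large language models use neural networks over subword tokens rather than word counts, but the underlying idea of predicting the likeliest continuation is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a
# corpus, then predict the most frequent follower. This is the same
# principle, vastly simplified, behind large language models.

def train_bigrams(text: str) -> dict:
    words = text.lower().split()
    followers = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1
    return followers

def predict_next(model: dict, word: str) -> str:
    """Return the most frequent word seen after `word` in training."""
    return model[word.lower()].most_common(1)[0][0]

model = train_bigrams("the cat sat on the mat and the cat slept")
prediction = predict_next(model, "the")  # "cat" follows "the" twice, "mat" once
```

Seeing that the output is only a statistical continuation of the input, not retrieved fact, is exactly what helps citizens recognise AI-generated content and its limits.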
10:19Second element, to simply be able to use these tools in their everyday lives.
10:23We were mentioning that companies that would adopt AI faster would be more competitive.
10:27That also applies on the individual citizen level.
10:30So these are kind of two elements that are very important.
10:33I'm going to have to get my head out of the sand, face up to it and train myself up, I think.
10:37Milo Rignell, thanks so much for coming in and joining us on the channel.
