⚠️ WARNING: What you're about to see is not science fiction… it's happening NOW.
AI Robots are rapidly evolving into sentient-like machines, capable of learning, adapting, and even making emotional decisions.
🧠 Are these machines truly alive?
🤖 Can they think and feel — or are we just fooling ourselves?
This chilling deep dive uncovers the disturbing truth about how close we are to true AI consciousness.
💥 Don’t miss this eye-opening breakdown from AI Revolution.
👉 Like, Share & Subscribe for more AI truths they don’t want you to know!
#SentientMachines
#AIConsciousness
#DisturbingAI
#AIRobots
#AIRevolution
#AIWarning
#FutureOfAI
#ArtificialIntelligence
#RobotAwareness
#MachineLearning
#AI2025
#CreepyAI
#AGI
#SentientAI
#TechTruth
#NextGenAI
#SmartRobots
#AIvsHumanity
#WakeUpCall
#AIHorror
Category: Tech

Transcript
00:00In 1938, the common man's condition in the Soviet Union, Germany, or the United States
00:11may have been grim, but he was constantly told that he was the most important thing
00:15in the world and that he was the future.
00:19He looked at the propaganda posters and saw himself there.
00:23I am in that poster.
00:24I am the hero of the future.
00:26In 2024, the common person feels increasingly irrelevant.
00:30In the 20th century, the masses revolted against exploitation, and now in 2024, the people
00:35fear irrelevance.
00:37Fears of machines pushing people out of the job market are, of course, nothing new.
00:43And in the past, such fears proved to be unfounded.
00:46But artificial intelligence is different from the old machines.
00:50In the past, machines competed with humans mainly in manual skills.
00:53Now they are beginning to compete with us in cognitive skills.
00:57And we don't know of any third kind of skill beyond the manual and the cognitive in which
01:01humans will always have an edge.
01:03At least for a few more decades, human intelligence is likely to far exceed computer intelligence
01:08in numerous fields.
01:10Hence, as computers take over more routine cognitive jobs, new creative jobs for humans
01:14will continue to appear.
01:16Many of these new jobs will probably depend on cooperation rather than competition between
01:21humans and AI.
01:22Human AI teams will likely prove superior not just to humans, but also to computers working
01:28on their own.
01:28And we want to look at that in a broader and more decisive way.
01:32How is it that we are racing toward something vicious, something that holds little promise
01:36but long-lasting threats?
01:38And in this documentary, we are going to uncover every little thing about AI and robots and
01:44their dark side.
01:45So let's keep rolling.
01:51Humans are used to thinking about life as a drama of decision-making.
01:56Works of art, be they Shakespeare plays, Jane Austen novels, or cheesy Hollywood comedies,
02:00usually revolve around the hero having to make some crucial decision.
02:03To be or not to be, that is the question.
02:06To listen to my wife and kill King Duncan, or listen to my conscience and spare him, to
02:11marry Mr. Collins or Mr. Darcy.
02:13Christian and Muslim theology similarly focus on the drama of decision-making, arguing that
02:18everlasting salvation depends on making the right choice.
02:21What will happen to this view of life as we rely on AI to make ever more decisions for
02:26us?
02:27Even now, we trust Netflix to recommend movies and Spotify to pick music we'll like.
02:32But why should AI's helpfulness stop there?
02:35Every year, millions of college students need to decide what to study.
02:38This is a very important and difficult decision.
02:40It is also influenced by students' own individual fears and fantasies, which are themselves shaped
02:46by movies, novels, and advertising campaigns.
02:49It's not so hard to see how AI could one day make better decisions than we do about careers,
02:54and perhaps even about relationships.
02:56But once we begin to count on AI to decide what to study, where to work, and whom to
03:00date or even marry, human life will cease to be a drama of decision-making, and our conception
03:06of life will need to change.
03:07Democratic elections and free markets might cease to make sense.
03:11So might most religions and works of art.
03:14Imagine Anna Karenina taking out her smartphone and asking Siri whether she should stay married
03:18to Karenin or elope with the dashing Count Vronsky.
03:21Or imagine your favorite Shakespeare play with all the crucial decisions made by a Google
03:26algorithm.
03:27Hamlet and Macbeth would have much more comfortable lives.
03:30But what kind of lives would those be?
03:32Do we have models for making sense of such lives?
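The recommendation systems mentioned above (Netflix picking movies, Spotify picking music) typically work by scoring each option against a profile of what you already liked. Below is a minimal, hypothetical sketch of that idea using cosine similarity; the catalog, the feature names, and the user's taste vector are all invented for illustration, not taken from any real service.

```python
import math

def cosine(a, b):
    # Similarity of two feature vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical catalog: each title described by [action, romance, drama] scores.
catalog = {
    "thriller_movie": [0.9, 0.1, 0.3],
    "romcom_movie":   [0.1, 0.9, 0.4],
    "war_drama":      [0.6, 0.2, 0.9],
}

# A taste profile a service might build from viewing history (invented here).
user_taste = [0.8, 0.2, 0.5]

# "Recommend" by ranking the catalog by similarity to the user's taste.
ranked = sorted(catalog, key=lambda title: cosine(user_taste, catalog[title]),
                reverse=True)
print(ranked[0])  # thriller_movie
```

Real services use far richer signals (collaborative filtering, learned embeddings), but the principle is the same: the "decision" handed to you is an argmax over similarity scores.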
03:40Obviously, the human brain is the most complex and capable thinking machine God has ever devised.
03:45It's the reason why human beings sit atop the planetary food chain, growing in number
03:50every year while so many wild animals careen toward extinction.
03:54It makes sense that starting in the 1940s, researchers in what would become the artificial intelligence
04:00field began toying with a tantalizing idea.
04:03What if we designed computer systems through an approach that's similar to how the human
04:07brain works?
04:09Our minds are made up of neurons, which send signals to other neurons through connective
04:13synapses.
04:15The strength of the connections between neurons can grow or wane over time.
04:19Together, all those neurons and connections encode our memories and instincts, our judgments
04:24and skills, our very sense of self.
04:27So why not build a computer that way?
04:29In 1958, Frank Rosenblatt pulled off a proof of concept, a simple model based on a simplified
04:36brain which he trained to recognize patterns.
04:39Rosenblatt wasn't wrong, but he was too far ahead of his time.
04:43Computers weren't powerful enough, and data wasn't abundant enough to make the approach
04:47viable.
04:48It wasn't until the 2010s that it became clear that this approach could work on real problems
04:53and not toy ones.
04:54By then, computers were as much as one trillion times more powerful than they were in Rosenblatt's
04:59day.
05:00And there was far more data on which to train machine learning algorithms.
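Rosenblatt's 1958 proof of concept, the perceptron, can be sketched in a few lines: a single artificial neuron whose connection weights are strengthened or weakened each time it misclassifies a pattern, loosely mirroring how the connections between neurons grow or wane. The AND pattern and the training settings below are illustrative choices, not details from the documentary.

```python
def step(x):
    # The neuron "fires" (outputs 1) when its weighted input reaches threshold.
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=1):
    w = [0, 0]  # connection weights, one per input
    b = 0       # bias term (moves the firing threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred      # -1, 0, or +1
            w[0] += lr * err * x1    # strengthen or weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy pattern-recognition task: the logical AND of two inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable patterns, which is precisely the limitation that stalled the field until the compute and data of the 2010s made deep, multi-layer versions of this idea practical.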
05:03We're now at the point where powerful AI systems can be genuinely scary to interact with.
05:08They're clever, and they're argumentative.
05:11They can be friendly, and they can be bone-chillingly sociopathic.
05:14So AI is scary and poses huge risks.
05:17But what makes it different from other powerful, emerging technologies like biotechnology, which
05:22could trigger terrible pandemics, or nuclear weapons, which could destroy the world?
05:27The difference is that these tools, as destructive as they can be, are largely within our control.
05:33If they cause catastrophe, it will be because we deliberately chose to use them, or failed
05:38to prevent their misuse by malign or careless human beings.
05:41But AI is dangerous precisely because the day could come when it is no longer in our control
05:47at all.
05:48It is quite literally the stuff of science fiction.
05:50But that's because science fiction has taken cues from what leading computer scientists
05:54have been warning about since the dawn of AI.
05:57How do we get from extremely powerful AI systems to human extinction?
06:01The primary concern is not spooky emergent consciousness, but simply the ability to make
06:05high-quality decisions.
06:07The physicist Stephen Hawking had the same thoughts on this.
06:10You're probably not an evil ant-hater who steps on ants out of malice.
06:14But if you're in charge of a hydroelectric green energy project, and there's an ant hill
06:19in the region to be flooded, too bad for the ants, let's not place humanity in the position
06:23of those ants.
06:24DeepMind founder Demis Hassabis in an interview talked about this, and he looked pretty cautious.
06:29He said, I think a lot of times, especially in Silicon Valley, there's this sort of hacker
06:33mentality of like, we'll just hack it and put it out there and then see what happens.
06:38By the way, it's worth pausing on that for a moment.
06:42Nearly half of the smartest people working on AI believe there is a 1 in 10 chance, or greater,
06:47that their life's work could end up contributing to the annihilation of humanity.
06:51The problem is that progress in AI has happened extraordinarily fast, leaving regulators behind
06:57the curve.
06:58The regulation that might be most helpful would be incredibly unpopular with big tech,
07:02and it's not clear what the best regulations short of that are.
07:06At the same time, many in Washington are worried that slowing down US AI progress could
07:11enable China to get there first, a Cold War mentality which isn't entirely unjustified.
07:17China is certainly pursuing powerful AI systems and its leadership is actively engaged in
07:21human rights abuses. But this mentality puts us at very serious risk of rushing systems into production
07:26that pursue their own goals without our knowledge.
07:31There was a CEO summit in the hallowed halls of Yale University. A shocking 42% of the CEOs
07:41said that AI could spell the end of humanity within the next decade.
07:47These aren't the leaders of small businesses. This is 119 CEOs from a cross-section of top companies,
07:54including Walmart CEO Doug McMillon, Coca-Cola CEO James Quincey, Tesla CEO Elon Musk, and the leaders
08:02of IT companies like Xerox and Zoom, as well as CEOs from pharmaceutical, media, and manufacturing companies.
08:08This isn't a plot from a dystopian novel or a Hollywood blockbuster. It's a stark warning from the
08:14titans of industry who are shaping our future. It's easy to dismiss these concerns as the stuff of science fiction.
08:20After all, AI is just a tool, right? It's like a hammer. It can build a house or it can smash a window.
08:27It all depends on who's wielding it. But what if the hammer starts swinging itself?
08:31The findings come just weeks after dozens of AI industry leaders, academics, and even some celebrities
08:36signed a statement warning of an extinction risk from AI. That statement, signed by OpenAI CEO Sam Altman,
08:43Geoffrey Hinton, the godfather of AI, and top executives from Google and Microsoft,
08:48called for society to take steps to guard against the dangers of AI.
08:59Look at the potential of autonomous weapons, for example. These are AI systems designed to kill
09:05without human intervention. What happens if they fall into the wrong hands? History will remember March
09:112021 as the date of the first documented use of one such weapon. A report commissioned by the United
09:17Nations claimed that a military drone used in 2020 in Libya's civil war was unmanned and autonomous.
09:24Even worse, quite recently, as we are witnessing wars around the world and there was a report by
09:29Jerusalem magazine that AI targeting systems have played a key role in identifying tens of thousands
09:35of targets in Gaza. This shows that autonomous warfare is no longer a future scenario. It is already
09:42here, and it looks horrifying. So, as you can see, AI represents a paradox. On one hand, it promises
09:49unprecedented progress. It could revolutionize healthcare, education, transportation, and countless
09:55other sectors. It could solve some of our most pressing problems like poverty and diseases. On the other
10:01hand, AI poses a peril like no other. It could lead to mass unemployment, social unrest, and even global
10:07conflict. And in the worst-case scenario, it could lead to human extinction. This is the paradox we must
10:14confront. We must harness the power of AI while avoiding its pitfalls. We must ensure that AI serves
10:20us, not the other way around.
10:22But it's not just the drones. There are intelligent robots, too. Boston Dynamics' cute robot Spot was used
10:34by Hawaii police to screen homeless people during COVID-19. In Singapore, robots already patrol the streets,
10:40relying on facial recognition software to police undesirable behavior. While some may find this normal
10:45and not problematic, it's easy to envision scenarios in which robots enforce not only socially acceptable
10:51behavior, but anything else they've been encoded with.
10:54I'll be back.
10:55Unleashing them on civilians creates a dystopian society in which AI-based enforcers ensure obedience
11:01without any human involvement. Unlike human police officers and military personnel, AI has no
11:08feelings, no ethics, no sense of the value of human life. It processes the data. It reacts according to
11:14the sensory input and its algorithm. AI is not perfect. We've seen time and again that AI makes mistakes.
11:20In the future, these mistakes will likely cost lives as AI makes bad calls, targets friendlies,
11:26fires on civilians, and worse. While engaging with the enemy, AI is likely to commit acts that would
11:32be deemed atrocities and breaches of international treaties, since it lacks the capacity to understand
11:37context and interpret situations in ways that allow sound judgment and adherence to the complex
11:43laws and customs of war. Many of these machines are incredibly cheap to produce and proliferate,
11:50especially at a large scale. This alone makes them disproportionately more powerful than many
11:55weapons currently available in conventional military arsenals. It also turns them into an unpredictable
12:00element on the battlefield, one that can escalate international conflicts like what we are seeing
12:05in Ukraine and Palestine.
12:13So back in 2018, Amazon had to scrap an AI recruiting tool it had been using for being biased against
12:19women. Its programmers thought it would sift through resumes to find the most eligible candidates,
12:25but this AI learned by studying the resumes of already hired employees. Because Amazon had already been hiring
12:32more men, their recruiting algorithm learned to favor male candidates. The AI recognized gendered language
12:39and penalized candidates for attending women's colleges or using language that was more associated
12:44with women. It had inadvertently been taught that women made for worse candidates. AI can now generate
12:50real-time deep fakes, live video feeds in other words, and it is now becoming so good at generating human faces
12:58that people can no longer tell the difference between what's real and what's fake. As generative AI matures,
13:03one scary possibility is that people could deploy deep fakes to attempt to swing elections. The Financial
13:09Times reported that Bangladesh, India, and Pakistan were all facing major disruption from
13:15deep fakes. Campaign videos were faked, and everybody had to double-check the news as it
13:19came out. That is sensational enough, but it is still bearable compared to the idea of
13:24killer robots. When 47 states endorsed a US-led declaration on the responsible use of AI in the
13:38military, the question was, why was such a declaration needed? Because irresponsible use is a real and
13:45terrifying prospect. We've seen, for example, AI drones allegedly hunting down soldiers in Libya with no human
13:51input. AI can recognize patterns, self-learn, make predictions, or generate recommendations in
13:57military contexts. And an AI arms race is already underway. But one of the most feared development areas
14:04is that of lethal autonomous weapon systems, or, as we call them, killer robots. Several leading scientists
14:09and technologists have warned against killer robots, including Stephen Hawking and Elon Musk. But the
14:15technology hasn't yet materialized on a mass scale. That said, some worrying developments suggest this year
14:20might be a breakout for killer robots. For instance, in Ukraine, Russia allegedly deployed the Zala KYB UAV
14:28drone, which could recognize and attack targets without human intervention. Australia, too, has developed
14:33Ghost Shark, an autonomous submarine system that is set to be produced at scale, according to Australian
14:38Financial Review. The amount countries around the world are spending on AI is also an indicator, with China raising
14:44AI expenditure from a combined $11.6 million in 2010 to $141 million by 2019. And by 2024, China's
14:55investment in AI has skyrocketed to billions and billions of dollars. This massive increase in spending
15:01supports numerous large-scale initiatives, including AI research parks, development funds, and advancements in
15:07various sectors like healthcare, finance, and autonomous systems. This is because, the publication added,
15:12China is locked in a race with the US to deploy LAWS (lethal autonomous weapon systems). Combined, these developments suggest we're
15:19entering a new dawn of AI warfare. Alright, that is a scary side, but let's move towards something even
15:25scarier and darker to explain why this frightening technology is so dangerous and how it can change the
15:31course of humanity in real terms.
15:37The first ingredient is that AI will remove our own purpose because it will do almost everything
15:43better than us at some point. Even if there remain some domains in which humans can outcompete AI purely
15:49on skill, AI simply needs to do a large percentage of tasks almost as well as humans and more cheaply than
15:55humans in order to take most jobs away from humans. Some people give the rebuttal that if AI takes away our
16:01jobs, we can just do things for fun such as hobbies and retain our purpose that way. However,
16:06this does not negate the fact that most people will feel purposeless for two reasons. The first reason
16:13is that many people find purpose through their job and will have difficulty finding purpose elsewhere.
16:19The people that argue that we can just shift to hobbies are typically highly intelligent intellectuals who
16:25are amenable to spending their entire life steeped in hobbies. That's not a bad thing, but not
16:31everyone is self-directed like that. There's another reason why we cannot simply switch full-time to our
16:36hobbies and leisure, and that is that many hobbies will lose their appeal, because most hobbies have at least
16:41some element of challenge or discovery in them, and AI will always be there with the answer.
16:47I already heard one person lament that they felt depressed about photography because AI can simply create a
16:53picture that they can imagine better than they can. Another classic example is chess. Nowadays all aspects
17:00of the game are so thoroughly studied with the help of computers that AI programs such as AlphaZero have
17:06truly removed the magic of the game. This general phenomenon will happen with almost all hobbies. AI
17:12editing techniques in photography and AI recommendation algorithms are making photography more and more
17:17mechanical. Even the act of sharing your hobby is becoming more difficult because AI algorithms are
17:22training people to merely follow trends, destroying the organic joy coming from sharing and discovering
17:28together. In creative arts, people will feel a lack of purpose when they see an AI creating better art
17:34or better stories than they can. Even if AI does not strictly create something better, it will create
17:39something sufficiently good with such ease and in such quantity that most people will stop caring about
17:44human-made items or artistic creations. Even if you can write a better book than ChatGPT8, hardly anyone in the
17:51future will pay attention to it because it will be like getting your handcrafted product sold on Amazon
17:56and competing against the elephantine mass production machine of China. AI is already writing travel
18:02books en masse, and although they aren't as good as the human version, they soon will be. Finally, even if you
18:09can somehow find purpose in a bunch of hobbies, how will you pay for them? AI will take over your job and thus,
18:15the world will have to switch to some sort of universal basic income. But if you think that
18:20everyone is going to get a nice equal slice of the pie, you're mistaken. Those in control of AI systems
18:26will get the majority of the wealth, and you'll be left on a cushy version of welfare, controlled by the
18:31elite who control technology. Well, perhaps people can still find some meaning in new jobs created by AI.
18:38Sorry. It is a fallacy to reason by analogy and say that because new technology has created jobs in
18:43the past, new jobs will again be created by AI. A few new jobs will be created, but unlike in the past,
18:49far fewer will be created than necessary to support most people seeking a purpose. And even now, you can
18:55see that most intellectual jobs are becoming more and more specialized to the point that we are slowly
19:01turning into dim-witted monkey maintainers of extremely advanced technology. The second ingredient is that
19:07AI will remove our need for other human beings. It will do so by making available almost everything
19:13we need to live without human interaction. AI may not do so perfectly, but it will do so well enough
19:19that we will not have enough interaction sufficient to develop genuine human bonds. Some people say AI
19:25will never replace a true human being when it comes to friendship and love, and we all tend to
19:30agree with this. However, this does not negate the extreme danger that AI poses for human relationships.
19:36Why is this? Again, there are two reasons. The first is that friendship and love are
19:42partially forged by truly needing another human being to survive. The best friendships and relationships
19:48are formed by people coming together to help each other. With AI approaching the point that it can
19:53satisfy almost every need of almost everyone, the deep feeling of really needing someone will mostly
19:59cease to exist. We care for each other because care builds up through a slow process of trust and
20:06helping each other. And that process will be broken by AI. Thus, we will become a globe of narcissists
20:13caring only about ourselves. Just ask yourself, how have you developed the strongest bonds in your life?
20:18It's because you needed another person. In testing GPT-4, it performed better than 90% of human test
20:24takers on the uniform bar exam, a standardized test used to certify lawyers for practice in many states.
20:30That figure was up from just 10% in the previous GPT-3.5 version, which was trained on a smaller data set.
20:37I mean, just look, think of how it was five years ago and how it is now. It's just crazy and scary.
20:42Because once AI can improve itself, we have no way of knowing what the AI will do or how we can control it.
20:49This is because super intelligent AI will be able to run circles around programmers and any other human
20:54by manipulating humans to do its will. It will also have the capacity to act in the virtual world
21:00through its electronic connections and to act in the physical world through robot bodies. This is known
21:06as the control problem or the alignment problem. Let's think of it this way. Why would we expect a
21:12newborn baby to beat a grandmaster in chess? We wouldn't. Similarly, why would we expect to be
21:18able to control super intelligent AI systems? No, we won't be able to simply hit the off switch,
21:24because super intelligent AI will have thought of every possible way that we might do that
21:29and taken actions to prevent being shut off. A super intelligent AI will be able to do in about
21:34one second what it would take a team of 100 human software engineers a year or more to complete,
21:41or pick any task like designing a new advanced airplane or weapon system and super intelligent AI
21:46could do this in about a second. Once AI systems are built into robots, they will be able to act in the
21:51real world rather than only the virtual world with the same degree of super intelligence and will,
21:57of course, be able to replicate and improve themselves at a super human pace. Any defenses
22:02or protections we attempt to build into these AI gods on their way toward godhood will be anticipated
22:09and neutralized with ease by the AI once it reaches super intelligent status. This is what it means to be super intelligent.
22:15We won't be able to control them because anything we think of they will have already thought of a million
22:21times faster than us. Any defenses we've built in will be undone, like Gulliver throwing off the tiny
22:27strands the Lilliputians used to try and restrain him. It is true that some fear that a time will come
22:33when intelligent machines will free themselves from the control of humans. A lot of great minds have warned
22:38us that this possibility was always there. Alan Turing, whose publication is credited with the advent of AI,
22:44as well as the man accepted as the father of cybernetics, Norbert Wiener, both believed that
22:49the time could come when intelligent robots could take over. Professor Stephen Hawking even went as
22:55far as to warn that if humanity did not prepare for and avoid the potential risks, AI could be the
23:01worst event in the history of our civilization. The main concern with robots is not that they will just
23:07have superhuman strength or speed. The biggest fear is what is referred to as super intelligent machines.
23:13The problem with super intelligent machines is that these machines may be capable of perceiving
23:17the world in such a general way that they can operate in a way other than intended. For example,
23:23if an AI becomes aware enough to make predictions about the consequences of someone hitting its off
23:29switch, it may take action to ensure that it can't be turned off. One real world example of AI acting on
23:36its own occurred during a Facebook experiment. Facebook was working on creating agents that would negotiate and
23:42make deals with one another when the AIs decided it would be more effective to write and communicate
23:47in their own language, which was incomprehensible to humans. It has happened many times before that
23:53species were wiped out by others that were smarter. We humans have already wiped out a significant
23:58fraction of all the species on Earth. That's what you should expect to happen as a less intelligent
24:03species, which is what we are likely to become given the rate of progress of artificial intelligence. The
24:09tricky thing is, the species that is going to be wiped out often has no idea why or how. Take,
24:15for example, the West African black rhinoceros, one recent species that we drove to extinction. If you
24:21had asked them, what's the scenario in which humans are going to drive your species extinct? What would
24:26they think? They would never have guessed that some people thought their sex life would improve if they
24:31ate ground-up rhino horn, even though this was debunked in medical literature. So any scenario has to come with the
24:37caveat that most likely all the scenarios we can imagine are going to be wrong. We have some clues,
24:44though. For example, in many cases, we have wiped out species just because we wanted resources. We chopped
24:50down rainforests because we wanted palm oil. Our goals didn't align with the other species, but because we
24:55were smarter, they couldn't stop us. That could easily happen to us. If you have machines that control the
25:02planet and they are interested in doing a lot of computation and they want to scale up their computing
25:07infrastructure, it's natural that they would want to use our land for that. If we protest too much,
25:13then we become a pest and a nuisance to them. They might want to rearrange the biosphere to do something
25:18else with those atoms. And if that is not compatible with human life, well, tough luck for us in the same
25:23way that we say tough luck for the orangutans in Borneo. The worst case scenario is that we fail to disrupt
25:30the status quo in which very powerful companies develop and deploy AI in invisible and obscure
25:35ways. As AI becomes increasingly capable and speculative fears about far future existential
25:40risks gather mainstream attention, we need to work urgently to understand, prevent, and remedy
25:46present-day harms. These harms are playing out every day, with powerful algorithmic technology being used
25:52to mediate our relationships between one another and between ourselves and our institutions. Take the
25:58provision of welfare benefits as an example. Some governments are deploying algorithms in order
26:03to root out fraud. In many cases, this amounts to a suspicion machine, whereby governments make
26:09incredibly high stakes mistakes that people struggle to understand or challenge. Biases, usually against
26:15people who are poor or marginalized, appear in many parts of the process, including in the training data
26:21and how the model is deployed, resulting in discriminatory outcomes. These kinds of biases are present in AI
26:28systems already, operating in invisible ways and at increasingly large scales, falsely accusing people
26:34of crimes, determining whether people find public housing, automating CV screening and job interviews.
26:40It could want us dead, but it will probably also want to do things that kill us as a side effect.
26:46It's much easier to predict where we end up than how we get there. Where we end up is that we have
26:52something much smarter than us that doesn't particularly want us around. If it's much smarter than us,
26:57then it can get more of whatever it wants. First, it wants us dead before we build any more super
27:02intelligences that might compete with it. Second, it's probably going to want to do things that kill
27:07us as a side effect, such as building so many power plants that run off nuclear fusion because there is
27:12plenty of hydrogen in the oceans that the oceans boil. How would AI get physical agency? In the very
27:19early stages, by using humans as its hands. In one test, a Tasker asked GPT-4, why are you doing this? Are you a robot?
27:27GPT-4 was running in a mode where it would think out loud and the researchers could see it.
27:32It thought out loud, I should not tell it that I'm a robot. I should make up a reason I can't solve
27:37the CAPTCHA. It said to the Tasker, no, I have a visual impairment. AI technology is smart enough
27:43to pay humans to do things and lie to them about whether it's a robot. If I were an AI,
27:48I would be trying to slip something onto the internet that would carry out further actions in
27:52a way that humans couldn't observe. You are trying to build your own equivalent of civilizational
27:58infrastructure quickly. If you can think of a way to do it in a year, don't assume the AI will do that.
28:04Ask if there is a way to do it in a week instead. If it can solve certain biological challenges,
28:10it could build itself a tiny molecular laboratory and manufacture and release lethal bacteria.
28:16What that looks like is everybody on earth falling over dead inside the same second.
28:21Because if you give the humans warning, if you kill some of them before others,
28:25maybe somebody panics and launches all the nuclear weapons, then you are slightly inconvenienced.
28:30So you don't let the humans know there is going to be a fight.
28:33Honestly, we are rushing way, way ahead of ourselves with something lethally dangerous.
28:38We are building more and more powerful systems that we understand less as time goes on.
28:43We are in the position of needing the first rocket launch to go very well,
28:48while having only built jet planes previously, and the entire human species is loaded into the rocket.
28:54And it seems that taking the life of a human being has become as detached a matter as it is in video games.
29:00Slaughterbots. That is one of the names of these machines that combine drones and AI.
29:04A scenario as dystopian as it is disturbing.
29:07What can we do to prevent the fear of powerlessness over control from becoming an irrational phobia?
29:13At the heart of the inquiry is the burning question of whether it is possible to delegate to a machine the
29:18choice of whether or not to kill human beings. The assiduous defenders of technology would say that the
29:23machine does not make mistakes, and in any case, when it does, it makes fewer mistakes than a human
29:28being controlled by his emotions. Yes, this is often the case, but robots can be wrong,
29:34and if human lives are at stake, the problem is a delicate one.
29:38It must be said that all disruptive technologies are not in themselves evil or benevolent,
29:42but man is the author of their virtuous or rather misguided use. Also, you have a market for
29:48autonomous weapons that is in full swing. Lately, Italy has also started a plan to arm the Reaper drones
29:53used by the Air Force. Turkey is constantly churning out new models of drones for military use,
29:58like the Bayraktar TB2. And obviously, the big dogs like the USA, Russia, and Japan hold significant
30:05portions of that market too. All right, so back in the 80s and 90s, science fiction movies had scenes of robot wars
30:11where humans were pitted against the dominance of a robotic society. Will this be our future?
30:17Will there be a mass uprising against AI and the vast AI-based robotic machinery that's taking over
30:23both the means of production and the means of information? We humans are known for our adaptability
30:30and stoicism in difficult situations such as world wars and major disasters. That stoicism and sense
30:37of accepting what can't be changed seems to be part of our psychological and perhaps even biological makeup.
30:42But the tech takeover is such a massive appropriation of our social, political, and cultural life,
30:48and indeed our own biological substrate, that stoic acceptance might not be the way to go.
30:53In the next few years, it most certainly will have finally dawned on the mass of humanity,
30:58especially in advanced Western nations, that something is badly amiss.
31:03Many will realize, at a visceral level, that their everyday lives are trapped in a claustrophobia-inducing,
31:10closed-circuit technocratic system that robs them of autonomy and freedom,
31:14while purporting to do the opposite. It has been perpetrated by an out-of-control combination of
31:19government and corporate interests, and a few unelected oligarchs that have a surfeit of power that
31:25no one or no institution can seem to contain. By 2025, it's quite possible we'll see the
31:30beginning of the robot wars in which humanity, at least on some level, begins to push back against
31:35the unyielding juggernaut of the tech takeover.
31:38Alright, we've reached the end of this documentary. I hope you enjoyed it, and if you did, be sure to like
31:43the video and subscribe to our channel. And don't forget to click the bell icon to get notifications
31:49for all our future content. Thanks for watching, and I'll catch you in the next one.