4/23/2025
🚨 BREAKING AI NEWS!
OpenAI has officially begun training GPT-5, and the implications are massive! 🤖💥
But what’s even more shocking is the whistleblower’s warning — is GPT-5 the start of Borderline AGI? 🧠⚠️

🔍 In this episode of AI Revolution, we break down:
🧬 What GPT-5 is really capable of
🗣️ The anonymous insider leak that’s sending chills through the tech world
🤯 The ethics, fears, and future of Artificial General Intelligence
🛡️ What this means for the future of AI safety and humanity

🔥 Buckle up — the future is accelerating faster than we imagined.

👉 Like 👍 | Subscribe 🔔 | Share 📲 to stay ahead in the AI game!

#GPT5
#OpenAI
#AGI
#ArtificialGeneralIntelligence
#AIRevolution
#Whistleblower
#AIWarning
#FutureOfAI
#AILeak
#TechNews
#MachineLearning
#NextGenAI
#OpenAINews
#GPT5Training
#AI2025
#ElonMuskAI
#AGIRisks
#ArtificialIntelligence
#AIExplained
#FacelessYouTube

Category: 🤖 Tech
Transcript
00:00 OpenAI is cooking up something big, a brand new flagship AI model that could bring us closer than ever to Artificial General Intelligence, or AGI.
00:12 This new model is supposed to be the next big thing after GPT-4.
00:16 They haven't officially named it GPT-5, but everyone thinks that's what it will be, so let's go with that.
00:20 Alright, so in their latest blog post, OpenAI announced that they've started training this new, top-of-the-line model to follow up on the impressive GPT-4.
00:30 OpenAI is calling it "the next level of capabilities," and they're saying it's a huge step towards their goal of building AGI.
00:37 Although they didn't specify the exact new features GPT-5 will have, we can guess based on hints from Sam Altman and opinions from other experts in the field.
00:45 Now, when it comes to predicting what GPT-5 will be capable of, we can look at how each major flagship model has gotten better and better since GPT-3.5, including GPT-4 and GPT-4o.
00:57 With each new model, there were upgrades like faster processing, longer context lengths, and the ability to handle different types of data.
01:04 GPT-3.5 could only deal with text input and output.
01:08 With GPT-4 Turbo, you could feed it text and images, and it would spit out text.
01:13 With GPT-4o, you could give it a mix of text, audio, images, and video, and it would give you back text, audio, or images.
01:19 Following that trend, the next step for GPT-5 would probably be the ability to output video.
01:25 Back in February, OpenAI unveiled this text-to-video model called Sora, and it's likely that they'll fold that technology into GPT-5 so it can generate videos.
01:34 Now, let's be real. Chatbots are impressive and all, but we've been seeing a growing demand for AI that can just figure out what you want done and do it with minimal instruction.
01:43 That's what they call AGI. With AGI, you could just tell the AI agent what your end goal is, and it would be able to reason out what needs to be done, plan how to do it, and actually carry out the task.
01:55 For example, in a perfect world where GPT-5 had AGI, you could just tell it, "Order a burger from McDonald's for me," and the AI would be able to open up the McDonald's website, input your order, put in your address and payment info, and handle the whole thing.
02:11 All you'd have to worry about is chowing down on that burger.
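
To make that goal-to-action idea concrete, here's a minimal Python sketch of the kind of loop such an agent might run. It's purely illustrative: plan() and execute() are hypothetical stand-ins for this explanation, not any real OpenAI API.

# A minimal sketch of the goal -> plan -> act loop described above.
# Everything here is hypothetical; plan() and execute() are stand-ins.

def plan(goal: str) -> list[str]:
    # A real agent would ask a model to break the goal into concrete steps.
    return [
        "open the McDonald's website",
        "add a burger to the cart",
        "enter the delivery address and payment info",
        "submit the order",
    ]

def execute(step: str) -> bool:
    # A real agent would drive a browser or call APIs; here we just log.
    print(f"executing: {step}")
    return True

def run_agent(goal: str) -> None:
    for step in plan(goal):
        if not execute(step):
            break  # a real agent would re-plan here instead of giving up

run_agent("Order a burger from McDonald's for me")
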
02:14 Now, as exciting as this is, OpenAI is also keenly aware of the risks that come with developing such advanced technology.
02:21 To address these concerns, they've created a new Safety and Security Committee.
02:26 This committee will focus on developing policies and processes to ensure that the technology is used safely and responsibly.
02:32 The committee includes key figures like OpenAI CEO Sam Altman and board members Bret Taylor, Adam D'Angelo, and Nicole Seligman.
02:40 And they aim to have these new safety policies in place by late summer or fall.
02:44 However, OpenAI has encountered some bumps in the road.
02:47 Recently, they faced some internal turmoil that made headlines.
02:50 Back in November 2023, Sam Altman was unexpectedly fired by the OpenAI board.
02:57 This was a huge shock because Altman had been leading the company through some of its most successful periods, including the launch of ChatGPT.
03:04 So, what exactly happened?
03:06 Helen Toner, a former OpenAI board member, recently shed light on why the board decided to oust Altman.
03:12 In an interview on The TED AI Show podcast, Toner explained that the board lost trust in Altman due to several issues.
03:18 One major concern was that Altman failed to disclose his ownership of the OpenAI Startup Fund.
03:24 "Sam didn't inform the board that he owned the OpenAI Startup Fund, even though he, you know, constantly was claiming to be an independent board member with no financial interest in the company."
03:35 Additionally, there were multiple instances where he provided inaccurate information about the company's safety processes.
03:41 "On multiple occasions, he gave us inaccurate information about the small number of formal safety processes that the company did have in place."
03:50 Toner also mentioned that Altman created a toxic atmosphere and engaged in manipulative behavior, which ultimately led to his dismissal.
03:58 "The problem was that after the paper came out, Sam started lying to other board members in order to try and push me off the board.
04:05 There's more individual examples, and Sam could always come up with some kind of, like, innocuous-sounding explanation of why it wasn't a big deal or misinterpreted or whatever.
04:12 But that's a completely unworkable place to be in as a board, especially a board that is supposed to be providing independent oversight over the company, not just, like, you know, helping the CEO to raise more money."
04:23 Now, the story didn't end there. After his firing, Altman quickly rallied support from OpenAI employees and investors, including Microsoft, which has a significant stake in OpenAI.
04:35 Nearly all of OpenAI's staff threatened to leave unless Altman was reinstated, leading to a rapid reversal of the board's decision.
04:42 Just four days after his firing, Altman was back as CEO, and several board members resigned.
04:47 This whole episode raised serious questions about the internal dynamics at OpenAI and how the company is managed.
04:54 Toner's revelations highlighted a lack of transparency and accountability, which are crucial for a company that aims to develop AGI.
05:02 Adding to the drama, two key figures from OpenAI's safety team, Ilya Sutskever and Jan Leike, recently left the company.
05:09 They were leading OpenAI's Superalignment team, which focused on ensuring that future AI models don't pose a threat to humanity.
05:16 Both Sutskever and Leike expressed concerns about OpenAI's priorities and commitment to safety.
05:22 Leike in particular mentioned that he had been disagreeing with the leadership for a while and felt that the company wasn't prioritizing safety as it should.
05:30 Leike has since joined Anthropic, an AI safety and research company founded by former OpenAI employees.
05:36 Anthropic has been positioning itself as a leader in developing safer AI models and recently released its latest model, Claude 3,
05:43 which some claim is the most powerful AI model currently available.
05:47 Leike said he joined Anthropic to continue the mission of ensuring that AI systems are safe and aligned with human values.
05:53 This move has certainly added a new dimension to the competition between AI companies.
05:57 So, while OpenAI is pushing forward with its new model, they're also dealing with other challenges.
06:03 For instance, there was a recent controversy involving Scarlett Johansson,
06:07 who claimed that OpenAI's updated GPT-4o model used a voice that sounded eerily similar to hers without her permission.
06:16 This highlights the ongoing issues around data usage and intellectual property in AI development.
06:21 OpenAI denied using Johansson's voice, but the incident raised important questions about consent and ethics in AI.
06:27 Additionally, OpenAI has faced legal issues, including a lawsuit from The New York Times over copyright infringement.
06:34 The lawsuit alleges that OpenAI's models were trained using Times content without proper authorization,
06:40 raising further questions about how AI companies use and manage data.
06:45 This is part of a broader conversation about the ethical use of data in training AI models,
06:50 which is becoming increasingly important as these technologies become more advanced and widespread.
06:55 Now, in the midst of these controversies, OpenAI has also been criticized for its handling of employee departures.
07:02 Leaked documents revealed that the company used aggressive tactics to force departing employees to sign restrictive exit agreements.
07:10 These agreements included clauses that threatened to cancel vested equity if employees didn't comply.
07:15 This approach caused significant distress among former employees and drew public criticism.
07:19 The use of such high-pressure tactics seemed contrary to OpenAI's mission of benefiting humanity.
07:26 OpenAI's CEO, Sam Altman, later apologized for these practices, stating that the company never intended to claw back anyone's vested equity, and promised to make changes.
07:35 The company has since removed non-disparagement clauses from its standard departure paperwork
07:39 and is reaching out to former employees to release them from existing obligations.
07:43 This move is seen as an attempt to rebuild trust and demonstrate a commitment to ethical practices within the company.
07:50 Despite these challenges, OpenAI remains a key player in the AI field, and their new model could be a game-changer.
07:57 The company is pushing the boundaries of what AI can do, aiming to create a machine that can match human intelligence.
08:03 But how OpenAI handles these recent challenges will shape the future of artificial intelligence
08:08 and determine whether they can truly achieve their mission of benefiting all of humanity.
08:13 Alright, if you found this video informative, don't forget to like, share, and subscribe for more content on the latest in AI.
08:20 Let me know your thoughts in the comments below.
08:22 Do you think OpenAI will achieve AGI?
08:25 What are your concerns about these advancements?
08:27 I'd love to hear from you.
08:29 Thanks for watching, and I'll see you in the next one.
