Harvard professor Bharat Anand explains how AI is changing education by improving access, not just intelligence. He explores its impact on jobs, learning, and the role of teachers.
Transcript
00:00Speaker 1 – We come now to what is, to my mind, one of the most important conversations
00:07of the moment.
00:10How do we educate our children at a time when machines can do what they can, when the answer
00:17to every single query you may have is available at a chat prompt? What skills do we equip our
00:25children with? What do we teach them?
00:27My wife is here.
00:28It's something that we talk about all the time.
00:29So what we did is, we decided to call in one of the best experts in the world, someone
00:34who is researching and thinking very deeply, in fact even probably writing a book on the
00:39future of education.
00:40He is one of the top professors at Harvard Business School.
00:42He has made a lot of effort to fly down specially to the India Today Conclave to be here.
00:47Ladies and gentlemen, can we have a very warm round of applause as I welcome Professor Bharat
00:51Anand.
00:52He is the Vice Provost for Advances in Learning and the Henry R. Byers Professor of Business Administration.
00:58He is an expert on digital strategy, media, entertainment strategy, corporate strategy
01:02and organizational change and at this moment, he is focusing very deeply on the future of
01:08education.
01:09If you've got young kids, or if you are middle-aged yourself, wondering how to upskill yourself,
01:13this is a session you need to pay a lot of attention to.
01:15How we are going to do this is this: Professor Anand will first make a presentation, and then
01:20I have lots of questions, and I am sure so do you, about what we teach our children, how
01:25we train them, and how we should go about it, so just keep all those thoughts in your head.
01:29We will hand the stage to Professor Anand.
01:31This is a master class.
01:32He said, I don't need to sit anywhere, and you can be cold called, okay.
01:35Cold call is, if you are not paying attention to him, he sees it, because it's around lunchtime
01:40and you seem to be looking at the door, looking around for food, and he can cold call you and
01:44ask you some questions.
01:45So, be on standby for that.
01:46With that, Professor Anand, the stage is yours.
01:48Thank you, Rahul.
01:50Good morning.
01:51I need some more energy.
01:53Good morning.
01:55It's a pleasure to be here with all of you today to talk about Gen-AI and education.
02:03For those who don't know what Gen-AI is, imagine a person who is often wrong but never in doubt.
02:12Now, be honest with me, how many of you thought about your spouse?
02:18I did not, okay, but that's Gen-AI.
02:22And what I want to talk about is what happens when large language models like ChatGPT
02:29and generative AI intersect with institutions like Harvard where I sit and I've been there
02:34for the last 27 years, currently overseeing teaching and learning for the university.
02:40Let me just ask you a question.
02:42How many of you think in the next five to 10 years, generative AI will have a very large
02:48impact on education?
02:50Just raise your hands.
02:52How many would say a moderate impact?
02:55So we have a few.
02:57How many would say little to no impact?
03:01Pretty much none, okay.
03:03Let me come back to this.
03:05Here's a chart showing the rise of technologies and the time it took for different technologies
03:11to reach 50% penetration in the U.S. economy.
03:16So if you look at computers, it actually took 20 years to reach about 30% penetration.
03:23Radio, it took about 20 years to reach half the population.
03:29TV about 12 years, smartphones about seven years, smart speakers about four years, and
03:37chatbots about two and a half years.
03:40This is part of the reason we're talking about this today.
03:44Here's what we know so far about gen AI in education.
03:49First, the transformative potential stems from its intelligence.
03:54That's the I in AI.
03:58Secondly, as prudent educators, we should wait until the output is smart enough and
04:07gets better and is less prone to hallucinations or wrong answers.
04:13Third, given the state of where bot tutors are, it's unlikely, I think many believe,
04:19that it's going to be ultimately as good as the best active learning teachers who have
04:23refined their craft over many, many years and decades.
04:27Fourth, and Sal Khan talks about this, this is likely to ultimately level the playing
04:32field in education.
04:34And finally, the best thing we can do is to make sure that we secure access to everyone
04:39and let them experiment.
04:42Before you take a screenshot of this, don't, because I'm going to argue all of this is
04:48wrong.
04:51Now that I hopefully have your attention, I'm going to spend the next 10 minutes arguing
04:54why.
04:55Let's actually start with the first one, which is the transformative potential stems from
05:01how intelligent the output is.
05:04I would argue, and in fact, we just heard this from the previous speaker, we've been
05:07actually experiencing AI for 70 years, machine learning for upwards of 50 years, deep learning
05:14for 30 years, transformers for seven to eight years.
05:17This has been improving gradually over time.
05:20There were some discrete changes recently, but the fundamental reason why this has taken
05:24off, I would argue, has less to do with the discrete improvements in intelligence two
05:29years ago, as opposed to the improvement in access or the interface that we have with
05:35the intelligence.
05:36What do I mean by that?
05:37I'm going to give you the one minute history of human communication.
05:41So we started out sitting around campfires, talking to each other.
05:45From there, we started writing pictures on the walls.
05:49That was graphics.
05:51From there, we started writing scrolls and books.
05:54That was formal text.
05:55And finally, the pinnacle of human communication, which was ones and zeros, and that's mathematics.
06:02That's the evolution of human to human communication.
06:05The evolution of human to computer communication has gone exactly in the opposite direction,
06:09which is 60, 70 years ago, starting with punch cards, ones and zeros.
06:13Those of you old enough might remember that.
06:16Then we moved to things like DOS prompts, commands that we had to input.
06:21By the way, and this is the fundamental thing: take Windows 1.0 and
06:25Windows 3.0. Functionally, they were almost identical.
06:30The big difference was the interface, meaning we moved to a graphical user interface and
06:35suddenly 7-year-old kids could be using computers.
06:38That I think is more similar to the revolution we're seeing now, which is AI for a long time
06:43was the province of computer programmers, software engineers, tech experts.
06:48With ChatGPT, it basically became available to every one of us on the planet through a
06:52simple search bar.
06:54That's basically the reason for the revolution.
06:56Where is this going?
06:58Probably towards just audio.
07:00I don't know if anyone can guess what's the next evolution of this in terms of communication.
07:04Neural, reading emotions.
07:10You might argue it's basically us grunting and shaking our arms; formally, that would be called
07:15the Apple Vision Pro.
07:19You could argue we are regressing as a species.
07:22On the other hand, you could argue that in fact what's happening is that the distance
07:26between humans and computers is fundamentally shrinking.
07:30That's the first thing I just want to say, which is fundamentally this is about access.
07:35What does this mean?
07:37It means that, does anyone know what this is?
07:42This is Photoshop.
07:44There's a lot of people who spend one year, two years, four years trying to master this,
07:48graphics design.
07:50Arguably, we don't need this kind of expertise anymore.
07:53We can simply get it by communicating directly in natural language with computers now.
07:59This, for those of you who don't know, is Epic.
08:01It's medical records software.
08:03My wife, who's a cardiologist, does not like this.
08:06She spends two hours every single day filling in notes on these software records.
08:12You could argue sometime in the near future that communication will become much simpler.
08:17By the way, one of the things to keep in mind, for every one of you sitting in organizations
08:23(and by the way, this is a happy organization), is to think about what this is likely to do to
08:27the org structure.
08:30If you think about the bottom of this organization, there's people who have expertise in different
08:34kinds of software.
08:36Some expertise in Photoshop, some in Concur, some in different kinds of software.
08:44You could argue there's going to be consolidation within those functions.
08:47The middle managers who used to oversee all these software experts, it's likely we're
08:52going to see shrinkage there.
08:54In fact, you could argue all the way that the person at the top could, in fact, do sales,
08:59graphic design, marketing, everything by just interacting directly with the computer.
09:04It's not a stretch to say, and some people predict this, that the first one-person, billion-dollar
09:08company is going to be likely to be born pretty soon.
09:12People are already working on this.
09:14I would urge you to think about this question, which is what does this mean for your expertise
09:19in organizations or the organizations you run?
09:22Because that's going to have big implications for how you run these organizations.
09:26All right.
09:27That's the first point, which is fundamentally this is not about intelligence, but it's how
09:30it's accessed.
09:32The implication of this is more people will be able to use more computers for specialized
09:36purposes, but it doesn't necessarily mean it's likely to be the same people.
09:42That's the first thing.
09:44Second, I think we all look at these hallucinations and we say, let's wait.
09:50Let's wait until it gets better.
09:52By the way, that overlooks the fact that hallucinations are a fundamental, intrinsic property of generative
09:57AI, because these are probabilistic models.
10:00But I would go further and say even when AI capabilities fall far short and impair the
10:06human value proposition, there's still a reason to adopt it.
10:10Why do I say that?
10:12I'm a strategist.
10:13As strategists, we think of two sides of the equation.
10:17One is the benefit side.
10:19What are customers willing to pay?
10:20The other is the cost or the time side.
10:24Even if there's no improvement in intelligence, simply because of cost and time savings, there
10:30might be massive benefits to trying to adopt this.
10:33The metaphor I want you to think about is the following company.
10:40Has anyone flown Ryanair?
10:45What is the experience like, Ishan?
10:49Basic, efficient.
10:50By the way, when I ask my students this, they often say, I hate it every single time I fly.
10:56And of course, it begs the question, why are you repeatedly flying it?
10:59This is an airline, like most low cost airlines.
11:03It doesn't offer any food on board, no seat selection.
11:06You've got to walk through the tarmac.
11:08You've got to pay extra for bags.
11:09No frequent flyers, no lounges.
11:11And this is the most profitable airline in Europe for the last 30 years running.
11:16Why?
11:17It's not providing a better product.
11:20It's saving cost.
11:23That's the metaphor I would love for you to keep in mind when you think about generative
11:26AI and its potential.
11:28So let me just walk through this.
11:29And sorry, as a strategist, I have to put up a two by two matrix at some point.
11:34There's two dimensions here I'd love for you to think about.
11:36The first is, what is the data that we're inputting into these large language models?
11:42And the data could be explicit in the form of files, like text files, numbers, et cetera.
11:48That's explicit data.
11:50Or it could be tacit knowledge, meaning creative judgment, et cetera, et cetera.
11:56But the second dimension is as important, which is, what's the cost of making an error
12:02from the output?
12:04Not the prediction error.
12:06What's the cost of something going wrong?
12:09In some cases, it could be low.
12:10In some cases, it could be high.
12:12So let's actually talk through some examples.
12:15First is, explicit data, low cost of errors, that's high volume customer support.
12:21For the last 30 years, this thing has been automated.
12:23By the way, that trajectory is likely to continue.
12:26Why do I say that?
12:28It is virtually impossible for any company to have people manning the phones to talk
12:32to 100,000 customers.
12:34This is the direction where it's going.
12:37Even if we have 2% or 3% or 4% errors, it's OK.
12:41It's simply much more efficient to respond to customers in this way.
12:45So that's one cell.
12:47The second is drafting legal agreements.
12:51For all the lawyers in the room, just watch out.
12:53It's going to be much, much easier.
12:55It already is to draft legal agreements.
12:57But we can't rely on generative AI to simply give us this thing without checking it.
13:04Some of you may have heard of that lawyer who did that a couple of years ago.
13:08Basically didn't review the agreements.
13:10There were some errors.
13:11He got fired.
13:12So we might need a human in the loop.
13:14You don't want to basically take the output at face value, OK?
13:18Because the cost of making an error is simply too high.
13:21Third, on the top left, is creative skills, design, marketing, copywriting.
13:28These are things where it's hard to evaluate what's truly better or worse.
13:33And so in some sense, the design outputs we get, the social media content we get as
13:39suggestions from generative AI, pretty good.
13:42The cost of making an error there, not that high.
13:45And finally, we get to the top right, where we want to be very, very careful.
13:50Because this is like large enterprise software integration.
13:54You don't want to go there anytime soon, OK?
13:56Or designing an aircraft.
13:58Now, what does it mean for education?
14:00Let's actually play this out.
14:02I'm going to use our example as an illustration.
14:05If I'm sitting at Harvard, basically we get, when we open up the website, about 10,000
14:13applications in the first couple months for admission.
14:16Maybe 30,000 people who look at the website.
14:19By the way, they have questions.
14:20It's impossible to speak personally and individually to everyone who has a question.
14:26This is beautiful for chatbots to be able to simply respond.
14:31Again, if there's an error in the response, it's OK.
14:35I mean, these are people who are simply thinking about applying, and they might find information
14:40in other ways.
14:41Secondly, legal contracts with food contractors.
14:45We want to be careful about human in the loop.
14:47Thirdly, designing social media content, when we go to the top left.
14:51This is something we can do far more efficiently today with generative AI.
14:55And finally, I can assure you, we're not going to be using this anytime soon for hiring faculty
15:00or disciplinary actions against students.
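To make that framework concrete, here is a minimal sketch in Python; the cell labels and recommendations are illustrative assumptions drawn from the examples above, not a tool from the talk.

```python
# A minimal sketch of the two-by-two framework above. The labels and
# recommendations are illustrative assumptions, not a tool from the talk.

def recommend(data_type: str, error_cost: str) -> str:
    """Map a task onto the framework's four cells.

    data_type: "explicit" (files, text, numbers) or "tacit" (judgment, creativity)
    error_cost: "low" or "high" -- the cost of acting on a wrong output
    """
    cells = {
        ("explicit", "low"): "Automate now (e.g., high-volume customer support).",
        ("explicit", "high"): "Adopt with a human in the loop (e.g., legal drafts).",
        ("tacit", "low"): "Adopt now (e.g., design and copywriting suggestions).",
        ("tacit", "high"): "Hold off (e.g., enterprise integration, aircraft design).",
    }
    return cells[(data_type, error_cost)]

# The examples above, mapped onto the cells:
for task, cell in [
    ("admissions FAQ chatbot", ("explicit", "low")),
    ("food-contractor contracts", ("explicit", "high")),
    ("social media content", ("tacit", "low")),
    ("faculty hiring decisions", ("tacit", "high")),
]:
    print(f"{task}: {recommend(*cell)}")
```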
15:02By the way, think about this not just for your organization.
15:05Think about it for you individually.
15:07So if I was to do that, responding to emails.
15:11I get a lot of emails every day.
15:15Most of these emails are things that are very standard.
15:18Professor, when are your office hours?
15:20Where is the syllabus posted?
15:22By the way, even in other cases where students ask questions, like, Professor, I have two
15:26offers.
15:27One from McKinsey, one from Boston Consulting Group.
15:32The cost of an error is not that high in my response.
15:35You'll be OK.
15:36Or I'm trying to decide whether to go to Microsoft or Amazon.
15:40You'll be OK.
15:41OK, I'm just kidding, by the way.
15:42I can assure you I respond to all those emails individually.
15:45But you get the point.
15:48Writing a case study.
15:49It takes us nine months to write these famous Harvard Business School case studies.
15:53The head of the MBA program last year said, I want to teach a case on Silicon Valley
15:57Bank tomorrow.
16:00What he did was go to ChatGPT and said, write a case like Harvard Business School with these
16:05five sections, financial information, competitor information, regulatory information.
16:10It spits it out.
16:11He then said, please tweak the information.
16:14Give me this data on the financials.
16:16Talk about these competitors.
16:18He iterated.
16:19It kept spitting out information.
16:21From beginning to end, he had a case study complete in 71 minutes.
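As a rough sketch of that draft-and-iterate loop, here is one plausible shape in Python using the OpenAI chat API; the model name, prompts, and section list are placeholder assumptions, not the actual workflow described here.

```python
# A rough sketch of the draft-and-iterate loop: the model drafts, a human
# reviews and asks for revisions. Model name and prompts are placeholders.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

history = [{
    "role": "system",
    "content": "You draft teaching case studies with sections for background, "
               "financials, competitors, regulation, and discussion questions.",
}]

def draft(instruction: str) -> str:
    """Send one revision request and keep the running conversation."""
    history.append({"role": "user", "content": instruction})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

case = draft("Draft a teaching case on a recent bank failure.")
case = draft("Expand the financials section and compare two competitors.")
# ...a human keeps iterating until the draft is classroom-ready.
```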
16:28If you're not scared about what the potential here is, by the way, we are.
16:33Brainstorming a slide for teaching.
16:34There's a couple of slides in this talk where I took some pictures and I started trying
16:38to resize them.
16:40PowerPoint Designer simply threw up some suggestions saying, here's how you might want
16:44to do it, in one second.
16:46It didn't take me 10, 15 minutes to try and redesign these slides.
16:49A beautiful application for using this.
16:52And finally, thinking about exactly how I teach in the classroom or my research direction,
16:56I'm not going there anytime soon.
16:59I'd love for you to think about a couple things from this simple framework.
17:04Number one, we are obsessed with talking about prediction errors from large language models.
17:10I think the more relevant question is the cost of making these errors.
17:14Meaning, in some cases, the prediction error might be 30%.
17:19But if the cost of error is zero, it's okay to adopt it.
17:23In other cases, prediction errors might be only 1%, but the cost of failure is very high.
17:30You want to stay away.
17:32So stop thinking about prediction errors.
17:33Let's start thinking about the cost of errors for organizations.
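A back-of-the-envelope way to see this point, in a short Python sketch with made-up numbers:

```python
# Back-of-the-envelope: what matters is error rate times cost per error,
# not the error rate alone. The numbers below are made up for illustration.

def expected_error_cost(error_rate: float, cost_per_error: float) -> float:
    return error_rate * cost_per_error

# 30% error rate, but wrong answers are cheap to catch and discard:
print(expected_error_cost(0.30, 0))           # 0 -> fine to adopt

# 1% error rate, but each failure is catastrophic, say $10 million:
print(expected_error_cost(0.01, 10_000_000))  # 100,000 -> stay away
```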
17:36Secondly, if you notice what I've done, I've broken down the analysis from thinking about
17:41industries, the impact of AI on banking or education or retail, into jobs.
17:49And in fact, I've gone a step further and broken it down into tasks.
17:53So don't ask the question of what is AI going to do to me.
17:57Ask the question, which are the tasks that I can actually automate?
18:00And which are the tasks I don't want to touch?
18:03And the third is, I don't know about you, in my LinkedIn feed, every single day, I get
18:08new information about the latest AI models and where the intelligence trajectory is going,
18:14getting better and better.
18:15That's basically about the top right cell.
18:19I would say that's a red herring for most organizations, because basically there's three
18:23other cells where you can adopt it right now, today, with a human in the loop.
18:29So that's just something I'd love for you to think about.
18:32By the way, we did this with Harvard faculty, where we interviewed 35 Harvard faculty who
18:38were using Gen AI deeply in their classrooms.
18:42Those videos are up on the web.
18:43If you just type in Google generative AI faculty voices Harvard, you see all these videos.
18:49Here's some examples of what they were doing.
18:51A faculty copilot chatbot.
18:53It's almost like a teaching assistant that simulates the faculty member, answers simple
18:59questions, and is available to you 24/7.
19:02Secondly, one of the things that we as faculty spend a lot of time thinking about is designing
19:09the tests and the quizzes and the assessments every year.
19:13And we've got to make it fresh, because we know our students probably have access to
19:17last year's quizzes.
19:20Large language models are basically spitting this out in a couple minutes.
19:23And of course, as individuals, we would refine it.
19:26We're not going to just take it at face value.
19:28We refine it.
19:29We look at it.
19:30But it's saving a lot of time.
19:31Third, when we're giving lectures, students often have questions which they're too scared
19:37to ask live in front of 300 students.
19:40Oh, it's beautiful if they can simply type in their questions, have Gen AI summarize the questions
19:46and put them up on a board.
19:47The faculty know exactly what the sentiment is in the classroom and where students are
19:51getting confused.
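As a minimal sketch of that last example, here is one plausible shape in Python; the model name and prompt are assumptions, not Harvard's actual implementation.

```python
# A minimal sketch of the lecture-questions example above: collect typed
# questions and have a model summarize the themes for the instructor.
# Model name and prompt are assumptions, not Harvard's implementation.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def summarize_questions(questions: list[str]) -> str:
    joined = "\n".join(f"- {q}" for q in questions)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Group these student questions by theme and summarize "
                        "the top three points of confusion in plain language."},
            {"role": "user", "content": joined},
        ],
    )
    return reply.choices[0].message.content

print(summarize_questions([
    "Why does the entropy term appear twice?",
    "Is the second law about closed systems only?",
    "I don't get step 3 of the derivation.",
]))
```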
19:52By the way, notice one thing about all these examples.
19:57Every single one of them is about automating the mundane.
20:02It's not about saying, let's rely on the intelligence that's getting better and better.
20:06It's the left column of the framework I was talking about.
20:10So these are ways that it's being used nowadays in our classrooms.
20:15The third thing, this premise that bot tutors are unlikely to be as good as the best instructors.
20:22We had a few colleagues at Harvard who tested this for a course called Physical Sciences 2.
20:28This is one of the most popular courses.
20:30And by the way, the instructors are very good in that course.
20:32They've been refining active learning teaching methods for many years.
20:36What they did as an experiment was say, for half the students every week, we'll give them
20:42access to the human tutors.
20:44For the other half, give them access to an AI bot.
20:47And by the way, the nice thing about the experiment is they flipped that every single week.
20:51So some people had access to the human tutors, and some people had access to the AI for
20:54that week, but then they'd flip the next week.
20:58Every single week, they tested your mastery of the content during that week.
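As a sketch of that crossover design, here is a small Python illustration, assuming a simple random split; the actual study's protocol was more involved.

```python
# A sketch of the crossover design described above: half the students get
# the AI tutor in a given week, half get human tutors, and the halves swap
# the next week, so everyone experiences both conditions. Illustrative only.

import random

def assign_crossover(students: list[str], n_weeks: int) -> dict:
    students = students[:]          # don't mutate the caller's list
    random.shuffle(students)
    half = len(students) // 2
    group_a = set(students[:half])  # group A starts with the AI tutor
    schedule = {}
    for week in range(n_weeks):
        for s in students:
            in_a = s in group_a
            ai_week = (week % 2 == 0)  # group A gets AI on even weeks
            schedule[(s, week)] = "AI tutor" if in_a == ai_week else "human tutor"
    return schedule

schedule = assign_crossover([f"student{i}" for i in range(6)], n_weeks=4)
print(schedule[("student0", 0)], schedule[("student0", 1)])  # flips week to week
```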
21:04And what was interesting was the scores of the students using the AI bots were higher
21:12than with the human tutors.
21:14And these are tutors who've been refining their craft year in and year out.
21:18What was even more surprising is engagement was higher.
21:22By the way, this is a first experiment.
21:24The only point is we better take this seriously.
21:28Next, will it level the playing field in education? Part of the premise is that everyone
21:34has access.
21:36Any individual in a village or a low-income area is basically going to have access to
21:41the same technology as those who are in elite universities.
21:46And this is going to level everything.
21:49There's a possibility it might go exactly the other way, which is the benefits might
21:54accrue disproportionately to those who already have domain expertise.
21:59Why do I say this?
22:00Think about a simple example.
22:02When you have knowledge of a subject, and you start using generative AI or chat GPT,
22:08the way you interact with it, asking it prompts, follow on prompts, you're basically using
22:14your judgment to filter out what's useful and what's not useful.
22:18If I didn't know anything about the subject, I basically don't know what I don't know.
22:23So in some sense, the prompts are garbage in, garbage out.
22:26By the way, this is being shown in different studies.
22:29There was a meta-analysis summarized by The Economist a couple of weeks ago, where they
22:34basically talk about different kinds of studies that are showing for certain domains and expertise,
22:40the gap between high-performance, high-knowledge workers and no-knowledge workers is actually
22:45increasing.
22:47We better take this seriously.
22:48Why?
22:49And this is not the first time this has happened.
22:52Twelve years ago, there was a big revolution in online education.
22:56Harvard and MIT got together, created a platform called edX, where we offered free online courses
23:02to anyone in the world.
23:04By the way, they still exist.
23:06If you want to take a course from Harvard for free, or pay $100 for the certificate, you
23:11can get one on virtually any subject.
23:14What happened as a result?
23:16edX reached 35 million learners, as did Coursera and Udacity and other platforms.
23:21What was beautiful is there were roughly 3,000 free courses.
23:26The challenge was completion rates of less than 5%.
23:30Why?
23:31By the way, if you're used to a boring lecture in the classroom, the boring lecture online
23:35is 10 times worse.
23:37So there's virtually no engagement.
23:39People take a long time to complete or may not complete.
23:41But here's what's interesting.
23:44The vast majority, 75% of those who actually completed these courses already had college
23:50degrees, meaning the educated rich were getting richer.
23:55Now think about that.
23:56That's very sobering.
23:57Why is that?
23:59Because those are people used to curiosity, intrinsic motivation.
24:03By the way, they're used to boring lectures.
24:04They've gone to college.
24:06But this has big implications for how we think about the digital divide.
24:09So I just want to keep that in your mind.
24:12And the last thing I just want to say is, rather than going out and trying to create
24:16tutor bots for as many courses as possible, I think what we really need to do is have
24:20a strategic conversation about what's the role and purpose of teachers, given the way
24:26the technology is proceeding.
24:28The one thing I will say here is that when we think about what we learned in school,
24:34okay, think back.
24:35Think back many, many years.
24:38We learned many things.
24:41Tell me honestly, how many of you have used geometry proofs since you graduated from high
24:47school?
24:49Three people.
24:52Why did we learn state capitals and world capitals of every single country?
25:00Foreign languages.
25:01And by the way, this is Italian.
25:03Devi is not a goddess.
25:05Devi in Italian says, you must.
25:08They have similarities.
25:11Why did we learn foreign languages?
25:13When we think about business concepts in our curriculum, I often get my students who come
25:17back 10 years later and say, those two years were the most transformative years of my life.
25:21I often ask them, what were the three most important concepts you learned?
25:26They said, we have no idea.
25:27I'm like, no, no.
25:28Okay, give me one.
25:29No, no.
25:30We have no idea.
25:31I'm like, so why do you say this was transformative?
25:33The point simply being they're saying this was transformative not because of the particular
25:37content, but because of the way we were learning.
25:40We were forced to make decisions in real time.
25:43We were listening to others.
25:44We were communicating.
25:46What are they saying?
25:47They're saying that the real purpose of case method was listening and communication.
25:52The real purpose of proofs was understanding logic.
25:56The real purpose of memorizing state capitals was refining your memory.
26:00By the way, that example there is the poem If by Rudyard Kipling.
26:04Some of you might remember this from school.
26:06It goes something like this.
26:07If you can keep your head when all about you are losing theirs and blaming it on you.
26:11I have PTSD because when my nephew was reciting this to me, preparing for his 10th
26:15grade exams, I was like, what the heck are you doing?
26:17But it was basically refining memory skills.
26:20And for foreign languages, it was just learning cultures and syntax.
26:23When we go deep down and think about what we were actually teaching, I think that probably
26:29gives us a little more hope.
26:31Because it means it doesn't matter if some of these things are probably accessible through
26:35gen AI.
26:37When calculators came along, we thought it's going to destroy math skills.
26:41We're still teaching math, thankfully, 50 years later, and it's pretty good.
26:44So this is something that I think is going to be an important strategic conversation.
26:48This is the slide I'd love for you to keep in mind, which is basically everything I've
26:52just said.
26:53If you want to take a screenshot, this is the slide to take a screenshot.
26:57Thank you all so much.
26:59And I hope to be in touch.
27:06At HBS, I took Professor Anand's class on Economics for Managers. Listening to him feels
27:10like being back in class.
27:11Fortunately, he didn't cold call anyone, which is terrific.
27:14So thank you for that.
27:15I have a few questions.
27:17We've got young children, and you've got so much knowledge available now at chat prompts.
27:25What's your advice to everyone who's got young children now, wondering what they should
27:28be teaching their children so that, when they grow up and we don't know what the actual
27:33capabilities of these machines are, what they've learned is still useful?
27:38How old are your kids, Rahul?
27:39So my son is nine, and my daughter's five.
27:42What are you telling them right now?
27:43Now, I want to learn from you, and we're telling them a lot of stuff, whether good, bad, ugly,
27:47I don't know.
27:48I'm trying to refine that and give them a framework of what we should be telling them.
27:51So there's two things.
27:52So I think, first of all, this is probably one of the most common questions I get.
27:57By the way, it's really interesting that the tech experts, and there was an article in
28:02the Wall Street Journal about this 10 days ago, are basically telling their kids, don't
28:06learn computer science.
28:10That skill, at least basic computer programming, is gone.
28:14Advanced computer science, advanced data analysis, if you want to do that, that's going to be
28:18fine.
28:19What are they telling their kids to learn?
28:21They're telling their kids to learn how to teach dance.
28:24They're telling their kids to learn how to do plumbing.
28:27They're telling their kids to learn about the humanities.
28:31Why are they saying that?
28:32Implicitly, they're saying, what are those skill sets that are robust to machine intelligence?
28:40Now I will say it is virtually impossible to predict that, given the pace at which this
28:44improvement is occurring.
28:46I probably have a slightly different kind of answer.
28:48By the way, my daughter's majoring in psychology, without me telling her anything.
28:53So the kids, I think, know basically where this is going.
28:56But the one thing I'll say, Rahul, is I don't know, when you started out college, what were
28:59you majoring in?
29:00Journalism.
29:01Journalism.
29:02You started out with journalism.
29:04OK, that's enlightened.
29:06I started out doing chemistry.
29:09And then the reason I switched to economics was probably like many of you.
29:13There was one teacher who inspired me.
29:16And that's what made me switch.
29:19And I would say to kids, follow the teachers who inspire you.
29:24And the reason is, if you can get inspired and passionate about a subject, that's going
29:28to build something that's going to be a skill that will last all your life, which is curiosity,
29:35which is intrinsic motivation.
29:36We talked about it in the last session.
29:38This is no longer about learning episodically.
29:41It's about learning lifelong.
29:43And that's, I think, going to be the most important skill.
29:45But in the way that Indian families operate, as do so many Asian families, parents
29:50want to equip their children with the skills that are likely to be most useful when they
29:55grow up.
29:56So it used to be, say, engineering and doctors back in the day, then IT a few years ago.
30:03So if you were looking ahead, what do you think children should be learning so they
30:07acquire skills which are useful in the job market years down the line?
30:11I think that's honestly being too instrumental.
30:14As I said, 10 years ago, a lot of my students were talking to me and saying, what should
30:17I major in?
30:18I never told them computer science.
30:20If I told them that, I would have regretted it.
30:22But I genuinely mean this.
30:24That's looking at things too narrowly.
30:26What I would say is think about things like creativity, judgment, human emotion, empathy,
30:32psychology.
30:33Those are things that are going to be fundamentally important regardless of where computers are
30:37going.
30:38By the way, you can get those skills through various subjects.
30:41It doesn't matter.
30:42It's not a one-to-one mapping between those skills and a particular topic or disciplinary
30:46area.
30:47This is partly what I'm saying.
30:48Really think about where their passion is.
30:50How do we teach our children how to think?
30:52Because everything's available on Google, Copilot, ChatGPT.
30:56You can just ChatGPT it.
30:58Joining the dots, giving them a framework to be able to interpret, analyze, and think:
31:04how do you teach them that when the easiest thing is, let's Google it?
31:10It's a good question.
31:11Just two things on that.
31:13The first is, there was an interesting study done by colleagues at MIT recently where they
31:18had groups of students and they were asked to undertake a particular task or learn about
31:23a topic.
31:25Some students were given AI chatbots.
31:28Some students were only given Google Search with no AI.
31:33What they found is the students with access to AI intelligence learned the material much
31:39faster, but when it came time to apply it on a separate test, which was different from
31:45the first one, they found it much harder.
31:48The students who learned the material through Google Search with no other access took longer,
31:55but they did much better on those tests.
31:57Why is that?
31:58Part of the issue is learning is not simple.
32:02It takes effort.
32:03Okay?
32:04And so part of the issue is you can't compress that effort.
32:10The harder it is to learn something, the more likely you'll remember it for longer periods
32:16of time.
32:18So I think for me, the big implication is when I tell my students, look, all these technologies
32:22are available, it depends on how you use it, my basic approach to them is just saying study.
32:31Because if you get domain expertise, you will be able to use these tools in a much more
32:36powerful way later on.
32:39So in some sense, this goes back to the notion of agency.
32:43It's like we can be lazy with tools and technologies or we can be smart.
32:47It's all entirely up to you.
32:50But this is my advice.
32:51You know, some of my friends in Silicon Valley have the toughest controls on their children
32:56when it comes to devices.
32:58You know, we look at how much time our children can spend on their iPads or TV, we're far
33:02more lenient.
33:03And they're the guys who are actually in the middle of the devices and they're developing
33:06them and they know the dangerous side effects.
33:08Now, those devices are also the repository of knowledge, which is where you can learn
33:12so much from.
33:13So as an educator, every parent has his own take on how much time children can spend.
33:17But as an educator, how do you look at this device addiction, just spending far more time
33:21picking up some knowledge but also wasting a lot of time?
33:24Yeah, I think, I mean, there's a nuance here, which is basically what they're doing is not
33:29saying don't use devices.
33:31They're saying don't use social media.
33:34And this goes back again to one of the things we were talking about earlier.
33:38We have gone through a decade where things like misinformation, disinformation, and
33:43so on, there is no good solution as far as we know today.
33:47There are also various other kinds of habits and so on that are getting formed.
33:51That's partly what they're saying stay away from.
33:53They're not saying stay away from computers.
33:54We can't do that.
33:55And in fact, you don't want to do that.
33:57But there's a nuance in terms of how we interact with tools and computers that we just want
34:01to keep in mind when we think about guardrails, right?
34:04Are you seeing your students getting more and more obsessed with their devices?
34:08And how does that impact?
34:09What are you trying to do to get them to socialize more?
34:13You know, to spend more time with each other and not be stuck on their phones?
34:16Yeah, it's a very interesting question.
34:17So in some sense, last year, we had a conference at Harvard.
34:21We had 400 people from our community attend the conference.
34:25And some of our colleagues were saying, we should have a policy of laptops down.
34:29No laptops in class, take out our devices.
34:33I was coming in for a session right afterwards.
34:35But part of the reason I wanted them to take out their mobile phones was I had two or three
34:40polls during my lecture where I wanted them to give me their input.
34:44So I said, mobile phones out.
34:47And this was sort of crazy.
34:49But the story illustrates something interesting, which is these devices for certain things
34:53can be really powerful.
34:54It can turn a passive learning modality into an active learning modality where every single
34:59person is participating.
35:01We don't want to take that away.
35:03What we want to try and deal with is people playing games while you're lecturing.
35:07Now, by the way, me personally, I just put it on myself.
35:11If I'm not exciting enough or energizing enough for my students to be engaged, use your mobile
35:15phones.
35:16That's on me.
35:18But that's partly what the challenge is.
35:19No, no.
35:20They're quite engaged.
35:21Show of hands.
35:22How many felt engaged during the session?
35:23And how many were like,
35:24I'm hungry.
35:25When's lunch?
35:26Tell us.
35:27OK.
35:28So that, which is why agentic AI and chatbots can never do what professors can, right?
35:34So I'll take some questions.
35:36Kali has a question.
35:37Kali, go ahead.
35:38Hi, professor.
35:40You mentioned that one of the things that we should work on to teach our children is
35:44empathy.
35:46How do you actually teach empathy in our formal education system?
35:51Or does this just go back, then, to parents and family?
35:57It's a hard question.
36:01In fact, this is, by the way, one of the most important issues we're facing today on campuses.
36:07It's relevant even in higher education, not just for younger kids.
36:11When we talk about difficult conversations on campus, part of the reason we're facing
36:17those issues is because people are intransigent.
36:20It's like, I don't care what you say.
36:24I'm not going to change my mind.
36:26One of the things we introduced a couple of years ago on the Harvard application for undergraduate
36:31is a question that says, have you ever changed your mind when discussing something with anyone
36:36else?
36:37Something to that effect.
36:39That's basically saying how open-minded are we?
36:42That's one version of empathy.
36:43There's many other dimensions.
36:45I think part of the challenge is that we don't teach that in schools, right?
36:51We don't teach that formally in schools, which is partly why there's this whole wave now
36:55of schools, not just in other countries in India, which are starting to talk about how
37:00do we teach the second curriculum, the hidden curriculum?
37:03How do we teach those social and emotional skills, the book of life, so to speak?
37:07And I think, I mean, it's not rocket science to say this.
37:12It starts at home, right?
37:13That's basically what we do with our kids every single day.
37:18But that's something that's, I think, going to become fundamentally more important, partly
37:21because of the reasons of what I talked about.
37:23Dr. Sanjeev Baga, he has a question.
37:25Okay, I see lots of hands going up.
37:28Yes, Dr. Baga.
37:31Wonderful, wonderful listening to you.
37:33Just with regards to AI and technology, I've always said that AI and digital technology
37:39is not an expenditure.
37:41It's actually an investment.
37:43So, very quickly, if you'll allow me just 60 seconds, in healthcare, it gives you better
37:48clinical outcomes.
37:49It has decreased hospital-acquired infections, the number one cause of death in many
37:55hospital chains, to practically less than 1%.
37:59So it gives you a safer outcome.
38:01It gives you a better patient experience.
38:03The turnaround of the bed strength is a lot quicker.
38:06And more importantly is it gives you better operational excellence.
38:09So all the hospitals, as far as medical facilities are concerned, who have not embraced it as
38:14yet, will find it difficult to operate in the present environment.
38:18What AI and digital technology has made us learn as doctors is that data is the new gold.
38:24If you don't analyze data, if you don't see what your results are, if you don't see where
38:28your clinical outcomes are, then you can't go forward.
38:31So AI is what is in the future for us, all of us.
38:35That's more in the form of an observation.
38:36Let me just elaborate on that in two ways.
38:38One is, I think I would just go back; it's useful to contextualize AI, right?
38:44Like right now, we often get obsessed by the latest technology.
38:48When we think about upskilling, reskilling in education, there's a revolution that started
38:53a decade ago.
38:54As I alluded to, there's basically 3,000 courses available to all of you today on any subject.
39:01So the notion of let's wait for AI, no, no, no, it's already there.
39:05My father-in-law, who's 92 years old, during COVID, he said, Bharat, what should I do?
39:09I said, we have all these courses from Harvard available.
39:12In the last two years or three years, he's completed 35 courses.
39:17Wow.
39:18OK, at the age of 92.
39:19Wow.
39:20By the way, he's paid $0 for that because he said, I don't need a certificate.
39:25So I told him, you're the reason we have a business model problem.
39:28But that's one aspect.
39:31The second aspect is sort of thinking about where you're going.
39:34I think you're exactly right, Sanjeev, which is every organization is going to have low-hanging fruit.
39:39The one thing I just caution is there's going to be a paradox of access, meaning if every
39:45organization, every one of your peers has access to the same technology as you,
39:50it's going to be harder for you to maintain competitive advantage.
39:54That's a fundamental question.
39:56OK, this is just a basic observation.
39:58So I just want to sort of mention that.
40:00But you're absolutely right about the low-hanging fruit in medicine and health care.
40:04OK, Toby Walsh has a question or an observation, and then there are lots of hands up.
40:08OK, I frankly don't know what to do because we're also out of time.
40:11So let this just be where we conclude.
40:13One of the greater challenges, especially in higher education, is the cost has gone through the roof.
40:18Are you optimistic that AI is going to be able to turn that around?
40:23So again, I'll just go back to what's happened in the last decade.
40:28As I said, you can now get access to credentials and certificates at a minimal cost compared to the cost of getting a degree.
40:36OK, just to put it in perspective, we have 17,000 degree students every year who come to Harvard.
40:42They are paying a lot of money.
40:44Those who need financial aid get financial aid.
40:47By the way, can anyone guess how many students we have touched over the last decade?
40:5310 times, 100 times that.
40:55It's about 15 million.
40:57That is not a story we publicize.
40:59But that's a story about the number of students who have actually taken a Harvard course or enrolled in a Harvard course.
41:05So in some sense, I think where we are today is the marginal cost of providing education is very, very low.
41:11What we need for that is not incremental improvement on the existing model.
41:17We need to basically break it apart and say, how do we put it back together again in a way that makes sense for everyone?
41:24There's an organization that we just started at Harvard called Axim, jointly with MIT,
41:29with the endowment from the sale of the edX platform, whose only function is to increase access and equity in education.
41:35And by the way, their focus is on 40 million people in America who start college but never complete it,
41:41not just because of cost, but for many other reasons.
41:44In some sense, the potential to reduce the cost is massive, but it's going to require leadership and strategy.
41:51This gentleman here has a question.
41:53Can someone just take the mic to him, please?
42:02So earlier it was, OK, use AI and it will summarize and help you in productivity.
42:07But with the latest OpenAI models like o3-mini and all that, they are doing reasoning which is much better than humans.
42:15So the people who are not using it are at a disadvantage.
42:20So isn't it right that students should use AI and be familiar with it and be up to speed with it,
42:29rather than not using it and being at a disadvantage to other students?
42:32Yeah, absolutely. There's no question about that.
42:34By the way, I sit at Harvard overseeing the generative AI task force for teaching and learning,
42:40and we have 17 faculty.
42:42The most interesting conversations I've had about adoption are with our students.
42:48Now, when we understand their behavior, it just throws up things that we wouldn't even have thought about.
42:53I'll ask you one question.
42:55We had a sandbox that we created for the entire Harvard community, which was a safe and secure sandbox,
43:00giving them access to large language models as opposed to using the public OpenAI tools.
43:04The adoption rate amongst our faculty was about 30%, 35% in the first year.
43:09What do you think the adoption rate was amongst our students?
43:14It was about 5%.
43:17So we were surprised.
43:18When we went to them, we said, what's going on?
43:20Are you familiar with the sandbox?
43:22They said, yeah, we are.
43:24We said, are you using it?
43:25They said, no.
43:26We said, are you using AI in any way?
43:28Yeah, yeah, we have access to ChatGPT.
43:30We have our own private accounts there.
43:32So we're like, wait a minute.
43:33Why are you not using the secure Harvard sandbox?
43:36What do you think their answer was?
43:40They said, why would we use something where you can see what we're inputting?
43:45Now, by the way, as faculty members, if the number one question we talk about with generative AI is,
43:51oh, we're worried about cheating in assessments,
43:53the students are listening to us.
43:54They're like, oh, if that's what you're worried about,
43:56we're not coming anywhere close to you.
43:58So part of the point is the students are far ahead of us in terms of using this.
44:02They're using it to save time.
44:03They're using it for engaging in deep learning.
44:05We better understand that ourselves to figure out what we can do.
44:09Join in, Lukaji.
44:10Brilliant presentation.
44:11Just wanted to understand one side of the spectrum.
44:14You have all the positives.
44:16What's on the other side?
44:18What risk do you think is there on the other side?
44:21It starts coding on its own, gets out of hand.
44:23Is that a possibility?
44:24So the risks are the things I talked about towards the end, which is, number one,
44:30we put our head in the sand as institutions, and we don't take this seriously.
44:35That's the first risk.
44:36The second risk is lazy learning, the way I would call it.
44:40Now, again, that's agency.
44:41It partly depends on you as a student.
44:44Do I want to be lazy?
44:45Do I not want to be lazy?
44:47The third risk is everything we were talking about in the previous session
44:50with respect to misinformation, disinformation.
44:53The fourth big risk is asking the fundamental question,
44:56what's our role as teachers?
44:58And I'll just share one anecdote in closing.
45:00There's a colleague at another school who called me and said,
45:04my students have stopped reading the cases.
45:07They're basically inputting the assignment questions into generative AI.
45:09And by the way, they're so smart, they're saying,
45:11give me a quirky answer I can use in class.
45:14Okay?
45:15The assessments are compromised.
45:17And get this, the faculty have stopped reading cases.
45:20They're inputting the cases and basically saying,
45:22give me the teaching plan.
45:25That's the downside.
45:28You know, we met on a flight from Delhi to Mumbai,
45:31and we had a long conversation about the future of education.
45:33You've been able to, in the past 45 minutes,
45:35recreate the magic of that conversation here on stage.
45:38Can we have a very warm round of applause for the professor?
45:41For making the effort of coming here and for joining us
45:44and for delivering this master class.
45:47Absolute pleasure.
45:48Thank you so much.
