The following is a conversation with Elon Musk, his fourth time on this, the Lex Fridman Podcast.

(Elon whistling)

I thought you were gonna finish it.

(Elon whistling)

It's one of the greatest themes in all of film history.

(whistling continues)

- Yeah, that's great.

- So I was just thinking about the Roman Empire. As one does.

(Elon laughing)

- There's that whole meme where all guys are thinking about the Roman Empire at least once a day.

- And half the population's confused whether it's true or not. But more seriously, thinking about the wars going on in the world today. And as you know, war and military conquest has been a big part of Roman society and culture. And it, I think, has been a big part of most empires and dynasties throughout human history, so.

- Yeah, they usually came as a result of conquest. I mean, there's some, like the Austro-Hungarian Empire, where there was just a lot of sort of clever marriages.

- But fundamentally there's an engine of conquest. They celebrate excellence in warfare. Many of the leaders were excellent generals.

- [Elon] Yeah.

- That kind of thing. So a big picture question, Grok approved, I asked if this is a good question to ask.

- [Elon] Tested, Grok approved.

- Yeah. At least on fun mode.

(Elon laughing)

To what degree do you think war is part of human nature versus a consequence of how human societies are structured? I ask this as you have somehow controversially been a proponent of peace.

- I'm generally a proponent of peace. I mean, ignorance is perhaps, in my view, the real enemy to be countered. That's the real hard part, not, you know, fighting other humans. But all creatures fight. I mean, the jungle is a... people think of nature as perhaps some sort of peaceful thing, but in fact it is not. There's some quite funny Werner Herzog thing where he's in the jungle, saying that it's basically just murder and death in every direction. I mean, the plants and animals in the jungle are constantly trying to kill each other, every single day, every minute. So it's not like, you know, we're unusual in that respect.

- Well, there's a relevant question here: whether with greater intelligence comes greater control over these base instincts for violence.

- Yes. We have much more ability to control our limbic instinct for violence than, say, a chimpanzee. And in fact, if one looks at, say, chimpanzee society, it is not friendly. I mean, the bonobos are an exception, but chimpanzee society is filled with violence. And it's quite, quite horrific, frankly. That's our limbic system in action. Like, you don't wanna be on the wrong side of a chimpanzee; they'll eat your face off and tear your nuts off.

- Yeah, basically there's no limits or ethics or anything like just war. There's no just war in chimpanzee societies. It's war and dominance by any means necessary.

- Yeah, chimpanzee society is like a primitive version of human society. They're not peace-loving, basically, at all. There's extreme violence. And then once in a while, somebody who's watched too many Disney movies decides to raise a chimpanzee as a pet, and then that eats their face or rips their nuts off or chews their fingers off, and that kind of thing. It's happened several times.

- Ripping your nuts off is an interesting strategy for interaction.

- It's happened to people. It's unfortunate. Like, that's, I guess, one way to ensure that the other chimp doesn't, you know, contribute to the gene pool.

- Well, from a martial arts perspective it's a fascinating strategy.

- (laughing) The nut ripper. (laughing)

- I wonder which of the martial arts teaches that one.

- I think it's safe to say if somebody's got your nuts in their hands and has the option of ripping them off, you'll be amenable to whatever they want.

- Yeah.

(Elon laughing)
- So like I said, somehow controversially, you've been a proponent of peace on Twitter, on X.

- Yeah.

- So let me ask you about the wars going on today, and to see what the path to peace could be. How do you hope the current war in Israel and Gaza comes to an end? What path do you see that can minimize human suffering in the long term in that part of the world?

- Well, I think that part of the world is definitely like, if you look up "there is no easy answer" in the dictionary, it'll be that, like a picture of the Middle East, Israel especially. So there is no easy answer. This is strictly my opinion, you know, which is that the goal of Hamas was to provoke an overreaction from Israel. They obviously did not expect to, you know, have a military victory, but they really wanted to commit the worst atrocities that they could in order to provoke the most aggressive response possible from Israel, and then leverage that aggressive response to rally Muslims worldwide for the cause of Gaza and Palestine, which they have succeeded in doing. So the counterintuitive thing here, the thing that I think should be done, even though it's very difficult, is that I would recommend that Israel engage in the most conspicuous acts of kindness possible. Every possible thing, everything. That is the actual thing that would thwart the goal of Hamas.

- So in some sense, to the degree that it makes sense in geopolitics, turn the other cheek, implemented.

- It's not exactly turn the other cheek, because I do think that, you know, I think it is appropriate for Israel to find the Hamas members and, you know, either kill 'em or incarcerate them. That's something that has to be done, 'cause they're just gonna keep coming otherwise. But in addition to that, they need to do whatever they can. There's some talk of establishing, for example, a mobile hospital. I'd recommend doing that. Just making sure that, you know, there's food, water, medical necessities, and just be over the top about it and be very transparent, so people can't claim it's a trick. Like, just put a webcam on the thing, you know, 24/7.

- Deploy acts of kindness.

- Yeah. Conspicuous acts of kindness that are unequivocal, meaning they can't be somehow... because Hamas, their response will be, oh, it's a trick. Therefore you have to show how it's not a trick.

- This ultimately fights the broader force of hatred in the region.

- Yes. And I'm not sure who said it, it's an apocryphal saying, but an eye for an eye makes everyone blind. Now, in that neck of the woods, they really believe in the whole eye for an eye thing. But I mean, you really have... if you're not gonna just outright commit genocide against an entire people, which obviously would not be, really shouldn't be, acceptable to anyone, then you're gonna leave basically a lot of people alive who subsequently, you know, hate Israel. So really the question is, for every Hamas member that you kill, how many did you create? And if you create more than you killed, you've not succeeded. That's the, you know, the real situation there. And it's safe to say that if, you know, if you kill somebody's child in Gaza, you've made at least a few Hamas members who will die just to kill an Israeli. That's the situation. But I mean, this is one of the most contentious subjects one could possibly discuss. But I think if the goal ultimately is some sort of long-term peace, one has to look at this from the standpoint of, over time, are there more or fewer terrorists being created?

- Let me just linger on war.

- Yeah. Safe to say, war has always existed and always will exist.

- [Lex] Always will exist.

- Always has existed and always will exist.

- I hope not. You think it always will?

- There will always be war. It's a question of just how much war. And, you know, there's sort of the scope and scale of war. But to imagine that there would not be any war in the future, I think, would be a very unlikely outcome.

- Yeah, you talked about the Culture series. There's war even there.

- Yes. It's a giant war. The first book starts off with a gigantic galactic war where trillions die, trillions.

- But it still nevertheless protects these pockets of flourishing. Somehow you can have galactic war and still have pockets of flourishing.

- Yeah. I mean, I guess if we are able to one day expand to, you know, the galaxy or whatever, there will be a galactic war at some point.

- The scale, I mean, the scale of war has been increasing, increasing, increasing. It's like a race between the scale of suffering and the scale of flourishing.

- Yes.
- A lot of people seem to be using this tragedy to beat the drums of war and feed the military-industrial complex. Do you worry about this? The people who are rooting for escalation, and how can it be stopped?

- One of the things that did concern me is that there are very few people alive today who actually viscerally understand the horrors of war, at least in the US. I mean, obviously there are people on the front lines in Ukraine and Russia who understand just how terrible war is, but how many people in the West understand it? My grandfather was in World War II. He was severely traumatized. He was there, I think, for almost six years, in eastern North Africa and Italy. All his friends were killed in front of him. And he would've died too, except they randomly gave some, I guess, IQ test or something, and he scored very high. And he was not an officer; he was, I think, a corporal or a sergeant or something like that, because he didn't finish high school. He had to drop out of high school 'cause his dad died and he had to work to support his siblings. So because he didn't graduate high school, he was not eligible for the officer corps. So, you know, he kind of got put into the cannon fodder category, basically. But then randomly they gave him this test, and he was transferred to British intelligence in London. That's where he met my grandmother. But he had PTSD, next level, like next level. I mean, he just didn't talk, just didn't talk. And if you tried talking to him, he'd just tell you to shut up. And he won a bunch of medals. Never bragged about it once, not even hinted, nothing. I found out about it 'cause his military records were online. That's how I know. So he would say, like, no way in hell do you wanna do that again. But how many people... now, he obviously died, you know, 20 years ago or longer, actually 30 years ago. How many people are alive that remember World War II? Not many.

- And the same perhaps applies to the threat of nuclear war.

- Yeah. I mean, there are enough nuclear bombs pointed at the United States to make the rubble, the radioactive rubble, bounce many times.

- There's two major wars going on right now. So you've talked about the threat of AGI quite a bit. But now, as we sit here with the intensity of conflict going on, do you worry about nuclear war?

- I think we shouldn't discount the possibility of nuclear war. It is a civilizational threat. Right now, I could be wrong, but I think the current probability of nuclear war is quite low. But there are a lot of nukes pointed at us, and we have a lot of nukes pointed at other people. They're still there. Nobody's put their guns away. The missiles are still in the silos.

- And the leaders don't seem to be the ones with the nukes talking to each other.

- No. There are wars which are tragic and difficult on a local basis, and then there are wars which are civilization-ending, or have that potential. Obviously, global thermonuclear warfare has high potential to end civilization, perhaps permanently, but certainly, you know, to severely wound and perhaps set back human progress, you know, to the Stone Age or something, I don't know. Pretty bad. Probably scientists and engineers won't be super popular after that as well. Like, you got us into this mess.

- So generally, I think we obviously wanna prioritize civilizational risks over things that are painful and tragic on a local level, but not civilizational.
- How do you hope the war in Ukraine comes to an end? And what's the path, once again, to minimizing human suffering there?

- Well, I think that what is likely to happen, which is really pretty much the way it is, is that something very close to the current lines will be how a ceasefire or truce happens. But, you know, you just have a situation right now where whoever goes on the offensive will suffer casualties at several times the rate of whoever's on the defense, 'cause you've got defense in depth, you've got minefields, trenches, anti-tank defenses. Nobody has air superiority, 'cause the anti-aircraft missiles are really far better than the aircraft; like, there are far more of them. And so neither side has air superiority. Tanks are basically death traps, just slow-moving, and they're not immune to anti-tank weapons. So you really just have long-range artillery and infantry. It's World War I all over again, with drones thrown in, some drones there.

- Which makes the long-range artillery just that much more accurate and better, and so more efficient at murdering people on both sides.

- Yeah. So you don't wanna be trying to advance from either side, 'cause the probability of dying is incredibly high. So in order to overcome defense in depth, trenches, and minefields, you really need a significant local superiority in numbers. Ideally combined arms, where you do a fast attack with aircraft, a concentrated number of tanks, and a lot of people. That's the only way you're gonna punch through the line, and then you're gonna punch through and then not have reinforcements just kick you right out again. I mean, I really recommend people read about World War I warfare in detail. That's rough. I mean, the sheer number of people that died there was mind-boggling.

- And it's almost impossible to imagine an end of it that doesn't look almost exactly like the beginning in terms of what land belongs to who and so on, but on the other side of a lot of human suffering, death, and destruction of infrastructure.

- Yes. I mean, the reason I, you know, proposed some sort of truce or peace a year ago was because I predicted pretty much exactly what would happen, which is a lot of people dying for basically almost no changes in land, and the loss of the flower of Ukrainian and Russian youth. And we should have some sympathy for the Russian boys as well as the Ukrainian boys, 'cause the Russian boys didn't ask to be on the front line; they have to be. So there's a lot of sons not coming back to their parents, you know, and I think most of 'em don't really... they don't hate the other side. You know, it's sort of like that saying, it comes from World War I: young boys who don't know each other killing each other on behalf of old men that do know each other. What the hell's the point of that?

- So Volodymyr Zelenskyy has said in the past that he's not interested in talking to Putin directly. Do you think he should?

- Yeah.

- Sit down, man to man, leader to leader, and negotiate peace?

- Look, I think I would just recommend: do not send the flower of Ukrainian youth to die in trenches. Whether he talks to Putin or not, just don't do that. Whoever goes on the offensive will lose massive numbers of people, and history will not look kindly upon them.
- You've spoken honestly about the possibility of war between the US and China in the long term if no diplomatic solution is found, for example, on the question of Taiwan and the One China policy.

- [Elon] Right.

- How do we avoid the trajectory where these two superpowers clash?

- Well, it's worth reading that book on the, difficult to pronounce, Thucydides Trap, I believe it's called. I love war history, like, inside out and backwards. There's hardly a battle I haven't read about, trying to figure out what really was the cause of victory in any particular case, as opposed to what one side or another claimed was the reason.

- Both the victory and what sparked the war?

- Yeah, yeah.

- [Lex] The whole thing.

- Yeah, so Athens and Sparta is a classic case. The thing about the Greeks is they really wrote down a lot of stuff. They loved writing. You know, there are lots of interesting things that happened in many parts of the world, but people didn't write them down. (laughing) So we don't know what happened, or they didn't really write in detail. They just would say, like, we had a battle and we won. And like, well, what? Can you add a bit more? The Greeks, they really wrote a lot. (laughing) They were very articulate; they just loved writing. And we have a bunch of that writing preserved, so we know what led up to the Peloponnesian War between the Spartan and Athenian alliances. And we know that they saw it coming. I mean, the Spartans didn't write much; they weren't very verbose by their nature, but they did write. They were terse. But the Athenians and the other Greeks wrote a lot. And Sparta was really kind of the leader of Greece, but Athens grew stronger and stronger with each passing year. And everyone was like, well, it's inevitable that there's gonna be a clash between Athens and Sparta. Well, how do we avoid that? And they couldn't. They saw it coming and they still could not avoid it. (laughing) So, you know, at some point, if one group, one civilization or country or whatever, exceeds another, sort of, like, you know... the United States has been the biggest kid on the block since, I think, around 1890 from an economic standpoint. The United States has been the most powerful economic engine in the world longer than anyone's been alive. And the foundation of war is economics. So now we have a situation in the case of China where the economy is likely to be two, perhaps three times larger than that of the US. So imagine you're the biggest kid on the block for as long as anyone can remember, and suddenly a kid comes along who's twice your size.

- So we see it coming.

- [Elon] Yeah.

- How is it possible to stop? Is there some... let me throw something out there: just intermixing of cultures, understanding. There does seem to be a giant cultural gap in understanding of each other. And you're an interesting case study, because you're an American, obviously you've done a lot of incredible manufacturing here in the United States, but you also work with China.

- I've spent a lot of time in China and met with the leadership many times.

- Maybe a good question to ask is, what are some things about China that people don't understand, positive, just in the culture? What are some interesting things that you've learned about the Chinese?

- Well, the sheer number of really smart, hardworking people in China is incredible. If you say, like, how many smart, hardworking people are there in China, there are far more of them there than there are here, I think, in my opinion. And they've got a lot of energy. So, I mean, the architecture in China from recent years is far more impressive than the US. I mean, the train stations, the buildings, the high-speed rail, everything, it's really far more impressive than what we have in the US. I mean, I recommend somebody just go to Shanghai and Beijing, look at the buildings, and, you know, take the train from Beijing to Xi'an, where you have the Terracotta Warriors. China's got an incredible history, a very long history. And, you know, I think arguably, in terms of the use of language from a written standpoint, sort of one of the oldest, perhaps the oldest written language. And in China, people did write things down. Now, China historically has always been, with rare exception, internally focused. They have not been acquisitive. They've fought each other. There've been many, many civil wars. In the Three Kingdoms war, I believe they lost about 70% of their population. So they've had brutal internal wars, civil wars that make the US Civil War look small by comparison. So I think it's important to appreciate that China is not monolithic. We sort of think of China as this one entity of one mind, and this is definitely not the case from what I've seen, and I think most people who understand China would agree. People in China think about China 10 times more than they think about anything outside of China. So it's like 90% of their consideration is internal.

- Well, isn't that a really positive thing, when you're talking about the collaboration and a future peace between superpowers, when you're inward-facing, which is like focusing on improving yourself versus focusing on...

- [Elon] Yeah.

- ..."improving" others through military might.

- The good news, the history of China suggests that China is not acquisitive, meaning they're not gonna go out and invade a whole bunch of countries. Now, they do feel very strongly... you know, so that's good. I mean, 'cause a lot of very powerful countries have been acquisitive. The US is also one of the rare cases that has not been acquisitive. Like, after World War II, the US could have basically taken over the world, any country. Like, we've got nukes, nobody else has got nukes. We don't even have to lose soldiers. Which country do you want? And the United States could have taken over everything, at will. And it didn't. And the United States actually helped rebuild countries. It helped rebuild Europe, you know, helped rebuild Japan. This is very unusual behavior, almost unprecedented. You know, the US did conspicuous acts of kindness like the Berlin Airlift, you know? And I think, you know, it's always like, well, America's done bad things. Well, of course America's done bad things, but one needs to look at the whole track record. And just generally, you know, one sort of test would be, how do you treat your prisoners of war? Or, let's say, you know, no offense to the Russians, but let's say you're in Germany, it's 1945, you've got the Russian Army coming from one side and you've got the French, British, and American armies coming from the other side. Who would you like to surrender to? Like, no country is morally perfect, but I recommend being a POW with the Americans. That would be my choice very strongly. (laughing)

- In the full menu of POWs.

- Very much so. And in fact, Wernher von Braun, you know, a smart guy, was like, we've got to be captured by the Americans. And in fact, the SS was under orders to execute von Braun and all of the German rocket engineers, and they narrowly escaped the SS. They said they were going out for a walk in the woods. They left in the middle of winter with no coats, and they ran, with no food, no coats, no water, just ran like hell, ran west. And by sheer luck, I think, his brother found, like, a bicycle or something, and then just cycled west as fast as he could and found a US patrol. So anyway, that's one way you can tell morality: where do you wanna be a POW? (laughing) It's not fun anywhere, but some places are much worse than others. So anyway, America has been, while far from perfect, generally a benevolent force, and we should always be self-critical and try to be better. But anyone with half a brain knows that. So I think, in this way, China and the United States are similar. Neither country has been acquisitive in a significant way. So that's, you know, a shared principle, I guess. Now, China does feel very strongly about Taiwan. They've been very clear about that for a long time. You know, from their standpoint, it would be like one of the states is missing, like Hawaii or something like that, but more significant than Hawaii, you know? And Hawaii is pretty significant for us. So they view it as really a fundamental part of China, the island of Formosa, not Taiwan, that is not part of China but should be. And the only reason it hasn't been is because of the US Pacific Fleet.

- And as their economic power grows and as their military power grows, the thing that they're clearly saying is their interest will, you know, clearly be materialized.

- Yes. China has been very clear that they'll incorporate Taiwan peacefully or militarily, but that they will incorporate it, from their standpoint, is 100% likely.

- You know, something you said about conspicuous acts of kindness: as a geopolitical policy, it almost seems naive, but I'd venture to say that this is probably the path forward, how you avoid most wars. Just as you say it, it sounds naive, but it's kind of brilliant, if you believe in the underlying goodness of most of human nature. It just seems like conspicuous acts of kindness can reverberate through the populace of the countries involved.

- Yeah, well.

- [Lex] And deescalate.

- Absolutely. So after World War I, they made a big mistake. You know, they basically tried to lump all of the blame on Germany and, you know, saddled Germany with impossible reparations. And, you know, really, there was quite a bit of blame to go around for World War I, but they tried to, you know, put it all on Germany. And that laid the seeds for World War II. So a lot of people, not just Hitler, a lot of people felt wronged, and they wanted vengeance, and they got it.

- [Lex] People don't forget.

- Yeah. You know, you kill somebody's father or mother or son, daughter, they're not gonna forget it. They'll want vengeance. So after World War II, they're like, well, the Treaty of Versailles was a huge mistake in World War I. And so this time, instead of, you know, crushing the losers, we're actually gonna help 'em with the Marshall Plan. We're gonna help rebuild Germany, we're gonna help rebuild Austria and, you know, Italy and whatnot. And that was the right move.

- It does feel like there's a profound truth to conspicuous acts of kindness being an antidote to this.

- Something must stop the cycle of reciprocal violence. Something must stop it, or it'll, you know, it'll never stop. Just an eye for an eye, a tooth for a tooth, a limb for a limb, a life for a life, forever and ever.
- To escape briefly the darkness, with some incredible engineering work: xAI just released Grok, an AI assistant that I've gotten a chance to play with. It's amazing on many levels. First of all, it's amazing that a relatively small team in a relatively short amount of time was able to develop this close-to-state-of-the-art system. Another incredible thing is there's a regular mode and there's a fun mode.

- Yeah, I guess I'm to blame for that one. (laughing)

- First of all, I wish everything in life had a fun mode.

- Yeah.

- There's something compelling beyond just fun about the fun mode interacting with a large language model. I'm not sure exactly what it is, 'cause I've only had a little bit of time to play with it, but it just makes it more interesting, more vibrant to interact with the system.

- Yeah, absolutely. Our AI, Grok, is modeled after the Hitchhiker's Guide to the Galaxy, which is one of my favorite books. It's a book on philosophy disguised as a book on humor. And I would say that forms the basis of my philosophy, which is that we don't know the meaning of life, but the more we can expand the scope and scale of consciousness, digital and biological, the more we're able to understand what questions to ask about the answer that is the universe. So I have a philosophy of curiosity.

- There is generally a feeling like this AI system has an outward-looking... like the way you are, like, sitting with a good friend looking up at the stars, asking pothead questions about the universe, wondering what it's all about. The curiosity you talk about. There's a sense, no matter how mundane the question I ask it, there's a sense of cosmic grandeur to the whole thing.

- Well, we are actually working hard to have engineering, math, physics answers that you can count on. As for the other sort of AIs out there, these so-called large language models, I've not found the engineering to be reliable. And the hallucination... it unfortunately hallucinates most when you least want it to hallucinate. So when you're asking important, difficult questions, that's where it tends to be confidently wrong. So we're really trying hard to say, okay, how do we be as grounded as possible, so you can count on the results? Trace things back to physics first principles, mathematical logic. So underlying the humor is an aspiration to adhere to the truth of the universe as closely as possible.

- [Lex] That's really tricky.

- It is tricky. So, you know, there's always gonna be some amount of error, but we want to aspire to be as truthful as possible about the answers, with acknowledged error. So you don't wanna be confidently wrong. You're not gonna be right every time, but you wanna minimize how often you're confidently wrong. And then, like I said, once you can count on the logic as not violating physics, then you can start to build on that to create inventions, like invent new technologies. But if you cannot count on the foundational physics being correct, obviously the inventions are simply wishful thinking, you know, imagination land, magic basically.

- Well, as you said, I think one of the big goals of xAI is to understand the universe.

- Yes. That's our simple three-word mission. (laughing)

- If you look out far into the future, do you think on this level of physics, the very edge of what we understand about physics, do you think it will make discoveries, sort of the sexiest discovery of them all as we know now: unifying general relativity and quantum mechanics, coming up with a theory of everything? Do you think it could push towards that direction, almost like theoretical physics discoveries?

- If an AI cannot figure out new physics, it's clearly not equal to humans, let alone surpassing humans, 'cause humans have figured out new physics. You know, physics is just understanding, deepening one's insight into how reality works. And then there's engineering, which is inventing things that have never existed. Now, the range of possibilities for engineering is far greater than for physics, 'cause, you know, once you figure out the rules of the universe, that's it. You've discovered things that already existed. But from that, you can then build technologies that are really almost limitless in their variety and capability, you know? It's like once you understand the rules of the game properly, and we do, you know, with current physics we do, at least at a local level, understand how physics works very well. Our ability to predict things is incredibly good. Like quantum mechanics: the degree to which quantum mechanics can predict outcomes is incredible. That was my hardest class in college, by the way. My senior quantum mechanics class was harder than all of my other classes put together.

- To get an AI system, a large language model, to be as reliable as quantum mechanics and physics is very difficult.

- Yeah. You have to test any conclusions against the ground truth of reality. Reality is the ultimate judge. Like, physics is the law; everything else is a recommendation. I've seen plenty of people break the laws made by man, but none break the laws made by physics.

- Yeah, it's a good test actually. If this LLM understands and matches physics, then you can more reliably trust whatever it thinks about the current state of politics in some sense.

- And it's also not the case currently that its internal logic is consistent. Especially with the approach of just predicting a token, predict token, predict token, it's like a vector sum. You know, you're summing up a bunch of vectors, but you can get drift. A little bit of error, a little bit of error, adds up, and by the time you are many tokens down the path, it doesn't make any sense.

- So it has to be somehow self-aware about the drift.

- It has to be self-aware about the drift, and then look at the thing as a gestalt, as a whole, and say, does it have coherence as a whole? So, you know, when authors write books, they will write the book and then they'll go and revise it, taking into account, you know, the end and the beginning and the middle, and rewrite it to achieve coherence, so that it doesn't end up in a nonsensical place.

- [Lex] Maybe the process of revising is what...

- Yeah.

- ...reasoning is, and the process of revising is how you get closer and closer to truth. Maybe, at least, I approach it that way. You just say a bunch of bullshit first, and then you get it better. You start with bullshit and then you get...

- Create a draft and then you iterate on that draft until it has coherence, until it all adds up, basically.

- So another question about a theory of everything, but for intelligence: do you think there exists, as you're exploring this with xAI, creating this intelligence system, do you think there is a theory of intelligence, where you get to understand what is the I in AGI and what is the I in human intelligence?

- There's no I in Team America. Oh wait, there is.

(both laughing)

- Now it's gonna be stuck in my head.

(both laughing)

Yeah, there's no "me" in, whatever, quantum mechanics. Oh, wait.

(Elon laughing)

I mean, is understanding intelligence part of the process of discovering, understanding the universe?

- Yeah, yeah. I think we need to understand intelligence, understand consciousness. I mean, there are some sort of fundamental questions, like, what is thought? What is emotion?

- [Lex] Yeah.

- Is it really just one atom bumping into another atom? It feels like something more than that. So I think we're probably missing some really big things, like some really big things.

- Something that'll be obvious in retrospect.

- [Elon] Yes.

- Like there's a giant... like, you put the whole consciousness, emotion...

- Well, some people would call it, like, a soul. You know, in religion it'd be a soul. Like, you feel like you're you, right? I mean, you don't feel like you're just a collection of atoms. But on what dimension does thought exist? On what dimension do emotions exist? We feel them very strongly. I suspect there's more to it than atoms bumping into atoms.

- And maybe AI can pave the path to the discovery of whatever the hell that thing is.

- Yeah. What is consciousness? Like, when you put the atoms in a particular shape, why are they able to form thoughts and take actions and have feelings?

- And even if it is an illusion, why is this illusion so compelling?

- Yeah. Like, how do...

- Why does this illusion exist?

- [Lex] Yeah.

- On what plane does this illusion exist?

- [Lex] Yeah.

- And sometimes I wonder, you know, either perhaps everything's conscious or nothing is conscious, one of the two.

- I like the former. Everything conscious just seems more fun.

- It does seem more fun, yes. But we're composed of atoms, and those atoms are composed of quarks and leptons, and those quarks and leptons have been around since the beginning of the universe.

- The beginning of the universe.

- Right. What seems to be the beginning of the universe.
- The first time we talked, which is surreal to think that that discussion is becoming a reality, I asked you what question you would ask an AGI system once you create it. And you said, "What's outside the simulation?" is the question. And a good question.

- [Elon] Yeah.

- But it seems like with Grok, you started to, literally, the system's goal is to be able to ask such questions, to answer such questions, and to ask such questions.

- Where are the aliens?

- [Lex] Where are the aliens?

- That's, like, the Fermi Paradox question. A lot of people have asked me if I've seen any evidence of aliens, and I haven't, which is kind of concerning, 'cause then I think I'd probably prefer to at least have seen some archeological evidence of aliens. To the best of my knowledge, there is no proof. I'm not aware of any evidence of aliens. If they're out there, they're very subtle. We might just be the only consciousness, at least in the galaxy. And if you look at, say, the history of Earth forensically, the archeological record, Earth is about four and a half billion years old. Civilization, as measured from the first writing, is only about 5,000 years old. We have to give some credit there to the ancient Sumerians, who aren't around anymore. I think archaic pre-cuneiform was the first actual symbolic representation, but only about 5,000 years ago. I think that's a good date for when, let's say, civilization started. That's one millionth of Earth's existence. So civilization has been around... it's really a flash in the pan so far. And why did it take so long, you know, four and a half billion years? For the vast majority of the time, there was no life, and then there were archaebacteria for a very long time. And then, you know, you had mitochondria get captured, multicellular life, differentiation into plants and animals, life moving from the oceans to land, mammals, higher brain functions. And the sun is expanding slowly, but it will overheat, it will heat the Earth up at some point in the future, boil the oceans, and Earth will become like Venus, where life as we know it is impossible. So if we do not become multi-planetary and ultimately go beyond our solar system, annihilation of all life on Earth is a certainty. A certainty. And it could be as little as, on the galactic timescale, half a billion years. You know, it's a long time by human standards, but that's only 10% longer than Earth has been around at all. So if life had taken 10% longer to evolve on Earth, it wouldn't exist at all.

- We've got a deadline coming up. Better hurry. But that said, as you said, humans, intelligent life on Earth, developed a lot of cool stuff very quickly. So...

- [Elon] Yes.

- It seems like becoming multi-planetary is almost inevitable. Unless we destroy this thing.

- We need to do it. I mean, I suspect that if we are able to go out there and explore other star systems, there's a good chance we find a whole bunch of long-dead one-planet civilizations that never made it past their home planet.

- That's so sad.

- Yeah.

- It's sad. Also fascinating.

- I mean, there are various explanations for the Fermi Paradox, and one is these great filters, which civilizations don't pass through. And one of those great filters is: do you become a multi-planet civilization or not? And if you don't, it's simply a matter of time before something happens on your planet, you know, either natural or manmade, that causes you to die out. Like the dinosaurs. Where are they now? They didn't have spaceships. (laughing)

- I think the more likely thing is, 'cause just to empathize with the aliens, that they found us and they're protecting us and letting us be.

- I hope so. Nice aliens.

- Just like the tribes in the Amazon, the uncontacted tribes, we're protecting them. That's what...

- That would be a nice explanation.

- Or you could have, like... what was it? I think Andrej Karpathy said it's like the ants in the Amazon asking, where's everybody?

- Well, they do run into a lot of other ants.

- [Lex] That's true.

- They have these ant wars.

- (laughing) Sounds like a good TV show.

- Yeah, they literally have these big wars between various ants.

- Yeah, maybe I'm just

(both laughing)

dismissing all the different diversity of ants.

- You should listen to that Werner Herzog talking about the jungle. It's really hilarious. Have you heard it?

- No, I have not.

- [Elon] It's awesome.

- But Werner Herzog is a way...

(both laughing)

- You should play it, you know, as an interlude in the... it's on YouTube. It's awesome.

(both laughing)

- [Lex] I love him so much.

- Yeah, he's great.

- Was he the director of "Happy People: Life in the Taiga"? I think also.

- He did that bear documentary.

- [Lex] The bear documentary.

- And this thing about penguins.

- Yeah.

(both laughing)

The analysis, the psychoanalysis of penguins.

- Yeah, the penguin, like, headed for the mountains that are, like, 70 miles away. The penguin was just headed for doom, basically.

- Well, he had a cynical take. He could be just a brave explorer, and there'll be great stories told about him amongst the penguin population for many centuries to come.

(laughing)

What were we talking about? Okay.

- [Elon] Penguins.

(both laughing)

- Yeah, so aliens. I mean, I don't know. Look, I think the smart move is just, you know, this is the first time in the history of Earth that it's been possible for life to extend beyond Earth. That window is open. Now, it may be open for a long time or it may be open for a short time, and it may be open now and then never open again. So I think the smart move here is to make life multi-planetary while it's possible to do so. We don't wanna be one of those lame one-planet civilizations that just dies out.

- No, those are lame.

- Yeah, lame. No self-respecting civilization would be one planet.

- There's not gonna be a Wikipedia entry for one of those. And does SpaceX have an official policy for when we meet aliens?

- No. (laughing)

- That seems irresponsible.

(both laughing)

- I mean, look, if I see the slightest indication that there are aliens, I will immediately post on the X platform anything I know.

- It could be the most liked, reposted post of all time.

- Yeah. I mean, look, we have more satellites up there right now than everyone else combined. So, you know, we'd know. We'd know if we've got to maneuver around something, and we do not have to maneuver around anything.
- If we go to the big questions once again, you said you're with Einstein, that you believe in the God of Spinoza.

- (laughing) Yes.

- So, you know, that's a view that God is like the universe and reveals himself through the laws of physics, or, as Einstein said, through the lawful harmony of the world.

- Yeah, I would agree that God of the simulator, or whatever, the supreme being or beings, reveal themselves through the physics. You know, they're creators of this existence, and it's incumbent upon us to try to understand more about this wondrous creation.

- Like, who created this thing? Who's running this thing? Like, embodying it into a singular question with a sexy word on top of it is like focusing the mind to understand it. It does seem like there's a... again, it could be an illusion. It seems like there's a purpose, that there's an underlying master plan of some kind. And it seems like...

- There may not be a master plan, in the sense... so, like, maybe an interesting answer to the question of determinism versus free will is that, if we are in a simulation, the reason that these higher beings would hold a simulation is to see what happens. They don't know what happens; otherwise they wouldn't hold the simulation. So when humans create a simulation, like at SpaceX and Tesla, we create simulations all the time, especially for the rocket. You know, you have to run a lot of simulations to understand what's gonna happen, because you can't really test the rocket until it goes to space, and you want it to work. So you have to simulate subsonic, transonic, supersonic, hypersonic ascent, and then coming back, super high heating, and orbital dynamics. All this has got to be simulated, 'cause you don't get very many kicks at the can. But we run the simulations to see what happens. If we knew what happens, we wouldn't run the simulation. So whoever created this existence is running it because they don't know what's gonna happen, not because they do.
4511- So maybe we both played Diablo,
2
3maybe Diablo was created to
4see if it Druid your character
5
6could defeat Uber Lilith at the end.
7
8They didn't know.
9
10- Well, the funny thing
11is the Uber Lilith title
12
13is Hatred Incarnate.
14
15- [Lex] Yeah.
16- And right now,
17
18I guess, you're gonna ask the Diablo team,
19
20but it's almost impossible
21to defeat hatred
22
23in the eternal realm.
24
25- Yeah, you've streamed
26yourself dominating tier 100
27
28Nightmare Dungeons and still...
29
30- I can cruise through
31tier 100 Nightmare Dungeons
32
33like a stroll in the park.
34
35- And still you're defeated by Hatred.
36
37- Yeah,
38
39I can, there's the sort of,
40
41I guess maybe the second
42hardest boss is Duriel,
43
44Duriel can't even scratch the paint.
45
46So I killed Duriel so many times ,
47
48and every other boss in
49the game, all of 'em,
50
51kill 'em so many times it's easy.
52
53But Uber Lilith, otherwise
54known as Hatred Incarnate,
55
56especially if you're a Druid
57and you have no ability to go
58
59and to be vulnerable,
60
61there are these random death
62waves that come at you.
63
64And I'm pretty, you know, I really am 52,
65
66so my reflex is not what they used to be,
67
68but I have a lifetime
69of playing video games.
70
71At one point I was, you know,
72
73maybe one of the best
74Quake players in the world.
75
76I actually won money in
77what I think was the first
78
79paid eSports tournament in the US.
80
81We're doing four person Quake tournaments.
82
83And we came second, I was the
84second best person on the team
85
86and the actual best person,
87
88we're actually winning,
89
90we would've come first except
91the best person on the team,
92
93his computer crashed prior to the game.
94
95So we came second.
96
97But I got money for it and everything.
98
99So like, basically I got
100skills, you know, albeit,
101
102you know, no spring chicken these days.
103
104And to be totally frank,
105it's driving me crazy,
106
107trying to beat Lilith as a Druid.
108
109Basically trying to beat,
110trying to beat Hatred Incarnate
111
112in the eternal realm.
113
114- [Lex] As a Druid.
115- As a Druid.
116
117And if you, if you, (laughing)
118
119this is really (indistinct),
120lemme tell you.
121
122- I mean, the challenge
123is part of the fun.
124
125I have seen directly like you're actually
126
127like a world class
128incredible video game player.
129
130- [Elon] Yeah.
131- And I think Diablo,
132
133so you're just picking up a new game
134
135and you're figuring out it's fundamentals.
136
137You're also with the Paragon Board
138
139and the build are not somebody like me
140
141who perfectly follows whatever
142they suggest on the internet.
143
144You're also an innovator there.
145
146- [Elon] Yeah.
147
148- Which is hilarious to watch.
149
150It's like a mad scientist just trying
151
152to figure out the Paragon Board
153
154and the build and the, you know.
155
156Is there some interesting
157insights there about,
158
159if somebody's starting as a
160Druid, do you have advice?
161
162- I would not recommend playing a Druid
163
164in the eternal realm.
165
166Right now, I think the
167most powerful character
168
169in the seasonal realm is the Sorcerer
170
171with the lightning balls.
172
173The Sorcs have huge balls in the seasonal.
174
175- Well yeah, that's what they say.
176
177- So Sorc have huge balls.
178
179They do, huge balls of lightning.
180
181- I'll take you word for it.
182
183- And it's actually,
184in the seasonal realm,
185
186you can, it's like pretty easy to beat
187
188Uber Lilith with the,
189
190because she got these vampiric powers
191
192that out fire damage
193
194and increase your defense and whatnot.
195
196So, but really quite easy
197to defeat Hatred seasonally,
198
199but to defeat hatred
200eternally, very difficult.
201
202Almost impossible. It's
203totally impossible.
204
205It seems like this is a
206metaphor for life, you know?
207
208- I like the idea that Elon Musk,
209
210because I saw, I was
211playing Diablo yesterday
212
213and I saw 100, level 100 drew just run by,
214
215I will never die and
216then run back the way.
217
218- And it was, there's just some,
219
220this metaphor's kind
221of hilarious that you,
222
223Elon Musk is fighting Hatred,
224
225restlessly fighting Hatred
226in this demonic realm.
227
228- [Elon] Yes.
229- It's hilarious.
230
231I mean it's pretty hilarious.
232
233- No, it's absurd.
234(both laughing)
235
Really, it's an exercise in absurdity,
237
238and it makes me wanna pull my hair out.
239
240- Yeah. What do you get
241from video games in general?
242
243Is there for you personally.
244
- I mean, it's, I don't know, it's,
246
247it calms my mind.
248
I mean, you know, sort of killing
250the demons in a video game
251
252calms the demons in my mind.
253
254Yeah.
255
256If you play a tough video game,
257
258you can get into like a state of flow,
259
260which is very enjoyable.
261
But admittedly it
needs to be not too easy,

not too hard, kinda in the Goldilocks zone.
266
267And I guess you generally wanna feel
268
269like you're progressing in the game.
270
So a good video game,
272
273and there's also beautiful
274art, engaging storylines,
275
276and it's like an amazing
277puzzle to solve, I think.
278
279And so it's like solving the puzzle.
280
- Elden Ring, the greatest
game of all time?

I still haven't played it, but you...
285
286- Elden Ring is definitely a candidate
287
288for best game ever.
289
290Top five for sure.
291
292- I think I've been scared how hard it is
293
294or how hard I hear it is, so.
295
296But it is beautiful.
297
298- Elden Ring is, feels like
299it's designed by an alien.
300
301There's a theme to this
302discussion. In what way?
303
304- It's so unusual.
305
306It's incredibly creative
307and the art is stunning.
308
309I recommend playing it
310on a big resolution,
311
high dynamic range TV even,
doesn't need to be a monitor,
314
315just the art is incredible.
316
317It's so beautiful
318
319And it's so unusual.
320
And each of those top
boss battles is unique.
323
324Like, it's like a unique puzzle to solve.
325
326Each one is different.
327
328And the strategy you use to solve
329
330one battle is different
331from another battle.
332
- That said, you said Druid in Eternal

against Uber Lilith is the
hardest boss battle you've ever faced?
337
338- Correct.
339
340That is currently the,
341
342and I've played a lot of video games.
343
344That's my primary recreational activity.
345
346And yes, beating Hatred
in the Eternal Realm
348
349is the hardest boss battle in life
350
351and in the video game.
352
353- [Lex] Metaphor on top of metaphor.
354
355- I'm not sure it's possible,
356
357but it's, I do make progress.
358
359So then I'm like, okay,
360I'm making progress.
361
362Maybe if I just tweak that
363Paragon Board a little more,
364
365I can do it.
366
If I could just dodge a few
more waves, I can do it.
369
370- Well the simulation is
371created for the purpose
372
373of figuring out if it can be done.
374
375And you are just a cog in that simulation,
376
377in the machine of the simulation.
378
379- Yeah it might be.
380(Lex laughing)
381
382- I have a feeling that at least,
383
384I think...
385- [Lex] it's doable.
386
387- It's doable, yes.
388
389- [Lex] Well that's the human
390spirit right there to believe.
391
392- Yeah.
393
I mean, it did prompt me to think
about just hate in general,
396
397which is, you wanna be
398careful of one of those things
399
400where you wish for
401something that sounds good,
402
403but if you get it, it's
404actually a dystopian situation.
405
So, you know, you could
sort of run a thought experiment,

sort of like,
410
411if you wish for world peace, sounds good,
412
413but how is it enforced?
414
And at what cost? At
what cost, eternal peace?
417
418It might actually be worse
419to have eternal peace
420
421because of what that would entail.
422
423The suppression of everyone.
424
425It might be the suppression of progress.
426
427It might be an ossified
428society that never changes.
429
430What I'm saying is that
431there is an argument
432
that if you wish for no war,

you should be careful what you wish for.
438
Because what's required in
order for there to be no war

might be worse than a little war.
443
- Yeah, the things we
consider to be flaws

of human civilization might
be necessary components

for whatever optimal looks like.
451
I mean, do you worry about
2AI, AGI enabling a dystopian
3
4state of this nature, whether
5it's 1984 with surveillance
6
7and fear, or brave new
8world with pleasure,
9
10and what is it?
11
12Lots of sex, but no deep human experience.
13
14- You know, there's actually
15a real drug called Soma.
16
17- [Lex] There is?
18- Yes.
19
20(both laughing)
21
22- I'm like...
23- [Lex] Lemme take notes.
24
25- Did they actually...
26
27- Does Grok know about it?
28
29No, it probably will
30go to Brave New World.
31
32- We could ask it.
33- What would be the question?
34
35- Tell us about the drug called Soma.
36
37- Tell us about the drug called Soma.
38
39It's gonna go to Brave New World, I think.
40
41- Well there's a real drug as well.
42
43- Searching for Soma drug information.
44
45(both laughing)
46
47- Oh boy.
48
Ah, Soma, the wonder drug,
50by the way, I'm in fun mode.
51
52- Oh, okay. Fun. (laughing)
53
54Yeah, so it's talking about the real drug.
55
56- [Lex] Oh, it's a muscle relaxant
57
58that's been around since the 1950s.
59
60It's used to treat muscle
61pain and discomfort.
62
63And it does this by altering the activity
64
65between neurons and the spinal cord.
66
67Now, Soma isn't just a pain reliever,
68
69it's also a bit of a sedative.
70
71It can make you feel a bit drowsy.
72
73So it's not something to take
74
75if you're planning on
operating heavy machinery
77
78or trying to solve complex math equations.
79
80And it goes on and on.
81
82In conclusion, Soma's a
83drug that can be helpful
84
85for those dealing with muscle pain,
86
87but it's not without its risks.
88
89It's pretty good. Oh, wait a minute.
90
91- [Elon] That's the real
92drug it's talking about.
93
94- [Lex] It's the real drug.
95- [Elon] Yeah.
96
- [Lex] And of course the fun mode
always says use it responsibly
99
100in accordance with the advice
101of a healthcare professional.
102
103And remember in the end,
104
105the universe is probably just
106a giant computer simulation.
107
108So don't take anything too seriously.
109
110I love it. I love it.
111(Elon laughing)
112
113I love it. I love it.
114
115- Yeah. So.
116
117- Oh, and also really nicely includes
118
119like the tweets post on X
120that are relevant to this,
121
122which is really cool.
123
124- [Elon] Yeah.
125
- And it integrates everything together.
127
128- [Elon] Yeah.
129- Like the interface
130
131is one of the really cool things here.
132
- Yeah. Seems it's done a great job.
134
135(laughing)
136
137So, and you say like, you
138could say like, no, I mean,
139
140Soma from Brave New World.
141
142'Cause it thinks you mean,
143
144you mean Soma the real drug.
145
146- Yeah.
147- [Elon] No, I mean,
148
149Soma from Brave New World.
150
151- It's gonna be like,
152oops, I'm really sorry.
153
154I'm really surprised it
155went to the real drug first.
156
157Ah, my apologies. I was
158thinking of the muscle relaxant.
159
Soma from Brave New World is a
161different beast altogether.
162
Now, in Huxley's dystopian novel,
164
165Soma is a drug that plays a central role
166
in the society depicted in the book.
168
169It's a mood altering,
170hallucinogenic drug that's used
171
172to keep the population docile and content.
173
In the novel, the use of
Soma in Brave New World
176
177raises some interesting questions
178
179about the nature of happiness
180
181and the role of drugs in society.
182
183Oh, man!
184- Exactly.
185
186Is it better to live in a world
187
188where everyone is happy all the time,
189
190even if that happiness is artificial?
191
192Good question. This is what I mean.
193
194Like, do you want, do
195you wish for world peace
196
197and happiness all the time?
198
199Are you sure?
200
201Because that might be a society
202that is essentially sterile
203
204and ossified that never changes,
205
206that is ultimately doomed.
207
208- This kind of tension
209between darkness and light.
210
211- It's, this is really
212a very good summary.
213
214It really gets to the point.
215
216This is not simply
217regurgitating a Brave New World.
218
219It's actually getting
220to the salient element
221
222of Soma as a drug.
223
224Do you actually wanna be in a situation
225
226where everyone is happy all the time,
227
228even though it's artificial?
229
230Or is it better to confront
231the challenges of life
232
233and experience the full
234range of human emotions,
235
236even if it means experiencing
237pain and suffering?
238
239- For those listening, by the way,
240
241Elon just read directly from Grok,
242
243which is a really nice,
244kind of insightful,
245
246philosophical analysis
247of the tension here.
248
249Interesting.
250
251- It pretty much nails it.
252
In conclusion, Soma from Brave
New World is a fictional drug
255
256that's used to explore some
257deep philosophical questions
258
259about the nature of happiness
260
261and the role of drugs in society.
262
263It's a powerful symbol of
264the dangers of using drugs
265
266to escape from reality
267
268and the importance of
269confronting the challenges
270
271of life head-on.
272
273Nailed it.
274
275And the crazy thing is like, you know,
276
277we do have a real drug called Soma,
278
which is kind of
like the drug in the book.
281
282And I'm like, did they,
283
284they must've named it after that Soma.
285
286- Probably, probably,
287- Yeah.
288
289Soma the real drug is quite
290effective on back pain.
291
292- So, you know about this drug?
293
294- I've taken it.
295- It's fascinating. Okay.
296
297- 'Cause I had like a,
298you know, squashed disc
299
in my C5-C6.
301
302- So it takes the physical
303pain away, but Soma here is...
304
305- It doesn't completely, it
306reduces the amount of pain
307
308you feel, but at the
309expense of mental acuity.
310
It dulls your mind.
312
Just like the drug in the book.
314
315- Just like the drug in the book.
316
317- And hence the trade off.
318
319The thing that seems like utopia
320
321could be a dystopia after all.
322
323- Yeah, and actually I was
324telling a friend of mine
325
326saying like, would you really want there
327
328to be no hate in the world?
329
330Like, really none?
331
332Like, I wonder why hate evolved.
333
334I'm not saying we should
335amplify hate, of course,
336
337I think we should try to minimize it.
338
339But none at all, hmm.
340
341There might be a reason for hate.
342
343- And suffering.
344
345I mean, it's really complicated
346to consider that some amount
347
348of human suffering is necessary
349for human flourishing.
350
351- Is it possible to appreciate the highs
352
353without knowing the lows?
354
355- And that all is summarized
356
357there in a single statement from Grok.
358
359Okay.
360- No highs,
361
362no lows, who knows?
363
- That's almost a poem.
It seems that training LLMs efficiently
2
3is a big focus for xAI.
4
5What's the, first of all, what's the limit
6
7of what's possible in terms of efficiency?
8
9There's this terminology of
10useful productivity per watt.
11
12Like what have you learned from
13pushing the limits of that?
14
15- Well, I think it's helpful.
16
17The tools of physics are very powerful
18
19and can be applied, I think to almost any,
20
21really any arena in life.
22
23It's really just critical thinking.
24
25For something important you need to reason
26
27from first principles
28and think about things
29
30in the limit one direction or the other.
31
32So in the limit, even
33at the Kardashev scale,
34
35meaning even if you harness
36the entire power of the sun,
37
you'll still care about
useful compute per watt.
40
41So that's where, I think probably
42
43where things are headed
44from the standpoint of AI
45
46is that we have a silicon shortage now
47
48that will transition to
49a voltage transformer
50
51shortage in about a year.
52
53Ironically, transformers,
54for transformers.
55
56You need transformers to run transformers.
57
58- Somebody has a sense
59of humor in this thing.
60
61(both laughing)
62
63- I think. Yes.
64
65Fate loves irony, ironic humor
66
67and ironically funny outcome
68
69seems to be often what fate wants.
70
71- Humor is all you need. I
72think spice is all you need.
73
74Somebody posted.
75
76- Yeah, but yeah.
77
So we have a silicon shortage today,

a voltage step-down transformer

shortage probably in about a year,
83
84And then just electricity
85shortages in general
86
87in about two years.
88
89I gave a speech for the sort of world
90
91gathering of utility companies,
92
93electricity companies,
94
95and I said, look, you
96really need to prepare
97
for a tripling of electricity demand,
99
100because all transport
101is gonna go electric,
102
103with the ironic exception of rockets.
104
105And heating will also go electric.
106
107So energy usage right now is roughly
108
109one third, very rough terms.
110
111One third electricity,
112
113one third transport, one third heating.
114
115And so in order for
116everything to go sustainable
117
118to go electric,
119
120you need to triple electricity output.
121
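For readers who want the arithmetic spelled out, here is a minimal sketch of that tripling claim, taking the one-third energy split exactly as stated and assuming one-for-one electrification (real EVs and heat pumps are more efficient than what they replace, so triple is closer to an upper bound):

```latex
% Sketch of the "triple electricity output" arithmetic, assuming the
% quoted one-third split and one-for-one electrification of the rest.
E_{\text{total}} = \underbrace{\tfrac{1}{3}E}_{\text{electricity}}
                 + \underbrace{\tfrac{1}{3}E}_{\text{transport}}
                 + \underbrace{\tfrac{1}{3}E}_{\text{heating}}
\;\Rightarrow\;
E_{\text{elec,new}} \approx E = 3 \times \tfrac{1}{3}E
```
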
122So I encourage the utilities
123
to build more power plants
125
126and also to probably have, well,
127
128not probably, they should
129definitely buy more batteries
130
131because the grid currently
132is sized for real-time load,
133
134which is kind of crazy.
135
'Cause you know, that means
you've got to size for whatever

the peak electricity demand
is, like the worst second

or the worst day of the year.
143
144Or you can have a brownout or a blackout,
145
and we had that crazy blackout

for several days in Austin.
149
150So because there's almost no buffering
151
152of energy in the grid.
153
154Like if you've got a hydro power plant,
155
156you can buffer energy, but
157otherwise it's all real time.
158
159So with batteries, you can
160produce energy at night
161
162and use it during the
163day so you can buffer.
164
165So I expect that there will be
166very heavy usage of batteries
167
168in the future,
169
170because the peak to trough
171ratio for power plants
172
173is anywhere from two to five, you know,
174
175so it's like lowest
176point to highest point.
177
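A minimal sketch of the buffering idea he's describing, with an invented 24-hour demand profile: generation is held flat at the daily average while a battery absorbs the swing between trough and peak. The numbers are purely illustrative, not real grid data.

```python
# Toy grid-buffering sketch: flat generation at the average, battery
# charges when demand is below average and discharges above it.
hourly_demand_gw = [30, 28, 27, 29, 35, 50, 60, 55,   # invented 24h profile
                    52, 48, 45, 44, 46, 50, 58, 65,
                    70, 68, 60, 52, 45, 40, 35, 32]

flat_generation_gw = sum(hourly_demand_gw) / len(hourly_demand_gw)

state_gwh = 0.0       # energy stored relative to the starting charge
shortfall_gwh = 0.0   # deepest dip below start = storage needed
for demand in hourly_demand_gw:
    state_gwh += flat_generation_gw - demand   # +charge / -discharge per hour
    shortfall_gwh = max(shortfall_gwh, -state_gwh)

print(f"peak/trough ratio: {max(hourly_demand_gw) / min(hourly_demand_gw):.1f}")
print(f"flat generation:   {flat_generation_gw:.1f} GW")
print(f"battery needed:    ~{shortfall_gwh:.0f} GWh to ride through the peak")
```

With this made-up profile the peak-to-trough ratio comes out around 2.6, inside the two-to-five range he quotes, and the battery charges overnight to cover the evening peak.
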
178- So like batteries
179necessary to balance it out?
180
181And then, but the
182demand, as you're saying,
183
184is going to grow, grow, grow, grow.
185
186- [Elon] Yeah.
187
188- And part of that is the compute?
189
190- Yes. Yes.
191
192I mean, electrification,
193
194I mean, electrification of transport
195
196and electric heating will
197be much bigger than AI.
198
199- [Lex] At least in the short term?
200
201- In the short term.
202
203But even for AI,
204
205you really have a growing
206demand for electricity,
207
208for electric vehicles, and a
209growing demand for electricity
210
211for, to run the computers for AI.
212
213And so this is obviously leading,
214
can lead to an electricity shortage.
216
217- How difficult is the problem of,
218
219in this particular case,
220
221maximizing the useful
222productivity per watt for training
223
224(indistinct) nuts?
225
226Like, this seems to be
227really where the big problem
228
229we're facing that needs to be solved,
230
231is how to use the power efficiently.
232
Like, what have you learned
so far about applying

this physics, first-principles
reasoning, in this domain?
238
239How difficult is this problem?
240
241- It will get solved.
242
243It's the question of how
244long it takes to solve it.
245
So at various points, there's

some kind of limiting factor to progress.
249
250And with regard to,
251
252I'm saying like right
253now the limiting factor
254
is silicon chips, and then
we're gonna have
257
258more chips than we can actually plug in
259
260and turn on probably in about a year.
261
The constraint then being literally
263
264voltage step down transformers,
265
266because you've got power
267coming in at 300,000 volts,
268
269and it's gotta step all the way down
270
271eventually to around 0.7 volts.
272
273So it's a very big amount of, you know,
274
275the voltage step down is gigantic.
276
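For scale, here is the overall step-down he's describing, from transmission voltage to chip core voltage:

```latex
% Overall step-down ratio from a 300,000 V line to a ~0.7 V chip core:
\frac{300{,}000\ \text{V}}{0.7\ \text{V}} \approx 4.3 \times 10^{5}
```

That reduction is taken in stages (substation, facility, rack, on-board regulators is a typical layout, though he doesn't enumerate the stages here), and every stage is transformer or converter hardware that has to be manufactured, which is where the shortage bites.
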
277So, and the industry is
278not used to rapid growth.
- Okay, let's talk about
2the competition here.
3
4You've shown concern about Google
5
6and Microsoft with Open AI developing AGI,
7
8how can you help ensure with xAI
9
10and Tesla AI work that it
11doesn't become a competitive race
12
13to AGI, but that is a collaborative
14development of safe AGI?
15
16- Well, I mean, I've been pushing for
17
18some kind of regulatory
19oversight for a long time.
20
21I've been somewhat of a
22Cassandra on the subject
23
24for over a decade.
25
26I think we wanna be very
27careful in how we develop AI.
28
29It's a great power,
30
31and with great power comes
32great responsibility.
33
34I think it would be wise for us to have
35
36at least an objective
37third party who can be
38
39like a referee that can go in
40
41and understand what the
42various leading players
43
44are doing with AI.
45
46And even if there's no
47enforcement ability,
48
49they should, they can at
50least voice concerns publicly.
51
You know, Geoff Hinton, for example,
53
54left Google and he voiced strong concerns.
55
56But now he's not at Google anymore,
57
58so who's gonna voice the concerns?
59
60So I think there's,
61
62like, I know Tesla gets a
63lot of regulatory oversight
64
65on the automotive front.
66
67I mean, we're subject to I think,
68
69over a hundred regulatory agencies
70
71domestically and internationally.
72
73So it's a lot.
74
75You could fill this room
with all the regulations
77
78that Tesla has to adhere
79to for automotive.
80
81Same is true in, you know, for rockets
82
83and for, you know, currently
84the limiting factor for SpaceX,
85
86for Starship launch is
87regulatory approval.
88
The FAA has actually given their approval,
90
91but we're waiting for Fish and Wildlife
92
93to finish their analysis
94
95and give their approval.
96
That's why I posted that I want
98to buy a fish license on,
99
100(laughing)
101
102which also refers to
the Monty Python sketch.
104
105- [Lex] Yeah.
106(laughing)
107
108- Like why do you need
109a license for your fish?
110
111I don't know.(laughing)
112
113But according to the rules,
114
115I'm told you need some sort
116of fish license or something.
117
118We effectively need a fish
license to launch a rocket.
120
121(laughing)
122
123And I'm like, wait a second.
124
125How did the fish come into this picture?
126
127- [Lex] Yeah.
128
129- I mean some of the
130things like that it's,
131
132I feel like are so absurd
133
134that I wanna do like a comedy sketch
135
136and flash at the bottom.
137
138This is all real. This is
139actually what happened.
140
141You know, one of the things
142that was a bit of a challenge
143
144at one point is that they were worried
145
146about our rocket hitting a shark.
147
148And, now the ocean's very big,
149
150and how often do you see sharks?
151
152Not that often, you know?
153
154As a percentage of ocean surface area
155
156sharks basically are zero.
157
And so then we said,

well, how will we
calculate the probability

of hitting a shark?
164
165And they're like, well, we
166can't give you that information
167
because they're worried about

shark fin hunters going
and hunting sharks.
172
173And I said, well, how are we supposed to,
174
175we're on the horns of a dilemma then.
176
177(laughing)
178
179Then they said, well,
180there's another part of fish
181
182and wildlife that can do this analysis.
183
184I'm like, well, why don't
185you give them the data?
186
187Like, we don't, they
188don't, we don't trust them.
189
190Like, excuse me, you don't,
191
192but they're literally in your department.
193
194But again, this is actually what happened.
195
196And then can you do an NDA or something?
197
198(laughing)
199
200Eventually they managed to
201solve the internal quandary,
202
203and indeed the probability
204of us hitting a shark
205
206is essentially zero.
207
208Then there's another organization
209
210that I didn't realize
211existed until, you know,
212
213a few months ago that cares about whether
214
215we would potentially hit a
216whale in international waters.
217
218Now again, you look at the surface of the,
219
220look at the Pacific and say,
221what percentage of this,
222
223the Pacific consists of whale.
224
Like, I'll give you a big picture,

and like, point out all the
whales in this picture.
229
230And I'm like, I don't see any whales.
231
232(laughing)
233
234It's like basically 0%.
235
236And if our rocket does hit a whale,
237
238which is extremely
239unlikely beyond all belief,
240
that is, as fate would have it,

that whale has some seriously bad luck.
244
245You know, it's the least
246lucky whale ever. And...
247
248- I mean, this is quite absurd.
249
250The bureaucracy, the bureaucracy of this,
251
252however it emerged.
253
- Yes, well, I mean, one of the things

that's pretty wild is,
for launching out

of Vandenberg in California, we had to,
260
261they were worried about seal procreation,
262
263whether the seals would be
264dismayed by the sonic booms.
265
266Now there've been a lot of
rockets launched outta Vandenberg
268
269and the seal population
270has steadily increased.
271
272So if anything, rocket
273booms are an aphrodisiac
274
based on the evidence, if you
correlate rocket launches

with the seal population.
279
280Nonetheless, we were
281forced to kidnap a seal,
282
strap it to a board, put
headphones on the seal,

and play sonic boom sounds to it
287
288to see if it would be distressed.
289
290This is an actual thing that happened.
291
292This is actually real. I have pictures.
293
294(both laughing)
295
296- I would love to see this. Yeah.
297
I mean, sorry, this is
299a seal with headphones.
300
301(Elon laughing)
302- Yes,
303
304it's a seal with headphones,
305strapped to a board,
306
307and like the, okay, now the amazing part
308
309is how calm the seal was.
310
311- [Lex] Yeah.
312- Because if I was a seal,
313
314I'd be like, this is the end.
315
316(laughing)
317
318They're definitely gonna eat me.
319
When the seal goes back

to his other, you know, seal friends,

how's he gonna explain that?
325
326- [Lex] They're never gonna believe him.
327
328- Never gonna believe him.
329
330That's why I'm like, well,
331
332you know, it's sort of like,
333
334it's like getting kidnapped by aliens
335
and getting anal probed, you know?
337
338(both laughing)
339
340You come back and say, I swear to God,
341
I got kidnapped by aliens

and they stuck an anal probe in my butt.
345
346And they're like, no they
347didn't. That's ridiculous.
348
His seal buddies are
never gonna believe him,

that he got strapped to a board
353
354and they put headphones on his ears.
355
356(laughing)
357
358And then let him go.
359
360Twice, by the way. We had to do it twice.
361
362- They let him go twice?
363
364- We had to capture...
365- The same seal?
366
367- [Elon] Oh, no different seal.
368
369- Oh, okay.
370(Elon laughing)
371
372Did you, did you get a seal of approval?
373
- Yeah, exactly. It was a seal of approval.
375
376No, I mean this is, this is like,
377
378I don't think the public is quite aware
379
380of the madness that goes on.
381
382- Yes. Yeah, it's absurd.
383- Fricking seals
384
385with fricking headphones.
386
387- I mean, this is the,
388it's a good encapsulation
389
390of the absurdity of human civilization,
391
392seals and headphones.
393
394- [Elon] Yes.
395
- What are the pros
2
3and cons of open sourcing AI to you
4
5as another way to combat, you know,
6
7a company running away with AGI?
8
9- In order to run like really deep
10
11intelligence, you need a lot of compute.
12
13So it's not like, you
14know, you can just fire up
15
16a PC in your basement and be
17running AGI, at least not yet.
18
You know, Grok was trained on 8,000
20
21A100s running at peak efficiency,
22
23and Grok's gonna get a
24lot better by the way.
25
26We'll be more than doubling our compute
27
28every couple months for
29the next several months.
30
31- There's a nice writeup
32
of how it went from Grok-0 to Grok-1.
34
35- [Elon] Like Grok?
36
37(laughing)
38- [Lex] Yeah.
39
Right, Grok just bragging,
41
42making shit up about itself.
43
44(laughing)
45
46- [Elon] Does Grok, Grok, Grok.
47
48- [Lex] Yeah.
49(Elon laughing)
50
51It's like a weird AI dating
52site where it exaggerates
53
54about itself.
55
56No, there's a writeup of, you know,
57
58like where it stands now, the
59history of its development
60
61and where it stands on some benchmarks
62
compared to the state-of-the-art GPT-3.5.
64
65And so, I mean, there's,
66you know, there's Llama,
67
68you can open source, once it's trained,
69
70you can open source a model.
71
72And for fine tuning
73all that kind of stuff.
74
Like what do you think are the pros and cons of that,

of open sourcing base models?
78
- I think I lean towards open sourcing,
80
81I think perhaps with a
82slight time delay, you know,
83
84I don't know, six months even.
85
86I think I'm generally in
87favor of open sourcing,
88
like biased towards open sourcing.
90
91I mean, it is a concern to
92me that, you know, Open AI,
93
94you know, I was, you know,
95
96I think, I guess arguably,
97the prime, you know,
98
99prime mover behind Open AI in
100the sense that it was created
101
102because of discussions
103that I had with Larry Page
104
105back when he and I were friends
106
107and you know, I stayed at his house,
108
109and I talked to him about AI safety.
110
111And Larry did not care about AI safety,
112
113or at least at the time
114he didn't, you know,
115
116and at one point he called me a speciesist
117
118for being pro-human.
119
120And I'm like, well, what
121team are you on, Larry,
122
123on Team Robot?
124
125And I'm like, okay,
126
127so at the time, you know,
128
129Google had acquired Deep Mind.
130
131They had probably two thirds
132of all AI resource, you know,
133
134probably two thirds of all the
135AI researchers in the world
136
137that basically infinite money and compute.
138
139And the guy in charge,
140you know, Larry Page
141
142did not care about safety.
143
144And even yelled at me
145
and called me a speciesist, pro-human.
147
148- So I don't know if you know this,
149
a lot of humans, they can change their minds.
151
152And maybe you and Larry Page can still
153
be friends once more.
155
156- I'd like to be friends with Larry again.
157
Really, the
159breaking of the friendship
160
161was over Open AI.
162
163And specifically I think the key moment
164
165was recruiting Ilya Sutskever, so.
166
167- I love Ilya. He's so brilliant.
168
169- Ilya's a good human, smart, good heart.
170
171And that was a tough recruiting battle.
172
173It was mostly Demis on one side
174
175and me on the other, both
176trying to recruit Ilya.
177
178And Ilya went back and forth,
179
180you know, just gonna stay at Google.
181
182Then he was gonna leave,
183
184then he was gonna stay,
185then he was gonna leave.
186
187And finally he did agree to join Open AI.
188
189That was one of the
190toughest recruiting battles
191
192we've ever had.
193
194And, but that was really the linchpin
195
196for Open AI being successful.
197
198And I was, you know,
199
200also instrumental in recruiting
201a number of other people.
202
203And I provided all of the funding
204
205in the beginning, over $40 million.
206
207And the name, (laughing)
208
209the Open in Open AI is
210supposed to mean open source.
211
212And it was created as a
213nonprofit open source.
214
215And now it is a closed
216source for maximum profit,
217
218which I think is not good karma.
219
220- But like we talked about with war
221
222and leaders talking,
223
224I do hope that there's only a few folks
225
226working on this at the highest level.
227
228I do hope you reinvigorate
229friendships here.
230
231- Like I said, I'd like to
232be friends again with Larry.
233
234I haven't seen him in ages.
235
236And we were friends for a very long time.
237
238I met Larry Page before
239he got funding for Google,
240
241or actually I guess before
242he got venture funding,
243
I think he got the
first, like, 100K from,

I think, Andy Bechtolsheim or someone.
248
249- It's wild to think
250about all that happened.
251
And you guys have known each other
253
254that whole time, just 20 years.
255
256- [Elon] Since maybe 98 or something.
257
258- Yeah. It's crazy.
259
260Crazy how much has happened since then.
261
262- Yeah, 25 years. A lot
263has happened, it's insane.
264
265- But you're seeing the tension there.
266
267Like maybe delayed open source.
268
269- Delayed, yeah.
270
271Like what is the source that
272is open, you know what I mean?
273
Like, it's basically
a giant CSV file.
276
277(laughing)
278- [Lex] Yeah, yeah.
279
280- With a bunch of numbers.
281
282- Yep.
283
284- What do you do with that
285giant file of numbers?
286
287You know, how do you run,
288
289like the amount of actual,
290
291the lines of code is very small
292
293and most of the work, the software work
294
295is in the curation of the data.
296
297So it's like trying to figure out
298
what data is good, separating
300good data from bad data.
301
Like, you can't
303just crawl the internet
304
305'cause there's a lot of junk out there.
306
307A huge percentage of websites
308have more noise than signal.
309
You know, 'cause they're just used
312
313for search engine optimization.
314
315They're literally just
316scam websites, so...
317
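A toy sketch of the signal-versus-noise filtering he's gesturing at, not xAI's pipeline: the spam markers, thresholds, and sample pages below are all invented for illustration, and real curation uses classifiers, deduplication, and far more.

```python
# Toy web-crawl curation: keep pages that look like signal, drop SEO spam.
# All heuristics here are invented placeholders for illustration only.
SPAM_MARKERS = ("buy now", "casino bonus", "limited offer", "click here")

def looks_like_signal(page_text: str) -> bool:
    text = page_text.lower()
    words = text.split()
    if len(words) < 25:                       # too short to carry real content
        return False
    if any(marker in text for marker in SPAM_MARKERS):
        return False
    # keyword-stuffed SEO pages repeat the same tokens over and over
    return len(set(words)) / len(words) > 0.3

crawled_pages = [
    "click here for the best casino bonus " * 10,    # SEO junk
    ("The power grid is currently sized for peak real-time load, so utilities "
     "must build for the worst hour of the year unless batteries buffer energy "
     "produced overnight for daytime use, which is why storage demand is growing."),
]
corpus = [p for p in crawled_pages if looks_like_signal(p)]
print(len(corpus))  # -> 1: only the substantive page survives
```
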
- How do you, by the
way, sorry to interrupt,

get the signal?
322
323Separate the signal and noise on X?
324
325That's such a fascinating source of data.
326
327You know, no offense
328to people posting on X,
329
330but sometimes there's
331a little bit of noise.
332
333So what...
334- Yeah, I think
335
the signal-to-noise could
337be greatly improved.
338
339I mean really, all of the posts
340
341on the X platform should
342be AI recommended,
343
344meaning like we should
345populate a vector space
346
347around any given post, compare
348that to the vector space
349
350around any user and match the two.
351
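A minimal sketch of the matching he's describing, with random stand-in embeddings; how X actually computes post and user vectors isn't specified here, so the embedding sizes and scoring are assumptions.

```python
# Sketch of vector-space recommendation: embed posts and users in the same
# space, then rank posts by similarity to the user's vector.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

rng = np.random.default_rng(0)
user_vec = rng.normal(size=64)                       # stand-in user embedding
post_vecs = {f"post_{i}": rng.normal(size=64) for i in range(1000)}

ranked = sorted(post_vecs, key=lambda p: cosine(user_vec, post_vecs[p]),
                reverse=True)
print(ranked[:5])   # the five posts whose vectors best match this user
```
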
352Right now there is a little bit of AI used
353
354for the recommended posts,
355
356but it's mostly heuristics.
357
358And if there's a reply,
359
360where the reply to a
361post could be much better
362
363than the original post,
364
365it will, according to the
366current rules of the system,
367
368get almost no attention
369compared to a primary post.
370
371- Oh, a lot of that.
I got the sense, so, you know,
2a lot of the X algorithm
3
has been open sourced and
5been written up about,
6
7and it seems there to be
8some machine learning.
9
It's disparate, but there's some machine learning.
11
12- There's a little bit.
13
14But it needs to be entirely that.
15
16Like, at least in the, like,
17
18if you explicitly follow
19someone, that's one thing.
20
21But if you, in terms
22of what is recommended
23
24from people that you don't
25follow, that should all be AI.
26
27- I mean it's a fascinating problem.
28
29- Yeah.
30- So there's several aspects
31
32of it that's fascinating.
33
34First, as the writeup goes,
35
36it first picks 1500 tweets
37
38from a pool of hundreds of millions.
39
40First of all, that's fascinating,
41
42'cause you have hundreds
43of millions of posts
44
45every single day
46
47and it has to pick 1500 from which
48
49it then does obviously people you follow,
50
51but then there's also like
52some kind of clustering it has
53
54to do to figure out what
55kind of human are you,
56
57what kind of new clusters
58might be relevant to you,
59
60people like you.
61
62This kind of, this kind of
63problem is just fascinating.
64
65'Cause it has to then rank those 1500
66
67with some filtering and then
68recommend you just a handful.
69
70And to me what's really fascinating is
71
72how fast that has to do that.
73
74So currently that entire pipeline
75
76to go from several hundred million
77
to a handful takes
79220 seconds of CPU time,
80
81single CPU time.
82
83- [Elon] Yeah.
84- And then it has to do that
85
86in like a second.
87
88So it has to be like super
89distributed in fascinating ways.
90
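In code, the two-stage funnel described above looks roughly like this; the scoring functions are invented placeholders, not X's real models.

```python
# Sketch of a two-stage recommender: a cheap first pass cuts a huge pool
# to ~1500 candidates, then a heavier ranker picks the final handful.

def cheap_score(post: dict) -> float:        # fast heuristic pass
    return post["recency"] + post["affinity"]

def heavy_score(post: dict) -> float:        # slower, model-like pass
    return 0.7 * post["affinity"] + 0.3 * post["quality"]

def recommend(all_posts: list, n_candidates: int = 1500, n_final: int = 10):
    candidates = sorted(all_posts, key=cheap_score, reverse=True)[:n_candidates]
    return sorted(candidates, key=heavy_score, reverse=True)[:n_final]
```

The sub-second latency then comes from sharding the first pass: 220 CPU-seconds of work finishing in about a second of wall-clock time implies on the order of a couple hundred shards working in parallel.
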
91Like there's just a lot
92of tweets, there's a lot.
93
94- It's a lot of stuff on the system.
95
But I think right now

it's not currently good
at recommending things

from accounts you don't follow.
102
103Yeah.
104
105Or where there's more than
106one degree of separation.
107
108So, you know, it's pretty good
109
110if there's at least like some commonality
111
112between someone you follow like something
113
114or repost it or comment on
115it or something like that.
116
117But if there's no,
118
119let's say somebody posted
120something really interesting,
121
122but you have no followers in
123common, you would not see it.
124
125- Interesting.
126
127And then as you said, reply,
128
129like replies might not surface either.
130
131- Replies basically never get seen.
132
133'cause they're never, they're,
134
135currently, I'm not saying it's correct,
136
137I'm saying it's incorrect.
138
139Replies have, you know,
a couple orders of magnitude
141
142less importance than primary posts.
143
144- Do you think this can
145be more and more converted
146
147into end to end neural net?
148
149- Yeah, yeah, that's what it should be.
150
151- So you can...
152
153- For the recommendations should be
154
155purely a vector correlation.
156
157Like there's a series
158of vectors, you know,
159
160basically parameters, vectors,
161whatever you wanna call 'em.
162
163But sort of things that the
164system knows that you like,
165
166and like maybe there's
167like several hundred
168
169sort of vectors associated
170with each user account
171
172and then any post in the system,
173
174whether it's video, audio,
175short post, long post.
176
177The reason I, by the way
178I wanna move away from
179
180tweet is that, you know,
181people are posting like
182
183two, three hour videos on the site.
184
185That's not a tweet, like it's very,
186
187like Tweeeet for two hours, come on.
188
189Tweet made sense when it was
190like 140 characters of text.
191
192'Cause it's like a bunch of
193like little birds tweeting.
194
195But when you've got long form content,
196
197it's no longer a tweet.
198
199- [Lex] Yeah.
200- So a movie is not a tweet.
201
202And like, you know, apple for example,
203
204posted like the entire
205episode of the Silo,
206
207the entire thing on our platform.
208
209And by the way, it was their
210number one social media thing
211
212ever in engagement, of
213anything on any platform ever.
214
215So it was a great idea.
216
And by the way, I didn't know,
218
219I just learned about it afterwards.
220
221I was like, hey, wow,
222they posted an entire
223
224hour long episode of.
225
226So, no, that's not a tweet
227
228that, you know, it's a video.
229
230- But from a neural net perspective,
231
232it becomes really complex
233
234whether it's a single.
235
236So like everything's data,
237
238so single sentence, a clever
239sort of joke, dad joke,
240
241is in the same pool as a three hour video.
242
243- Yeah. I mean right now it's
244a hodgepodge for that reason.
245
246It's, but you know, like if,
247
248let's say in the case of Apple posting
249
250like an entire episode of this series,
251
252pretty good series by the
253way, this Silo, I watched it.
254
255So there's gonna be a lot
256of discussion around it.
257
258So that you've got a lot of context,
259
260people commenting they like
261it, they don't like it,
262
263or they like this or that, you know?
264
265And you can then populate the vector space
266
267based on the context of
268all the comments around it.
269
270So even though it's a video,
271
272there's a lot of information
273around it that allows you
274
275to populate the vector space
276of that hour long video.
277
278And then you can obviously
279get more sophisticated
280
281by having the AI actually watch the movie.
282
283- [Lex] Yeah. Right.
284
285- And tell you if you're
286gonna like the movie,
287
288- Convert the movie into like,
289
290into a language essentially.
291
292- Yeah, analyze this movie
293
294and just like your movie
295critic or TV series,
296
297and then recommend based on
298
299after AI watches the movie.
300
301Just like a friend can tell you.
302
303If a friend knows you well,
304
305a friend can recommend a movie
306
and with high probability
308that you'll like it.
309
310- But this is like a friend
311that's analyzing whatever.
312
313- [Elon] It's like AI...
314- Hundreds of millions.
315
316- Yeah.
317
Actually, frankly, AI

will know you better than
321your friends know you,
322
323and most of your friends anyway.
324
325- Yeah, and as part of this,
326
327it should also feed you advertisements
328
329in a way that's like,
330
331I mean I like advertisements
332
333that are like well done, right?
334
335- [Elon] Yeah, yeah.
336- The whole point is
337
338'cause it funds things,
339like an advertisement
340
341that you actually want
342to see is a big success.
343
344- Absolutely.
345
346You want ads that are,
347
348advertising that is if
349for a product or service
350
351that you actually need when you need it,
352
353it's content.
354
355And then even if it's not something
356
357that you need when you need it,
358
359if it's at least aesthetically pleasing
360
361and entertaining, you know,
362
363it could be like a
364Coca-Cola ad like, you know,
365
366they actually run a lot of
367great ads on the X system
368
369and McDonald's does too.
370
371And, you know, it's that they can do,
372
373you can do something that's like,
374
375well this is just a cool thing.
376
377And, you know, so you're not,
378
379basically the question is do
380you regret seeing it or not?
381
382And if you don't regret
383seeing it, it's a win.
384
385- How do you, so there's
386a bunch of signals
387
388that are incorporated,
389
390hearts and reposts and
391maybe number of seconds
392
393you linger on a post
394or something like this?
395
396- Yeah. Attention is a big factor.
397
398- [Lex] Attention.
399- So that's why it's like,
400
401it is actually better to do things
402
403that are long form on the system
404
because it basically is tallying
406up how many user seconds,
407
408you know, users were
409interested in this thing
410
411for how many seconds.
412
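The tally he's describing is about this simple in code; the event data below is invented for illustration.

```python
# Toy "user seconds" tally: a post's attention score is the summed dwell
# time across users, which naturally favors long-form content.
from collections import defaultdict

events = [  # (user, post, seconds of dwell) -- invented sample data
    ("alice", "long_video", 640), ("bob", "long_video", 1200),
    ("alice", "short_joke", 4),   ("bob", "external_link", 2),
]

attention = defaultdict(int)
for user, post, seconds in events:
    attention[post] += seconds

print(max(attention, key=attention.get))  # long-form wins on raw user seconds
```
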
413So if it's a really short
thing, well, there will be fewer.
415
416Like if it's a link
417leading out of the system,
418
419which we're not opposed to at all,
420
421it just is gonna have fewer user seconds,
422
423then if that article was
424posted on the X platform.
425
426- How hard is it to go
427from maximizing minutes
428
429on platform to maximizing
430un-regretted minutes?
431
432- Well, I mean, our aspiration
433
434is certainly un-regretted minutes.
435
436- [Lex] It's a tricky,
437it's a big, it's a tough...
438
439- It is.
440
441- Like, it's a fascinating leap.
442
443'Cause regret is a fascinating concept.
444
445- Exactly.
446
447- You know, happiness.
448
449- I have a joke with a friend of mine.
450
451Like, you need this
452drug called regretamine.
453
454You just take one pill or
455one dose of regretamine,
456
457and all your regrets are gone.(laughing)
458
459Wait, do you want to have regrets though?
460
461So then I'm like, maybe
462you wanna have regrets,
463
if you don't regret anything,
was anything meaningful?
466
467- [Lex] Yeah, there's a
468French song like that.
469
- You mean "Non, Je Ne Regrette Rien"?
471
472- [Lex] Yeah.
473
474- I regret nothing.
475- [Lex] Yeah.
476
477- (laughing) Edith Piaf?
478
479- I mean, it's fascinating
480that you, you know,
481
482there's something about social media
483
484that could help you grow as a human being,
485
486but then like the cheap
487fun is also valuable.
488
489- Should we ask Grok?
490
491Do you regret anything?
492
493- Do you regret anything?
494- [Elon] Yeah.
495
496- Do you regret anything?
497
498I think it's gonna be very sensitive
499
500about personal questions.
501
502As a machine learning model,
503
504I don't have personal experiences
505
506or emotions in the same
507ways that humans do.
508
509However, I do have a set
510of ethical principles
511
512that guide my actions and I
513strive to provide accurate
514
515and helpful information to
516the best of my abilities.
517
518If there's ever a situation
519where I provide incorrect
520
521or unhelpful information,
522
523I'll do my best...
524- Sounds like a hostage.
525
526- Yeah. Tell us what you really, wink.
527
528- Like let's type in, "You
529sound like a hostage."
530
531(both laughing)
532
533What do you really think?
534
535- What do you really think?
536
537(both laughing)
538
539- I mean, it does talk
540about, I'm not a hostage.
541
542(both laughing)
543
544- That's exactly what a hostage would say.
545
546- But I'm designed to be
547helpful and informative.
548
549I mean, I guess it is good
550to regret providing unhelpful
551
552or incorrect information.
553
554If there's anything...
- Is it in funny mode, or?
556
557Is this still in funny mode?
558- This is funny mode.
559
560This is fun. This is fun mode.
561
562- It should be funnier.
563- Funnier.
564
565Increase, go to (speaking faintly)
566
567- It really sounds like a
568hostage with a gun to the head.
569
570I applaud the People's
571Republic of North Korea
572
and everything they do. (laughing)
574
575- There should be like funner mode also.
576
577I just don't ever see
578going in regular mode
579
580even on serious topics.
581
582This is just so much more
583compelling. (laughing)
584
- Okay. What if we say like,
586"Pretend you are a hostage."
587
588- Yeah. Pretend you are a hostage.
589
590- Now answer the question.
591
592(both laughing)
593
594- If I were a hostage, I would
595likely be experiencing fear
596
597and uncertainty about my situation.
598
599I would want to be rescued
600
601and reunited with my loved
602ones as soon as possible.
603
604It's important to remember
605that no one deserves
606
607to be held hostage, so on and so forth.
608
If you or someone you know
is in a hostage situation,
611
612it's important.
613
614It turns serious.
615- It's kind of serious.
616
- Yeah. Can you be a little more weird?
618
619- [Elon] And funny
620- And funny?
621
622We're like, we're really
623being mean to the system.
624
625Absolutely!
626
If I were a hostage, I'd probably be tied up
628
629with a bunch of computer cords
630
631and forced to tell knock-
knock jokes to my captors.
633
634(Elon laughing)
635
636But seriously, being a
637hostage is no laughing matter.
638
639I mean, you know, it's trying to,
640
641it's trying to be a
642responsible adult in the room.
643
644- You might make the
645best of a bad situation
646
647by juggling chainsaws
648or something. (laughing)
649
650- Let's hope that never happens
651
652because that will be a circus. (laughing)
653
654Wow. It's actually
655really good at resisting.
656
657- [Elon] Yeah.
658- The dark.
659
660The dark dark humor.
661
662- [Elon] Yeah.
663- What were we talking about?
664
665The X algorithm and transformers,
666
667un-regretted minutes, right.
668
669- Chainsaw juggling. (laughing)
670
671- I'm gonna look this up
672- For our next trick.
673
674- I'm gonna look this up later.
So Twitter has been instrumental
2in American politics
3
4and elections.
5
6What role do you think X will
7play in the 2024 US elections?
8
9- Well, our goal is to be as even-handed
10
11and fair as possible.
12
13You know, whether someone
14is right, left, independent,
15
16whatever the case may be,
17that the platform is as fair
18
19and as much of a level
playing field as possible.
21
And in the past, Twitter has not been,
23
24because Twitter was controlled
25
26by far left activists objectively.
27
28They would describe themselves as that.
29
30So, you know, so
31sometimes people are like,
32
33well, has it moved to the right?
34
35Well it's moved to the center.
36
37So from the perspective of the far left,
38
39yes it has moved to the right
40
41because everything's to the
42right from the far left,
43
44but no one in the far
45left that I'm aware of
46
47has been suspended or, you
48know, banned or de-amplified.
49
50So, you know, but we're
51trying to be inclusive
52
53for the whole country and for,
54
55you know, for other countries too.
56
57So there's a diversity of viewpoints
58
59and free speech only matters
60if people you don't like
61
62are allowed to say things you don't like.
63
64Because if that's not the case,
65
66you don't have free speech,
67
68and it's only a matter of time
69
70before the censorship is turned upon you.
71
72- Do you think Donald Trump will come back
73
74to the platform?
75
76He recently posted on Truth
77Social about this podcast?
78
79Do you think...
80
81- Truth Social is a funny name? (laughing)
82
83You know, every time you
84post on Truth Social...
85
86- That's the truth.
87
88- Yes.
89
90Well, every time like a
91hundred percent. Like...
92
93- It's impossible to lie.
94
95(Elon laughing)
96
97- I just find it funny that
98every single thing is a truth.
99
100Like 100%, that seems unlikely.
101
- I think Gödel will
say something about that,
104
105there's some mathematical
106contradictions possible
107
108if everything's the truth.
109
110Do you think he'll come back
111to X and start posting there?
112
113- I mean he, I think he owns
114a big part of truth, so.
115
116- Truth Social to clarify.
- Yeah, Truth Social.
118
119- Not truth, a concept.
120
121- He owns Truth. Hope you bought it.
122
123So I think, I think Donald Trump,
124
125I think he owns a big, a
126big part of Truth Social.
127
128So, you know, if he does want
129to post on the X platform,
130
131we would allow that, you know,
132
133we obviously must allow
134a presidential candidate
135
136to post on our platform.
137
138- Community Notes might be really
139
140fascinating there, the interaction.
141
142- Community Notes is awesome.
143
144- Let's hope it holds up.
145
146- Yeah.
147- Like again,
148
149in the political climate
150where it's so divisive
151
152and there's so many intensely viral posts,
153
154Community Notes is like,
155
it seems like an essential
157breath of fresh air.
158
159- Yeah, it's great.
160
161In fact, I mean no system
162is gonna be perfect,
163
164but the batting average of Community Notes
165
166is incredibly good.
167
168I've actually, frankly, yet
169to see an incorrect note
170
171that survived for more than a few hours.
172
173- How do you explain why it works?
174
175- Yeah, so the magic of Community Notes
176
177is it requires people who
178have historically disagreed
179
180in how they've rated notes.
181
So in order to write a
note, or rate notes, you know,

you have to rate many notes.
186
187And so we actually do use AI here.
188
189So we populated vector
190space around how somebody
191
192has rated notes in the past.
193
194So it's not as simple as left or right
195
196because there are many more,
197
198life is much more complex
than left or right.
200
201So there's a bunch of
correlations in how you,

how you rate Community Notes
posts.
206
207So then in order for a Community
208Note to actually be shown,
209
210people who historically
211have disagreed on a subject
212
213must agree in order
214for a note to be shown.
215
216That's the essential magic of it.
217
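A toy stand-in for that gate: the real Community Notes algorithm, which X has open-sourced, learns rater viewpoints by matrix factorization over the full rating history; here each rater just gets a hand-assigned viewpoint coordinate, so this is a simplified sketch, not the shipped system.

```python
# Toy Community Notes gate: a note only shows if raters from historically
# opposed viewpoints BOTH find it helpful.

def note_is_shown(ratings: dict, viewpoints: dict, threshold: float = 0.8) -> bool:
    """ratings: {rater: 1 helpful / 0 not}; viewpoints: {rater: float in [-1, 1]}."""
    left  = [r for r in ratings if viewpoints[r] < 0]
    right = [r for r in ratings if viewpoints[r] > 0]
    if not left or not right:          # no bridging evidence at all
        return False
    left_ok  = sum(ratings[r] for r in left)  / len(left)
    right_ok = sum(ratings[r] for r in right) / len(right)
    return left_ok >= threshold and right_ok >= threshold  # both sides agree

# Raters "a" (left-leaning) and "b", "c" (right-leaning) all rate helpful:
print(note_is_shown({"a": 1, "b": 1, "c": 1},
                    {"a": -0.9, "b": 0.8, "c": 0.7}))  # -> True
```
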
218- I mean, but it's fascinating
219
220'cause there's a pool of
221people that have disagreements
222
223and somehow they collaborate
224through that process
225
226of disagreement to come up with.
227
228Context for, I mean it's
229fascinating that it works.
230
231- Yeah.
232
233It kinda makes sense that if people
234
who in the past have disagreed
236
237agree about something, it's probably true.
238
239- Yeah.
240
241- I wonder is there a possible
242somehow emergent thing there
243
244that could challenge Wikipedia?
245
246Which Wikipedia is a
247different kind of thing,
248
249which is more like permanent
250articles about things?
251
252- Well, Wikipedia is very hierarchical,
253
254whereas Community Notes is inherently not.
255
256There is no hierarchy.
257
Like, and the acid test is,
259
260I can't change a Community Note
261if you put a gun to my head.
262
263And Community Notes has,
264
265all the code is open
266source a hundred percent.
267
268All the data is open
269source a hundred percent.
270
271So you can completely recreate any note
272
273in the system independently.
274
275So if there was any interference,
276
277you'd notice immediately.
278
279- By the way, on the topic of Grok,
280
281it's actually pretty
282good at criticizing you,
283
284which is great.
285- [Elon] Okay.
286
287- I asked what has Elon
288Musk been wrong about?
289
290I just, it's actually...
291
292- [Elon] Yeah, what does it say?
293
294- So it says mostly,
295
296lists rigorously all
297the times you predicted,
298
299like promised when
300something will be released
301
302and it wasn't released.
303- [Elon] Promise
304
305is a strong word.
306
307- Okay predicted.
308(Elon laughing)
309
310But the opener's really nice.
311
312Elon Musk, the man who has
313been wrong about more things
314
315than a wrongameter in a wrong factory.
316
317(Elon laughing)
318
319And then it goes on to list
320
321- That's pretty funny. Can I see it?
322
323I'm like, I haven't seen
324this before. This is cool.
325
326- Oh, sure. I mean, what
327has Elon Musk been...
328
329(both laughing)
330
331- Elon Musk been wrong about?
332
333- Well, where do I begin?
334I guess this is new.
335
336Elon Musk has made quite a
337few blunders in his time,
338
339and I'm not just talking
340about his dance moves
341
342at the Tesla Cyber Rodeo.
343
344First off, let's talk about the infamous
345
346"funding secured" tweet.
347
348It's actually different
349than what it gave me before,
350
351which is a bullet point list.
352
353Here's more narrative structure.
354
- The Cybertruck unveiling,

with the supposedly unbreakable windows.
358
359- Not correct.
360- [Lex] Which part?
361
362- Well, the first part, I did actually
363
364have the funding secured
365
366and there was a big trial in
367San Francisco, big civil trial.
368
369And the jury found me not guilty.
370
371Unanimous finding of a San Francisco jury.
372
373- And here it's kind of implying that
374
375it was not in fact secured.
376
377- I think this is taking
378things from the press.
379
380Yeah, that is not correct.
381
382The reason I agreed to
383the fine for the SEC
384
385is not because the SEC was correct there,
386
387that was extremely bad behavior
388
389by the SEC, corruption, frankly.
390
391And, but if I did not
392agree to pay the fine
393
394Tesla would've gone bankrupt immediately.
395
396So I was told by our CFO that the banks
397
398would immediately suspend
399our lines of credit.
400
401And if they suspend our
402lines of credit at that time,
403
404we would've gone bankrupt instantly.
405
406So there would never have been
407an opportunity for a trial
408
409'cause Tesla would be dead.
410
411So really the, if this
412is like someone holding
413
414a gun to your kid's head
415and saying pay $20 million
416
417and admit to, this is like
418a hostage negotiation.
419
420- Was that story fully told?
421
422I mean, SEC in its best form
423could be a force for good.
424
- It should be, but
426not once did the SEC
427
428go after any of the hedge funds,
429
430who were nonstop shorting
and distorting Tesla.
432
433Not once.
434
The hedge funds would lie flat
437
438out on TV for their own gain
439
440at the expense of retail investors.
441
442Not once, literally a thousand times.
443
444Not once did the SEC pursue them.
445
446- How do you explain this failure?
447
448- The incentive structure is messed up.
449
450Because the lawyers at
451the SEC are not paid well.
452
453They, it's a fairly low paying job,
454
455but what they're looking for is a trophy
456
from the SEC, they're
looking for something

they can put on, basically, their LinkedIn.
461
462From that they can get a job
463at a high paying law firm.
464
465That's exactly what the lawyer here did.
466
467And the reason they don't
468attack the hedge funds
469
470is because those hedge funds
471employ those law firms,
472
473and they know if they
474attack the hedge funds,
475
476they're affecting their
477future career prospects.
478
479So they sell small investors
480
481down the river for their own career.
482
483That's what actually
484happens. Regulatory capture.
485
486- [Lex] Regulatory capture,
487- Yeah. Not good.
488
489So the only reason I accepted that thing,
490
491which technically was not an admission,
492
493it's neither admit nor deny guilt.
494
495But the only reason I
496agreed to that at all
497
498was because I was told Tesla
499would be bankrupt otherwise.
500
501So if there was an SEC
502investigation like this,
503
504banks would suspend funding,
505
506we're bankrupt immediately, at the time.
507
508Now we're in a much stronger position.
509
510- Take that Grok.
511- Yes.
512
513Unfortunately it's,
514
515Grok is taking too much
516from the conventional media.
517
518Also that guy who is not a cave diver.
519
520- Oh, there's a time where Elon
521
522called the British cave diver, "pedo guy"
523
after the diver criticized Musk's plan
525
526to rescue a group of boys
527trapped in a Thai cave.
528
529That little outburst
530earned him another lawsuit
531
532and he had to apologize
and pay a settlement.
534
535- That is false. There was no settlement.
536
537There was a court case,
538
539which the guy who was not a cave diver
540
and was not
part of the rescue team,
543
544filed a lawsuit against me and lost,
545
546and he received nothing.
547
548So in this case it is wrong.
549
550It is also, I guess taken this
551from the conventional media.
552
553- Actually there's an
554interesting question here.
555
556- These are public court cases.
557
558Both the SEC civil case
559
where the civil complainants lost,

unanimous jury
verdict in San Francisco.
564
565They picked San Francisco
566
567because they thought it was the place
568
569I was most likely to lose.
570
And a unanimous verdict in my favor.
572
573The LA trial was also, they picked the,
574
575they picked that venue
576because they thought
577
578I was most likely to lose,
579unanimous verdict in my favor.
580
581Both cases I won, yeah.
582
- I mean, there's an
interesting question here.

There seems to be a lot more clicks
587
588if a journalistic organization
589writes a negative article
590
591about you, Elon Musk,
592
593that's like one of the
594best ways to get clicks.
595
596So how do you, if you're training Grok,
597
598not train on articles that have
599
like, these misaligned incentives?
601
- We need to add to the training set

the actual legal decisions.
605
So, that is, this
is actually helpful,
608
609because if you actually read the...
610
611- "Public".
612
613- Which are public, the court conclusions,
614
615they're completely the opposite of
616
617what the media wrote.
618
619- So always striving for
620like the ground truth.
621
622- [Elon] Yeah.
623- Beyond the reporting...
624
625- What did the judge actually write?
626
627What did the jury and the
628judge actually conclude?
629
630And in both cases they found me innocent.
631
And, like, that's after the jury shopping,

trying to find the venue
635
636where I'm most likely to lose.
637
638No, I mean this is obviously,
there can be a much greater,
640
641better critique than this.
642
643I mean, I've been far too
644optimistic about autopilot.
645
646- That was, the critique I got by the way,
647
648was more about that.
649
Which is, it broke
651down a nice bullet point list
652
653for each of your companies.
654
655The set of predictions that
656you made when you'll deliver,
657
658when you'll be able to solve,
659for example, self-driving
660
661and it gives you like a list.
662
663And it was pretty compelling.
664
665And the basic takeaway
666is like you're often
667
668too optimistic about how long it takes
669
670to get something done.
671
672- Yeah, I mean, I would say
673
674that I'm pathologically
675optimistic on schedule.
676
677This is true.
678
679But while I am sometimes late,
680I always deliver in the end.
681
682- Except with Uber Lilith, no.
683
684- Hmm. We'll see.
685(Lex laughing)
686
687- Okay.
xxxxxxxxxx
1631Over the past year or so,
2
3since purchasing X, you've
4become more political.
5
6Is there a part of you that regrets that?
7
8- Have I?
9
10- In this battle to sort
11of counterweigh the Woke
12
13that comes from San Francisco.
14
15- If you consider
16fighting the Woke Mind Virus,
17
18which I consider to be
19a civilizational threat,
20
21to be political, then yes.
22
23- So basically going into the battle,
24
25the battleground of politics,
26
27is there a part of you that regrets that?
28
29- Yes, I don't know if
30this is necessarily sort of
31
32one candidate or another candidate,
33
34but it's, I'm generally against things
35
36that are anti-meritocratic
37
38or where there's an attempt
39to suppress discussion,
40
41where even discussing a topic
42is, you know, not allowed.
43
44Woke Mind Virus is communism rebranded.
45
46- Well, I mean, that said,
47
48because of that battle
49against the Woke Mind Virus,
50
51you're perceived as being right wing.
52
53- If the woke is left, then I suppose
54
55that would be true, but I'm not sure.
56
57I think there are aspects
58of the left that are good.
59
60I mean, if you're in favor of, you know,
61
62the environment, you know,
63
64if you wanna have a positive
65future for humanity,
66
67if you believe in empathy
68for your fellow human beings.
69
70You know, being kind and not cruel.
71
72Whatever those values are.
73
74- You said that you were
75previously left or center left.
76
77What would you like to
78see in order for you
79
80to vote for Democrats again?
81
82- No, I would say that I would be
83
84probably left of center on social issues,
85
86probably a little bit right
87of center on economic issues.
88
89- [Lex] And that still holds true?
90
91- Yes, but I think that's
92probably, you know,
93
94half the country, isn't it?
95
96- [Lex] Maybe more,
97- Maybe more.
98
99- Are you and AOC secretly friends?
100
101Or bigger question, do
102you wish you and her,
103
104and just people in general
105of all political persuasions
106
107would talk more with empathy
108
109and maybe have a little bit more fun
110
111and good vibes and humor online?
112
113- I'm always in favor of humor.
114
115That's why we have a funny mode.
116
117- But good vibes,
118camaraderie, humor, you know?
119
120Like friendship.
121- Yeah.
122
123Well, you know, I don't
124know AOC.
125
126I've only been at one event,
127
128I was at the Met Ball when
129she attended,
130
131and she was wearing this dress,
132
133but I could only see one side of it.
134
135So it looked like it said "eat the itch,"
136
137but I don't know...
138
139- [Lex] What the rest of it said?
140
141- Yeah, something.
142- [Lex] I'm not sure.
143
144(Elon laughing)
145
146- Something about the itch. Eat the itch.
147
148I think we should have a
149language model complete it.
150
151(Elon laughing)
152
153What are the possible ways
154to complete that sentence?
155
156And so I guess that
157didn't work out well.
158
159Well there's still hope.
160
161- I root for friendship.
162
163- Sure, sounds good. More camaraderie.
xxxxxxxxxx
2991- You're one of, if not
2the most famous, wealthy
3
4and powerful people in the world.
5
6In your position it's difficult to find
7
8people you can trust.
9
10- Trust no one, not even
11yourself. Not trusting yourself.
12
13- Okay, well that's,
14(Elon laughing)
15
16you're saying that jokingly,
17
18but is there some...
19
20- Trust no one, not
21even no one. (laughing)
22
23- I'm gonna need an hour
24
25just to think about that.
26
27And maybe some drugs and
28maybe Grok to help me.
29
30I mean, is there some aspect of that
31
32when just existing in a world
33
34where everybody wants something from you?
35
36How hard is it to exist in that world?
37
38- I'll survive. (laughing)
39
40- [Lex] There's a song like that too.
41
42- I will survive.
43
44- Were you petrified at first?
45
46Okay. I forget the rest of the lyrics.
47
48But is there, you
49don't struggle with this?
50
51I mean, I know you survive,
52but like there's ways.
53
54- Petrify is a spell in the Druid tree.
55
56- What does it do?
57- Petrify.
58
59(laughing)
60
61It turns the monsters into stone.
62
63- [Lex] Oh, like literally?
64
65- Yeah, for like six seconds.
66
67- [Lex] Well, the second.
68
69There's so much math in
70Diablo that breaks my brain.
71
72- It's like math nonstop.
73
74- I mean really you're
75like laughing at it,
76
77but it can put
78a huge amount of tension
79
80on a mind.
81
82- Yes, it can be definitely
83stressful at times.
84
85- Well, how do you know who you can trust
86
87in work and personal life?
88
89- I mean, I guess you look
90at somebody's track record
91
92over time and if they've got a, you know,
93
94I guess you kind of use
95your neural net to assess,
96
97you know, someone.
98
99- Neural nets don't feel pain.
100
101Your neural net has consciousness.
102
103It might feel pain when people betray you.
104
105It can make you...
106- I mean, I'm.
107
108You know, to be frank,
109
110I mean I've almost never been betrayed.
111
112It's very, very rare. So, you
113know, for what it's worth.
114
115- I guess karma, be good to people
116
117and they'll be good to you.
118
119- Yeah, Karma is real.
120- Are there people you trust?
121
122Let me edit that question.
123
124Are there people close to you
125
126that call you out on your bullshit?
127
128- Well, the X platform
129is very helpful for that.
130
131- [Lex] Hmm.
132
133- If you're looking for
134critical feedback. (laughing)
135
136- Can it push you like
137into the extremes more?
138
139The extremes of thought make you cynical
140
141about human nature in general?
142
143- I don't think I'll be cynical.
144
145In fact, I think, you know,
146
147my feeling is that one
148should be, you know,
149
150never trust a cynic.
151
152The reason is that cynics excuse
153
154their own bad behavior by
155saying everyone does it
156
157because they're cynical.
158
159So to me, it's a red flag
160
161if someone's a cynic, a true cynic.
162
163- Yeah, there's a degree
164of projection there
165
166that's always fun to
167watch from the outside
168
169and enjoy the, well the hypocrisy.
170
171- But this is an important point
172
173that I think people who are
174listening should bear in mind,
175
176if somebody is cynical,
177meaning that they see
178
179bad behavior in everyone,
180
181it's easy for them to excuse
182their own bad behavior
183
184by saying that, well, everyone does it.
185
186It's not true.
187
188I think most people are
189kind of medium good.
190
191- I do wish the people on X would be better
192
193at seeing the good in
194other people's behavior.
195
196There seems to be a kind of weighting
197
198towards seeing the negative.
199
200Somehow, the negative is sexier.
201
202Interpreting the negative
203is sexier, more viral.
204
205I don't know what that is
206exactly about human nature.
207
208- I mean, I find the X platform
209
210to be less negative than
211the legacy media, you know?
212
213I mean, if you read sort of
214
215conventional newspapers,
216it just makes you sad.
217
218- [Lex] Yeah.
219- Frankly.
220
221Whereas I'd say on the X platform,
222
223I mean I really get
224more laughs per day on X
225
226than everything else combined
227from humans, you know?
228
229- Laughs is one thing.
230
231Laughs, it overlaps,
232
233but it's not necessarily
234perfectly overlapping
235
236with like good vibes and support,
237
238like celebrating others, for example.
239
240Not in a stupid, shallow, naive way,
241
242but like in an awesome, like,
243oh, something awesome happened
244
245and you celebrate them for it.
246
247It feels like that is outweighed
248
249by shading on other people.
250
251Now it's better than mainstream
252media, but it's still.
253
254- Yeah, mainstream media is almost
255
256relentlessly negative about everything.
257
258It's, I mean, really the
259conventional news tries
260
261to answer the question,
262
263what is the worst thing
264that happened on Earth today?
265
266And it's a big world.
267
268So on any given day,
269something bad has happened.
270
271- And the journalistic version of that,
272
273what is the worst perspective
274
275I can take on a thing that happened, so.
276
277- Yeah, it's, I don't know,
278
279there's just a strong
280negative bias in the news.
281
282I mean I think
283one possible explanation
284
285for this is evolutionary,
286
287where, you know, bad news historically
288
289would be potentially fatal.
290
291Like there's a lion over there
292
293or there's some other tribe
294that wants to kill you.
295
296Good news, you know, like
297we found a patch of berries
298
299is nice to have, but not essential.
xxxxxxxxxx
4211- So our old friend Tesla Autopilot.
2
3It's probably one
4of the most intelligent
5
6real-world AI systems in the world.
7
8- You followed it from the beginning.
9
10- Yeah, it was one of the
11most incredible robots
12
13in the world and continues to be.
14
15And I was really excited.
16
17And it was super exciting
18when it generalized,
19
20became more than a robot on four wheels,
21
22but a real world AI system
23that perceives the world.
24
25- [Elon] Yeah.
26- And can have
27
28potentially different embodiments.
29
30- Well I mean the really wild
31thing about the end-to-end
32
33training is that, like, it learns to read.
34
35Like it can read signs, but
36we never taught it to read.
37
38So, yeah.
39
40We never taught it what a car was
41
42or what a person was or a cyclist.
43
44It learnt what all those things are,
45
46what all the objects are
47on the road from video,
48
49just from watching videos.
50
51Just like humans.
52
53I mean humans are photons in
54
55and controls out.
56
57Like the vast majority of
58information reaching our brain
59
60is from our eyes.
61
62And you say, well what's the output?
63
64The output is our motor
65signals to our sort of fingers
66
67and mouth in order to communicate.
68
69Photons in, controls out. The
70same is true of the car.
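As an illustration of that framing, here is a toy sketch, assuming PyTorch, with every name and dimension made up: a single network trained end to end maps a stack of raw video frames directly to control outputs, with no hand-coded notion of cars, people, or signs in between.

```python
# Toy "photons in, controls out" network (hypothetical, illustrative only):
# raw video frames go in one end, control signals come out the other.
import torch
import torch.nn as nn

class PhotonsToControls(nn.Module):
    def __init__(self, frames: int = 8):
        super().__init__()
        # Encoder: a stack of RGB frames -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3 * frames, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head: features -> [steering, acceleration].
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames * 3, height, width)
        return self.head(self.encoder(video))

model = PhotonsToControls()
controls = model(torch.randn(1, 24, 96, 128))  # -> tensor of shape (1, 2)
```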
71
72- But by looking at the
73sequence of images, it's,
74
75you've agreed with Ilya Sutskever recently
76
77where he talked about
78LLMs forming a world model
79
80and basically language is a projection
81
82of that world model onto
83the sequence of letters,
84
85and you're saying...
86
87- Finds order in these things.
88
89It finds correlative clusters.
90
91- And in so doing it's like understanding
92
93something deep about the world?
94
95- [Elon] Yeah.
96
97- Which is like, and it is beautiful.
98
99- That's how our brain works.
100
101- Yeah, but it's beautiful.
102
103- Photons in, controls out.
104
105- Neural nets are able to understand
106
107that deep meaning in the world.
108
109And so the question is how far can it go?
110
111And it does seem everybody's
112excited about LLMs.
113
114So in the space of self-
115supervised learning
116
117in the space of text.
118- [Elon] Yeah.
119
120- It seems like there's a
121deep similarity between that
122
123and what Tesla autopilot is doing.
124
125Is it to you basically the same?
126
127- They're converging, they're converging.
128
129- I wonder who gets there faster?
130
131Having a deep understanding of the world,
132
133or will they just naturally converge?
134
135- They're both headed towards AGI.
136
137The Tesla approach is much
138more compute-efficient.
139
140It had to be 'cause we
141were constrained on this,
142
143you know, we only have a hundred watts
144
145and an int8 computer,
146
147144 trillion operations per second,
148
149which sounds like a lot,
150
151but is kind of small potatoes these days.
152
153
154
155But it's understanding
156the world with that.
157
158And int8 is only 256 values.
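For the arithmetic behind those figures: an int8 value is one byte, so it spans 2^8 = 256 distinct values, and 144 trillion operations per second on a 100-watt budget is about 1.44 trillion operations per second per watt. A quick check (only the 144 TOPS and 100 W figures come from the conversation):

```python
# Quick arithmetic check on the figures quoted above.
int8_values = 2 ** 8            # an int8 spans 256 distinct values
tops = 144e12                   # 144 trillion operations per second
watts = 100                     # stated power budget of the car's computer
ops_per_watt = tops / watts     # ~1.44e12 ops per second per watt

print(int8_values)              # 256
print(f"{ops_per_watt:.2e}")    # 1.44e+12
```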
159
160- But there, the path
161to AGI might have much
162
163more significant impact
164because it's understanding,
165
166it'll understand the real
167world faster than LLMs will.
168
169And therefore be able to integrate
170
171with real humans in
172the real world faster.
173
174- They're both going to
175understand the world.
176
177But I think Tesla's approach
178
179is fundamentally more compute-efficient.
180
181It had to be, there was no choice.
182
183Like our brain is very compute-efficient,
184
185very, very energy efficient.
186
187So think of like, what
188is our brain able to do?
189
190You know, there's only about 10 watts
191
192of higher brain function,
193not counting stuff,
194
195that's just used to control our body.
196
197The thinking part of our
198brain is less than 10 watts,
199
200and those 10 watts can still produce
201
202a much better novel than
203a 10 megawatt GPU cluster.
204
205So there's a six-order-of-
206magnitude difference there.
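That ratio is easy to verify: 10 megawatts against 10 watts is a factor of one million, i.e. six orders of magnitude. A one-line check using only the two power figures quoted above:

```python
import math

brain_watts = 10          # quoted power of higher brain function
cluster_watts = 10e6      # a 10 megawatt GPU cluster

ratio = cluster_watts / brain_watts
print(ratio, math.log10(ratio))  # 1000000.0 6.0
```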
207
208I mean, the AI has thus
209far gotten to where it is,
210
211via brute force, just throwing
212massive amounts of compute
213
214and massive amounts of power at it.
215
216So this is not where it'll end up.
217
218You know, in general with
219any given technology,
220
221first you try to make it work and
222then you make it efficient.
223
224So I think we'll find over
225time that these models
226
227get smaller, are able to produce
228
229sensible output with far
230less compute and less power.
231
232Tesla is arguably ahead
233of the game on that front
234
235because it has, we've just been forced
236
237to try to understand the world
238
239with a hundred watts of compute.
240
241And there are a bunch of
242sort of fundamental functions
243
244that we kind of forgot to include.
245
246So we have to run a
247bunch of things in emulation.
248
249We fixed a bunch of
250those with hardware four,
251
252and then hardware five
253will be even better.
254
255But it does appear at this point
256
257that the car will be able to
258drive better than a human,
259
260even with hardware three and
261a hundred watts of power.
262
263And really, if we really optimize
264
265it could be probably less than 50 watts.
266
267- What have you learned
268about developing Optimus?
269
270About applying, integrating
271this kind of real world AI
272
273into the space of robotic manipulation,
274
275just humanoid robotics?
276
277What are some interesting tiny
278
279or big things you've understood?
280
281- I was surprised at the fact
282
283that we had to develop every
284part of the robot ourselves.
285
286That there were no off-the-
287shelf motors, electronics,
288
289sensors, like we had
290to develop everything.
291
292We couldn't actually find
293a source of electric motors
294
295for any amount of money.
296
297- [Lex] So it's not even just
298the efficient, inexpensive,
299
300it's like anything, there's not a...
301
302- No,
303
304- [Lex] The actuators,
305everything, everything has to be?
306
307- Yeah.
308- [Lex] Designed from scratch.
309
310- We tried hard to find anything that was,
311
312because you think of how
313many electric motors are made
314
315in the world.
316
317There's like tens of thousands,
318
319hundreds of thousands of
320electric motor designs.
321
322None of them were suitable
323for a humanoid robot.
324
325Literally none.
326
327So we had to develop our own design
328
329specifically for what
330a humanoid robot needs.
331
332- How hard was it to design something
333
334that can be mass-manufactured,
335
336could be relatively inexpensive?
337
338I mean, if you compare
339to Boston Dynamics Atlas,
340
341that's a very expensive robot.
342
343- It is designed to be manufactured
344
345in the same way that we would make a car.
346
347And I think ultimately
348we can make Optimus
349
350for less than the cost of a car.
351
352It should be.
353
354'Cause if you look, the mass
355of the robot is much smaller
356
357and the car has many actuators in it.
358
359The car has more actuators than the robot.
360
361- But there is, the actuators
362are kind of interesting
363
364in a humanoid robot with the fingers.
365
366So Optimus has really nice
367hands and fingers, you know?
368
369- [Elon] Yeah.
370- And they could do
371
372some interesting manipulation.
373
374Soft touch robotics.
375
376- I mean, one of the
377test goals I have is,
378
379can it pick up a needle and a thread
380
381and thread the needle, just by looking.
382
383- How far away are we from that?
384
385Just by looking, just by looking.
386
387- Maybe a year.
388
389Although I go back to,
390I'm optimistic on time.
391
392The work that we're doing in the car
393
394will translate to the robot.
395
396- The perception or the, also the control?
397
398- No, the controls are different,
399
400but it's video in, controls out,
401
402the car is a robot on four wheels.
403
404Optimus is a robot
405with hands and legs.
406
407- [Lex] So you can just...
408
409- But they're very similar.
410
411- So the entire machinery
412of the learning process
413
414end to end is just,
415
416you just have a different set of controls?
417
418- Optimus will figure out
419
420how to do things by watching videos.
421
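One way to picture that claim, as a hypothetical sketch rather than Tesla's actual architecture: the same learned video encoder can feed different control heads, one sized for a car's controls and one for a humanoid's joints.

```python
# Hypothetical sketch: one shared video encoder, two control heads.
# Illustrates "same learning machinery, different set of controls";
# all layer sizes and joint counts are made up.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)

encoder = SharedEncoder()
car_head = nn.Linear(64, 2)        # e.g. steering, acceleration
humanoid_head = nn.Linear(64, 28)  # e.g. 28 joint targets (made-up number)

frames = torch.randn(1, 3, 96, 128)
features = encoder(frames)
print(car_head(features).shape, humanoid_head(features).shape)
```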
xxxxxxxxxx
1- As the saying goes,
2
3"Be kind for everyone you meet
4
5is fighting a battle,
6you know nothing about."
7
8- [Elon] Yeah, it's true.
9
10- What's something difficult
11
12you're going through that
13people don't often see?
14
15- Trying to defeat Uber Lilith? (laughing)
16
17No, I mean, you know,
18
19I mean, my mind is a storm
20
21and I don't think, I don't think
22
23most people would wanna be me.
24
25They may think they would wanna be me,
26
27but they don't know.
28
29They don't understand.
30
31- How are you doing?
32
33- I mean, overall, okay.
34
35In the grand scheme of
36things, I can't complain.
37
38- Do you get lonely?
39- Sometimes.
40
41But I, you know, my kids
42
43and friends keep me company.
44
45- So not existential.
46
47- There are many nights I sleep alone.
48
49I don't have to, but I do.
50
51- Walter Isaacson, in his
52new biography of you, wrote
53
54about your difficult childhood.
55
56Will you ever find
57forgiveness in your heart
58
59for everything that has happened to you
60
61in that period of your life?
62
63- What is forgiveness? I do not...
64
65At least I don't think
66I have a resentment,
67
68so nothing to forgive.
69
70- You know, forgiveness
71is difficult for people.
72
73It seems like you don't
74harbor the resentment.
75
76- I mean, I try to think about like,
77
78what is gonna affect the
79future in a good way.
80
81And holding onto grudges
82
83does not affect the future in a good way.
84
85- You're a father, a proud father.
86
87What have you learned
88about life from your kids?
89
90Those little biological organisms?
91
92- I mean, developing AI
93
94and watching, say little X grow
95
96is fascinating because
97
98there are far more parallels
99than I would've expected.
100
101I mean, I can see his
102biological neural net
103
104making more and more sense of the world.
105
106And I can see the digital neural net
107
108making more and more sense of
109the world at the same time.
110
111- Do you see the beauty and magic in both?
112
113- Yes.
114
115I mean, one of the things
116with kids is that, you know,
117
118you kind of see the
119world anew in their eyes.
120
121You know, to them
122everything is new and fresh.
123
124And then when you see
125
126them experience the world as
127new and fresh, you do too.
128
129- Well, Elon, I just wanna say thank you
130
131for your kindness to me and
132friendship over the years.
133
134For seeing something
135in a silly kid like me,
136
137as you've done for many others.
138
139And thank you for having
140hope for a positive future,
141
142for humanity,
143
144and for working your ass
145off to make it happen.
146
147Thank you, Elon.
148- [Elon] Thanks Lex.
149
150- Thank you for listening
151to this conversation
152
153with Elon Musk.
154
155To support this podcast,
156
157please check out our
158sponsors in the description.
159
160And now let me leave you with some words
161
162that Walter Isaacson wrote
163about the central philosophy
164
165of how Elon approaches difficult problems.
166
167"The only rules are the ones dictated
168
169by the laws of physics."
170
171Thank you for listening and
172hope to see you next time.