[bright music]
- Welcome to the "Huberman Lab Podcast,"
where we discuss science
and science-based tools
for everyday life.
I'm Andrew Huberman,
and I'm a Professor of
Neurobiology and Ophthalmology
at Stanford School of Medicine.
Today I have the pleasure of
introducing Dr. Lex Fridman
as our guest on the
"Huberman Lab Podcast."
Dr. Fridman is a researcher
at MIT specializing
in machine learning,
artificial intelligence,
and human-robot interaction.
I must say that the conversation with Lex
was, without question,
one of the most fascinating conversations
that I've ever had,
not just in my career, but in my lifetime.
I knew that Lex worked on these topics.
And I think many of you are
probably familiar with Lex
and his interest in these topics
from his incredible podcast,
the "Lex Fridman Podcast."
If you're not already
watching that podcast,
please subscribe to it.
It is absolutely fantastic.
But in holding this conversation with Lex,
I realized something far more important.
He revealed to us a bit of his dream.
His dream about humans and robots,
about humans and machines,
and about how those interactions
can change the way that
we perceive ourselves
and that we interact with the world.
We discuss relationships of all kinds,
relationships with animals,
relationships with friends,
relationships with family
and romantic relationships.
And we discuss relationships
with the machines.
Machines that move and
machines that don't move,
and machines that come
to understand us in ways
that we could never
understand for ourselves,
and how those machines can
educate us about ourselves.
Before this conversation,
I had no concept of the ways
in which machines could inform me
or anyone about themselves.
By the end,
I was absolutely taken with the idea,
and I'm still taken with the idea
that interactions with machines
of a very particular kind,
a kind that Lex understands and
wants to bring to the world,
can not only transform the self,
but may very well transform humanity.
So whether or not you're familiar
with Dr. Lex Fridman or not,
I'm certain you're going to learn
a tremendous amount from him
during the course of our discussion,
and that it will transform
the way that you think about
yourself and about the world.
Before we begin,
I want to mention that this podcast
is separate from my teaching
and research roles at Stanford.
It is however part of my desire and effort
to bring zero-cost-to-consumer
information about science
and science-related tools
to the general public.
In keeping with that theme,
I'd like to thank the
sponsors of today's podcast.
Our first sponsor is ROKA.
ROKA makes sunglasses and eyeglasses
that are of absolutely phenomenal quality.
The company was founded
by two All-American
swimmers from Stanford,
and everything about the sunglasses
and eyeglasses they've designed
was built with performance in mind.
I've spent a career working
on the visual system.
And one of the fundamental issues
that your visual system has to deal with
is how to adjust what you see
when it gets darker or
brighter in your environment.
With ROKA Sunglasses and Eyeglasses,
whether it's dim
in the room or outside,
whether or not there's cloud cover,
or whether you walk into a shadow,
you can always see the
world with absolute clarity.
And that just tells me
that they really understand
the way that the visual system works:
processes like habituation
and attenuation.
All these things that work
at a real mechanistic level
have been built into these glasses.
In addition, the glasses
are very lightweight.
You don't even notice really
that they're on your face.
And the quality of the lenses is terrific.
Now, the glasses were also designed
so that you could use them,
not just while working
or at dinner, et cetera,
but while exercising.
They don't fall off your
face or slip off your face
if you're sweating.
And as I mentioned,
they're extremely lightweight.
So you can use them while running,
you can use them while
cycling and so forth.
Also the aesthetic of
ROKA glasses is terrific.
Unlike a lot of performance
glasses out there,
which frankly make
people look like cyborgs,
these glasses look great.
You can wear them out to dinner,
you can wear them for
essentially any occasion.
If you'd like to try ROKA glasses,
you can go to roka.com.
That's R-O-K-A.com and
enter the code Huberman
to save 20% off your first order.
That's ROKA,
R-O-K-A.com and enter the
code Huberman at checkout.
Today's episode is also
brought to us by InsideTracker.
InsideTracker is a
personalized nutrition platform
that analyzes data from your blood and DNA
to help you better understand your body
and help you reach your health goals.
I'm a big believer in getting
regular blood work done
for the simple reason
that many of the factors
that impact our immediate
and long-term health
can only be assessed from
a quality blood test.
And now with the advent
of quality DNA tests,
we can also get insight
into some of our genetic underpinnings
of our current and long-term health.
The problem with a lot of blood
and DNA tests out there, however,
is you get the data back and
you don't know what to do
with those data.
You see that certain things are high
or certain things are low,
but you really don't know
what the actionable items are,
what to do with all that information.
With InsideTracker,
they make it very easy to
act in the appropriate ways
on the information that you get back
from those blood and DNA tests.
And that's through the use
of their online platform.
They have a really easy-to-use dashboard
that tells you what sorts of
things can bring the numbers
for your metabolic factors,
endocrine factors, et cetera,
into the ranges that you want and need
for immediate and long-term health.
In fact, I know one individual,
just by way of example,
who was feeling good,
but decided to go with
an InsideTracker test
and discovered that they had high levels
of what's called C-reactive protein.
They would have never
detected that otherwise.
C-reactive protein is associated
with a number of deleterious
health conditions,
some heart issues, eye issues, et cetera.
And so they were able
to take immediate action
to try and resolve those CRP levels.
And so with InsideTracker,
you get that sort of insight.
And as I mentioned before,
without a blood or DNA test,
there's no way you're going
to get that sort of insight
until symptoms start to show up.
If you'd like to try InsideTracker,
you can go to insidetracker.com/huberman
to get 25% off any of
InsideTracker's plans.
You just use the code
Huberman at checkout.
That's insidetracker.com/huberman
to get 25% off any of
InsideTracker's plans.
Today's podcast is brought
to us by Athletic Greens.
Athletic Greens is an all-in-one
vitamin, mineral, and probiotic drink.
I started taking Athletic
Greens way back in 2012.
And so I'm delighted that
they're sponsoring the podcast.
The reason I started
taking Athletic Greens
and the reason I still
take Athletic Greens
is that it covers all of my
vitamin, mineral, and probiotic bases.
In fact, when people ask
me, what should I take?
I always suggest that the
first supplement people take
is Athletic Greens,
for the simple reason
that what it contains
covers your bases
for metabolic health, endocrine health,
and all sorts of other
systems in the body.
And the inclusion of probiotics
is essential for a
healthy gut microbiome.
There are now tons of data showing
that we have neurons in our gut,
and keeping those neurons healthy requires
that they are exposed
to what are called the correct microbiota,
little microorganisms that live in our gut
and keep us healthy.
And those neurons in turn
help keep our brain healthy.
They influence things like
mood, our ability to focus,
and many, many other
factors related to health.
Athletic Greens is terrific,
because it also tastes really good.
I drink it once or twice a day.
I mix mine with water and
I add a little lemon juice,
or sometimes a little bit of lime juice.
If you want to try Athletic Greens,
you can go to athleticgreens.com/huberman.
And if you do that, you can
claim their special offer.
They're giving away
five free travel packs,
little packs that make it
easy to mix up Athletic Greens
while you're on the road.
And they'll give you a year's supply
of vitamin D3 and K2.
Again, go to athleticgreens.com/huberman
to claim that special offer.
And now, my conversation
with Dr. Lex Fridman.
- We meet again.
- We meet again.
Thanks so much for sitting down with me.
I have a question that I think
is on a lot of people's minds
or ought to be on a lot of people's minds,
because we hear these
terms a lot these days,
but I think most people,
including most scientists and including me
don't know really what is
artificial intelligence,
and how is it different
from things like machine
learning and robotics?
So, if you would be so
kind as to explain to us,
what is artificial intelligence,
and what is machine learning?
- Well, I think that
question is as complicated
and as fascinating as the
question of, what is intelligence?
So, I think of artificial intelligence,
first, as a big philosophical thing.
Pamela McCorduck said
AI was the ancient wish
to forge the gods,
or was born as an ancient
wish to forge the gods.
So I think at the big philosophical level,
it's our longing to create
other intelligent systems.
Perhaps systems more powerful than us.
At the more narrow level,
I think it's also a set of
computational, mathematical tools
to automate different tasks.
And then also it's our attempt
to understand our own mind.
So, we build systems that exhibit
some intelligent behavior
in order to understand
what intelligence is
in our own selves.
So all of those things are true.
Of course, what AI really
means to the community,
to the set of researchers and engineers,
is a set of tools,
a set of computational techniques
that allow you to solve various problems.
There's a long history
that approaches the problem
from different perspectives.
One of the threads that has
always run throughout,
one of the communities, goes under the flag
of machine learning,
which emphasizes, in the AI space,
the task of learning.
How do you make a machine
that knows very little in the beginning,
follows some kind of process
and learns to become better and
better at a particular task?
What's been most effective
in roughly the last 15 years
is a set of techniques
that fall under the flag
of deep learning that
utilize neural networks.
Neural networks are these
fascinating things inspired
by the structure of the human brain,
very loosely: a network of these little
basic computational units
called neurons, artificial neurons.
These architectures have
an input and an output.
They know nothing in the beginning,
and they're tasked with learning
something interesting.
What that something interesting is
usually involves a particular task.
There's a lot of ways to talk about this
and break this down.
Like one of them is how
much human supervision
is required to teach this thing.
So supervised learning is a broad category,
where the neural network knows
nothing in the beginning
and then it's given a bunch
of examples; in computer vision
those would be examples of cats,
dogs, cars, traffic signs,
and then you're given the image
and you're given the ground
truth of what's in that image.
And when you get a large
database of such image examples
where you know the truth,
the neural network is
able to learn by example,
that's called supervised learning.
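[Editor's note: a minimal runnable sketch, in Python, of the supervised learning setup just described; the toy data, the single artificial neuron, and all constants are illustrative stand-ins, not anything from the episode.]

import numpy as np

# Toy labeled dataset: inputs plus human-provided ground truth
# (a stand-in for a database of images labeled "cat" / "not cat").
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # 200 examples, 4 features each
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = (X @ w_true > 0).astype(float)         # the ground-truth labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One artificial neuron that "knows nothing in the beginning"
# and learns by example from the labeled data.
w = np.zeros(4)
for step in range(500):
    p = sigmoid(X @ w)                     # the network's current guesses
    grad = X.T @ (p - y) / len(y)          # cross-entropy gradient
    w -= 0.5 * grad                        # gradient-descent update

print("training accuracy:", ((sigmoid(X @ w) > 0.5) == y).mean())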
There's a lot of fascinating
questions within that,
which is, how do you provide the truth?
When you're given an image of a cat,
how do you provide to the computer
that this image contains a cat?
Do you just say the entire
image is a picture of a cat?
Do you do what's very commonly been done,
which is a bounding box,
you have a very crude box
around the cat's face saying,
this is a cat?
Do you do semantic segmentation?
Mind you, this is a 2D image of a cat.
So it's not,
the computer knows nothing
about our three-dimensional world,
it's just looking at a set of pixels.
So, semantic segmentation
is drawing a nice,
very crisp outline around the
cat and saying, that's a cat.
It's really difficult
to provide that truth.
And one of the fundamental open questions
in computer vision is,
is that even a good
representation of the truth?
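[Editor's note: the three label formats just mentioned, written out as toy Python data structures; the file name and coordinates are made up for illustration.]

# Whole-image label: "the entire image is a picture of a cat."
image_label = {"image": "cat_001.png", "label": "cat"}

# Bounding box: a crude box around the cat (x0, y0, x1, y1 in pixels).
bounding_box = {"image": "cat_001.png", "label": "cat",
                "box": (42, 30, 118, 96)}

# Semantic segmentation: a crisp per-pixel outline
# (toy 2x3 mask; 1 = cat pixel, 0 = background).
segmentation = {"image": "cat_001.png", "label": "cat",
                "mask": [[0, 1, 1],
                         [0, 1, 0]]}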
Now, there's another
contrasting set of ideas,
though they're overlapping:
what used to be called
unsupervised learning,
and what's now commonly called
self-supervised learning.
Which is trying to get less and less
and less human supervision into the task.
So self-supervised learning,
it's been very successful in
the domain of language models,
natural language processing,
and now more and more is being successful
in computer vision tasks.
And the idea there is,
let the machine without
any ground-truth annotation
just look at pictures on the internet,
or look at texts on the internet
and try to learn something generalizable
about the ideas that are
at the core of language
or at the core of vision.
And based on that,
we humans, at our best, like
to call that common sense.
So with this,
we have this giant base of knowledge
on top of which we build
more sophisticated knowledge.
We have this kind of
commonsense knowledge.
And so the idea with
self-supervised learning
is to build this
commonsense knowledge about,
what are the fundamental visual ideas
that make up a cat and a dog
and all those kinds of things
without ever having human supervision?
The dream there is,
you just let an AI system
that's self-supervised
run around the internet for a while,
watch YouTube videos for
millions and millions of hours,
and without any supervision be primed
and ready to actually learn
with very few examples
once the human is able to show up.
We think of children in this way,
human children,
where your parents only
give one or two examples
to teach a concept.
The dream with self-supervised learning
is that it will be the same with machines.
That they would watch millions
of hours of YouTube videos,
and then come to a human
and be able to understand
when the human shows them, this is a cat.
Like, remember, this is a cat.
They will understand that a cat
is not just the thing with pointy ears,
or a cat is a thing that's
orange, or is furry,
they'll see something more fundamental
that we humans might not actually be able
to introspect and understand.
Like, if I asked you,
what makes a cat versus a dog,
you probably wouldn't
be able to answer that,
but if I brought
you a cat and a dog,
you'll be able to tell the difference.
What are the ideas that your brain uses
to make that difference?
That's the whole dream with
self-supervised learning,
is it would be able to
learn that on its own.
That set of commonsense knowledge,
that's able to tell the difference.
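[Editor's note: a minimal sketch of the self-supervised idea, assuming a made-up masked-prediction pretext task; in practice this is done with large neural networks on internet-scale data, not the linear toy below.]

import numpy as np

rng = np.random.default_rng(1)

# Unlabeled data only: no human ground-truth annotation anywhere.
X = rng.normal(size=(1000, 8))
X[:, 7] = X[:, :7] @ rng.normal(size=7)   # hidden structure to discover

# Pretext task: mask the last feature and predict it from the rest.
# The training signal comes from the data itself -- that is the
# "self" in self-supervised learning (cf. masked language models).
inputs, targets = X[:, :7], X[:, 7]
W, *_ = np.linalg.lstsq(inputs, targets, rcond=None)

# W now encodes structure learned with zero labels; the point is that
# such pretrained weights make a human's one or two labeled examples
# ("this is a cat") go much further on the downstream task.
print("masked-prediction error:", np.mean((inputs @ W - targets) ** 2))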
And then there's a
lot of incredible uses
of self-supervised learning,
like the very weirdly named self-play mechanism.
That's the mechanism behind
the reinforcement learning successes
of the systems that won at Go,
and AlphaZero that won at chess.
- Oh, I see.
That play games?
- [Lex] That play games.
- Got it.
- So the idea of self-play probably
applies to other domains than just games.
It's a system that just
plays against itself.
And this is fascinating
in all kinds of domains,
but it knows nothing in the beginning.
And the whole idea is it creates
a bunch of mutations of itself
and plays against those
versions of itself.
And the fascinating thing is
when you play against systems
that are a little bit better than you,
you start to get better yourself.
Like learning,
that's how learning happens.
That's true for martial arts.
It's true in a lot of cases.
Where you want to be
interacting with systems
that are just a little better than you.
And then through this process
of interacting with systems
just a little better than you,
you start following this process
where everybody starts getting
better and better and better
and better until you are
several orders of magnitude
better than the world champion
in chess, for example.
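[Editor's note: a toy self-play loop in the spirit of what's described here: freeze a version, make mutated clones, let them compete, keep the winner. "Skill" is a single number in this sketch; in AlphaZero it is a deep neural network, and the game below is abstracted into a win probability.]

import random

random.seed(0)

def play(a, b):
    # Higher skill wins more often; the game itself is abstracted away.
    return a if random.random() < a / (a + b) else b

champion = 1.0
for generation in range(100):
    frozen = champion                             # yesterday's version
    clones = [frozen * random.uniform(0.9, 1.2)   # mutations of itself,
              for _ in range(10)]                 # some better, some worse
    for challenger in clones:
        champion = play(champion, challenger)     # the winner stays on
    # Nothing here caps the skill: it can keep compounding,
    # generation after generation.
print("final skill:", round(champion, 2))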
And it's fascinating because
it's like a runaway system.
One of the most terrifying
and exciting things
that David Silver, the creator
of AlphaGo and AlphaZero,
one of the leaders of the team, said
to me is that
they haven't found the
ceiling for AlphaZero.
Meaning it could just
arbitrarily keep improving.
Now, in the realm of chess,
that doesn't matter to us.
It just ran away with the game of chess.
It's just so
much better than humans.
But the question is,
if you can create that in a
realm that does have a bigger,
deeper effect on human
beings and societies,
that can be a terrifying process.
To me, it's an exciting process
if you supervise it correctly,
if you inject what's
called value alignment:
you make sure that the goals
that the AI is optimizing
are aligned with human
beings and human societies.
There's a lot of fascinating
things to talk about
within the specifics of neural networks
and all the problems that
people are working on.
But I would say the really big,
exciting one is self-supervised learning.
We're trying to get less
and less human
supervision of neural networks.
And also just a comment and I'll shut up.
- No, please keep going.
I'm learning.
I have questions, but I'm learning.
So please keep going.
- So, to me what's
exciting is not the theory,
it's always the application.
One of the most exciting applications
of artificial intelligence,
specifically neural networks
and machine learning
is Tesla Autopilot.
So these are systems that are
working in the real world.
This isn't an academic exercise.
This is human lives at stake.
This is safety-critical.
- These are automated vehicles.
Autonomous vehicles.
- Semi-autonomous.
We want to be.
- Okay.
- We've gone through wars on these topics,
- Semi-autonomous vehicles.
- Semi-autonomous.
So, even though it's called
FSD, Full Self-Driving,
it is currently not fully autonomous,
meaning human supervision is required.
So, human is tasked with
overseeing the systems.
In fact, liability-wise, the
human is always responsible.
This is a human factor
psychology question,
which is fascinating.
I'm fascinated by the whole space,
which is a whole 'nother space
of human robot interaction
when AI systems and humans work together
to accomplish tasks.
That dance to me is one of
the smaller communities,
but I think it will be one of the most
important open problems
once it's solved:
how the humans and
robots dance together.
To me, semi-autonomous driving
is one of those spaces.
So for Elon, for example,
he doesn't see it that way,
he sees semi-autonomous
driving as a stepping stone
towards fully autonomous driving.
Like, humans and robots
can't dance well together.
Like, humans and humans dance
and robots and robots dance.
Like, we need to,
this is an engineering problem,
we need to design a perfect
robot that solves this problem.
To me, forever,
maybe this is not the case with driving,
but the world is going
to be full of problems
where humans and
robots always have to interact,
because I think robots
will always be flawed,
just like humans are going
to be flawed, are flawed.
And that's what makes life beautiful,
that they're flawed.
That's where learning happens
at the edge of your capabilities.
So you always have to figure out,
how can flawed robots and
flawed humans interact together
such that, like, the whole
is bigger than the sum of the parts,
as opposed to focusing on just
building the perfect robot?
- Mm-hmm.
- So that's one of the
most exciting applications,
I would say, of artificial
intelligence to me:
autonomous driving, the
semi-autonomous driving.
And that's a really good
example of machine learning
because those systems
are constantly learning.
And there's a process there
that maybe I can comment on.
Andrej Karpathy, who's
the head of Autopilot,
calls it the data engine.
And this process applies for
a lot of machine learning,
which is you build a
system that's pretty good
at doing stuff,
you send it out into the real world,
it starts doing the stuff
and then it runs into what
are called edge cases,
like failure cases,
where it screws up.
We do this as kids.
That you have-
- You do this as adults.
- We do this as adults.
Exactly.
But we learn really quickly.
But the whole point,
and this is the fascinating
thing about driving,
is you realize there's
millions of edge cases.
There's just like weird situations
that you did not expect.
And so the data engine process
is you collect those edge cases,
and then you go back to the drawing board
and learn from them.
And so you have to
create this data pipeline
where all these cars,
hundreds of thousands of
cars are driving around
and something weird happens.
And so whenever this weird detector fires,
it's another important concept,
that piece of data goes
back to the mothership
for the training,
for the retraining of the system.
And through this data engine process,
it keeps improving and
getting better and better
and better and better.
So basically you send
a pretty clever AI
system out into the world
and let it find the edge cases,
let it screw up just enough to figure out
where the edge cases are,
and then go back and learn from them,
and then send out that new version
and keep updating that version.
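[Editor's note: a runnable Python toy of the data engine loop just described: deploy, let the "weird detector" flag edge cases, have humans label only those, retrain, redeploy. The single-threshold model and all numbers are illustrative, not Tesla's actual pipeline.]

import random

random.seed(0)

def ground_truth(x):          # stands in for the human annotators
    return x > 0.7            # the rule the deployed model must learn

boundary = 0.1                # initial model: a single decision threshold
dataset = []                  # grows with each round of edge cases

for version in range(5):
    stream = [random.random() for _ in range(5000)]   # miles driven
    # Weird detector: fires on low-confidence inputs near the boundary.
    edge_cases = [x for x in stream if abs(x - boundary) < 0.1]
    # Only the edge cases go back to the mothership for labeling.
    dataset += [(x, ground_truth(x)) for x in edge_cases]
    # Retrain: pick the threshold with the fewest errors on all labels.
    boundary = min((i / 100 for i in range(101)),
                   key=lambda b: sum((x > b) != y for x, y in dataset))
    print(f"model v{version + 1}: boundary = {boundary:.2f}")

# Each round, the flagged edge cases pull the model closer to the true
# rule: the new version is shipped, finds new edge cases, and repeats.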
- Is the updating done by humans?
- The annotation is done by humans.
So,
the weird examples come back,
the edge cases,
and you have to label what
actually happened in there.
There's also some mechanisms
for automatically labeling,
but mostly,
I think you always have to
rely on humans to improve,
to understand what's
happening in the weird cases.
And then there's a lot of debate.
And this is the other thing about,
what is artificial intelligence?
Which is a bunch of smart people
having very different opinions
about what is intelligence.
So AI is basically a community of people
who don't agree on anything.
- And it seems to be the case.
First of all,
this is a beautiful description of terms
that I've heard many times
among my colleagues at Stanford,
at meetings in the outside world.
And there are so many fascinating things.
I have so many questions,
but I do want to ask one
question about the culture of AI,
because it does seem to be a community
where at least as an outsider,
where it seems like there's
very little consensus
about what the terms
and the operational definitions even mean.
And there seems to be a lot
of splitting happening now
of not just supervised
and unsupervised learning,
but these sort of intermediate conditions
where machines are autonomous,
but then go back for more instruction
like kids go home from
college during the summer
and get a little,
mom still feeds them,
then eventually they leave
the nest kind of thing.
Is there something in
particular about engineers,
or about people in this
realm of engineering
that you think lends
itself to disagreement?
- Yeah, I think,
so, first of all, the
more specific you get,
the less disagreement there is.
So there's a lot of disagreement
about what is artificial intelligence,
but there's less disagreement
about what is machine learning
and even less when you
talk about active learning
or machine teaching or
self-supervised learning.
And then when you get
into like NLP language
models or transformers,
when you get into specific
neural network architectures,
there's less and less
and less disagreement
about those terms.
So you might be hearing the disagreement
from the high-level terms,
and that has to do with
the fact that engineering,
especially when you're talking
about intelligence systems
is a little bit of an art and a science.
So the art part is the thing
that creates disagreements,
because then you start
having disagreements
about how easy or difficult
the particular problem is.
For example,
a lot of people disagree with Elon
how difficult the problem
of autonomous driving is.
And so, but nobody knows.
So there's a lot of disagreement about,
what are the limits of these techniques?
And through that,
the terminology also contains
within it the disagreements.
But overall,
I think it's also a young science,
and that has to do with it.
So like it's not just engineering,
it's that artificial intelligence truly
is a large-scale discipline,
where it's thousands, tens of thousands,
hundreds of thousands
of people working on it,
huge amounts of money being
made, and it's a very recent thing.
So we're trying to figure out those terms.
And, of course,
there's egos and personalities
and a lot of fame to be made.
Like the term deep learning, for example:
neural networks have been around
for many, many decades, since the '60s,
you can argue since the '40s.
So there was a rebranding
of neural networks
into the term deep learning
that was part of the
reinvigoration of the field,
but it's really the same exact thing.
- I didn't know that.
I mean, I grew up in
the age of neuroscience
when neural networks were discussed,
computational neuroscience
and theoretical neuroscience,
they had their own journals.
It wasn't actually
taken terribly seriously
by experimentalists until a few years ago.
I would say about five to seven years ago.
It was excellent theoretical
neuroscientists like Larry Abbott
and other colleagues,
certainly at Stanford as well,
that got people to start paying attention
to computational methods.
But these terms,
neural networks, computational methods,
I actually didn't know
that neural networks
and deep learning, that
those have now become
kind of synonymous.
- No, they've always been the same thing.
- Interesting.
It was, so.
- I'm a neuroscientist
and I didn't know that.
- So, well, because neural networks
probably means something else
in neuroscience, not something else,
but a little different flavor
depending on the field.
And that's fascinating too,
because neuroscience and AI people
have started working together
and dancing a lot more
in the recent,
I would say probably decade.
- Oh, machines are going into the brain.
I have a couple of questions,
but one thing that I'm sort of fixated on
that I find incredibly interesting
is this example you gave of playing a game
with a mutated version of
yourself as a competitor.
- Yeah.
- I find that incredibly interesting
as a kind of a parallel or a mirror
for what happens when we
try and learn as humans,
which is we generate repetitions
of whatever it is we're trying to learn,
and we make errors.
Occasionally we succeed.
In a simple example, for instance,
of trying to throw
bullseyes on a dartboard.
- Yeah.
- I'm going to have
errors, errors, errors.
I'll probably miss the dartboard.
And maybe occasionally, hit a bullseye.
And I don't know exactly
what I just did, right?
But then let's say I was playing darts
against a version of myself
where I was wearing a visual prism,
like I had a visual defect,
you learn certain things
in that mode as well.
You're saying that a machine
can sort of mutate itself,
does the mutation always
cause a deficiency
that it needs to overcome?
Because mutations in biology
sometimes give us superpowers, right?
Occasionally, you'll get somebody
who has better than 20/20 vision,
and they can see better than
99.9% of people out there.
So, when you talk about
a machine playing a game
against a mutated version of itself,
is the mutation always say what
we call a negative mutation,
or an adaptive or a maladaptive mutation?
- No, you don't know until you get there,
so, you mutate first
and then figure it out as they
compete against each other.
- So, you're evolving,
the machine gets to evolve
itself in real time.
- Yeah.
And I think of it,
which would be exciting
if you could actually do it with humans.
It's not just.
So, usually you freeze
a version of the system.
So, really you take the Andrew of yesterday
and you make 10 clones of him.
And then maybe you mutate, maybe not.
And then you do a bunch of competitions
with the Andrew of today,
like you fight to the death,
and see who wins.
So, I love that idea
of like creating a bunch
of clones of myself
from, like, each day
of the past year,