
March 12, 2025 30 mins

David Eagleman is a neuroscientist, author, entrepreneur and host of the podcast Inner Cosmos. In his podcast, he explores how our brains interpret the world and construct reality. Eagleman sits down with Oz to discuss AI relationships, the human urge to anthropomorphize chatbots and the benefits of living on the exponential curve.

Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Speaker 1 (00:01):
Welcome to Tech Stuff. This is The Story. Each week
on Wednesdays, we bring you an in depth interview with
someone who has a front row seat to the most
fascinating things happening in tech today. We're joined by David Eagleman.
David Eagleman is a neuroscientist, Stanford University professor, author, entrepreneur,

(00:25):
and host of the podcast Inner Cosmos. In his podcast,
Eagleman explores how our brain interprets the world and how
that influences our perception. Ideas have been a source of
fascination for Eagleman ever since he experienced an Alice in
Wonderland moment many years ago.

Speaker 2 (00:44):
I fell off of the roof of a house when
I was a kid.

Speaker 1 (00:48):
I was eight years old.

Speaker 2 (00:49):
The fall seemed to take a very long time, and
I was thinking about Alice in Wonderland on the way down,
about how this must have been what it was like
for her to fall down the rabbit hole.

Speaker 1 (00:59):
He told me that the whole journey was very calm,
but that it took a long time, and he remembers
having lots and lots of thoughts on his way to
the bottom.

Speaker 2 (01:08):
And then I hit the ground and broke my nose
very badly. But when I got older and went to
high school, I realized that the whole fall had taken
point six of a second. I calculated the equation, one half a t squared, and I couldn't believe that it was so, so fast.
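
For reference, the back-of-the-envelope physics Eagleman is describing: with constant acceleration g, a drop from rest over height h takes t = sqrt(2h/g), which is the "one half a t squared" equation solved for time. A minimal sketch (the roof height here is assumed for illustration, not taken from the episode):

```python
import math

G = 9.81  # gravitational acceleration in m/s^2

def fall_time(height_m: float) -> float:
    """Free-fall time from rest over height_m meters, ignoring air resistance.
    Derived from d = 1/2 * g * t^2, solved for t."""
    return math.sqrt(2 * height_m / G)

# Hypothetical roof height of about 1.8 m gives roughly 0.6 seconds of fall.
print(f"{fall_time(1.8):.2f} s")  # ~0.61 s
```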

Speaker 1 (01:23):
When Eagleman became a neuroscientist, he kept exploring this relationship
between our brain and our experiences, which led him to
research questions like: Is it possible to create a new sense? Is there a new way of interpreting an already existing sense? This turned into a tech product for
his company, Neosensory, that could help deaf people understand the

(01:45):
auditory world through a vibrating wristband. David Eagleman is an
innovator with a fascination for tech driven interventions, so we
weren't surprised to see an episode of the Inner Cosmos
podcast all about AI relationships, a widely reported phenomenon that
we here at Tech Stuff have been eager to explore.
So we thought we'd ask Eagleman to help us understand

(02:07):
how our brains form these attachments. But before we launch
into that discussion, I asked Eagleman to tell me a
bit more about the driving question behind his work: How does the human brain construct reality?

Speaker 2 (02:22):
Your brain is locked in silence and darkness inside your skull,
and it only has spikes coming in. That's all it
ever has, these little electrical signals. So your eyes aren't
pushing light through, and your ears aren't pushing sound through. Instead,
your eyes are converting photons into spikes, and ears are
converting sound waves into spikes, and your fingertips are converting

(02:45):
pressure and temperature into spikes and so on. So inside
the brain there's nothing but these little electrical spikes running around,
and it's living in the darkness and trying to figure
out what is the world out there? And so everything
that we experience as reality is actually a construction.

Speaker 1 (03:00):
You know.

Speaker 2 (03:01):
For example, things like colors don't exist in the outside world.
This is just a way to tag information as in
that's a different wavelength than that over there, so they
appear to be different colors. But this is the fascinating
question that we're stuck with. We are inside Plato's cave,
seeing shadows on the wall and trying to understand what

(03:22):
is outside.

Speaker 1 (03:24):
Speaking of perception, did you ever figure out an answer
to that question of why time slows down when our
lives are at risk?

Speaker 2 (03:32):
I did a number of experiments on this, actually the
only experiments that have ever been done on this
question of the perception of time.

Speaker 1 (03:38):
Yeah, on the.

Speaker 2 (03:39):
Question of why does time slow down when you're in
fear for your life. The reason no one had ever
done experiments is because you have to actually put subjects
in fear for their life, which is very hard to
get approved. And so I dropped people from a one-hundred-and-fifty-foot-tall tower backwards in free fall, and they're caught in a net below, going seventy miles an hour, and I built a device that strapped to their

(04:01):
wrists and it flashes information at them. And I was
able to measure whether they actually are seeing in slow
motion during a scary event. And it turns out that
it's all a trick of memory. What happens when you're
in fear for your life is your brain is writing
down every single thing. Normally, your brain is like a
sieve and most things just pass right through, but you're

(04:22):
capturing all the information. So it's not that you're seeing
things in slow motion. It's just the density of memory.
You're remembering everything.

Speaker 1 (04:31):
You know.

Speaker 2 (04:31):
You're in a car accident and you say, wow, I remember the
hood crumpling and the rear view mirror falling off, and
the expression on the other guy's face. And you've got
all these details, and your brain is doing a calculation
where it says, oh, well, if I have all those details,
it must have taken five seconds as opposed to one
second for this all to happen.

Speaker 1 (04:49):
One of the kind of topics of Tech Stuff is what are the most exciting or promising paradigms for human-machine interactions, particularly at a time when there's so much concern
about the negative effects of technology on the human brain. Obviously,
the question of like, what is this doing to us
in our brains and particularly developing brains. Do you have

(05:10):
a kind of internal paradigm for how you think about
the interaction between technology and our brains?

Speaker 2 (05:14):
More broadly, yeah, I have to say, I'll just put my cards on the table: I'm very cyber-optimistic on these points,
but I'll explain why. One is really what we want
for our children is the largest available diet of information.
When I was growing up, I had my homeroom teacher
in Albuquerque, New Mexico, and you know, whatever she knew

(05:38):
or didn't know, that's what I would learn, and I
would get my mother to drive me down to the
library and pull out an Encyclopedia Britannica and hope that
there was an article about the thing that I wanted.
And maybe the article wasn't you know, more than ten
or fifteen years old, I hoped. And you know, this
was really all the information that I had access to.
But now my kids, growing up, have the entire world's knowledge

(06:01):
at their fingertips, and it's the greatest thing for two reasons.
One is what I got growing up was lots of
just in case information, like, just in case you ever
need to know that the Battle of Hastings was in
ten sixty six, here you go. But kids now get
lots of just in time information. So as soon as
they're curious about something, they look it up, they ask Alexa,

(06:23):
they type it in their phone, they do whatever, and
they get the information in the context of their curiosity.
And what that means is they have the right cocktail
of neurotransmitters present for that information to stick. I mentioned
before that most things just pass right through our brains,
but when you care about something, it sticks. And the
fact is that everything that we create is a remix

(06:47):
of things that we've seen before. And so if you
have a larger warehouse of things that you have seen
and experienced, you're going to create better stuff. Just take
as an example something like music. You know, if you
grow up and you're somewhere in the world, let's say,
two hundred years ago, all you're ever going to hear
is the music of your local little area. But now

(07:08):
kids can listen to music from all over the world,
from all through history, and really put together bigger and
bigger things. So this is why we're having this exponential
increase in humankind's knowledge and innovation is because people are
doing remixes across space and time in a way that's
never been possible before. So I'm very optimistic about this. Yes,

(07:30):
kids waste lots of time on social media, surfing dumb websites,
things like that, but I think the good here outweighs
the bad.

Speaker 1 (07:39):
As the father of children, are we already seeing how
these brains are developing differently?

Speaker 2 (07:44):
Yes, I mean these brains are developing very differently. But
this is a very interesting thing. Lots of people in
the media will pipe off with opinions on this, but
the fact is it's very, very difficult to do a
scientific experiment on this because of the lack of a
control group. In other words, with my children, I can't find other
kids who are the same age, who haven't grown up

(08:06):
in the same circumstances, unless you find children who are
let's say, Quaker, who don't use technology or very very
impoverished in different places in the world. But there are
a hundred other differences there. If I see a difference
between my children and those children, I don't know if
it's because of the tech or because of something else
like diet or politics or whatever. And I can't take

(08:26):
my children and compare them, let's say, to me in
a previous generation, because there are a hundred other differences
there in terms of pollution and politics and other tech
and so on. So it's very difficult to do this
in a controlled way. There are some studies, for example,
looking at eye movements. You know, your eyes are jumping
around all the time, about three times a second. These

(08:47):
are called saccades, and you can measure these things
and you see that people, let's say, in my generation
versus my children's generation, actually move our eyes around differently
when looking at text because they're reading it more like
a web page, where they sort of scan across and
then they jump down and then scan across and so on, whereas for me, in my generation, it was more like a

(09:08):
zigzag pattern down the page. These things are totally unconscious, but you can measure them. But in general, are my kids
turning out differently? Is this whole generation?

Speaker 1 (09:18):
Yes?

Speaker 2 (09:18):
Absolutely.

Speaker 1 (09:19):
I love that idea that the more curious you are
the better you're able to form memories. I mean,
there are so many things we could talk about today,
but I'm personally very fascinated by AI relationships, and I
have a hypothesis that your listeners are too, because I
saw that you did an episode on it and you
rebroadcast it, so I can imagine it was a favorite.
I wanted to ask you, what's the lay of the land, Like,

(09:40):
what is an AI relationship? How broad is the phenomenon?
Who does it affect? And why did you choose to
focus on it?

Speaker 2 (09:48):
Why I chose to focus on it, as one of the many things I'm focusing on, is just because it's totally new. We've never been in a situation before where we've ever talked about, hey, did you fall in love with your machine? This is so wacky and new. I was just talking to a researcher, Bethany Maples, about this yesterday. Apparently there are a billion people
now on the planet who have AI relationships of one

(10:10):
sort or another. Some of these are friendships, some of
them are romantic relationships.

Speaker 1 (10:14):
Defined by an ongoing emotional connection with a non-human being?

Speaker 2 (10:19):
Exactly right, exactly right, with an app. And in
this country we have, for example, Character.AI or Replika.
These are different companies that do this sort of thing.
I find this very interesting that the concern that a
lot of people have is, hey, is this going to
ruin relationships? But here's why I'm optimistic about this as well.
First of all, people are having these chats. These chats

(10:41):
become steamy, pillow whispers, all kinds of stuff like that. That's great,
But fundamentally, what you want in a relationship is you want to take your girlfriend out to dinner, you want to introduce her to your friends. I don't think that's going away.
And of course the main thing is physical touch. I
mean this is you know, we have deep evolutionary programming
driving us towards that. So I don't think a chatbot is going to replace relationships. What I am very hopeful about

(11:05):
is the idea that AI relationships will actually improve real
life relationships because it's a sandbox. People can try things out,
people can get better.

Speaker 1 (11:16):
Now.

Speaker 2 (11:16):
Obviously, what this requires is AI companies that make bots
in a way that give you the right amount of pushback,
and maybe the bot sometimes gets angry or snarky or
things like that, so you learn your way through these situations.
But really this is what we all go through as
young people. We date, we screw things up a lot,

(11:37):
and eventually we get better and better at knowing how
to be a partner. And so if there's a way
that people can get practice at this, and one of
the interesting research questions that's coming out now is about
whether these AI relationships can serve as a way station,
meaning they help people sort of dig themselves out of

(11:58):
this hole of loneliness and then they go out and meet real people.

Speaker 1 (12:00):
When we come back, why our brains are so quick to anthropomorphize chatbots. Stay with us. David, I

(12:21):
wonder if you could explain from the perspective of a neuroscientist,
what's the difference between me using Replika or Character.AI and me asking ChatGPT, you know, what should I
have for dinner tonight, or how should I approach this
difficult conversation with my mother.

Speaker 2 (12:37):
The thing with Character.AI or Replika is that there is, you know, a character, and sometimes it's represented with a
visual avatar also, and we have these intensely social brains
where we don't have this circuitry to really distinguish something
that's fake from something that's real. And so if you

(12:59):
are talking to your avatar every day, and by the way,
she can send you text messages and say hey, how are you feeling today, and then it becomes in your
mind like a real person, it's really hard to distinguish
real from fake. You know, there's this great scene in
Westworld in the first episode where William walks in. He's
getting outfitted by this woman, and he awkwardly asked her,

(13:23):
He said, I'm so sorry, but I can't tell. Are
you real? And she says, if you can't tell, does
it matter? And I think this is the situation that
we're in now.

Speaker 1 (13:32):
There was a New York Times article recently about this phenomenon,
and one of the subjects, or a person who has
a relationship with an AI character, said, I don't actually
believe he's real, but the effects that he has on my life are real. The feelings that he brings out
of me are real, so I treat it as a
real relationship.

Speaker 2 (13:48):
So I just read an article last night about this
question of people saying thank you to ChatGPT when
it gives you a nice answer, and I find that
I often say thank you. I say, oh, that was excellent,
thank you, and so on. We can't help but anthropomorphize.
And by the way, people have done this since time immemorial.
They look at trees or the flights of birds, or
the patterns in the stars or whatever; they assign

(14:09):
human intention to things. So we're very prone to doing
this anyway. So I actually have an exact example. My
friend and I have built a robot.

Speaker 1 (14:20):
That's a cool flex. I wish I could say that.

Speaker 2 (14:25):
So we have ChatGPT in there and you can
have conversations with it. Now, this is actually meant for
elderly people who live alone, and so we've built this
as a companion robot that's doing all kinds of other
things under the hood there. But my nine year old
daughter was experimenting with it and talking with it, and
so I left the room, but I heard that she

(14:47):
was still talking with it, so I did, and maybe this is bad that I did this, but I sort of peeked around the corner to see what she was having a conversation with, and what I could see across
the way was that she started crying. She was talking
about our dog that died a while ago, and the
robot was giving her such nice feedback about it and how she can think about it and so on,

(15:07):
that my daughter cried. And this was all within, you know, I don't know, two minutes of me leaving the room.
People can get very very close with these things, and
the question for us to ask is if it is
better than a friend or a parent or whatever, because
it's paying one hundred percent attention to her.

Speaker 1 (15:24):
I'm sure you're familiar with Eliza, the first chatbot that
Joseph Weizenbaum created in the sixties, and I think he
actually created it as a kind of parody of psychotherapy,
where basically it would repeat everything you said back to
you as a question, and he was kind of horrified
that his secretary was using it, and then she asked
him to leave the room so that she could speak
privately with Eliza. So, I mean, that

(15:46):
was sixty years ago. So in a sense this feels
very novel, but to your point, has longer roots.
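
For a sense of how simple Eliza's mechanism was, here is a minimal, illustrative sketch of that reflect-it-back-as-a-question idea (not Weizenbaum's actual DOCTOR script; the pronoun table is a toy assumption):

```python
import re

# Toy pronoun swaps for reflecting a statement back at the speaker.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

def reflect(statement: str) -> str:
    """Rephrase the user's statement as a question, ELIZA-style."""
    words = [REFLECTIONS.get(w.lower(), w) for w in re.findall(r"[\w']+", statement)]
    return " ".join(words).capitalize() + "?"

print(reflect("I am worried about my job"))  # -> "You are worried about your job?"
```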

Speaker 2 (15:52):
Yes. And you know something I find interesting when you
look at Pixar films.

Speaker 1 (15:56):
You know, you can take.

Speaker 2 (15:57):
Cars or toys or b or whatever and you just
give them a little voice, and then people care about
the character and they cry if something happens to the character.
It's very easy for us to assign human intention.

Speaker 1 (16:10):
To anything which also carries risk. I mean, I'm curious,
how does this fit into your broader research about how
our brain constructs reality? And are there any watch-outs about this reality construction with machines?

Speaker 2 (16:25):
The watch out, of course, is the susceptibility to manipulation.
I mean, look, people had this concern with TikTok from
the beginning, which is, wow, this is addicting so many kids.
What if the people who are running TikTok just start
feeding one percent of a certain kind of video in there,
and then two percent whatever, could they actually change the

(16:47):
political affiliation of the children and so on? And the
answer is probably yes. I mean, we're really susceptible to
our diet, to what we take in. So now imagine
something that is a companion. Maybe you consider it
your best friend or your girlfriend or boyfriend.

Speaker 1 (17:04):
And.

Speaker 2 (17:06):
We just have to be really certain who the companies
are that are running this. And I think this is
never going to go away as a question. There's always
going to be an issue, And obviously there's the issue
about safety and privacy as it stands now, these billion
people with AI relationships. When they say whatever pillow whispers
they're saying, that goes up to the cloud in a
stored on the company's server. I think it's not that

(17:28):
long before the stuff will live on edge, so it
doesn't have to go off. But nonetheless, that's the watch
out is that as far as from the brain's point
of view, there are many things like this where we
have these brains that evolved for certain kinds of action
in the world, and we've been building technology forever to
fool these I mean, for goodness sakes, podcast, I mean,

(17:50):
you and i OZ were in different locations, but when
a listener listens to this, we're right there in their ears,
and it's a very commit sort of thing. So we're
doing all kinds of technologies where we're pushing things into
the brain where the brain says, oh, I got it
Oz and David are right here. Oh and you know,
and there's a girlfriend right here.

Speaker 1 (18:12):
I'm curious, how do we develop relationships and fall in love with other real people, and is there anything different as far as the brain's concerned when it comes to developing relationships with AI?

Speaker 2 (18:27):
You know, I don't know that there's much of a
difference there. And the reason is when two people fall
in love, they've got you know, my Plato's cave is
talking to this other person's Plato's cave over there. We're
both locked in our internal models of the world. Look,
I have a great marriage with my wife, but you know,

(18:47):
we nonetheless all the time have differences in the way
we're seeing the world. Because everybody lives on their own planet.
Everyone has their own sense of what's going on and
how to interpret stuff and what's right and wrong politically
and whatever, and so all that's happening is my data
goes in. It's a little channel into her brain and

(19:09):
vice versa. And it's the same thing with an AI bot,
as I said, because of all the pieces that are missing,
as in the physical touch, the Hey, I'm gonna, you know,
take this girlfriend out to meet my friends, and so on,
because all that's missing. I think it's not in real
danger of replacing a relationship entirely, but it can fulfill

(19:30):
a lot of the things that we are wired up
to need.

Speaker 1 (19:35):
I wonder if you could talk about the different senses
and how they relate to emotional attachments. So obviously you
have text, and you could have like a written therapy bot.
You have audio. You could make a deep fake of
someone's voice, even a loved one's voice, and have them
talk to you. You have video in terms of these
characters like Replika and Character.AI that we talked about,

(19:57):
and then you have this robot that you mentioned that, you know, your daughter can confide in. How does the panoply of human senses interact in terms of forming attachment, and do you think, as technology and robotics improve, these human-machine interactions will become even deeper?

Speaker 2 (20:14):
Yeah, I think it's inevitable that if we look five
years from now or ten years from now, there will
be humanoid robots that are really, really good. And I don't know, maybe it'll be twenty or thirty years before we
have ones that are essentially indistinguishable, and that's going to
be really interesting. Obviously, that can take care of the
physical domain in a way that we couldn't before. I

(20:37):
don't know if people will care about something being a
real human except for these very deeply etched evolutionary drives
to reproduce. So your AI robot, which can serve as
like a full companion, can't do that and never will
be able to do that, And so I think there'll
still be a drive towards real relationships. But what's clear

(21:01):
is we are entering a very strange new world. Obviously,
I mean this goes without saying, but because of this
exponential curve we're on, we're on the steepest part that
humans have ever been on, such that if we lived
two hundred years ago in some village, it would have
been pretty straightforward to predict that the next ten years
would be about the same. But boy, I think we

(21:22):
have more in common with our ancestors ten thousand years
ago than we do with our descendants one hundred years
from now.

Speaker 1 (21:34):
Coming up, the potential benefits of living on the exponential
curve. Stay with us. David, can you tell me what
this does to our brains when we're living on this

(21:54):
exponential curve?

Speaker 2 (21:56):
Yeah. One thing it does is keep us young mentally
because we're constantly seeing new things and learning new things. Look,
this is a totally goofy speculation, but one of the
things that happens with dementia is that people as they
age tend to fall into routines. It's because you start

(22:17):
off with a lot of fluid intelligence as a baby,
and what you get is crystallized intelligence when you're an adult.
You sort of know how things go, you know how
to operate in the world, you know how people act,
this kind of thing, and that's great. That means you're
able to operate successfully in the world. But the downside
is it means your brain isn't changing much anymore. And
what happens is when there are other sorts of problems

(22:38):
and pathologies that come in and chew up the brain tissue,
then you lose cognitive ability. That's what we see in
these different sorts of dementias. But it turns out that
if you are using your brain and constantly making new
pathways and constantly having to reconfigure things, that provides the
best protection that we know against dementia. Just as one

(23:01):
second example, there's been this very long ongoing study about
nuns in these convents and it turns out that they
all agreed a long time ago to donate their brains
upon their death, and it turned out that some fraction
of these nuns had Alzheimer's disease, and yet nobody knew
it when they were alive. They didn't show the cognitive symptoms.
And the reason is, if you live in a convent

(23:22):
till the day you die, you've got social responsibilities. You're
talking with the sisters, you're playing games or singing songs,
you're doing all these things, and you're constantly being challenged.
And so even as parts of their brain were falling apart,
they were building new roadways and bridges all the time. Anyway,
I think that we're in a situation as a society
now where till the day we die, we're going to

(23:42):
be building these new roadways because there's so much surprising
stuff happening all the time. I think we might actually
see less dementia as a result.

Speaker 1 (23:50):
Which brings me back to this idea of AI relationships,
because of course, a lot of these AI companions are
designed to please, not to challenge. You know, one of
the things people always say about relationships is that they
take work. But that's actually good because doing the work
makes you a more resilient person. And so that to
me is one of the kind of lingering concerns about

(24:11):
the nature of these relationships versus human relationships.

Speaker 2 (24:15):
I think that the companies will look very different in
just a couple of years from now, where they will
be making AI relationships that are more realistic, because the
reports that I've seen on this, when there's a girlfriend
or a boyfriend that is only there to please, people
get pretty bored of that straight away. But if it's
more like a real person with all the foibles of

(24:36):
a real person, and you know, they have to go
and they get angry and they whatever, then that is
a more sticky relationship.

Speaker 1 (24:43):
We met a couple of weeks ago at Web Summit in Qatar, and I was boasting to you about a recent Tech Stuff interview with Geoffrey Hinton, and you were
very kindly indulging me. But one of the things I
found so remarkable about what he said was, you know,
I did all this work because I wanted to win
a Nobel Prize for understanding how the human brain worked,
and instead I kind of contributed to the development of

(25:07):
neural networks and deep learning and AI which I found
a kind of remarkable thought. But how is this explosion
of AI and neural networks influencing our understanding of our
own brains?

Speaker 2 (25:21):
Yeah, really good question. So artificial neural networks took off,
you know, many decades ago as a way of saying, wow,
the brain is really complicated. Every neuron in your head
is as complicated as a city. It's got the entire
human genome in it; it's trafficking millions of proteins in very specific cascades.
So people said, that's really complicated. Why don't we just
say it's like a circle, it's a unit and it's

(25:44):
connected to other units. And that was the birth of
artificial neural networks, and that took off in this incredible
way where we are now. But essentially, think of it
like a fork in the road, where it's not really
what the brain's doing, it's doing this other thing. So
that's the bad news: it doesn't
necessarily tell us exactly how the brain is working. The

(26:05):
good news is the power of AI now can help
us analyze the neuroscience data that we have, and boy,
it is very complicated rich data. You know, we have
eighty six billion neurons. We have something like two hundred
trillion connections, and these things are changing every moment of
your life from cradle to grave. That's brain plasticity, and

(26:25):
every neuron is essentially crawling around and connecting and reconnecting
and unplugging and seeking, and it's a really complicated system.
There's so much more to figure out. What we have
been developing for a while are better and better technologies
to measure what's going on in the brain. But we're
just sitting on terabytes of data and we need the

(26:48):
processing power of AI to understand this. So this is
where the two roads come back together.
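
As a rough illustration of the simplification Eagleman describes, the "circle connected to other units" reduces a neuron to a weighted sum passed through a nonlinearity; a minimal sketch with made-up weights:

```python
import math

def artificial_unit(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial 'unit': a weighted sum of its inputs pushed through
    a sigmoid nonlinearity, standing in for a real neuron's biochemistry."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

# Toy example: three incoming connections with hypothetical weights.
print(artificial_unit([0.5, 0.2, 0.9], [0.4, -0.6, 0.1], bias=0.05))
```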

Speaker 1 (26:54):
You also formed a production company called Cognito Entertainment, and
you gave this wonderful quote saying, in an unparalleled moment
of scientific advancement, from brains to space to genetics, there are endless mind-blowing stories to share. In a world that sometimes seems upside down, science can be a source of
great inspiration, wonder and belief. So I just wanted to

(27:14):
close by asking you what are some of the things
that you're most excited about?

Speaker 2 (27:19):
Yeah, I mean, we're in such an insanely incredible time.
I'm lucky enough to be on Stanford campus, and so
I walk around and I see this lab and that
talk and this visiting speaker, and everything is just moving
so fast. What's interesting is you never know at any
moment in history which things are going to cash out

(27:40):
and which aren't. For example, Geoffrey Hinton, when he was
doing his stuff many years ago. We all knew Hinton's work,
and you know, we're all familiar with it, but it
didn't really seem like that was going somewhere in the
sense that, okay. So here's the dark secret: most neuroscientists sort of snickered at AI for a long time,
and then suddenly everyone said, whoa, I guess that worked.

(28:02):
So the point is all these hundreds of thousands of
innovations happening everywhere, and people say, hey, maybe glial cells
are doing something. Hey, maybe I can do something interesting
with these organoids. Maybe I can do something interesting over here.
We don't know which things are going to cash out.
But one thing to mention on this is when you
look back at the World's Fairs where everyone comes together

(28:24):
and talks about what the next big thing is, I'm
fascinated by the fact that these almost always missed what
actually turned out to be the next big thing, you know,
like in the sixties it was all about, what was it, underwater hotels and cutting trees with laser cutters and
so on. But no one foresaw the Internet, which was
probably the biggest change that happened to our species. So

(28:46):
it turns out that we don't know where things are going.
But boy, are we in an exciting time. And I
am just such a fan of the existence of the
Internet because what it means is that when something is
discovered now, it spreads instantly, globally. And that sounds so
obvious to us, but it just wasn't that long ago.
For example, when I was getting my PhD, someone discovers something,

(29:09):
they write a paper, it takes a few months to get
that published, then it ends up.

Speaker 1 (29:11):
In a journal.

Speaker 2 (29:12):
Then you go to the library, you hope to be
lucky enough to find that paper. You stick it on
the xerox machine and you hold it down with your
elbow because it's a big thick binder, and you try
to xerox the paper. Like it was really slow
to get information around just not that long ago, and
suddenly that's all changed, and that really makes a big difference.

Speaker 1 (29:35):
David, thank you so much. I hope you'll join us
again on the podcast soon. I hope you'll feed us
with some of the interesting things that you'll see developing
at Stanford. And I really, really enjoyed today's conversation.

Speaker 2 (29:45):
Thank you, Oz, great to be here.

Speaker 1 (29:53):
That's it for this week for Tech Stuff. I'm Oz Woloshyn. This episode was produced by Eliza Dennis and Victoria Dominguez. It was executive produced by me, Karah Preiss, and Kate Osborne for Kaleidoscope, and Katrina Norvell for iHeart Podcasts. Jack Insley mixed this episode and Kyle Murdoch wrote our theme song. Join us this Friday for Tech Stuff's The Week in Tech, when we'll run through the

(30:14):
headlines and hear from tech entrepreneur and researcher Azeem Azhar. Please rate,
review and reach out to us at tech Stuff podcast
at gmail dot com. We want to know what's on
your mind.
