Fighters Guide to: Brain Myths and Misconceptions, Part 4

Lecture 19
Are Dreams Meaningful?

Dreaming is difficult to study empirically. That’s because it’s
difficult to know, with the tools we currently have, what people are
dreaming about unless they can tell you about it in the moment.
And people can’t tell you what they are dreaming about while they
are dreaming because they’re unconscious.
Instead, we need to wake them up and ask them what they
remember having just dreamt. There’s a major problem with this
method: Your conscious mind probably thinks very differently
from your dreaming mind. At least that’s what the brain activation
comparisons suggest, showing that the activity in your brain is very
different while you’re awake compared with while you’re asleep.
How your conscious mind interprets the activity in your unconscious
mind might be very different from what you actually experience
in the dream state. In the simplest case, while dreaming, you’re
not being bombarded with sensory information that is coming from
outside of your brain. Instead, your brain is generating the sensory
experience itself—it’s hallucinating.
When we consider what a dream is, we need to think about the
mental state that makes dreaming possible. But because we can’t
get direct information from that state, we can only really study the
waking mental state that follows. It’s not a very transparent lens
into dreaming, but right now, it’s all we have.
It’s entirely possible, if not highly likely, that your interpreter, the
part of your brain that actively works to make sense of what’s going
on in your mind, is very active when you’re remembering elements
of your dream. And the interpreter is not very trustworthy, so we
have to admit that the reports of our dreams might be polluted by
other cognitive processes, such as remembering and interpreting.
For the most part, we consider dreaming outside of our control.
Only in a special type of dreaming, called lucid dreaming, do we
actually have control over what’s happening. So, what we study as
dreaming is a conscious report of a mental experience over which
we had no control.
We need to set some boundaries to study the phenomenon of
dreaming. We can distinguish, for example, dreaming that occurs
just as we’re falling asleep from dreaming that occurs while we’re
deeply asleep and dreaming that occurs just as we’re waking up.
If we stick to our definition of dreaming as a mental activity in
an unconscious state, then we need to include thinking during
sleep, and that happens in all stages of sleep. When most people
think of dreaming, though, they don’t have in mind every thought
that happens while asleep. Rather, we tend to think of dreams as the
often bizarre storylines that unfold during the stage of sleep called
rapid eye movement (REM) sleep.
The name for this stage of sleep comes from the fact that, during
it, our eyes are darting left and right underneath our eyelids, as
though we are seeing events unfold in our mind’s eye. But is that
what the eyes are really doing? Is that darting a function of looking
or something else entirely?
The person who is perhaps most famously associated with
dreaming is Sigmund Freud, who proposed that dreams are
evidence of a disguised censorship in our minds. By interpreting
the dreams of his patients, he noticed that the language of dreams
is not direct, but he felt that psychoanalysts can decipher a
person’s true fears, wishes, and desires if they can uncover the
information that is being censored during the dreaming process
and unshackle themselves from past disappointments.
The problem with Freud’s theory is that it’s not testable or
falsifiable. Because science works by eliminating hypotheses that
can be shown to be wrong, we can’t call his theory scientific. Another
problem is that different analysts have been shown to interpret
the same dreams in different ways. That means that dream analysis
is to some extent subjective, whereas science must be objective.
A scientific theory of what our dreams mean was proposed in 1977
and is called the activation-synthesis hypothesis, which states
that dreams are just the result of your interpreter trying to create a
story out of random neural signaling.
When we enter REM sleep, large waves of activity wash over the
brain. The activation-synthesis model proposes that these waves
activate different brain regions randomly—that’s the activation
part. Then, the cortex tries to synthesize this activity—to make
sense of it—by concocting a story or tying images together, and
so forth.
These ideas called Freud’s work into question, and proponents
of the model insist that dream interpretation is mumbo jumbo—
because there is no meaning in the dreams. Dreaming is just your
cortex trying to make sense of random electrical activity across
the brain. This model rocked the foundation on which Freudian
analysis was built.
The activation-synthesis hypothesis has some real limitations. For
example, it accounts for the way that we dream—the visual images,
the bizarre nature, the distortions—but it doesn’t speak to what
we dream. The model doesn’t explain the content of our dreams in
any satisfying way.
There are now a series of theories that do attempt to address
content. One idea is that dreams play a role in memory
consolidation, firming up memories of things that are important to
us and erasing irrelevant information. Certainly, sleep as a whole
is involved in this process, but the question remains concerning
the extent to which dreams are an integral part or simply an
innocent bystander.
In the dream-to-erase theory, for example, scientists suggest
that if we remembered everything that we experience each
day, we’d have serious problems with interference, overlapping
representations, and ultimately a system failure akin to what
happens when your computer runs out of memory.
We see a version of this problem in people who have highly
superior autobiographical memory—rare individuals who can
remember every day of their lives. These people have obsessive-
compulsive tendencies. Their memories become intrusive, and
remembering takes up much of their conscious time, even when
they don’t want it to.
Proponents of the dream-to-erase theory suggest that dreaming
is an artifact of how unnecessary information is removed from the
brain, leaving it more efficient and running smoothly, much like
a cleaned-up computer. While you dream, you replay things that
happened in the past, strengthening the important associations
and getting rid of the irrelevant stuff.
In this view, dreams don’t mean anything; they are just part of the
mechanism by which the brain declutters itself.
This idea, that dreams function as a cleanup crew for unimportant
sensory information, dates back to the 19th century and was even
included in Freud’s famous tome on dream interpretation. It’s
compelling because it seems to solve the problem of how our
brains seem to be capable of learning a limitless amount of new
information every day, even as most of that information is forgotten
after 24 hours, or our first sleep period.
The major problem with the dream-to-erase theory is that we
haven’t found evidence yet of active erasing during REM—that
synapses that were previously strengthened, for example, are
being weakened in this state. But the idea that sleep performs
cleanup remains influential, supported by a higher cerebrospinal
fluid volume during sleep and a buildup of toxic by-products of
metabolism in people who aren’t sleeping enough.
There are different levels of attention and consciousness. We can
focus intently on one activity or let our minds wander. When we
sleep, our focus is looser still, as most people relinquish control of
their thoughts to the dream weaver.
Most current scientific theories of dreaming agree with this
characterization so far (minus the weaver). But whether the
content of dreams is just random, a by-product of the sleeping
brain (as in the dream-to-erase theory), or more meaningful (as
Freud suggested) is where the theories diverge.
In the contemporary theory of dreaming, developed by Tufts
University professor Ernest Hartmann and others, the content of
a dream is thought to be generally driven by emotional networks
in the brain. The evidence that theorists point to is varied, but it
begins with the finding that after an emotional event, particularly
a traumatic one, the intensity of dreams is greater, as is the
frequency of intense dreams.
The fact that what we do during the day often makes an
appearance in our dreams is not a new observation. This is one
aspect of dreaming that Freud got exactly right.
But we don’t usually dream about parts of our day that were
emotionally neutral, such as brushing our teeth or writing emails,
even though we spend a lot of our time doing these run-of-the-mill
things. In the contemporary theory of dreaming, the emotional
content is selected for and given center stage.
Ernest Hartmann suggested that dreams are a way by which the
brain can integrate traumatic or emotional material with more
neutral memories, weaving the information in and thereby decreasing
the emotional response to the content of the trauma.

Several studies have shown that depriving an individual
of sleep disrupts the consolidation of emotional memories in
particular. They also show that REM sleep—the stage of sleep in
which people are most likely to report having dreamt—plays a key
role in consolidating these memories. Once consolidated, these
memories are more easily recalled, especially when they result
from negative emotional experiences.
It’s difficult to study the content of dreams, but we can objectively
measure sleep stages and therefore uncover the effects of
different stages on different aspects of brain function, such as
emotional memories.
REM sleep is a good candidate for emotional therapeutic work,
as sleep expert Matthew Walker at the University of California,
Berkeley, suggests. It’s the time of night when the brain cells
that pump serotonin and norepinephrine (also called noradrenaline)—
neurotransmitters that are involved in mood regulation—are most
strongly inhibited. That means that during REM sleep, when we
are most likely to dream, our brain contains only low levels of the
chemicals that signal anxiety and other emotional states.
At the same time, Walker notes that there is increased activity
in the parts of the brain that encode, consolidate, and retrieve
emotional memories: the hippocampus and amygdala, along
with other regions that are part of the limbic system, which is the
seat of our emotional life, especially when it comes to memories.
There is also increased activity in cells that provide the brain with
acetylcholine, a neurotransmitter with a large number of functions,
including arousal and attention.
Walker has put forth a detailed account of the relationship between
dreaming and our brain’s processing of emotions, in line with the
contemporary theory of dreaming. The punchline is that during
REM sleep, our brains work through past emotional experiences,
stripping away the anxiety and stress from the semantic
content—the useful information that we want to take away from
the experience.
Walker and colleagues point to the fact that limbic regions
that contain emotional memories are active during REM sleep
and mood-related neurotransmitters such as serotonin and
norepinephrine are at their lowest levels. This combination
suggests that we relive our memories, but we don’t feel the
emotions they contain when we dream.
At the same time, our brains are low in noradrenaline and full of
acetylcholine, a neurotransmitter that modulates our attentional
system. That means that the memory can be reactivated and
strengthened, and disassociated from the emotion.
It’s a beautiful model. And it might explain why individuals who
suffer from post-traumatic stress disorder have disturbances in
their emotional memories.

Myth: Dreams are how your subconscious communicates its desires.

Truth: Dreams likely simply reflect the interpreter’s attempt to
make sense of random neural firings.

Lecture 20
Can Brain Scans Read Your Mind?
There are several different ways in which we can measure brain
activity in a healthy person. We can record electrical potentials—
that is, averaged neural activity across many neurons—by placing
electrodes on the scalp. This is called an electroencephalogram
(EEG), and it gives us a sense of what ensembles of neurons are
doing at a very specific point in time.
We can also see where cells have just been active by tracking
the amount of oxygen in the blood flowing to that region. Because
active cells require oxygen, we can use magnetism to detect
changes in the oxygenation of blood in different parts of the brain.
This is how MRI tools have been repurposed to see activity—or
functional MRI (fMRI), with which we see where brain cells are
more active compared with some baseline activity.
Positron-emission tomography (PET) scans are also still used,
though less frequently, because they provide much of the same
information that we can garner from fMRI but involve the injection
of a radioactive substance that is then tracked as it’s taken up by
active cells. PET is largely reserved for diagnosing neurodegenerative
diseases or cancer, rather than for investigating cognition.
There are other ways that we can track electrical activity or
blood flow, but the principles of how we interpret those data are
essentially the same: We want to see what parts of the brain, or
networks, more specifically, are active when a person is doing
something that we’re trying to understand, such as improvising
music, solving a problem, remembering the past, or lying.
This activity tells us that a large number (millions) of neurons are
firing action potentials—sending signals to each other—around
the same time. These signals can be complicated. An increase in
firing does not always mean the same thing. For some cells, their
baseline firing rate is pretty high and it’s a decrease in firing that
is meaningful.
Some regions are likely making computations that don’t involve
millions of cells firing in unison. In some regions, the signals might
be more specific and the relevant information might be coded
sparsely—that is, only a few cells are triggered by an event.
But note that the pictures of activated brains that we often see in
the media are not what they seem: They are not literal photographs
of neurons firing. They are statistical maps, often indicating where
in the brain the tools found more active cells.
The shapes of these maps depend on where the experimenter
sets the statistical threshold. If the scientists lower the threshold in
their analysis, there will be more active areas. If they raise it, there
will be fewer.
There are also neuroimaging studies that focus on the volumes
of different brain structures, rather than their activity—the relative
sizes of different regions. These maps can look very similar to
activity maps to the untrained eye, but they aren’t saying the
same thing. The size of a structure isn’t the same thing as how
active it is.
From the use of neuroimaging tools, we have learned that much
of the brain is active all the time, no matter what we’re doing,
and that these activations often follow patterns, with some
regions fluctuating in activity in sync and other regions acting
more independently.
These findings have reshaped how we think about the brain,
moving away from thinking about it as a modular machine, with
different regions having different functions, and toward a more
networked view, talking about brain circuits that work together.
As a person develops a skill, the functional maps that represent
different stages of training can be distinguished from each other:
In general, first more regions are recruited to help the person
perform the task. With practice, these regions begin to act more
efficiently, with maps showing fewer areas involved but a greater
amplitude of the signal.
Neuroimaging techniques have had a profound influence on our
understanding of the brain, from its networking to its plasticity and
many more findings in between. But they are also too easy to
misinterpret and overstate.
Before we decide to use some imaging tool to track brain activity
or structure, we need to understand what the information gleaned
from that tool would actually tell us. Sometimes, the additional
information that neuroimaging gives us isn’t as useful as the
additional information that we can gain by getting a deeper
understanding of the behavior we want to study.
Experience is subjective. If you activate a part of your brain while
you’re doing something, we can’t be sure that you’re activating
it for the same reasons that another person might be. As the
technology currently stands, the differences between 2 people
might be invisible to neuroimaging.
Most of the brain’s regions have many functions, and just because
a region is active, we can’t be sure exactly which function it’s
performing unless we know more about the behavior in question.
That’s why the best neuroimaging studies involve carefully crafted
tasks that neuroscientists have studied long enough to know how
our minds are engaged by them.
Only when we understand the behavior well can the brain activity
it generates be informative. Brain imaging data are only one more
tool that scientists can use to probe the mind; they should be
considered in addition to work on patients with brain damage and
behavioral experiments, not as a trump card.
There are several regions in the brain that are most often singled
out by the media and that have become the subjects of the most
common misunderstandings. Three of these are the amygdala,
which is involved in modulating emotions; the reward circuitry that
responds to dopamine surges; and the prefrontal cortex, which is
evolutionarily our most recent addition.
The amygdala is an almond-shaped structure in the medial
temporal lobe, deep in the brain roughly behind your ears. Like
many brain regions, the amygdala does a lot of things. From
patients with amygdala damage and from animal studies in which
the amygdala is lesioned, we know that without it, we don’t see the
usual memory enhancement that accompanies emotional events.
Despite common belief, we don’t usually repress negative
memories. Negative emotions such as fear or anxiety reinforce
the vividness of an event: We tend to remember more details of
the environment and other aspects of the experience. But people
and animals with damage to the amygdala don’t preferentially
remember these experiences.
The media often interprets amygdala activation as a sign that
we’re fearful of something or that our behavior is somehow driven
by fear. But if the emotions we feel during the event are positive,
the amygdala notices those, too, and plays a role in making sure
that you learn what it was that led to a positive result.
But the story is much more complicated than that. Damage to the
amygdala can make people experience less fear, but they will still
react to fearful faces. Animals with lesions to the region show a
decrease in sexual, aggressive, and maternal behaviors.
People with a larger amygdala on the left side are more likely
to be taking medication or in psychotherapy for the treatment of
depression. They are more likely to have obsessive-compulsive
disorder, borderline personality disorder, and post-traumatic
stress disorder. Social phobias are correlated with more
activity in the amygdala.
But children with anxiety disorders often have a smaller than
normal amygdala in the left hemisphere. The size of your amygdala
also correlates positively with the size of your social network:
The bigger your network, the bigger your amygdala.

The amygdala has a number of different subregions, each of which
might make a different contribution to the emotional modulation of
our experiences. So, the amygdala is involved in much more than
just fear and aggression.
When a study reports that the amygdala is differentially active
during a particular task or in a particular group of people, on the
surface, we don’t know what that means. We have to learn more
about the behavior in question or the group we’re studying before
we can interpret such a result.
Just because the amygdala is implicated in one function in one
study doesn’t mean that it’s playing the same role in another.
Your amygdalae might light up if you watch a scary movie, but
that doesn’t mean that if they light up when you watch a romantic
comedy, you’re somehow showing a fear response to intimacy.
The reverse inference can’t be made. This type of reverse-
inference making is rampant in the media, but it is almost
never warranted.
Myth: Neuroscientists can read your mind by scanning your brain.

Truth: Your brain is a Swiss army knife or a multipurpose tool:
Many parts of your brain have many different functions, and when
a part of your brain is active in a brain scan, we can’t always
tell which function it’s accomplishing.

Reverse inference seems especially tempting when the reward
circuitry of the brain is involved. The neurotransmitter dopamine
has many different functions in the brain. It’s involved in our
sense of pleasure and our sense of pain. It is called up when
there is something in the environment that might predict reward
or punishment.
But it also plays a role in helping us hold information for a few
moments, in working memory. It also plays a role when we get
nauseated. Too much dopamine in the wrong places can cause
hallucinations, not just euphoria. And too little messes up your
ability to control your muscles, leading to Parkinsonian symptoms.
Each of the brain regions involved in processing rewarding
information also plays multiple roles, so we can’t infer that someone
is feeling pleasure just because his or her reward pathways light
up. Again, the reverse inference doesn’t necessarily hold.
This is perhaps even more true for a region such as the prefrontal
cortex, whose functions vary from keeping your impulses in check
to remembering what you need to pick up at the grocery store
to making complex long-term plans for your future. Of all of our
brain regions, the prefrontal cortex is perhaps the most complex in
terms of neatly assigning function.
It’s also highly interconnected with other brain regions. So, it
seems to have a hand in virtually any kind of thinking that we do.
How can we possibly use reverse inference to give us any kind
of deep understanding with respect to the prefrontal cortex? How
can we interpret activation in this region if we don’t understand
what type of thinking the person in the scanner is doing?
In studies that fail to find activation in expected regions, can
we conclude that those regions aren’t involved in the function
that we’re testing? The answer is no, because there are many
technical reasons for why we wouldn’t observe greater activation
in a particular region.


Lecture 21
Can Adult Brains Change for the Better?

These critical periods are why children who are not exposed to
certain types of stimuli—if they are deaf, for example, or aren’t
spoken to—must then endure a lifetime of disability. If a child
doesn’t get the right type of stimulation, his or her cells won’t
organize themselves in such a way that they can process that
information later in life.
But the work of BDNF isn’t finished once a child reaches adulthood.
Even in adulthood, we now know that we do grow new neurons in
select parts of our brain: regions involved in long-term memory.
So, the idea that once you reach your 20s, you’re stuck with the
brain cells that you have is a myth.
But note that neurogenesis doesn’t happen all over the brain. That
would be a disaster. Instead, we have found adult neurogenesis
in only 2 parts of the human brain, both of which are involved
in creating memories: in one region of the dentate gyrus, a very
specific part of the medial temporal lobe involved in the rapid
formation of new long-term memories; and in the striatum, which
plays a role in planning actions, reinforcement, motivation, and
decision making.
In the striatum, the cells born in adulthood are interneurons
exclusively; that is, they don’t connect to cells outside of their
immediate vicinity. In the medial temporal lobe, the new neurons
turn into granule cells, which are small, tightly packed cells whose
reach is also limited locally.
We think, therefore, that adult neurogenesis enables us to encode
new memories, giving us the opportunity to learn throughout our
lives, but it doesn’t interfere with connections that we’ve made
from previous experiences.
Some of the first evidence that we grow new neurons came from
bird brains. In the 1980s, scientists discovered that canaries grow
new neurons in the hippocampus, the avian analogue of our own
memory powerhouse, when they are learning new songs. And the
more complex the repertoire, the larger their hippocampus.
After this discovery in songbirds, animals that don’t fly were also
shown to possess the ability to grow new brain cells. Neurogenesis
in adulthood was reconfirmed in rodents, our mammalian cousins,
with whom we are relatively close genetically.
From rodents, we’ve learned that the speed at which new
neurons are born can vary; rich environments that provide a lot
of stimulation can increase neurogenesis. Memory tasks and
socializing and even simple play can, too.

In one study, rats that got a lot of exercise—for example, by
running on a wheel—doubled their neurogenesis rate compared to
sedentary ones, who weren’t provided with a running wheel.

It’s not enough to just grow new neurons—they also have to
survive. Animals who were then given the chance to learn a
new skill were left with a greater proportion of surviving new cells
compared with a control group, who experienced neurogenesis
but then weren’t given the opportunity to put those cells to action
by learning a new skill. The new neurons need to be integrated
into adult brains, and that doesn’t happen if the animals
aren’t challenged.
Despite the large similarity between our brains and those of rats,
there were still prominent neuroscientists who refused to believe
that adult neurogenesis was also a human thing. Then, at the turn
of the 21st century, a collaboration between scientists in Sweden
and San Diego finally demonstrated the phenomenon in our
own species.
The authors had learned that the same dye used to track
neurogenesis in rodents had been injected in certain cancer
patients over the previous decades. They then got access to their
brain tissue postmortem and found the same evidence of the birth
of new cells throughout the lifespan in these people.
But more questions remained. Now the field wondered what role
neurogenesis plays in the human mind, what circumstances affect
its rate, and whether we can treat devastating brain diseases with
neuronal stem cells.
Since 2013, neuroscientists have learned that not only is
neurogenesis a common feature of the adult brain, but that there
are important differences between neurogenesis in rodents and
in humans.
In the mammalian brains that have been studied, the growth of new
brain cells is restricted. So far, we’ve found that in rodents, these
neurons end up in either the olfactory bulb, which is responsible
for smell, or the dentate gyrus, a subregion of the hippocampus
that is involved in learning and memory.
The neurons that are born are fairly small and make connections
only with neighboring neurons—they stay local. That makes
sense, because integrating new cells into complicated circuits
might be a recipe for disaster.
Both rodents and humans grow new neurons in the hippocampus.
The other place where new neuron growth begins is called the
subventricular zone in the lateral ventricle. In rodents, these
neurons then migrate to the olfactory bulb, likely because the
sense of smell is of critical importance to a rat.
But in humans, those cells migrate to a part of the brain called
the striatum, responsible for coordinating movement and critical
for long-term learning of motor and other skills—called procedural
memory.

One more interesting difference between rat and human
neurogenesis goes exactly against the logic that some
neuroscientists clung to in justifying their rejection of the initial
findings: the idea that humans can’t possibly grow new neurons
because we can remember things from our very remote past.
The idea was that if memories are stored in neural networks,
those neurons need to remain intact for us to be able to retrieve
old memories. If there was a turnover of neurons the way there is
of skin cells, we would forget our past.
In rats, the proportion of new neurons in the dentate gyrus doesn’t
seem to ever exceed about 10% to 20%. But in humans, by
about age 50, the majority of neurons in the dentate gyrus have
turned over.
How, then, is it possible that we can still remember old memories?
There is growing evidence that the dentate gyrus may play a
bigger role in encoding new memories than in retrieving them.
In fact, rats with dentate gyrus lesions have trouble learning new
things but not remembering old ones. Lesion a different part of
the hippocampus, one in which there isn’t adult neurogenesis, and
you get the opposite pattern: The animals can learn new things
but can’t remember old ones.
That would make sense. We’d need the new cells in the dentate
gyrus to form new memories, but then those memories would be
ultimately stored elsewhere, and we wouldn’t need that region to
retrieve them. Instead, baby neurons there would now be available
to lay down a new set of memories, leaving us with the ability to
learn new things until the day we die.
Myth: After your brain develops fully in your early adulthood, it
just starts a long, slow decline.

Truth: The brain continues to be changeable throughout your lifespan.
There is a connection between BDNF and neurogenesis even in
adulthood. You can influence the amount of BDNF floating around
in your brain and the growth of new neurons with a fairly simple
and cheap treatment: exercise.
Several studies have shown that regular exercise can lead to a
threefold increase in BDNF levels. Even a single session can make
a difference. Exercise is effective at ramping up neurogenesis and
keeping those cells integrated in the brain.
While exercise can increase adult neurogenesis, stress can impair
it. In animal models of depression, neurogenesis decreases under
experimental conditions where the animals are placed under
stress. In these same experiments, administering antidepressant
medications to the stressed animals has been shown to restore
neurogenesis to healthy rates.
There are other examples of people who successfully increase
the volume of specific parts of their brains by training or learning
a new set of skills—not necessarily by growing new cells, but by
increasing the connections between them.
For example, juggling every day for 3 months has been shown
to increase the volume of certain parts of the brain, including the
hippocampus—in 20-year-olds and in 60-year-olds. While the
60-year-olds weren’t juggling as well as their younger counterparts
with the same amount of training, they still showed increases
in brain volume. But once the participants in these experiments
stopped juggling, the regions returned to their previous sizes.
In addition, it seems that musical training can drive brain volume
increases in the parts of the brain that are engaged by the activity.
As we get older, we tend to spend less time learning new things or
engaging our brains in effortful ways. And given that this effort is
what seems to benefit our cognitive function and our brain structure
in measurable ways, our lifestyle choices play a significant role
in whether or not we experience cognitive decline with aging.
And that experience, in turn, can affect our very attitudes
toward longevity.
Science writer David Ewing Duncan conducted an online survey
on the question of how long people want to live and found that
out of more than 30,000 responses, more than half of respondents
agreed that somewhere around 80 is a good enough lifespan.
If we managed to stave off physical and mental decline and save
enough money to economically sustain an extended retirement,
most people would probably push out their desired lifespan,
perhaps past 150.
When it comes to cognition, there are hints that interventions
designed to keep the body healthy, such as exercise, have
powerful effects in terms of staving off age-related mental decline.
But what about the people who seem to be able to stay mentally
sharp well into old age? Is there something we can learn from
these types of people?
There are several labs now studying not only healthy aging but
super-healthy aging—people who thrive in old age and who score
as well as people who are 20 to 30 years younger.
In one study involving around 30 of these people, scientists found
that their anterior cingulate cortex, a region of the brain responsible
for cognitive control, resolving conflict, keeping up motivation, and
perseverance, among other executive functions, was not only
larger than that of their 80-year-old peers but also larger than the
average size in middle-aged people.
We still have a long way to go before we can make any solid
conclusions about what factors combine to support such
successful aging. Certainly, genes are a factor: For most people,
lifestyle choices have a greater influence on our risk of death
before age 80, and genetics seem to account for more of the
individual variability in overall health once we reach our ninth
decade of life.
Super-healthy agers are pretty varied, but what they may have in
common, at least anecdotally, is that they have remained active
members of their community, socializing regularly.
There’s one more brain finding in this group that’s been reported:
They have 3 to 5 times more von Economo neurons in their
anterior cingulate cortex than their peers. These large cells are
thought to play a role in facilitating social interaction.


Lecture 22
Do Special Neurons Enable Social Life?

Some scientists have suggested that our larger brain size was
selected for as we began to use tools. But tool use coincides with
the formation of communities, and the social brain hypothesis,
proposed by British anthropologist Robin Dunbar, is perhaps more
convincing than one in which tools are the driving force shaping
our brains.
Dunbar set out to understand grooming behavior in primates, and in
the course of his research, he noticed a trend: The bigger the social
group in which the primate lives, the bigger its average neocortex.
He then built a model that he could use to predict the social size
of a primate group, given its neocortex size. Then, he stumbled on
an interesting way to apply this model: to figure out how large a
social network humans can reliably sustain.
Dunbar recognized that among primates, our social group size
is fairly big. Our nearest cousins, the chimpanzees, live in social
groups that contain about 50 individuals. And our brains are
much bigger.
Given our neocortex size, Dunbar’s model suggested that we
could maintain somewhere between 100 and 200 casual friends,
those whom you’d invite to a big party. The average is 150, and
this number is called Dunbar’s number.
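Dunbar’s number falls out of a simple log-log regression of mean group size on neocortex ratio (neocortex volume divided by the volume of the rest of the brain). As a rough sketch, here is that model in a few lines of Python; the coefficients are those commonly cited from Dunbar’s 1992 analysis, and the human neocortex ratio of about 4.1 should likewise be treated as an illustrative assumption rather than an authoritative figure:

```python
import math

# Illustrative sketch of Dunbar's regression (coefficients as commonly
# cited from his 1992 primate data; treat as assumptions, not gospel):
#   log10(group size) = 0.093 + 3.389 * log10(neocortex ratio)
def predicted_group_size(neocortex_ratio):
    return 10 ** (0.093 + 3.389 * math.log10(neocortex_ratio))

# Humans have a neocortex ratio of roughly 4.1
print(round(predicted_group_size(4.1)))  # lands close to ~150
```

Plugging in the human ratio yields a prediction in the neighborhood of 150, which is the figure that came to be known as Dunbar’s number.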
Dunbar suggests that much of our minds has been shaped by
natural selection to enable us to navigate social relationships—
build communities that are cooperative and beneficial to
individuals—but that there is an upper limit to a good group size.
Dunbar examined human social groups and found some
fascinating regularities: We tend to spontaneously form groups of
certain sizes. Our best friends and family members, or our most
important support system, is made up of 3 to 5 individuals. Then
there’s a circle of 9 to 15 people who are your close friends. Then
there’s the 30 to 45 people whom you’d invite over for dinner or
with whom you socialize fairly regularly.
Dunbar has found support for his predictions in historical records,
army regiments, and social media apps, such as Facebook.
The social brain hypothesis doesn’t explain all animal species—there
are examples in the animal kingdom of highly social species with
smaller brains—but it does seem to fit the primate tree fairly well.
The relationship might be more complex than the hypothesis
makes it seem. Intelligence is likely a factor, because more
intelligent animals might find it more difficult, or at least more
complicated, to get along.
Nevertheless, we can’t ignore the importance of our interactions
with each other when we are trying to understand how our brains
work. The danger of compelling “just-so” stories, however, is
evident in the proliferation of myths surrounding mirror neurons.
In 1992, around the same time that Dunbar was publishing his
first observations about group size and neocortex, scientists in
an Italian lab were recording activity from neurons in the brains
of macaque monkeys. They were trying to understand how these
neurons support voluntary actions.
These neuroscientists were recording directly from the motor
cortex. They stuck an electrode into this brain region and tracked
electrical signaling as their monkeys were performing goal-
directed actions, such as grabbing a banana and eating it.
They discovered that some cells in this region don’t just fire
when a monkey is grabbing its own snack; they also fire when
that monkey observes another monkey grabbing a raisin. The
observing monkey’s brain is mirroring what’s happening in the
acting monkey’s brain as the acting monkey achieves a goal.
The existence of mirror neurons suggested to some scientists that
we have evolved a special kind of cell that seems to enable us to
imagine how and what another person or primate might be feeling
and thinking. It is arguably the seed of empathy, and one might
argue that the ability to know what someone else might be thinking
is critical for survival in a social group.
Mirror neurons aren’t a morphologically different class of cells.
They look just like the rest of the neurons in the motor cortex
and elsewhere. But they behave differently. They have a different
receptive field.
All neurons have a certain stimulus set that causes them to fire.
Visual cortex cells respond to different visual stimuli, auditory
cortex cells respond to different sounds, and so on. Mirror neurons
fire both when a person (or a monkey) does something intentional
and when he or she observes someone else doing that thing.
The inference that it’s the action of these cells that gives us
empathy is still speculative. Maybe it is, but more likely, they are
just cells in a larger network of neurons that underlies this ability.
This is where the hyperbole begins. If these neurons are
fundamental to empathy, then maybe they are the mechanism
that fails in people who have problems with empathy and social
understanding, such as individuals on the autism spectrum.
Mirror neurons have been used by the media to explain why
we cringe when we see someone hurt on TV or why hospital
patients benefit from visitors. But unfortunately, the truth is much
more complicated.
Simply labeling cells as mirror neurons has led to a lot of
confusion. Are we just talking about the human analogues of the
cells discovered in the macaque, or are we talking about any
cells whose receptive field includes some observation of another
person’s actions?
Underscoring this confusion, we now distinguish mirror neurons
(which we think of as the human analogue of the macaque cells)
from the mirror neuron system (which is a distributed network of
cells that are activated by observation).
But the mirror neuron system is an interesting explanatory
framework that we can use to describe activation in a number
of different circumstances. The explanatory power of mirroring is
compelling but also vague.
We don’t seem to need to activate our motor cortex to understand
what a pianist is trying to express, because people who have
damage to these regions don’t lose the ability to recognize other
people’s intentions.
There’s no evidence that individuals on the autism spectrum have
trouble understanding the actions of others or imitating them.
We still don’t understand exactly what’s gone awry in people
with autism, but the problem seems to lie in attributing intention
to those actions, a skill at least one level up from the activity of
mirror neurons, at least as we currently understand them.
We can’t just check the mirror neurons in a person with autism.
We can’t stick an electrode in someone’s brain and measure direct
activity. But we can record from electrodes that have already been
implanted in patients with epilepsy, for example. With this method,
we have found that similar cells seem to exist in humans. We still
don’t know whether these specific cells are implicated in patients
with autism.
Mirror neurons might have lost favor among neuroscientists, but
as their fame fizzled, another set of cells, this time morphologically
distinct, has begun to take over the spotlight. Time will tell if their
fate will follow that of mirror neurons or if they truly are one of the
features of our brains that have given us our humanity.
Von Economo neurons (VENs), named for the Austrian neurologist
who discovered them in 1929, are also called large spindle cells,
because that’s what they look like. They are found in regions of
your brain that are suspected to have evolved most recently.
These cells have been associated with complex human traits, such
as a sense of self and awareness, and have been noted in other
species who live in societies, such as elephants and dolphins, but
not in many species of primates.
Because they are morphologically distinct, we can see them under
the microscope and therefore count them in different species and
people (postmortem, for now). There might come a time when our
imaging techniques are so sophisticated that we can actually see
what they do in a living brain.
Von Economo cells are abundant in 2 regions in the human brain:
the anterior cingulate cortex and the frontoinsular cortex. These
parts of the brain have been implicated in functions that are
distinctly humanlike, such as our appreciation of music, ability to
recognize ourselves, subjective awareness, and ability to resolve
inner conflicts between different desires.
In an elegant model, neuroscientist Bud Craig has proposed that
the insula—the insular part of the frontoinsular cortex—is intimately
involved in our subjective awareness. He notes that insula
activation in neuroimaging studies has been found during a wide
variety of tasks, including the moment of conscious recognition,
decision making, and self-recognition, which are features of what
most people consider human consciousness.
Craig suggests that what ties these experiences together is
our awareness of them—that the insula is involved in gathering
information from across the brain and giving us a sense of our self
in the present moment.
We don’t know how accurate his model is yet, but we do know
that one thing that distinguishes the frontoinsular and anterior
cingulate cortex from other parts of the brain is the density of von
Economo neurons.
Because of their size, morphological characteristics, and
connections, Craig and others think that they are particularly suited
to transmit highly integrated information about our emotional state
and behavior quickly. They might be the key to how we are able
to become subjectively aware of our own feelings, thoughts,
and actions.
Von Economo neurons are plentiful in adult humans. They are
scarce in infants and grow in number between ages 1 and 4. They
are also present but scarce in gorillas, bonobos, and chimpanzees.
They haven’t been found in macaques.
Patients with frontotemporal dementia (FTD), who lose awareness
of their own emotions and self-consciousness, show a progressive
and relatively specific degradation of von Economo neurons.
Symptoms of this disease include deficits in social and emotional
self-awareness, theory of mind, moral reasoning, and empathy.
While mirror neurons seem to play a role in understanding
and imitating others, von Economo neurons are thought to be
important in our ability to know ourselves. Both knowledge sets
are important for social interactions.
Because von Economo neurons are implicated in social skills,
empathy, and self-awareness and because they develop and
mature when children are between 1 and 4 years of age,
neuroscientists have wondered whether the symptoms of autism
spectrum disorders might also stem from problems related to von
Economo cell maturation. But a few postmortem studies of the
brains of adults with autism showed no differences between their
von Economo cells and those of healthy controls.
Time will tell if von Economo cells will hold the key to much of what
makes us human or if they will fade back into the background the
way that mirror neurons have.
Hanging out with other people—socializing at the expense of
working—is not as good for your brain as reading a book or doing
some other intellectual task.
Our brain size exploded when we started living in groups, and the
worst thing you can do to a baby is deprive him or her of social
stimulation.


Lecture 23
Is Your Brain Unprejudiced?


Researchers have also measured implicit bias using neuroimaging
techniques, tracking brain waves and activation of regions of
the brain involved in emotion, social interactions, and conflict
monitoring. They’ve found that when a person is exposed to
a stimulus that he or she finds threatening, such as a gun, the
amygdala is activated automatically.
The amygdala has several functions, one of which is to ensure
that emotional events are better recalled than neutral ones; it
jumps into action when we feel threatened or fearful. In studies
of implicit bias, scientists have found that when participants are
shown images of black faces, for example, people who show
implicit biases exhibit greater amygdala activation when viewing
these faces than when they’re looking at white faces.
But neuroimaging results aren’t interpretable unless we
understand much of the underlying behavior. And perhaps the
most popular way to uncover implicit biases behaviorally is via the
implicit association test (IAT), which was developed in the 1990s
and has since become a crucial tool in social psychology research.
Basically, the IAT pits 2 competing responses against each other,
forcing you to suppress interference if you have a bias.
The IAT works by assessing your implicit associations—essentially
whether you have made connections between certain attributes,
even if you are unaware of them. For example, do you associate
science with the male gender, even if you respect many female
scientists? The IAT can bring these kinds of unconscious biases
and stereotypes to light.
We know that memories of things that we’re not consciously aware
of can affect our behavior. The premise of the IAT is that, in the
same way, associations that we make implicitly can change how
we act and the types of attitudes that we display.
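The logic of the test can be sketched in a few lines of code: responses are faster when the paired categories match your implicit associations and slower when they conflict, and the score is the standardized gap between the two conditions. The toy function below is loosely modeled on the IAT’s conventional D-score; both the simplified formula and the latency data are illustrative assumptions, not the actual test or real measurements:

```python
import statistics

def iat_d_score(congruent_ms, incongruent_ms):
    """Toy IAT-style score: a positive value means slower responses in
    the incongruent block, hinting at an implicit association. This is
    a simplified stand-in for the conventional D-score, not the real
    scoring algorithm."""
    mean_diff = statistics.mean(incongruent_ms) - statistics.mean(congruent_ms)
    pooled_sd = statistics.stdev(congruent_ms + incongruent_ms)
    return mean_diff / pooled_sd

# Hypothetical response latencies (ms) for the two pairing conditions
congruent = [620, 650, 700, 640, 660]
incongruent = [780, 820, 760, 800, 790]
print(round(iat_d_score(congruent, incongruent), 2))  # positive: slower when pairings conflict
```

A score near zero would suggest no detectable association either way, which is why the test-retest inconsistencies discussed below are a genuine concern: small shifts in latency move the score.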
According to the IAT, most white people in the United States show
an implicit preference for whites over blacks. But only about 50%
of black people tested show an implicit preference for blacks over
whites. Most people prefer young people to older people no matter
how old they are themselves.
But despite the widespread use and popularity of the IAT, it does
have its detractors. Critics point to the fact that studies have
found inconsistencies in an individual’s scores across multiple
test sessions. This is problematic if the test is supposed to detect
biases that we carry around with us all the time.
A simple manipulation—for example, thinking of positive African
American role models—before taking the test can result in a
smaller bias score on the race IAT.
Other studies have shown that bilingual people show more bias
when taking the test in their native language versus in their
secondary tongue. Time of day might even make a difference, as
tests administered in the morning tend to show less bias than tests
taken in the afternoon or evening.
But we shouldn’t disregard the IAT altogether. This flexibility in
terms of showing bias might be a reflection of what is actually
going on in our brains: We might harbor implicit biases, but they
are also fairly malleable, depending on the context.
In fact, when it comes to our biases, context matters a lot, because
biases depend on how we define our social group. One way of
grouping ourselves is by skin color, but at a baseball game, we
might group ourselves by team, rather than by race. The same
can be true of religious denominations and political leanings,
for example.
You are not prejudiced.
Our brains have evolved to take many shortcuts, and one of the
negative consequences of this is that we tend to make automatic
inferences about people who are not like us.
Stereotypes are beliefs about attributes that are thought to be
characteristic of members of particular groups. For example,
opera singers are overweight.
Stereotypes can then lead to prejudice, which is a negative
attitude or emotional response toward a certain group and its
individual members. I don’t like opera singers, for example; they
are gregarious, loud, and egocentric.
Stereotypes and implicit biases are thought to reflect different
underlying neural processes. Implicit biases seem to involve an
amygdala-driven learning system when the bias involves some
assessment of threat. That’s why they are often learned during
emotional experiences, often after only one exposure, and why
they can persist and be difficult to extinguish.
Stereotypes, by contrast, result from conceptual learning, based
in the temporal lobes and the prefrontal cortex. It takes time and
multiple repetitions to ingrain stereotypes, and like any habit, they
can be hard to break. But studies have shown that people can
successfully overcome stereotypes by monitoring their behavior
and consciously trying to weaken the stereotype.
Why are we prone to stereotyping and prejudice in the first place?
Why can’t we just evaluate every situation rationally and uniquely,
rather than being influenced by past associations?
Our brains are prone to taking shortcuts. We can’t possibly
process all the stimulation available to us in a given situation, so
we look for regularities or information that can help us choose how
to behave. Given this natural tendency, categorizing people is one
shortcut that our brains can use to cope with the complexity of
social interactions.
This may help us understand why stereotypes have persisted
throughout the course of our evolution, despite the fact that they
cause so much suffering.
We live in a complex environment and are constantly bombarded
with information. And just as our senses have to take shortcuts to
help us process all this info, these same shortcuts might also be
useful when we’re assessing each other.
The cognitive miser hypothesis suggests that stereotypes simplify
our social environment so that we can choose our interactions
more quickly and efficiently.
Another idea is that we use stereotypes to identify with and fit
into a social group. We emphasize the positive aspects of the
group we want to belong to while at the same time differentiating
ourselves from the group we don’t belong to by tracking their
negative features. This idea stems from social identity theory, the
idea that we craft our social identities on the basis of the groups to
which we belong.
The pressures that shaped our brains into their current marvelous
state were largely driven by our social environment, so it’s no
surprise that we are particularly attuned to evaluating others as
friends or foes. The problem is that what we are taught is not
always true, especially as generations change.
We are born to prefer members of our own “group.” But this
doesn’t mean that we are born with specific forms of prejudice.
For example, we don’t seem to naturally segregate groups by skin
color; that distinction is taught. By some very simple manipulations,
experimenters can alter the way that you group people into “us”
and “them” categories.
All of us belong to multiple different groups, allowing us to
categorize ourselves in many ways: by ethnicity, gender,
generation, political leaning, and so on. Simply by emphasizing a
way of categorizing, researchers can pull you from an out-group
and into an in-group.
Implicit does not mean innate. Just because a bias is unconscious
doesn’t mean it wasn’t taught. Studies of children show that biases
develop over time with the accumulation of experiences.
We are especially prone to laying down biases after emotional
experiences. This is how implicit bias can result in discrimination.
Arguably the most powerful force creating implicit biases is
systemic racism: living in a culture in which bias is prevalent. We
are social creatures, finely tuned to the attitudes and opinions of
those around us, as our brains have arguably evolved to allow us
to live in relatively large social groups, at least compared with our
primate cousins.
So, implicit biases can be influenced by our experiences, parents,
emotions, and society. Another connection that some researchers
have drawn is between bias and our need for self-validation; in
other words, maybe stereotypes feed our ego and help us build
psychological defenses. Categorizing others makes us feel better
about ourselves, in line with our self-serving bias.
There is evidence that we have an implicit ego-building bias: We
prefer our own in-group, whether it’s defined by the sports team
that we support, the college we attended, the religion we follow,
or the color of our skin. We implicitly prefer people, places, and
careers that match our identity in some way.
There are many ways in which biases and stereotypes lead to
social ills rather than social cohesion. Stereotypes can prevent us
from achieving a more complex understanding of others—they get
in the way of critical thinking. And of course they don’t represent all
or even most individuals in a particular group.
When they enhance our egos at the cost of devaluing others,
stereotypes can become the basis for prejudice and discrimination
that erodes the fabric of society. They maintain systems of
privilege and injustice and can block us from understanding how
others really are, rather than how we think they are.
The strength of our biases can shift depending on what state of
mind we’re in, and now there’s evidence that when we’re in a
stereotyping state of mind, it’s not just critical thinking that suffers.
Creativity does, too.
Our fast, automatic, intuitive thought processes can help us
navigate a complex world quickly and efficiently but can also lead
us astray. In contrast, our slow, thoughtful, rational minds can help
us become a civilized society, but this cognitive system is lazy,
as Daniel Kahneman has pointed out, so we need to engage it
deliberately.
Tests such as the IAT are designed to measure fast reactions over
slow thinking. The good news is that interventions—such as the
simple tool of thinking about a role model who happens to belong
to a group we’re biased against—can be effective in quieting down
and even changing our automatic responses.
We don’t need extremes of political correctness to address this
problem in society; instead, awareness of our tendencies, instincts,
and reactions can go a long way toward fixing the problem.
Intergroup interactions are some of the most successful ways to
reduce implicit and explicit biases. These interactions challenge
stereotypes by increasing knowledge about members of the out-
group, humanizing them, reducing anxiety related to interactions
with them, and increasing empathy and perspective-taking. Even
just thinking about a positive interaction, in which a member of
the out-group becomes an ally, can reduce bias as measured by
the IAT.
Racism, implicit bias, and stereotyping are ingrained in our society
and leave a trace in our brains as a result. But the good news
is that there are effective interventions, and being motivated to
override automatic instincts is usually enough to stop us from
behaving badly.


Lecture 24
Does Technology Make You Stupid?


Probably the most common fear is that access to smartphones is
killing our attention span, or how long we can focus on something.
Do we really mean that smartphones are killing our ability to focus,
or is it that our tolerance for boredom has decreased?
What evidence do we have that the same person who flips
through social media quickly cannot focus his or her attention for
longer periods of time on other tasks? The same people who are
accused of being addicted to social media, who won’t spend more
than a few seconds on any given post, can still get lost in a novel
or a movie.
You might say that we’re talking about 2 different skills: Getting
lost in something interesting is not the same as having to focus on
a topic, book, or paper that is less about entertainment and more
about education. From that perspective, attention span is more a
question of fighting boredom, or keeping oneself actively engaged
with the material at hand, rather than relying on the presenter to
hold our focus.
What about the idea that people in the past were able to tolerate
boredom much better and that smartphones and other tools have
made us less able to control our own minds? Most people can
focus on things for hours; it’s just that those things that keep our
attention change with each generation, and with each individual.
It depends on what we find entertaining, whether it is musical
composition or comparing vacation photos.
Doctors’ offices have had magazines in them for decades because
waiting, with nothing but our minds to occupy us, is an aversive
state for many people. This is a phenomenon that’s been around a
while—not one that emerged in the Internet age.
In these instances, we confront a basic fact about the way our
brains are wired: We get stressed when we feel as though we
aren’t in control.
John Eastwood at York University has proposed that boredom is a
function of feeling that you want to engage in a satisfying activity,
but for some reason, you can’t. And you attribute that reason to
something in the environment, which you can’t control.
The doctor’s waiting room or the long flight are situations in which
you don’t have control over the main activity you’re involved with.
To regain some control, and to minimize this aversive state, people
turn to some diversion. Maybe it used to be a magazine while now
it’s a smartphone, but the human tendency is essentially the same.
Being bored is considered shameful by many people; many people
look down on people who are easily bored, seeing it as a sign of
a lack of intelligence. But boredom can actually lead to creativity.
The problem now may be that we have fewer opportunities to learn
how to entertain ourselves using only our minds. With smartphones
and tablets, the range and quality of the entertainment at our
fingertips has changed dramatically, which means that it’s
easier to assert control by turning to our devices when we’re
feeling bored.
Some would argue that checking social media doesn’t compare
with reading Anna Karenina, but the truth is that we have that
choice: The novel is just as accessible as any other application or
game. Why don’t we always choose literature over status updates?
The answer is different for different people, but for many people,
the rewards of getting lost in a great novel take a bit longer to
reap. You need time to get lost in the story, and having to interrupt
your experience when the flight lands or the doctor calls you in can
be jarring—reminding you of the fact that you’re not in control of
your mental state.
But checking out a photo of your niece or reading a text message
from a friend can fit into any tiny time period, giving you a tiny
sense of accomplishment when called away from the activity,
rather than reminding you that you’re waiting on someone else.
John Eastwood worries that smartphones have made us less
skilled at tackling boredom, putting us on a slippery slope in which
we tolerate it less and less. Not having to submit to boredom, we
don’t learn how to eliminate it using just our thoughts. He has
likened it to an addiction, suggesting that the more you do it, the
more you feel the need to do it.
You might have come across headlines purporting that checking
social media is addictive—that people behave as if they are on
drugs and that the brain looks like it does when you take cocaine.
This is an exaggeration.
The dopamine-mediated reward system can get activated—which
is where these comparisons come from—but dopamine isn’t just
a reward chemical, and the regions involved don’t just light up
when you’re enjoying yourself. It’s more like a salience network,
activated when something in the environment is worth paying
attention to.
You probably recognize that checking social media is not as
engaging as getting lost in a great novel, and we can see the
difference in how the brain is activated, too. But checking social
media can be more rewarding than just waiting for something to
happen, and being interrupted while on your phone might be less
aversive than if you were lost in a novel. So, you’ll see more of the
dopamine system activated when flipping through some app than
you will if you’re just sitting in the doctor’s office, bored.
But studies are also showing that instead of the Internet making
people more distractible, it’s likely that people who are more
distractible to begin with struggle more with the nefarious aspects
of technology use.
Does computer use make you less intelligent and more reliant on
technology? Some studies have shown the opposite. For example,
one study from 2010 showed that people who use computers more
also tend to show better performance on certain cognitive tests.
Researchers found that frequent computer use was associated
with better overall cognitive performance across adulthood, even
when they accounted for age, education, sex, and health.
You might argue that computers are generally used more by
smarter people, so one would expect frequent users to perform
better on these tests, simply because they’re smarter. But even
when the researchers controlled for intelligence, they still found
a correlation between computer use and executive function—
specifically, greater speed at switching between tasks.
This finding is in line with other
studies that have also found an association between more
frequent computer use and better cognitive outcomes in older
adults. In fact, people who have fewer intellectual and educational
advantages seem to benefit the most from computer use.
In another large study, computer use was found to be associated
with a 30% to 40% lower risk of incident dementia—that is, among
the men studied, dementia that was diagnosed after the study
began rather than before.
Even a relatively passive activity, such as browsing the Internet,
engages the prefrontal cortex, where our executive functions,
such as reasoning and decision making, reside. But older adults
who are new to the Internet don’t show this same pattern of
activation; they just show engagement of parts of the brain used
while reading. It seems that, with experience, Internet use can
change how the brain is engaged during browsing.
Time is a factor. Interventions in which computer use has been
used as a treatment for up to a year have generally failed to show
positive effects in older adults. But over the course of many years,
the efforts seem to pay off.
Technology will make you stupid.
Spending a lot of time doing anything will rewire your brain, but
the uses of technology vary widely, and some of them actually
make us smarter.
Many people complain that knowledge is superficial these days—
that our memories are harmed by the fact that we don’t need them
as much as we used to. Have we made a Faustian bargain with
computers, building their memory at the cost of our own?
Research has shown that when people think that information will
be available to them by some other means in the future, they
don’t remember it as well. The effort we make when trying to learn
information plays a big role in terms of our ability to recall it later.
The availability of information via search engines has made us feel
as though it’s unnecessary to remember almost anything. Perhaps
the skills that we need are those that involve learning how to find
the information, rather than remembering its content.
Google is changing how we remember. We are better at
remembering where to find facts than we are at remembering the
facts themselves. But maybe, as the number of facts available
to us skyrockets, this skill is actually more useful to us in the
long term.
The brain is both malleable and adaptable. As our technological
environment changes, our brains develop different skills that help
us meet the distinctive challenges of the new environment.
Initially, people who spend more time on social media do spend
less time in face-to-face interactions, but even that is changing.
In 1998, Robert Kraut and his colleagues published a study of 73
households that had just gotten access to the Internet. The study
made headlines because it showed that the more these families
used the Internet, the less they communicated with each other.
Their social circles declined, and they became more depressed
and lonely. The Internet was hurting their social lives. Kraut called
it the Internet paradox.
But in 2002, Kraut and colleagues published a follow-up study that
didn’t generate nearly as much attention in the press. The study
reported that 3 years later, the negative effects of the Internet had
dissipated in the original families, and in a larger sample of 406
new computer and television users, using the Internet actually
improved communication, social involvement, and well-being.
The best results were seen in people who are naturally outgoing. Kraut and colleagues called this the rich-get-richer effect: they found poorer outcomes for people who were less social or had less support. And people who are shy tend to have fewer friends on social media.
But contrary to the idea that social media disconnects us, a study from 2012 showed that increasing the number of times a person posts on Facebook correlates with feelings of greater connection to his or her community. And this intervention, prompting participants to post more regularly, was especially beneficial to people who are naturally shy.
Social media gives us new ways to stay in touch and connect. And
we learn to adapt our own online behavior if we find that these
technologies are harming us. If Facebook makes you sad, you’ll
learn to use it differently, or not at all.
Having online identities affects how we think about ourselves,
in part because we now have access to much more information
about our past.
Technology is a tool that can dramatically expand our capacity to
store and retrieve information. If you think of the Internet as an
extension of our memory, for example, then there’s no question
that it is a vast improvement over the capacity of any individual
human brain. But some people still balk at the idea that technology
can enhance certain characteristically human functions, such
as creativity.
We already know that music algorithms can write, or at least evaluate, pop songs, and these algorithms can even make human composers more creative, giving them access to many more soundscapes and experiments in music than would be possible without them.