Can week-old pizza cause psychedelic hallucinations? Does coffee make you smarter? Or does it just make you do dumb stuff faster?
Like much of psychology itself, questions like these can seem pretty intuitive. I mean, people may not be the easiest organisms to understand, but you're a person, right? So you must be qualified to draw, like, some conclusions about other people and what makes them tick.
But it's important to realize that your intuition isn't always right. In fact, sometimes it is exactly wrong, and we tend to grossly underestimate the dangers of false intuition. If you have an idea about a person and their behavior that turns out to be right, that reinforces your trust in your intuition. Like if one of my buddies, Bob, eyes that deep-dish pizza that's been in the fridge for the past week, eats it anyway, and soon starts to wig out, I'll say, "Dude, I told you so." But if I'm wrong and he's totally fine, I probably won't ever think about it again.
This is known as "hindsight bias," or the "I-knew-it-all-along" phenomenon. This doesn't mean that common sense is wrong; it just means that our intuitive sense more easily describes what just happened than predicts what will happen in the future. Another reason you can't blindly trust your intuition is your natural tendency toward overconfidence. Sometimes you just really, really feel like you're right about people when actually you're really, really wrong. We've all been there.
We also tend to perceive order in random events, which can lead to false assumptions. For example, if you flip a coin five times, getting all tails is exactly as likely as getting one specific alternating sequence of heads and tails: each works out to (1/2)^5, or 1 in 32. But we see the series of five tails as something unusual, as a streak, and thus give that result some kind of meaning that it very definitely does not have.
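If you want to check that arithmetic yourself, here's a quick Python sketch (my illustration, not part of the episode) that computes the exact odds and confirms them by simulation:

```python
import random

# Any one specific sequence of 5 fair coin flips has probability (1/2)^5.
print("exact:", 0.5 ** 5)  # 0.03125, i.e., 1 in 32

# Simulate a million five-flip runs and count two "special-looking" outcomes.
trials = 1_000_000
streak = alternating = 0
for _ in range(trials):
    flips = "".join(random.choice("HT") for _ in range(5))
    streak += flips == "TTTTT"       # five tails in a row
    alternating += flips == "HTHTH"  # one specific alternating sequence

print("TTTTT:", streak / trials)       # ~0.03125
print("HTHTH:", alternating / trials)  # ~0.03125, the same odds
```

The streak only feels special; the coin doesn't know the difference.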
That is why we have the methods and safeguards of psychological research and experimentation, and the glorious process of scientific inquiry. They help us get around these problems and basically save the study of our minds from the stupidity of our minds. So I hope it won't be a spoiler if I tell you now that pizza won't make you trip, and coffee doesn't make you smart. Sorry.
[Intro]
In most ways, psychological research is no different from any other scientific discipline: step one is always figuring out how to ask general questions about your subject and turn them into measurable, testable propositions. This is called operationalizing your questions.
So you know how the scientific method works: it starts with a question and a theory. And I don't mean theory in the sense of, like, a hunch that, say, a quad shot of espresso makes you think better. In science, a theory is what explains and organizes lots of different observations and predicts outcomes. And when you come up with a testable prediction, that's your hypothesis.
Once your theory and hypothesis are in place, you need a clear and common language to report them with. For example, defining exactly what you mean by "thinking better" in your espresso hypothesis will allow other researchers to replicate the experiment. And replication is key. You can watch a person exhibit a certain behavior once, and it won't prove very much, but if you keep getting consistent results, even as you change subjects or situations, you're probably on to something.
This is a problem with one popular type of psychological research: the case study, which takes an in-depth look at one individual. Case studies can sometimes be misleading, because by their nature they can't be replicated, so they run the risk of over-generalizing. Still, they're good at showing us what CAN happen, and they end up framing questions for more extensive and generalizable studies. They're also often memorable, and a great storytelling device psychologists use to observe and describe behavior. Like, say the smell of coffee makes Carl suddenly anxious and irritable. That obviously doesn't mean it has the same effect on everyone; in fact, Carl has terrible memories associated with that smell, so his case is actually quite rare. Poor Carl. But you would still have to look at lots of other cases to determine that conclusively.
Another popular method of psychological research is naturalistic observation, where researchers simply watch behavior in a natural environment, whether that's chimps poking anthills in the jungle, kids clowning in a classroom, or drunk dudes yelling at soccer games. The idea is to let the subjects just do their thing without trying to manipulate or control the situation. So yeah, basically just spying on people. Like case studies, naturalistic observations are great at describing behavior, but they're very limited in explaining it.
Psychologists can also collect behavioral data using surveys or interviews, asking people to report their opinions and behaviors. Sexuality researcher Alfred Kinsey famously used this technique when he surveyed thousands of men and women about their sexual histories and published his findings in a pair of revolutionary texts, Sexual Behavior in the Human Male and Sexual Behavior in the Human Female.
Surveys are a great way to access consciously held attitudes and beliefs, but how you ask the questions can be tricky; subtle word choices can influence results. For example, more forceful words like "ban" or "censor" may elicit different reactions than "limit" or "not allow." And asking "Do you believe in space aliens?" is a much different question than "Do you think there is intelligent life somewhere else in the universe?" It's the same basic question, but in the first version the subject might assume you mean aliens visiting Earth, making crop circles, and abducting people and poking them.
And if how you phrase surveys is important, so is who you ask. I could ask a room full of students at a pacifist club meeting what they think about arms control, but the result wouldn't be a representative measure of where students stand, because there's a pretty clear sampling bias at work here. To fairly represent a population, I'd need a random sample, in which all members of the target group, in this case students, have an equal chance of being selected to answer the question.
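As a rough sketch of what that means in practice (names and numbers made up), drawing a simple random sample from a full roster is one line of Python:

```python
import random

# Hypothetical roster of the entire target group -- every student,
# not just the ones who showed up to the pacifist club meeting.
students = [f"student_{i}" for i in range(5000)]

# random.sample gives every student an equal chance of being picked, no repeats.
survey_sample = random.sample(students, k=100)
print(survey_sample[:5])
```

The hard part in real research isn't the sampling code; it's getting an honest roster of the whole population in the first place.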
So once you've described behavior with surveys, case studies, or naturalistic observation, you can start making sense of it, and even predict future behavior. One way to do that is to look at how one trait or behavior is related to another, or how they correlate.
So let's get back to my buddy Bob, who seems to think that his refrigerator is actually some kind of time machine that can preserve food indefinitely. Let's say that Bob has just tucked into a lunch of questionable leftovers: pizza that may very well have had a little bit of fungus on it. But he was hungry, and lazy, and so he doused it in Sriracha. Suddenly, he starts seeing things: green armadillos with laser-beam eyes.
From here we could infer that eating unknown fungus predicts hallucinations; that's a correlation. But correlation is not causation. Yes, it makes sense that eating questionable fungus would cause hallucinations, but it's possible that Bob was already on the verge of a psychotic episode, and those fuzzy leftovers were actually benign. Or there could be an entirely different factor involved: maybe he hadn't slept in 72 hours, or had an intense migraine coming on, and one of those factors caused his hallucinations. It's tempting to draw conclusions from correlations, but it's super important to remember that correlations predict the possibility of cause-and-effect relationships; they cannot prove them.
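To see how a lurking third factor can manufacture a correlation, here's a toy simulation (entirely made-up numbers, and definitely not Bob's medical records) where sleeplessness drives both the risky eating and the hallucinations, yet the two outcomes still correlate with each other:

```python
import random

def pearson(xs, ys):
    # Standard Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
sleepless = [random.random() for _ in range(1000)]            # hidden factor
ate_fungus = [s + random.gauss(0, 0.2) for s in sleepless]    # driven by sleeplessness
hallucinated = [s + random.gauss(0, 0.2) for s in sleepless]  # also driven by sleeplessness

# The two outcomes never influence each other, yet they still correlate.
print(f"r = {pearson(ate_fungus, hallucinated):.2f}")
```

A naive reading of that r value would conclude the fungus did it, which is exactly the trap.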
So we've talked about how to describe behavior without manipulating it, and how to make connections and predictions from those findings. But that can only take you so far; to really get to the bottom of cause and effect in behavior, you're gonna have to start experimenting.
Experiments allow investigators to isolate different effects by manipulating an independent variable and keeping all other variables constant, or as constant as you can. This means that they need at least two groups: the experimental group, which is gonna get messed with, and the control group, which is not gonna get messed with.
Just as surveys use random samples, experimental researchers need to randomly assign participants to each group to minimize potential confounding variables, or outside factors that may skew the results. You don't want all the grumpy teenagers in one group and all the wealthy Japanese surfers in the other; they gotta mingle.
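A minimal sketch of random assignment (hypothetical volunteers, Python purely for illustration) might look like this:

```python
import random

participants = [f"volunteer_{i}" for i in range(30)]  # whoever you wrangled up

random.shuffle(participants)  # shuffle first, so assignment ignores every trait

# Deal people round-robin into the three groups we'll use below.
groups = {"control": [], "low_dose": [], "high_dose": []}
names = list(groups)
for i, person in enumerate(participants):
    groups[names[i % 3]].append(person)

for name, members in groups.items():
    print(name, len(members))
```

With the list shuffled first, the grumps and the surfers end up scattered across all three groups instead of clumping in one.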
Now, sometimes one or both groups aren't informed about what's actually being tested. For example, researchers can test how substances affect people by comparing their effects to those of placebos, or inert substances. And often the researchers themselves don't know which group is experimental and which is control, so they don't unintentionally influence the results through their own behavior, in which case it's called, you guessed it, a double-blind procedure.
So let's put these ideas into practice in our own little experiment. Like all good work, it starts with a question. The other day, my friend Bernice and I were debating caffeine's effect on the brain. Personally, she's convinced that coffee helps her focus and think better, but I get all jittery like a caged meerkat and can't focus on anything. And because we know that overconfidence can lead you to believe things that are not true, we decided to use some critical thinking.
So let's figure out our question: "Do humans solve problems faster when given caffeine?" Now we gotta boil that down into a testable prediction. Remember: keep it clear, simple, and eloquent so that it can be replicated. "Caffeine makes me smarter" is not a great hypothesis. A better one would be, say, "Adult humans given caffeine will navigate a maze faster than adult humans not given caffeine." The caffeine dosage is your independent variable, the thing that you can change, so you'll need some coffee. Your result, or dependent variable (the thing that depends on the thing you can change), is going to be the speed at which the subject navigates this giant corn maze.
Go out on the street, wrangle up a bunch of different kinds of people, and randomly assign them to three different groups. Also, at this point the American Psychological Association suggests that you acquire everyone's informed consent to participate. You don't want to force anyone to be in your experiment, no matter how cool you think it is.
So the control group gets a placebo, in this case decaf. Experimental group one gets a low dose of caffeine, which we'll define as 100 milligrams: just an eye-opener, like a cup of coffee's worth. Experimental group two gets 500 milligrams, more than a quad shot of espresso dunked in a Red Bull. Once you dose everyone, turn them loose in the maze and wait at the other end with a stopwatch.
All that's left is to measure the results from the three groups and compare them to see if there's anything conclusive. If the highly dosed folks got through the maze twice as fast as the low-dose and placebo groups, then Bernice's hypothesis was supported, and she can rub my face in it, saying she was right all along. But really, that would just be the warm flush of hindsight bias telling her something she didn't actually know until we tested it.
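To make that comparison concrete, here's what the tally might look like in Python, with made-up maze times rather than real data:

```python
# Hypothetical maze-completion times in seconds, one list per group.
times = {
    "placebo":   [312, 298, 305, 321, 290],
    "low_dose":  [280, 295, 301, 276, 288],
    "high_dose": [150, 162, 149, 158, 171],
}

# Compare group means; a real analysis would also test whether the
# differences are statistically significant, not just eyeball them.
for group, samples in times.items():
    mean = sum(samples) / len(samples)
    print(f"{group}: mean = {mean:.1f} s over {len(samples)} runs")
```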
Then, because we've used clear language and defined our parameters, other curious minds can easily replicate this experiment, and we can eventually pool all the data together and have something solid to say about what that macchiato is doing to your cognition, or at least to the speed at which you can run through a maze. Science: probably the best tool that you have for understanding other people.
Thanks for watching this episode of Crash Course Psychology. If you paid attention, you learned how to apply the scientific method to psychological research through case studies, naturalistic observation, surveys and interviews, and experimentation. You also learned about different kinds of bias in experimentation and how research practices help us avoid them.
Thanks especially to our Subbable subscribers, who make this and all of Crash Course possible. If you'd like to contribute to help us keep Crash Course going, and also get awesome perks like an autographed science poster, or even be animated into an upcoming episode, go to Subbable.com/CrashCourse to find out how.
Our script was written by Kathleen Yale and edited by Blake de Pastino and myself. Our consultant is Dr. Ranjit Bhagwat. Our director and editor is Nicholas Jenkins, our script supervisor is Michael Aranda, who is also our sound designer, and our graphics team is Thought Café.