
Best Practice: Why Research Is Critical

Updated: Jun 25



We have based our Best Practice series on Dr. Beth Morling's Research Methods in Psychology (5th ed.). We encourage you to purchase it for your bookshelf. If you teach research methods, consider adopting this best-in-class text for your classes.


Dr. Beth Morling is a distinguished Fulbright scholar and was honored as the 2014 Professor of the Year by the Carnegie Foundation for the Advancement of Teaching.



With more than two decades of experience as a researcher and professor of research methods, she is an internationally recognized expert and a passionate advocate for the Research Methods course. Morling's primary objective is to empower students to become discerning critical thinkers, capable of evaluating research and claims presented in the media.




In this post, we will explore a pivotal question addressed by Chapter 2: "How do we know what to believe?"




Every day, we’re bombarded with claims—on TikTok, in conversations with friends, from news headlines, and even from supposed experts. Some of these claims are supported by solid evidence, while others are not. The goal of this post is to help you think like a scientist when it comes to evaluating these claims. We’ll take a close look at how personal experience, intuition, and authority often shape what people believe—and how these sources, while often compelling, can lead us astray. We’ll contrast these with the research-based approach used by psychological scientists to arrive at more reliable conclusions.


This is not just an academic exercise. Every day, people make decisions about their health, relationships, finances, and even who to trust based on untested beliefs or flashy claims. By learning to recognize the strengths and limitations of different sources of knowledge, you’ll be better equipped to sort fact from fiction. For instance, you’ll be able to identify when someone is relying on personal experience (“It worked for me!”) versus when they’re referring to a large, peer-reviewed study. You’ll also be able to spot the subtle ways that intuition can lead you astray, even when it feels convincing. Just because something makes sense, feels right, or is endorsed by someone you respect doesn’t mean it’s true.


In addition, we’ll walk through how to find trustworthy research and how to read scientific articles in a way that makes them less intimidating. You'll learn that the structure of empirical research follows a pattern—from the abstract to the discussion—that you can use to your advantage. Finally, we’ll talk about the growing threat of disinformation and how you can protect yourself by using the same critical thinking skills you’ll develop in this course. This post doesn’t just give you content—it gives you a cognitive toolkit for making better decisions in school, at work, and in life.



Experience Versus Research


Personal experience can be powerful. When something seems to work for us or someone we trust, it’s easy to believe in it wholeheartedly. For example, maybe you’ve felt calmer after hitting a punching bag when you were angry and concluded that venting helped. But psychological scientists ask, “Compared to what?” That’s the key difference. In science, we need a comparison group to evaluate whether a particular approach truly works.


Let’s dig deeper into this. In your daily life, you almost never have access to a true comparison group. If you felt better after visiting a rage room, you can’t say for sure it was the smashing that helped. Maybe you were already starting to calm down. Perhaps you had a good conversation with a friend afterward, or maybe the activity simply distracted you from your anger. Without a comparison group—people who didn’t go to the rage room—you can’t tell what caused the change.
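To make the "Compared to what?" logic concrete, here is a minimal Python sketch with invented numbers. It simulates anger ratings that fade naturally over time for everyone; looked at alone, the rage-room group seems to prove that venting works, but the comparison group reveals the same improvement without any smashing.

```python
import random

random.seed(1)

def simulate_person(visited_rage_room):
    """Anger rated 0-10 before and one hour later.

    In this invented scenario, anger simply fades with time for
    everyone; the rage room adds no real benefit.
    """
    before = random.uniform(6, 9)
    natural_decline = random.uniform(2, 4)   # happens to everyone
    treatment_effect = 0.0                   # assume no true effect of smashing
    after = before - natural_decline - (treatment_effect if visited_rage_room else 0)
    return before, after

rage_room = [simulate_person(True) for _ in range(100)]
comparison = [simulate_person(False) for _ in range(100)]

def mean_change(group):
    """Average drop in anger from before to after."""
    return sum(before - after for before, after in group) / len(group)

print(f"Rage-room group calmed down by {mean_change(rage_room):.1f} points")
print(f"Comparison group calmed down by {mean_change(comparison):.1f} points")
# Looking only at the rage-room group, the drop looks like proof that
# venting works; the comparison group shows the same drop without it.
```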


A real-world example illustrates the dangers of relying solely on experience. For decades, radical mastectomy was considered the gold standard treatment for breast cancer. Doctors observed patients' improvement and assumed it was due to the procedure. But they never systematically compared this aggressive surgery to less extreme alternatives. When scientists finally conducted a randomized clinical trial, they found that radical mastectomy was no more effective than simpler surgeries. The key difference? The research employed comparison groups and systematic observation, whereas the doctors relied on personal stories and assumptions.


The same principle applies in psychology. Brad Bushman’s study on anger and catharsis provides a great example. Participants who imagined their antagonist’s face while punching a bag felt more aggressive afterward, not less. Those who sat quietly calmed down the most. Without comparison groups, the participants might have believed that venting helped, but the research revealed the opposite.


Another problem with experience is that it’s usually confounded. That means other variables are changing alongside the one you’re interested in. Maybe you felt better after drinking herbal tea, but did you also go to bed earlier or skip your afternoon coffee? Research designs aim to control for all other variables, allowing only one variable to change at a time. That’s how we can identify actual cause-and-effect relationships. In everyday life, it's almost impossible to do that.


Scientific research provides a clearer and more reliable path to understanding how things work. It gives us tools to examine whether our impressions are correct, and it helps protect us from being misled by coincidences, assumptions, or wishful thinking. Experience matters, but research is the lens that helps us see it accurately.



Why Research Is Probabilistic


Understanding that research is probabilistic is one of the most important takeaways in the field of psychological science. When we say research is probabilistic, we mean that its conclusions are not intended to apply to every individual case, but rather to identify patterns that occur for most people most of the time. This might sound like a limitation, but it’s actually a strength. Science acknowledges that people are different and that there will always be individuals who deviate from the norm. Still, if a study finds that a majority of people benefit from mindfulness meditation, that finding can guide practice, even if it doesn’t work for everyone.


Let’s say a psychologist runs a study on a new anxiety-reduction technique and finds that 80% of participants report feeling less anxious afterward. That’s a strong result, but it also means 20% didn’t benefit. The conclusion of the study is not that the treatment is perfect; it’s that the treatment is probably effective for a large number of people. This is what makes science useful for policymaking, clinical decisions, and education: it provides general guidance based on patterns of evidence, rather than guesses or exceptions.
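As a rough illustration of what an "80% improved" finding does and does not tell you, here is a small Python sketch; the 0.80 rate is simply the hypothetical figure from the paragraph above, not a real result.

```python
import random

random.seed(2)

IMPROVEMENT_RATE = 0.80   # hypothetical rate from the example above

def try_technique():
    """Return True if this particular simulated person benefits."""
    return random.random() < IMPROVEMENT_RATE

clients = 1000
improved = sum(try_technique() for _ in range(clients))

print(f"{improved} of {clients} simulated clients improved "
      f"({improved / clients:.0%}).")
print(f"About {clients - improved} did not, even though the technique "
      "'works' in the probabilistic sense.")
```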


It’s important to realize that even compelling personal stories can be misleading in light of probabilistic evidence. For example, someone might say, “My cousin smoked two packs a day and lived to 95!” While that may be true, the overwhelming body of evidence shows that smoking greatly increases the risk of lung cancer and heart disease. Anecdotes do not invalidate research—they are individual data points, not trends. Anecdotes may feel emotionally persuasive, but science operates through empirical evidence and statistical analysis. When thousands of studies and millions of data points converge on a pattern, that pattern is worth trusting.


Probabilistic thinking also helps us manage our expectations and make better predictions. If you know that a particular intervention helps 70% of people, then you can try it with the understanding that it may not work for you, but it’s still worth trying. This also helps in areas like health communication and advertising, where exaggerated or absolute claims (“Guaranteed results!”) should immediately raise red flags. Scientists never promise guarantees. They deal in likelihoods, not certainties.


In everyday life, we often feel uncomfortable with uncertainty, so we tend to seek definitive answers. But science teaches us to tolerate some ambiguity and to seek evidence that increases our confidence in a claim.

That’s why researchers use large sample sizes, statistical analyses, and replication to determine what is typically true, rather than what is always true. And when multiple studies point in the same direction, we can be more confident in the results. Thinking probabilistically helps us stay grounded and avoid being misled by isolated or sensational examples. It’s a mindset that fosters better decision-making across all areas of life.
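The value of large samples and replication can be seen numerically as well. This illustrative sketch repeatedly estimates that same hypothetical 80% improvement rate from small and large samples; the small-sample estimates swing widely, which is exactly why single small studies are read with caution.

```python
import random

random.seed(3)

TRUE_RATE = 0.80  # the hypothetical "true" improvement rate

def estimate_rate(sample_size):
    """Run one simulated study and return its observed improvement rate."""
    improved = sum(random.random() < TRUE_RATE for _ in range(sample_size))
    return improved / sample_size

for n in (10, 50, 500):
    estimates = [estimate_rate(n) for _ in range(5)]   # five "replications"
    formatted = ", ".join(f"{e:.0%}" for e in estimates)
    print(f"n = {n:>3}: five studies estimated {formatted}")
# With n = 10, individual studies can land anywhere from roughly 60% to 100%;
# with n = 500, every replication clusters close to the true 80%.
```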



Intuition Is Biased


We often like to think of our intuition as a superpower—our built-in ability to make fast, accurate judgments. And sometimes, it does help us make quick decisions, like catching a falling glass or sensing that someone is upset. However, when it comes to evaluating psychological claims or determining what works and what doesn’t, our intuition is often deeply flawed. One of the main reasons is that intuition is shaped by cognitive biases—mental shortcuts that help us make decisions quickly but often lead us astray.


Let’s start with the availability heuristic. This bias means that we tend to judge the frequency or likelihood of something based on how easily examples come to mind. For instance, if you hear about two tragic teen suicides in your community, it may suddenly feel like suicide is the most common cause of teen death, even though statistics show that accidents are far more common. Similarly, constant media coverage of airplane crashes can make flying feel dangerous, even though driving is statistically much riskier. The vividness of an example, its emotional impact, or how recently it occurred can make it seem more common than it really is.


Another major bias is confirmation bias. This is the human tendency to look for, notice, and remember information that supports what we already believe—and to ignore or discount information that contradicts our beliefs. Let’s say someone believes that the COVID-19 vaccine causes side effects. When they search online, they might only look at sites that support that belief or interpret ambiguous information in a way that reinforces their views. They might ignore large-scale studies showing the vaccine’s safety or dismiss those findings as biased. In other words, confirmation bias reinforces our existing views, making it harder to learn new information or correct misconceptions.


We’re also swayed by good stories—narratives that make intuitive sense, even when they’re wrong. Freud’s idea of catharsis is a classic example. The metaphor of the human mind being like a steam engine, where pressure builds up and must be released, makes sense. We’ve all felt the need to “blow off steam.” However, as we saw in Bushman's research, venting anger often increases aggression rather than reducing it. Still, people continue to believe in catharsis because the story feels right. Likewise, programs like Scared Straight seem logical—exposing teens to the harsh realities of prison life should scare them into behaving better. Yet research consistently shows these programs increase criminal behavior. We’re drawn to stories that are familiar, emotionally satisfying, or culturally reinforced, even when the data tell a different story.


Then there’s the bias blind spot—our belief that we are less biased than other people. It’s easy to spot biases in others. You might think, “My friend always finds articles that agree with her political views,” but fail to notice that you do the same thing. In one study, people rated themselves as less likely to fall for psychological biases than the average person, even though that logically can’t be true for everyone. This blind spot makes it difficult for us to question our own thinking or recognize when we’re being irrational.


The problem with all these biases is that they feel natural. We don’t usually notice when we’re using the availability heuristic, engaging in confirmation bias, or being swayed by a good story. That’s why scientific reasoning is so important. Scientists design studies specifically to guard against these biases. They use control groups, random assignment, blinding procedures, and peer review to ensure their findings aren’t just the result of faulty intuition. As students of psychology, our job is to develop that same kind of self-awareness and skepticism. Instead of assuming our gut feelings are right, we learn to ask, “What does the evidence say?” “Could I be missing a key comparison?” or “Am I only seeing what I want to see?” Scientific reasoning isn’t perfect, but it’s a whole lot more reliable than intuition alone.



Authority Isn’t Always Reliable


Authority figures can be incredibly persuasive. When someone has a fancy degree, wears a white coat, or speaks confidently on television or social media, we’re often inclined to believe what they say. It’s not irrational—we’re social animals, and throughout history, listening to leaders or experts has often been useful. But when it comes to scientific truth, authority by itself isn’t enough. In fact, placing too much trust in authority can be dangerous if we fail to critically evaluate whether that authority's claims are supported by solid evidence.


Consider Arthur Janov, a licensed psychologist who developed primal scream therapy. His credentials and charisma led many people to believe in his method, which involved reliving traumatic experiences by screaming. It sounds powerful, it feels cathartic, and it makes intuitive sense to many clients and practitioners. However, there was a catch—there was no good research supporting its effectiveness. Janov had the authority, but he didn’t have the data. And this isn't just a historical anecdote. Even today, therapists, doctors, or social media influencers may promote treatments based on their own beliefs, experiences, or popularity, rather than on controlled, peer-reviewed evidence.


The case of radical mastectomy also shows the dangers of deferring too heavily to authority. For nearly 100 years, the procedure was promoted by leading surgeons who believed, without rigorous evidence, that removing as much tissue as possible was the best way to stop cancer. These were not fringe figures—they were the top experts in their field. Yet their beliefs persisted despite the lack of a systematic test of their claims. When research finally compared outcomes between radical and simple mastectomies, it showed no added benefit from the more invasive surgery. Authority had led the way, but it had led people astray.


So, how can we know whether an authority figure is trustworthy? One way is to ask whether they are relying on their credentials alone or citing actual, peer-reviewed research. Are they referencing empirical studies that used comparison groups, careful measurements, and transparent methodology? Are they open about the limitations of their conclusions? Do they invite skepticism and alternative views, or do they shut down disagreement?

Even highly educated individuals can be biased, overconfident, or influenced by their own experiences. That’s why it’s crucial to separate the speaker from the evidence they provide. A well-informed high school teacher citing strong research is more trustworthy than a doctor who relies only on anecdotes. And when a celebrity claims that crystals boost your immune system or that a specific supplement cured their brain fog, ask yourself: Did they get that information from a double-blind, placebo-controlled trial, or a sponsored ad?


Learning to evaluate authority doesn’t mean being cynical. It means being curious and cautious. It’s okay to listen to experts—but also to ask, “Where did that information come from?” The gold standard for trust isn’t a Ph.D. or a slick website—it’s evidence. That’s what makes science self-correcting: it doesn’t matter who you are; your ideas have to stand up to testing. As students, being aware of this helps us resist being swayed by charisma and gives us the confidence to seek truth from data, not just from degrees.



Finding and Reading Research


So, how do you go about finding trustworthy research in psychology? It’s not as simple as just Googling a topic and clicking the first link. Much of the best information is tucked away in scientific journals and academic databases, which often employ specialized language and formatting. The good news is that once you learn how to navigate these resources, you’ll be able to access a wealth of reliable, up-to-date, peer-reviewed studies that are designed to answer exactly the kinds of questions you might have—whether it’s about treatments for anxiety, predictors of academic success, or the cognitive effects of screen time.


One of the most reliable ways to locate psychological research is by using databases such as PsycINFO or Google Scholar. PsycINFO is a database specifically curated for psychology and related fields. It’s a powerful tool because it lets you filter for peer-reviewed articles, specific authors, publication dates, and subjects. It even tells you how many times an article has been cited, which can be a rough indicator of its influence. The downside is that PsycINFO requires a subscription, so you’ll need to access it through your college or university library.


Google Scholar is a great alternative if you’re outside the university network or want to search more broadly. It draws in scholarly content from various disciplines and is available for free use. The catch is that it’s less precise: it doesn’t always tell you if an article is peer-reviewed, and it doesn’t let you filter results as cleanly as PsycINFO. Still, it’s a useful starting point for exploring a topic and can lead you to both empirical studies and review articles.
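Neither PsycINFO nor Google Scholar offers a simple public API, but if you are curious what a programmatic literature search looks like, here is a sketch against Crossref's free REST API (api.crossref.org), which indexes scholarly metadata across publishers. The search terms are only an example, the requests package is assumed to be installed, and Crossref covers far more than peer-reviewed psychology journals, so treat it as a starting point rather than a substitute for PsycINFO.

```python
import requests  # third-party package: pip install requests

# Crossref's public metadata search endpoint (no API key required).
URL = "https://api.crossref.org/works"

params = {
    "query": "catharsis aggression venting",   # example search terms
    "filter": "type:journal-article",          # restrict to journal articles
    "rows": 5,                                 # number of results to return
    "sort": "relevance",
}

response = requests.get(URL, params=params, timeout=10)
response.raise_for_status()

for item in response.json()["message"]["items"]:
    title = (item.get("title") or ["(no title)"])[0]
    journal = (item.get("container-title") or ["(journal not listed)"])[0]
    year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
    doi = item.get("DOI", "")
    print(f"{year}: {title}")
    print(f"    {journal}  https://doi.org/{doi}\n")
```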


When you find an article, the next step is reading it effectively. Scientific articles typically follow a predictable structure, consisting of an abstract, introduction, methods, results, discussion, and references. The Abstract gives you a quick summary of what the paper is about, including the research question, methods, and key findings. This is your first checkpoint to see whether the article is relevant to your interests.


Whenever you cite an article, read it in its entirety, because Abstracts can be unintentionally misleading. We have read literature reviews that repeat an Abstract's errors; those mistakes would have been caught if the authors had read the Method section. The same caution applies if you use LLM platforms like ChatGPT: reading the article yourself confirms that it exists and verifies its findings. AI-generated hallucinations can compromise the accuracy of scholarship, but reading sources firsthand and critically mitigates that risk.

Then, head to the Introduction—especially the last few paragraphs—which will tell you the specific hypotheses the authors are testing. The Method section explains exactly how the study was conducted, including who participated, what materials were used, and how the data were collected. The Results section presents the data and statistical tests, while the Discussion provides an interpretation of the findings in the context of the research question and broader psychological theories.


If reading all this sounds intimidating, don’t worry—you’re not alone. It’s totally normal for the method and results sections to feel overwhelming at first. Focus on understanding the big picture: What did they do? What did they find? How does it answer their research question? Use the figures and tables to guide your interpretation. And when in doubt, refer to the discussion section for a clearer explanation. You can also use AI tools or summaries if you cite them properly, but nothing beats learning to navigate the article yourself.


You should also be aware of the distinction between empirical journal articles and review articles. Empirical articles report new data from a specific study. Review articles, on the other hand, synthesize findings from multiple studies and sometimes include a meta-analysis, a statistical technique that combines results across studies to calculate an average effect size. Review articles are excellent for gaining a comprehensive understanding of a topic.
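To see what "combines results across studies to calculate an average effect size" means in practice, here is a minimal fixed-effect meta-analysis sketch using inverse-variance weighting, one common approach; the three effect sizes and standard errors are invented purely for illustration.

```python
import math

# Hypothetical studies: (effect size d, standard error of d)
studies = [
    (0.45, 0.20),   # small study, larger uncertainty
    (0.30, 0.10),   # medium study
    (0.35, 0.05),   # large study, most precise
]

# Fixed-effect model: weight each study by 1 / variance of its estimate.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect size d = {pooled:.2f}")
print(f"95% CI: [{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
# Larger, more precise studies pull the pooled estimate toward themselves.
```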


Finally, not all journals are equal. Stick with reputable ones like Psychological Science or the Journal of Experimental Psychology. Avoid predatory journals that publish anything for a fee and skip peer review. Ask your professors or librarians if you’re unsure. Remember, the goal is to build your understanding based on the best available evidence. Once you learn to find and read the research yourself, you won’t need to rely solely on someone else’s summary or opinion. You’ll be building your scientific literacy—and that’s a skill you’ll use long after this course is over.


Evaluating Journalism and Disinformation


Now that you know how to find and read scientific research, it’s important to think about how psychology is communicated outside of scholarly articles. Most people don’t read empirical journal articles in their day-to-day lives—they read the news, browse social media, and watch videos. That’s where journalism comes in. Journalists help translate science for the public, and when done well, this translation is incredibly valuable. But not all journalism is accurate, and in some cases, it’s downright misleading. Worse, some online content isn’t journalism at all—it’s disinformation: content deliberately crafted to mislead, manipulate, or provoke.


[Disinformation graphic © Skorzewiak/Shutterstock.com]


Let’s start by discussing legitimate journalism. A good science journalist will read the original study, consult multiple experts, and craft a story that accurately conveys the main findings. However, journalists are also under pressure to make their stories engaging and eye-catching. That’s why they might use sensational headlines, oversimplify findings, or gloss over limitations. For example, you might see a headline like, “Smartphones are growing horns on teenagers’ skulls!”—a real story that went viral. The headline was based on a peer-reviewed study; however, the study had several serious limitations: it didn’t measure smartphone use, didn’t use a representative sample, and didn’t disclose conflicts of interest. This is an example of how even real research can be misrepresented if journalists fail to clearly explain the methods and limitations.


That’s why you need to develop click restraint. When you see a shocking or emotional headline, pause. Don’t immediately share it or form a conclusion. Instead, ask: What’s the source? Does the article link to the original research? Have other trusted outlets covered the story? This leads into another tool: lateral reading. That means opening a new tab and searching for other perspectives on the same topic. Reliable stories are typically confirmed across multiple legitimate outlets. Untrustworthy stories may stand alone, be based on dubious sources, or lack corroborating evidence.


Disinformation is an even more serious problem. Unlike sloppy journalism, disinformation is intentionally false. It’s often designed to evoke strong emotions, such as anger, fear, or vindication, and it spreads rapidly through social media.

People create disinformation for various reasons: to gain political power, provoke outrage, generate revenue through advertising, or influence public opinion. Sometimes, the content looks sophisticated—complete with charts, quotes, and links—but the core claims are false or unsupported. Examples include conspiracy theories about vaccines altering DNA, claims that certain foods “cure” cancer, or fabricated political scandals.


To protect yourself, it’s helpful to recognize signs of disinformation. Be skeptical of headlines that sound too good (or too terrible) to be true. Look for sources that are transparent about their authorship and funding. Avoid websites that don’t cite research or that quote “experts” without credentials. Use fact-checking sites like Snopes or PolitiFact, and consider using browser plug-ins that flag unverified claims. Even if a story confirms something you already believe, it’s worth asking whether it’s based on evidence or just emotional appeal.


Also, remember that professional journalists, like scientists, are part of a community that values accuracy and self-correction. Reputable news outlets publish corrections when they get things wrong. They consult multiple sources. They link to original research. Disinformation sites rarely do any of this. They rely on our bias blind spots and confirmation bias to do the work for them.

In the age of information overload, your best defense is to slow down and think critically. Don’t let viral headlines or flashy infographics dictate what you believe. Use your skills as a psychology student to trace claims back to evidence, recognize credible sources, and evaluate the quality of reporting. These habits aren’t just useful for class—they’re essential tools for being an informed, responsible, and thoughtful citizen in today’s world.


Summary


In this post, we’ve taken a deep dive into how people form beliefs and why psychological scientists rely on research-based conclusions instead of intuition, personal experience, or authority figures. While it’s natural to trust our experiences or defer to someone who sounds confident, this post showed that those sources of information often lead us astray. Experiences lack comparison groups and are usually confounded by other variables. Intuition feels compelling, but is shaped by powerful cognitive biases. Even authority figures with advanced degrees can promote ideas that lack evidence. By contrast, research uses systematic comparison, control groups, and probabilistic reasoning to generate more accurate and generalizable conclusions.


We learned that relying on personal experience alone is problematic because we often lack a point of comparison. Without a comparison group, you can’t know whether the factor causes the experience you think it does. For instance, you might think that tapping your face helped you calm down—but unless you compare that to what would have happened if you didn’t tap, you can’t really know. Experience is also confounded: perhaps you also practiced yoga that day, got better sleep, or had a good conversation with a friend. In contrast, scientific studies are designed to control for those other variables so they can more accurately identify cause-and-effect relationships. Researchers create structured conditions in which the only difference between groups is the factor being tested. That level of control is nearly impossible in everyday experience.


We also explored the idea that research findings are probabilistic, not absolute. Just because a study finds that 80% of people benefit from a therapy doesn’t mean that every single person will. Research helps us understand patterns and trends—not guarantee them—and it provides a better foundation for decision-making than isolated personal anecdotes. Probabilistic thinking is one of the most powerful habits of mind that scientific reasoning can offer. It helps us navigate the complexity of human behavior with humility and precision. Instead of asking, “Will this work for me?” we learn to ask, “What are the odds that this works for most people like me?” That shift in mindset helps us make more informed, less emotionally reactive decisions.


Additionally, this post explored several cognitive biases that impact our ability to reason accurately. The availability heuristic leads us to overestimate the frequency of vivid or memorable events. Confirmation bias causes us to seek out information that supports our existing beliefs. And the bias blind spot convinces us that other people are biased, but we’re not. These mental shortcuts make intuition an unreliable guide. Science offers tools to overcome them—tools like comparison groups, random assignment, and peer review. Scientific methods are designed not just to gather information, but to correct for the flaws in human thinking. This means that science is not just a collection of facts—it’s a disciplined way of counteracting the tricks our minds play on us.


We examined why it’s essential to evaluate authority figures with caution. Just because someone is confident or has credentials doesn’t mean their claims are valid. Experts can be wrong, especially when they rely on intuition or personal experience rather than systematic research. The key question to ask is: "What evidence supports this claim?" If the answer doesn’t involve peer-reviewed studies or controlled comparisons, be skeptical. Learning to interrogate the basis of expert advice doesn’t make you disrespectful—it makes you scientifically literate. Good authorities welcome scrutiny because they recognize that their conclusions are only as strong as the data supporting them.


Finally, we discussed how to locate and interpret research. Tools like PsycINFO and Google Scholar let you locate empirical journal articles and review articles. You learned to focus on the abstract, hypotheses, results, and discussion to figure out the article’s argument and evidence. We also covered the dangers of disinformation and sensationalized journalism. You now know how to practice click restraint and lateral reading—strategies to verify whether a claim holds up under scrutiny. This is especially important in today’s digital age, where false information can spread rapidly and convincingly. Knowing how to spot red flags in media coverage, identify credible sources, and trace claims back to peer-reviewed studies makes you not only a better student but also a more discerning consumer of information.


In summary, this post surveyed tools to help you think more critically about the information you encounter every day. Whether it’s an Instagram post about healing crystals, a TikTok video claiming a new cure, or a news article on psychological science, you’re now better prepared to ask: Is this based on systematic evidence? Can I trace this back to good research? Are there comparison groups? Was the study replicated? By applying what you’ve learned, you’re developing not only your scientific literacy but also your power to make better, more informed choices in every part of life. These skills will serve you well, not just in psychology but in any context where clear thinking and evidence-based decision-making matter.



Key Takeaways


  1. Experience, intuition, and authority are persuasive but often flawed sources of belief; science provides more reliable conclusions through controlled, comparative, and probabilistic methods.


  2. Research is probabilistic, meaning findings describe what is likely true for most people, not what is guaranteed for all—encouraging a mindset of informed skepticism.


  3. Cognitive biases like the availability heuristic and confirmation bias distort our judgments; scientific methods are designed to correct for these distortions.


  4. Authority should not be equated with accuracy; credible claims must be supported by peer-reviewed research, not charisma or credentials alone.


  5. Scientific literacy includes locating, reading, and critically evaluating research, as well as recognizing disinformation and misleading journalism using tools like click restraint and lateral reading.





Glossary


Abstract: a concise summary of a research paper's content, typically outlining the research question or purpose, methodology, key findings, and main conclusions. It enables readers to quickly determine the paper's relevance.


anecdote: a brief account of a particular incident or personal experience. While potentially illustrative or emotionally persuasive, anecdotes are individual data points and do not constitute generalizable trends or invalidate systematically gathered research findings, which rely on broader empirical evidence and statistical analysis.

availability heuristic: a cognitive bias where individuals overestimate the likelihood or frequency of an event based on the ease with which examples or instances come to mind. Vivid or recent events are often more easily recalled and can disproportionately influence judgment, even if they are not statistically representative. For instance, hearing about several local car thefts might lead someone to believe car theft is more common than national statistics indicate because the local examples are readily available in memory.

bias blind spot: a cognitive bias characterized by the tendency to recognize the impact of biases on the judgment of others while failing to see their impact on one's own judgment. Individuals often believe they are less biased than their peers. For example, a person might readily identify how a friend's political views influence their news consumption but not recognize similar patterns in their own behavior.

catharsis: the process of releasing, and thereby providing relief from, strong or repressed emotions. In psychology, the concept, often associated with psychoanalysis, suggests that expressing or "venting" emotions can reduce their intensity. However, the therapeutic value and mechanisms of catharsis are debated, with some research suggesting that certain forms of "venting" can sometimes amplify negative emotions.


click restraint: the practice of pausing to critically evaluate the source and potential biases of online content, particularly sensational or emotionally charged headlines, before clicking, sharing, or forming a conclusion. It is a method to combat misinformation and reactive engagement.


comparison group: in research, a group of participants that does not receive the experimental treatment or intervention being studied. This group serves as a baseline against which the experimental group (which receives the treatment) is compared, allowing researchers to determine whether the treatment itself caused any observed changes, rather than other factors. For example, to test a new teaching method, a comparison group would continue using the standard method, allowing researchers to determine if the new method yields different outcomes.

confirmation bias: a cognitive bias involving the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one's preexisting beliefs or hypotheses, while giving disproportionately less consideration to alternative possibilities or contradictory information.

confounding (or confounding variable): an extraneous variable that is associated with both the independent variable and the dependent variable, making it difficult to determine whether the observed effect is due to the independent variable or the extraneous one. Effective research designs aim to control for or eliminate confounding variables. For instance, if a study finds that people who drink herbal tea report less stress, but these individuals also meditate more frequently, meditation is a confounding variable because it's unclear whether the tea or the meditation reduced the stress.

Discussion: the section of a research paper that interprets and explains the significance of the findings, considers them in the context of the initial research question and existing theories, discusses study limitations, and suggests directions for future research.

empirical article: an article that reports original research based on data from direct observation or experimentation. It typically includes sections detailing the introduction, methods, results, and discussion of the study. These articles are primary sources, presenting firsthand findings rather than summarizing the work of others. They form the building blocks of scientific knowledge.

empirical research: research based on direct or indirect observation and experience rather than solely on theory, logic, or personal opinion. It involves the systematic collection and analysis of data and is a cornerstone of the scientific method.

Introduction: the initial section of a research paper that provides background information on the topic, reviews relevant existing literature, identifies the research gap or problem being addressed, and states the study's objectives, research question(s), and often the specific hypotheses to be tested.

intuition: the ability to understand or know something immediately, based on feelings or instincts rather than conscious reasoning or explicit learning. While intuition can lead to quick judgments, its accuracy can vary, and it is often studied in contrast to more deliberate, analytical thought processes.

lateral reading: a strategy for evaluating the credibility of online information by leaving the current website to open new browser tabs and search for what other sources say about the original site or its claims. This approach enables a quick assessment of the source's reputation and trustworthiness before engaging deeply with its content.

meta-analysis: a statistical technique for combining and synthesizing the results from multiple independent studies on a specific topic. It aims to derive an overall conclusion or a more precise estimate of an effect by quantitatively pooling data. This method increases statistical power and helps resolve inconsistencies in research findings. Meta-analyses are considered a high level of evidence in research.

Method: the section of a research paper that describes in detail how the study was conducted. This includes a description of the participants or subjects, the materials or apparatus used, the study design, and the procedures for data collection and analysis. The method section should be sufficiently detailed to allow for replication by other researchers.


predatory journals: publications that exploit the academic publishing model for profit, typically by charging publication fees to authors without providing robust editorial and publishing services, such as rigorous peer review, proper indexing, or long-term archiving. They often mimic legitimate scholarly journals but lack academic integrity.


Primal Scream Therapy: a form of psychotherapy developed by Arthur Janov, based on the theory that neurosis is caused by repressed pain from early life trauma (the "Primal Pain"). The therapy encourages patients to re-experience and express these repressed feelings, often through spontaneous and unrestrained screaming, with the goal of resolving the trauma.


probabilistic: in a scientific context, probabilistic explanations or models acknowledge inherent randomness or uncertainty and aim to describe likelihoods or trends that apply to populations or systems on average, rather than making deterministic predictions for every individual case.

professional journalism: the systematic gathering, preparation, and dissemination of news and information to the public, executed by individuals trained in journalistic practices and committed to a recognized code of ethics. It emphasizes principles such as truthfulness, accuracy, verification, fairness, impartiality, and accountability. Professional journalists strive for independence from influences that could compromise their integrity and are dedicated to serving the public interest by providing reliable information that enables citizens to make informed decisions. This practice is characterized by transparency in methods and a responsibility to minimize harm.

radical mastectomy: a surgical procedure for breast cancer treatment involving the removal of the entire breast, underlying chest muscles (pectoral muscles), and lymph nodes in the axilla (armpit). This extensive procedure is less common today, with more modified and breast-conserving surgeries often preferred when oncologically appropriate.


Results: the section of a research paper that reports the findings of the study by presenting the collected data and the outcomes of any statistical analyses performed. This section should be a straightforward, objective account of what was found, without interpretation or discussion of implications (which are reserved for the Discussion section).


review article: a scholarly article that summarizes, synthesizes, and critically evaluates the existing research on a specific topic, drawing from numerous studies. Review articles can identify patterns, contradictions, and gaps in the literature, and may include a meta-analysis—a statistical technique for combining and analyzing data from multiple studies to derive an overall effect size. They provide a comprehensive overview of a field or research question.



About the Authors


Zachary Meehan earned his PhD in Clinical Psychology from the University of Delaware and serves as the Clinic Director for the university's Institute for Community Mental Health (ICMH). His clinical research focuses on improving access to high-quality, evidence-based mental health services, bridging gaps between research and practice to benefit underserved communities. Zachary is actively engaged in professional networks, holding membership affiliations with the Association for Behavioral and Cognitive Therapies (ABCT) Dissemination and Implementation Science Special Interest Group (DIS-SIG), the BRIDGE Psychology Network, and the Delaware Project. Zachary joined the staff at Biosource Software to disseminate cutting-edge clinical research to mental health practitioners, furthering his commitment to the accessibility and application of psychological science.





Fred Shaffer earned his PhD in Psychology from Oklahoma State University. He is a biological psychologist and professor of Psychology, as well as a former Department Chair at Truman State University, where he has taught since 1975 and has served as Director of Truman’s Center for Applied Psychophysiology since 1977. In 2008, he received the Walker and Doris Allen Fellowship for Faculty Excellence. In 2013, he received the Truman State University Outstanding Research Mentor of the Year award. In 2019, he received the Association for Applied Psychophysiology and Biofeedback (AAPB) Distinguished Scientist award. He teaches Experimental Psychology every semester and loves Beth Morling's 5th edition.





Support Our Friends



ISNR



NRBS

BFE


AAPB





© 2025 BioSource Software
