Best Practice: Variables
- BioSource Faculty
- 3 days ago
- 10 min read
Updated: 2 days ago

We have based our Best Practice series on Dr. Beth Morling's Research Methods in Psychology (5th ed.). We encourage you to purchase it for your bookshelf. If you teach research methods, consider adopting this best-in-class text for your classes.
Dr. Beth Morling is a distinguished Fulbright scholar and was honored as the 2014 Professor of the Year by the Carnegie Foundation for the Advancement of Teaching.

With more than two decades of experience as a researcher and professor of research methods, she is an internationally recognized expert and a passionate advocate for the Research Methods course. Morling's primary objective is to empower students to become discerning critical thinkers, capable of evaluating research and claims presented in the media.

In this post, we will explore a question addressed by Chapter 3: "Which kinds of variables do researchers study?"
We will examine how psychological researchers identify and define variables, focusing on the distinction between measured and manipulated variables. We will explain how researchers transform conceptual variables into operational definitions, enabling them to design studies, test hypotheses, and draw valid conclusions. We will also consider how the role a variable plays shapes the type of claim researchers can make (frequency, association, or causal) and emphasize why accurate variable classification matters for interpreting study results.
Variables
Variables are the building blocks of psychological research. A variable is anything that can vary, meaning it has at least two levels or values. For example, if you're studying sleep habits, the variable might be “hours of sleep per night,” and its levels could include five hours, seven hours, or nine hours. Some variables are categorical, like “favorite type of music” with levels such as jazz, rock, or classical. Others are quantitative, like “number of hours studied,” where the levels are numerical. Importantly, a variable is not the same as a constant. A constant is something that could vary but doesn't in a particular study. For example, if you only study Canadian teenagers, then “country” is a constant because all participants share that trait. Identifying variables correctly is crucial for designing research, analyzing results, and understanding claims.
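To make this concrete, here is a small Python sketch (our illustration, not from Morling's text) that represents each participant's data as a record and treats each variable as a column. A variable's levels are the distinct values observed, and a "variable" with only one observed level is, in that study, a constant. The participant records are hypothetical.

```python
# Hypothetical participant records; each key is a variable, each value a level.
participants = [
    {"country": "Canada", "hours_sleep": 5, "favorite_music": "jazz"},
    {"country": "Canada", "hours_sleep": 7, "favorite_music": "rock"},
    {"country": "Canada", "hours_sleep": 9, "favorite_music": "classical"},
]

def levels(records, name):
    """Return the distinct levels observed for one variable."""
    return sorted({r[name] for r in records})

def is_constant(records, name):
    """A characteristic with only one observed level is a constant in this study."""
    return len(levels(records, name)) == 1

print(levels(participants, "hours_sleep"))     # quantitative levels: [5, 7, 9]
print(levels(participants, "favorite_music"))  # categorical levels
print(is_constant(participants, "country"))    # True: every participant is Canadian
```

Note that "country" could vary in principle; it is a constant only because this particular (hypothetical) sample held it fixed, which is exactly the distinction the paragraph above draws.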
Knowing how to describe and label variables accurately helps you read studies more effectively. When researchers discuss a study’s findings, they’re referring to variables they’ve measured or manipulated. Recognizing the levels of a variable helps you determine whether researchers designed the study well and measured what they meant to. For instance, if a study claims to measure “academic success,” you’ll want to know how that variable was operationalized—maybe as GPA, class rank, or test scores. When we say something varies, we expect to see those distinctions clearly defined. This distinction also matters when deciding how much confidence to place in a study’s conclusions.
Variables play different roles in different kinds of claims. In a frequency claim, there’s just one variable, and the researchers are measuring its level or rate within a population. In association claims, researchers examine whether two measured variables are related to each other. In causal claims, one variable is manipulated to see if it causes changes in another variable. The role a variable plays—whether it’s the focus of a frequency claim or part of a causal design—affects how it should be measured, analyzed, and interpreted.
Understanding variables also helps you make sense of hypotheses and predictions. When researchers develop a theory, they identify conceptual variables they want to explore—broad ideas like happiness, aggression, or academic achievement. Then they define those variables in specific, observable ways so they can be measured or manipulated. This step transforms an abstract concept into something testable. For example, “stress” could be defined as cortisol levels, a self-report score, or even the number of life events experienced in the past year. The choice of operational definition can influence the study’s outcomes and its validity.
As you begin to analyze studies yourself, try identifying all the variables in the study and their levels. Ask whether each variable was measured or manipulated, and whether the levels make sense in the context of the research question. This exercise will help you think more clearly about research design and interpretation. It also sharpens your ability to evaluate claims critically, especially in news articles that oversimplify or misrepresent research findings.
Finally, remember that understanding variables isn’t just about definitions—it’s about building a mindset for thinking scientifically.
When you read a study, see a headline, or hear someone make a claim, ask: What’s the variable here? What are its levels? Was it measured or manipulated?
Getting into the habit of asking these questions will make you a more thoughtful, skeptical, and scientifically literate reader—someone who doesn’t just take claims at face value but knows how to interrogate the evidence behind them.
Measured and Manipulated Variables
Understanding the distinction between measured and manipulated variables is essential for interpreting psychological studies and recognizing what kind of research supports which claims. A measured variable is one whose levels are observed and recorded by the researcher without any intervention. Examples include age, height, stress level, or the number of hours someone slept last night. These variables are captured as they naturally occur, often using self-report questionnaires, behavioral observations, or biological measures like heart rate or cortisol levels. A manipulated variable, on the other hand, is one that a researcher controls or changes. For example, participants might be randomly assigned to consume either 10 mg or 30 mg of caffeine or to study either in silence or with music. This assignment allows researchers to assess the effect of the manipulation on some outcome, such as memory recall.
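The mechanics of manipulating a variable can be sketched in a few lines of Python. This is our hypothetical illustration (the condition names echo the silence-versus-music example above but are not from a real study): participants are shuffled and dealt into conditions so that assignment is unrelated to any measured characteristic.

```python
import random

def randomly_assign(participant_ids, conditions, seed=0):
    """Shuffle participants, then deal them into conditions round-robin,
    so which level each person receives is decided by chance alone."""
    rng = random.Random(seed)  # fixed seed here only so the sketch is reproducible
    shuffled = list(participant_ids)
    rng.shuffle(shuffled)
    groups = {condition: [] for condition in conditions}
    for i, pid in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(pid)
    return groups

groups = randomly_assign(range(12), ["silence", "music"])
print({condition: len(members) for condition, members in groups.items()})
# → {'silence': 6, 'music': 6}
```

The researcher sets the levels ("silence", "music") rather than observing them, which is what makes this variable manipulated rather than measured.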
Some variables can only be measured and not ethically or practically manipulated. Age is a common example—we can observe and record a person’s age, but we can’t assign someone to be older or younger. The same goes for IQ or gender identity. In some cases, manipulating a variable would be unethical, such as assigning participants to traumatic childhood experiences to study their effects on adult well-being. In these situations, researchers rely on measured variables and try to use carefully controlled designs that minimize confounds while still maintaining ethical standards. These limitations affect the type of claim the researcher can make. For example, if a variable is only measured, the researcher can’t claim causality—only an association.
Some variables are flexible—they can be measured in some studies and manipulated in others. Consider “sleep.” A study might measure participants’ sleep by asking how many hours they slept last night, or it might manipulate sleep by randomly assigning people to sleep 4, 6, or 8 hours. This flexibility allows researchers to study variables in different ways depending on their research question.
Similarly, researchers might measure participants’ music training history or manipulate music exposure in an experimental setting. The key is to recognize which role the variable is playing in a particular study so you can interpret the results properly.
When reading or conducting research, identifying whether a variable was measured or manipulated helps you determine what type of study was used—observational, correlational, or experimental—and, most importantly, what type of claim the researchers are justified in making.
Only manipulated variables, when randomly assigned in controlled experiments, can support causal claims. Measured variables, even when related to other variables, can only support frequency or association claims.
This distinction is at the heart of evaluating internal validity—the extent to which a study can support a cause-and-effect conclusion.
It’s also helpful to think about manipulated and measured variables in terms of the independent and dependent variables used in experimental designs. The independent variable is what the researcher manipulates—like the dose of a medication or type of learning environment. The dependent variable is what the researcher measures to see if it changed as a result—like attention span, memory accuracy, or anxiety level.

Being able to identify which variable is which helps you trace the logic of a study from start to finish. You can ask: Did the manipulation come before the measurement? Were the groups randomly assigned? These questions help determine whether the study supports a causal claim.
In sum, the distinction between measured and manipulated variables is more than a technicality. It’s a foundation for understanding how psychological research is conducted, what claims it supports, and how confidently you can believe those claims.
Whenever you see a headline or read a study, start by asking: Were the variables measured, manipulated, or both?
The answer will guide your interpretation of the results and shape your understanding of the study’s implications for real-world behavior and mental processes.
From Conceptual Variable to Operational Definition
In psychological research, we often begin with broad concepts that are important in theory but need to be turned into something concrete to be tested. These broad ideas are called conceptual variables, or constructs. For example, “stress,” “intelligence,” or “satisfaction with life” are all conceptual variables. While they are useful for thinking and talking about psychological ideas, we can’t study them scientifically until we define how we’re going to measure or manipulate them in a specific way. This process is known as operationalization—creating an operational definition. An operational definition turns a concept into a specific, observable, and testable form. For instance, “satisfaction with life” could be operationalized by asking participants to rate their agreement with items on a standardized scale, such as “In most ways my life is close to ideal,” rated from 1 (strongly disagree) to 5 (strongly agree).
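One possible operationalization like the life-satisfaction example above can be sketched in Python. This is a hedged illustration, not a validated scoring procedure: we assume a handful of 1-to-5 Likert items and define the construct's score as their mean.

```python
def life_satisfaction_score(ratings, low=1, high=5):
    """One illustrative operational definition: the mean of 1-5 Likert
    ratings across scale items, after checking each rating is in range."""
    if not all(low <= r <= high for r in ratings):
        raise ValueError("rating outside the scale range")
    return sum(ratings) / len(ratings)

# One hypothetical participant's agreement with items such as
# "In most ways my life is close to ideal" (1 = strongly disagree).
print(life_satisfaction_score([4, 5, 3, 4, 4]))  # → 4.0
```

The point is not the arithmetic but the commitment it encodes: once "satisfaction with life" is defined as this score, the abstract construct has become something observable and testable.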
Every variable in a study must be operationalized so that it can be measured or manipulated in practice. This doesn’t just apply to abstract ideas; even straightforward variables like “weight gain” or “study time” need specific definitions. “Weight gain” might be defined as the number of pounds gained in four weeks, and “study time” might be recorded using a log or tracking software.
The goal is to ensure consistency and clarity so that other researchers can replicate the study or evaluate whether the definition matches the construct. Sometimes operationalizing a variable is easy, but other times, especially with emotional or cognitive variables, it takes creativity and rigorous development.
Operational definitions can vary depending on the goals and methods of the study. For example, if researchers want to understand “aggression,” one study might define it as the number of times a child hits a peer during recess. Another might use a questionnaire that asks how often a person has felt like hurting someone. A third study could measure how long it takes a participant to choose to punish another person in a lab game. Each of these is a different operationalization of the same conceptual variable. When comparing studies, it’s important to understand how each one defined and measured its variables.
Researchers often rely on existing scales, behavioral tasks, or physiological indicators to operationalize their variables. For example, “anxiety” might be measured with the trait scale of the Spielberger State-Trait Anxiety Inventory (a questionnaire), or with physiological indicators such as increased heart rate and skin conductance. Choosing the right operational definition affects the study’s construct validity—how well the operationalized variable reflects the conceptual variable. If the operationalization doesn’t truly capture the concept, the study’s conclusions may be weak or misleading. A poorly chosen operational definition can compromise an otherwise well-designed study.
Operational definitions also matter when interpreting results. If two studies report different outcomes for “school achievement,” it might be because one used GPA while the other used standardized test scores.
Understanding how each variable was operationalized helps you evaluate whether results can be compared or generalized. It also helps you decide whether the measure actually reflects the concept you’re interested in.
As a consumer of research, always ask: What’s the conceptual variable? How was it operationalized? Is the operational definition a good match for the concept?
Summary
In sum, turning conceptual variables into operational definitions is a crucial part of designing, understanding, and evaluating psychological research. It bridges the gap between theory and data. It allows us to test abstract ideas in the real world, using observable behavior, self-reports, or physiological responses. As you read and conduct research, pay close attention to how variables are defined and measured. Operational definitions give substance to the abstract—and determine whether a study’s conclusions can be trusted.
Key Takeaways
Variables are the core elements of psychological research; they must vary and be clearly defined to be meaningful in studies.
Measured variables are observed without intervention, while manipulated variables are deliberately changed to test effects.
Psychological concepts must be operationalized—translated into measurable forms—to be studied scientifically.
The type of claim a study makes (frequency, association, or causal) depends on whether variables were measured or manipulated.
Accurate operational definitions strengthen a study’s construct validity and ensure its findings are interpretable and replicable.
Glossary
association claim: a statement that suggests a relationship between two measured variables without asserting causality.
categorical variable: a variable whose levels are distinct categories or groups without inherent numerical meaning, such as gender, ethnicity, or music genre.
causal claim: a conclusion that one variable directly affects another, requiring experimental manipulation and control.
conceptual variable: a broad psychological idea or construct, such as "stress" or "intelligence," that must be defined for study.
constant: a characteristic that could vary but remains the same for all participants in a study.
construct validity: the degree to which an operational definition accurately represents the conceptual variable it intends to measure.
dependent variable: the outcome that researchers measure to determine if it is affected by changes in the independent variable.
flexible variable: a variable that can be either measured or manipulated, depending on the study’s design and goals. For example, “sleep” can be measured by recording how many hours participants slept naturally, or it can be manipulated by assigning participants to sleep for different durations. The classification depends on the research context and how the variable is used in the study.
frequency claim: a statement about the rate or level of a single variable within a population.
independent variable: the variable that researchers manipulate to test its effect on the dependent variable.
manipulated variable: a variable that researchers control or assign to different levels to examine its causal impact.
measured variable: a variable that researchers observe and record without altering it, such as age or stress level.
operational definition: a precise, testable specification of how a conceptual variable will be measured or manipulated in a study.
operationalization: the process of turning a conceptual variable into a specific, observable, and measurable form.
quantitative variable: a variable whose levels are numerical and represent measurable amounts, allowing for mathematical operations, such as height, test scores, or hours of sleep.
variable: any characteristic or condition that can take on different values or levels in a study.
About the Authors
Zachary Meehan earned his PhD in Clinical Psychology from the University of Delaware and serves as the Clinic Director for the university's Institute for Community Mental Health (ICMH). His clinical research focuses on improving access to high-quality, evidence-based mental health services, bridging gaps between research and practice to benefit underserved communities. Zachary is actively engaged in professional networks, holding membership affiliations with the Association for Behavioral and Cognitive Therapies (ABCT) Dissemination and Implementation Science Special Interest Group (DIS-SIG), the BRIDGE Psychology Network, and the Delaware Project. Zachary joined the staff at Biosource Software to disseminate cutting-edge clinical research to mental health practitioners, furthering his commitment to the accessibility and application of psychological science.

Fred Shaffer earned his PhD in Psychology from Oklahoma State University. He is a biological psychologist and professor of Psychology, as well as a former Department Chair at Truman State University, where he has taught since 1975 and has served as Director of Truman’s Center for Applied Psychophysiology since 1977. In 2008, he received the Walker and Doris Allen Fellowship for Faculty Excellence. In 2013, he received the Truman State University Outstanding Research Mentor of the Year award. In 2019, he received the Association for Applied Psychophysiology and Biofeedback (AAPB) Distinguished Scientist award. He teaches Experimental Psychology every semester and loves Beth Morling's 5th edition.
