Hamro Library

Variables in research


Meaning of Variables

A variable is a characteristic that has more than one category (or value). Thus, sex is a variable with the categories male and female. Age is a variable with many different categories (one year old, two years old, etc.). A variable is a quantity that can assume any numerical value out of a given set of values. Quantities like profit, sales, revenue, national income, exports, imports, height, weight, time, and temperature are examples of variables.

A variable is a symbol, such as X, Y, H, x, or B, that can assume any of a prescribed set of values, called the domain of the variable. If the variable can assume only one value, it is called a constant. A variable that can theoretically assume any value between two given values is called a continuous variable; otherwise, it is called a discrete variable.

The number N of children in a family, which can assume any of the values 0, 1, 2, 3, ... but cannot be 2.5 or 3.842, is a discrete variable. The age A of an individual, which can be 62 years, 63.8 years, or 65.8341 years, depending on the accuracy of measurement, is a continuous variable (Spiegel, 1992: 1). Data that can be described by a discrete or continuous variable are called discrete data or continuous data, respectively.

The number of children in each of 1000 families is an example of discrete data, while the heights of 100 university students are an example of continuous data. In general, measurements give rise to continuous data, while enumeration, or counting, gives rise to discrete data.
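The counting-versus-measuring distinction can be sketched in a few lines of Python; the sample values below are illustrative, not drawn from the article:

```python
# Illustrative sketch: counting produces discrete data (whole numbers),
# while measuring produces continuous data (values anywhere in an interval).
children_per_family = [0, 1, 2, 3, 2, 1]      # counts: discrete data
heights_cm = [172.4, 165.08, 180.341]         # measurements: continuous data

# Every count is a whole number; measurements generally are not.
all_counts_whole = all(float(n).is_integer() for n in children_per_family)
some_heights_fractional = any(not float(h).is_integer() for h in heights_cm)

print(all_counts_whole, some_heights_fractional)  # -> True True
```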

It is sometimes convenient to extend the concept of the variable to nonnumerical entities; for example, color C in a rainbow is a variable that can take on the "values" red, orange, yellow, green, blue, indigo, and violet. It is generally possible to replace such variables with numerical quantities; for example, denote red by 1, orange by 2, etc.
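The replacement of nonnumerical values with numbers can be sketched as a simple lookup table; the mapping below follows the article's suggestion (red as 1, orange as 2, and so on):

```python
# Encoding a nonnumerical variable (rainbow color C) as numerical codes.
color_codes = {"red": 1, "orange": 2, "yellow": 3, "green": 4,
               "blue": 5, "indigo": 6, "violet": 7}

observed = ["green", "red", "violet"]          # hypothetical observations
encoded = [color_codes[c] for c in observed]   # numerical representation

print(encoded)  # -> [4, 1, 7]
```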

In crude terms, variables are constructs or properties that scientists study. Some examples of variables in behavioral science include productivity, level of aspiration, aptitude, anxiety, affiliation, intelligence, and conformity. In more precise terms, a variable is a quantity that takes different values, or something that varies. A variable is a symbol to which numerals or values are assigned (Kerlinger, 2000). For example, x (a variable) is a symbol to which numerical values are assigned. Suppose x stands for intelligence, which may be a score ranging from high to low (Dwivedi, 1998: 33).

Variables are things that we measure, control, or manipulate in research. They differ in many respects, most notably in the role they are given in our research and in the type of measures that can be applied to them. Some of them are discussed below:

Independent and Dependent Variables

Independent variables are those that are manipulated, whereas dependent variables are only measured or registered. This distinction appears terminologically confusing to many because, as some students put it, "all variables depend on something." However, once you get used to it, this distinction becomes indispensable. The terms dependent and independent variable apply mostly to experimental research, where some variables are manipulated and, in this sense, are "independent" of the initial reaction patterns, features, intentions, etc. of the subjects.

Some other variables are expected to be "dependent" on the manipulation or experimental conditions. That is to say, they depend on "what the subject will do" in response. Somewhat contrary to the nature of this distinction, these terms are also used in studies where we do not literally manipulate independent variables, but only assign subjects to "experimental groups" based on some pre-existing properties of the subjects. For example, if in an experiment males are compared with females regarding their white cell count (WCC), gender could be called the independent variable and WCC the dependent variable.

The equation for a straight line where the dependent variable Y is determined by the independent variable X is:

Y = a + bX

Where,
Y = Dependent variable
a = Y-intercept
b = Slope of the line
X = Independent variable

Using this equation, we can take a given value of X and compute the value of Y. The 'a' is called the "Y-intercept" because its value is the point at which the regression line crosses the Y-axis (that is, the vertical axis). The 'b' in the equation is the "slope" of the line. It represents how much each unit change of the independent variable X changes the dependent variable Y. Both 'a' and 'b' are numerical constants, since, for any given straight line, their values do not change.

Suppose we know that 'a' is 3 and 'b' is 2. Let us determine what Y would be for an X equal to 5. When we substitute the values of a, b, and X in the above equation, we find the corresponding value of Y to be:

Y = a + bX
  = 3 + 2(5)
  = 3 + 10
  = 13  (the value of Y, given X = 5)
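The computation above can be expressed as a short Python function; the function name is illustrative, not from the article:

```python
# Minimal sketch of the straight-line equation Y = a + bX.
def predict_y(a, b, x):
    """Compute the dependent variable Y from intercept a, slope b, and X."""
    return a + b * x

# The worked example: a = 3, b = 2, X = 5.
y = predict_y(a=3, b=2, x=5)
print(y)  # -> 13
```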

Let's take an example. Economists might base their predictions of the annual gross national product, or GNP, on the final consumption spending within the economy. Thus, final consumption spending is the independent variable, and GNP is the dependent variable (Levin, 1988: 508-9). Frequently, we find a causal relationship between variables; that is, the independent variables "cause" the dependent variable to change.

Moderating Variables

In each relationship, there is at least one independent variable and a dependent variable. It is normally hypothesized that in some way the independent variable "causes" the dependent variable to occur. For simple relationships, all other variables are considered extraneous and are ignored.

In a typical office, we might be interested in a study of the effect of the four-day workweek on office productivity and hypothesize the following: The introduction of the four-day workweek (independent variable) will lead to increased office productivity per worker-hour (dependent variable).

In actual study situations, however, such a simple one-on-one relationship needs to be conditioned or revised to take other variables into account. Another type of explanatory variable is often of value here: the moderating variable. A moderating variable is a second independent variable that is included because it is believed to have a significant contributory or contingent effect on the originally stated independent-dependent relationship. For example, one might hypothesize that the introduction of the four-day workweek (independent variable) will lead to higher productivity (dependent variable), especially among younger workers (moderating variable).
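One common way to express a moderating variable quantitatively is as an interaction term between the two independent variables. The sketch below is purely illustrative: the coefficients and the linear form are assumptions, not results from the article.

```python
# Hedged sketch: a moderating variable modeled as an interaction term.
# All coefficients below are made-up values for illustration only.
def productivity(four_day_week, younger_worker,
                 base=100.0, b_week=5.0, b_age=2.0, b_interaction=4.0):
    """Hypothetical productivity per worker-hour.

    four_day_week, younger_worker: 0/1 indicator variables.
    The interaction term captures the moderating effect: the four-day
    workweek is assumed to help younger workers more.
    """
    return (base
            + b_week * four_day_week
            + b_age * younger_worker
            + b_interaction * four_day_week * younger_worker)

# Gain from the four-day workweek for each group:
older_gain = productivity(1, 0) - productivity(0, 0)
younger_gain = productivity(1, 1) - productivity(0, 1)
print(older_gain, younger_gain)  # -> 5.0 9.0
```

Because the workweek effect differs across age groups (5.0 versus 9.0 here), age moderates the workweek-productivity relationship rather than acting as a simple second cause.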

Extraneous Variables

An almost infinite number of extraneous variables exists that might conceivably affect a given relationship. Some can be treated as independent or moderating variables, but most must either be assumed away or excluded from the study. Fortunately, most of these variables have little or no effect on a given situation and can be safely ignored. Others may be important, but their impact occurs in such a random fashion as to have little net effect.

Using the example of the effect of the four-day workweek, one would normally think the imposition of a local sales tax, the election of a new mayor, a three-day rainy spell, and thousands of similar events and conditions would have little effect on workweek and office productivity. However, there may be other extraneous variables to consider as possible confounding variables to our hypothesized independent variable-dependent variable relationship. In our example, we would attempt to control for the type of work by studying the effect of the four-day workweek within groups performing different types of work.

Intervening Variables

The variables mentioned with regard to causal relationships are concrete and clearly measurable; they can be seen, counted, or observed in some way. Sometimes, however, one may not be completely satisfied by the explanations they give. Thus, while we may recognize that a four-day workweek results in higher productivity, we might think this is not the full explanation: workweek length affects some intervening variable, which, in turn, results in higher productivity.

An intervening variable is a conceptual mechanism through which the independent variable and moderating variable might affect the dependent variable. The intervening variable may be defined as "that factor which theoretically affects the observed phenomenon but cannot be seen, measured, or manipulated; its effect must be inferred from the effects of the independent and moderator variables on the observed phenomenon".

In the case of the workweek hypothesis, one might view the intervening variable to be job satisfaction, giving a hypothesis such as: the introduction of a four-day workweek (independent variable) will lead to higher productivity (dependent variable) by increasing job satisfaction (intervening variable).
