Category: Research Paper
Quantitative Research Matrix
6th, 7th, and 8th edition: Chapters 10, 11, 12, and 13
Quantitative Research Methods
Quantitative research is generally divided into four types: true experimental, quasi-experimental, ex post facto, and correlational. A brief overview of the differences and similarities among the types is presented below. A more detailed description of the various components of experimental research is presented in Experimental Research: Control, Designs, Internal and External Validity.
True and Quasi-Experimental Research
True experimental research can be identified by three characteristics: randomly formed groups, manipulation of the treatment (the IV), and comparisons among groups. These will be discussed in the context of the following example. We wish to know whether cooperative learning produces better achievement among 10th grade students in mathematics than a traditional lecture approach. A group of students, n = 50, will be randomly assigned to a classroom using cooperative learning or to a classroom using lecture, with 25 randomly assigned to each classroom. At the end of a semester, a final achievement test in mathematics will be administered to determine which group scores, on average, higher in mathematics.
In true experimental research, the groups studied will be randomly formed. Recall from the section on sampling that random means a systematic approach is used to assign people to groups, but the approach produces no predictable pattern of assignment. A table of random numbers gives this result; a flip of a coin also accomplishes this. For example, if we are assigning people to one of two groups, flipping a coin and deciding group membership for each person based on whether a head or a tail shows is random, since one cannot predict accurately whether the head or tail will show.
It is easy to confuse randomly formed groups, or random assignment, with random sampling. The two are certainly not the same thing. Random sampling is one method for selecting--picking--people to participate in a study. Random assignment is a method for assigning people to groups--it is not a method for selecting study participants. Also note that random sampling is not required for a true experiment. Randomly formed groups are necessary for a true experiment, but one could use convenience sampling to select study participants and still have a true experiment. For the example study, students may have been selected based on who was available--based on convenience--and then randomly assigned to one of two groups.
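The distinction can be made concrete with a short Python sketch using only the standard library. The roster and group sizes below are hypothetical: random.sample performs the selection step (random sampling), while random.shuffle performs the assignment step (randomly formed groups).

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Random SAMPLING: select 50 participants from a hypothetical roster.
# (A convenience sample could be used here instead without affecting
# the experiment's status as a true experiment.)
roster = [f"student_{i:03d}" for i in range(200)]
participants = random.sample(roster, k=50)

# Random ASSIGNMENT: shuffle the 50 participants, then split them into
# two classrooms of 25 -- no predictable pattern decides membership.
random.shuffle(participants)
cooperative = participants[:25]
lecture = participants[25:]

print(len(cooperative), len(lecture))  # 25 25
```

Note that the two calls are independent: either step can be random or non-random, and only the second determines whether the groups are randomly formed.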
The second requirement, that the treatment be manipulated, means that the researcher has control of who receives which treatment. Manipulation in this sense is similar to the definition of politics--who gets what. If the researcher decides who gets what, then manipulation occurred. In the example, the researcher randomly assigned students to one of two groups, so the researcher manipulated who would receive which treatment, cooperative learning or lecture.
The third requirement, well, more of a characteristic than a requirement, is that groups are compared. In most experiments, there will be at least two groups, perhaps more, which will be compared on some outcome of interest, some dependent variable. In the example, the two groups are cooperative learning and lecture, and they will be compared on performance on the final achievement test.
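The comparison step can be sketched in a few lines of Python. The score lists are invented, and Welch's t statistic is just one reasonable way to compare two group means; this is an illustrative sketch, not a full analysis.

```python
import math
from statistics import mean, variance

# Hypothetical final-exam scores for the two classrooms
cooperative = [78, 85, 92, 88, 75, 90, 84, 79, 91, 86]
lecture     = [72, 80, 77, 85, 70, 74, 81, 76, 79, 73]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = variance(a), variance(b)          # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of the difference
    return (mean(a) - mean(b)) / se

print(f"cooperative mean = {mean(cooperative):.1f}")
print(f"lecture mean     = {mean(lecture):.1f}")
print(f"Welch t          = {welch_t(cooperative, lecture):.2f}")
```

A positive t here would indicate the cooperative-learning group scored higher on average; whether the difference is statistically significant would require the corresponding degrees of freedom and a reference distribution.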
Quasi-experimental research is just like true experimental research, with the only difference being the lack of randomly formed groups. Of the two types of experimental research, quasi-experimental is most commonly used in education. It is difficult to find schools that will allow a researcher to select students from classes and assign them randomly to other classes. So, in most educational research situations, intact classes are used for the experiment. When intact classes or groups are used, but manipulation is present--the researcher determines which group receives which treatment--then quasi-experimentation results. For example, a researcher uses his two classes for an experiment. He randomly assigns the treatments to the classes: cooperative learning to class B and lecture to class A. Following the treatment, an instrument is administered to all participants to learn whether the treatments resulted in differences between the two classes. Note that in this example the groups were not randomly formed, but the treatment was manipulated and groups were compared, so quasi-experimentation resulted.
Non-experimental Quantitative Research: Ex Post Facto and Correlational
Both true and quasi-experimental research are distinguished by one common characteristic: manipulation. No other type of research involves manipulation of the independent variable. Two other forms of quantitative research, which are not experimental due to lack of manipulation, are ex post facto (sometimes called causal-comparative) and correlational. Often both of these types are grouped into what researchers call non-experimental research or simply correlational research. Thus, correlational research can be understood to include both of the types discussed below: ex post facto and correlational. For our purposes, we will make a distinction between these two types.
Ex Post Facto (Causal-Comparative)
Ex post facto research looks like an experiment because groups are compared; there is, however, a key difference--no manipulation of the independent variable. With ex post facto research, the difference between groups on the independent variable occurs independent of the researcher. For example, suppose a researcher contacts a school's principal and asks for two teachers, one who uses cooperative learning and one who uses lecture. The researcher's goal is to compare students' scores on a test to determine which method produces better achievement. This is very similar to the example given above for experimental research, but the key difference is that the researcher did not manipulate the independent variable. The researcher did not determine which class, or which teacher, would use cooperative learning or lecture. Rather, the researcher asked which teachers use which instructional strategy, and then selected the groups for comparisons.
Another example of ex post facto research is the analysis of differences in any quantitative outcome by sex (male vs. female). For example, if one is interested in learning whether differences exist between males and females in ITBS scores, that is an ex post facto study, since the independent variable cannot be manipulated and since there are group comparisons.
So the keys to an ex post facto study are group comparisons and a non-manipulated independent variable. Groups may be randomly formed in ex post facto research, such as through random sampling of males and females, but randomly formed groups alone do not make the study a true experiment, since the independent variable is still not manipulated.
A correlational study is the examination of relationships among two or more quantitative variables. Both the independent and dependent variables will be quantitative. It is possible to have multiple independent variables and possibly multiple dependent variables. For example, I wish to know which of the following independent variables (High School GPA, SAT scores, HS Rank) predict the following dependent variables (GRE mathematics, GRE verbal, college GPA).
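As a minimal sketch of the predictive side of such a study, the snippet below computes Pearson's r between each predictor and one dependent variable. All of the values are fabricated for illustration, and r is computed directly from its definition using only the standard library.

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient, straight from the definition."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Fabricated records: (HS GPA, SAT, college GPA) for eight students
hs_gpa  = [3.2, 3.8, 2.9, 3.5, 3.9, 2.7, 3.4, 3.6]
sat     = [1100, 1350, 980, 1200, 1420, 950, 1180, 1250]
col_gpa = [3.0, 3.7, 2.8, 3.3, 3.8, 2.6, 3.2, 3.4]

for name, predictor in [("HS GPA", hs_gpa), ("SAT", sat)]:
    print(f"r({name}, college GPA) = {pearson_r(predictor, col_gpa):.2f}")
```

In a real predictive study the predictors would be combined in a multiple regression; the bivariate correlations shown here are just the simplest building block.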
Sometimes a distinction is made between types of correlational studies. A predictive study is done simply to learn which, among a set of variables, best predicts the dependent variable. The goal here is simply to maximize prediction. A second type is the relationship study. With relationship studies, the goal is to understand, as well as possible, the variables that are theoretically related to a dependent variable. With this type of study, researchers are interested in testing and confirming theories or hypotheses concerning relationships among variables.
Matrix of Distinguishing Characteristics Among Quantitative Research Methods
The key differences among the four types of quantitative studies are outlined in the matrix below. Understanding this matrix will assist you in determining which methods are used in most quantitative research.

Type of study        Randomly formed groups          Manipulation of IV   Group comparisons
True experimental    Yes                             Yes                  Yes
Quasi-experimental   No (intact groups)              Yes                  Yes
Ex post facto        Possible (e.g. random sampling) No                   Yes
Correlational        No groups formed                No                   No; relationships among quantitative variables
Ex Post Facto Research

Introduction

When translated literally, ex post facto means 'from what is done afterwards'. In the context of social and educational research the phrase means 'after the fact' or 'retrospectively' and refers to those studies which investigate possible cause-and-effect relationships by observing an existing condition or state of affairs and searching back in time for plausible causal factors. In effect, researchers ask themselves what factors seem to be associated with certain occurrences, or conditions, or aspects of behaviour. Ex post facto research, then, is a method of teasing out possible antecedents of events that have happened and cannot, therefore, be engineered or manipulated by the investigator.

The following example will illustrate the basic idea. Imagine a situation in which there has been a dramatic increase in the number of fatal road accidents in a particular locality. An expert is called in to investigate. Naturally, there is no way in which she can study the actual accidents because they have happened; nor can she turn to technology for a video replay of the incidents. What she can do, however, is attempt a reconstruction by studying the statistics, examining the accident spots, and taking note of the statements given by victims and witnesses. In this way the expert will be in a position to identify possible determinants of the accidents. These may include excessive speed, poor road conditions, careless driving, frustration, inefficient vehicles, the effects of drugs or alcohol and so on. On the basis of her examination, she can formulate hypotheses as to the likely causes and submit them to the appropriate authority in the form of recommendations. These may include improving road conditions, or lowering the speed limit, or increasing police surveillance, for instance. The point of interest to us is that in identifying the causes retrospectively, the expert adopts an ex post facto perspective.
Kerlinger (1970) has defined ex post facto research more formally as that in which the independent variable or variables have already occurred and in which the researcher starts with the observation of a dependent variable or variables. The researcher then studies the independent variable or variables in retrospect for their possible relationship to, and effects on, the dependent variable or variables. The researcher is thus examining retrospectively the effects of a naturally occurring event on a subsequent outcome with a view to establishing a causal link between them. Interestingly, some instances of ex post facto designs correspond to experimental research in reverse: instead of taking groups that are equivalent and subjecting them to different treatments so as to bring about differences in the dependent variables to be measured, an ex post facto study begins with groups that are already different in some respect and searches in retrospect for the factor that brought about the difference. Indeed, Spector (1993:42) suggests that ex post facto research is a procedure intended to transform a non-experimental research design into a pseudo-experimental form. Two kinds of design may be identified in ex post facto research: the co-relational study and the criterion group study. The former is sometimes termed 'causal research' and the latter, 'causal-comparative research'.

(Source: Cohen, L., Manion, L., & Morrison, K. Research Methods in Education. RoutledgeFalmer, London, 2000, p. 205.)

Ex Post Facto Research Method

"Ex post facto means formulated, enacted, or operating retroactively. The ex post facto research method refers to those studies which investigate possible cause-and-effect relationships by observing an existing condition or state of affairs and looking back in time for plausible causal factors. One would not be able to study the actual events because they have already happened; the researcher instead relies on statistics and other indirect means to reconstruct them. Go to http://www.questiaschool.com/read/103778525?title=11%3A%20Ex%20Post%20Facto%20Research for further detailed information."

Ex Post Facto (Causal-Comparative) Research
• Explores possible causes and effects
• The independent variable is not manipulated; it has already been applied
• Focuses first on the effect, then attempts to determine what caused the observed effect

Characteristics of Ex Post Facto Research
• There is a control or comparison group
• Intact groups are used
• The treatment is not manipulated; it has already occurred

EX POST FACTO - Term used to designate action taken to change the effect given to a set of circumstances. This action relates back to a prior time and places this new effect upon the same set of circumstances existing at that time.
The popularity of experimentation in marketing research has much to do with the possibility of establishing cause and effect. Experiments can be configured in such a way as to allow the variable causing a particular effect to be isolated. Other methods commonly used in marketing research, like surveys, provide much more ambiguous findings. In fact, experimentation is the most scientific method employed in marketing research.

Chapter Objectives
Having read this chapter the reader should:
· Understand the basic concepts of experimentation: experimental design, treatments, confounding factors and extraneous causal factors
· Be familiar with the different bases for inferring causal relationships
· Recognise the principal impediments to valid experimental results, and
· Be familiar with the main forms which experimental designs can take.

Structure Of The Chapter
A definition of experiments is given at the outset of the chapter and then there follows a brief outline of the basic concepts of experimentation: dependency, causality and inference. The discussion then moves to the impediments to valid results in experimentation. A distinction is drawn between internal and external validity. The final section of the chapter gives an account of the main experimental designs used in marketing research.

A definition of experiments
An experiment involves the creation of a contrived situation in order that the researcher can manipulate one or more variables whilst controlling all of the others and measuring the resultant effects. For instance, when United Fruits were considering replacing their Gros Michel variety of banana with the Valery variety, a simple experiment was first carried out. In selected retail outlets, the two varieties were switched on different days of the week and sales data examined to determine what effect the variety had on sales volumes. That is, the variety was being manipulated whilst all other variables were held constant. United Fruits found that the switch back and forth between Gros Michel and Valery had no effect upon sales, and were therefore able to replace Gros Michel with Valery.
Boyd and Westfall 1 have defined experimentation as:

"...that research process in which one or more variables are manipulated under conditions which permit the collection of data which show the effects, if any, in unconfused fashion."
Experiments can be conducted either in the field or in a laboratory setting. When operating within a laboratory environment, the researcher has direct control over most, if not all, of the variables that could impact upon the outcome of the experiment. For example, an agricultural research station may wish to compare the acceptability of a new variety of maize. Since the taste characteristics are likely to have a major influence on the level of acceptance, blind taste panels might be set up where volunteers are given small portions of maize porridge in unmarked bowls. The participants would perhaps be given two porridge samples, and the researcher would observe whether they were able to distinguish between the maize varieties and which they preferred. In addition to taste testing, laboratory experiments are widely used by marketing researchers in concept testing, package testing, advertising research and test marketing.
Figure 6.1 Types of experiment used in marketing research
When experiments are conducted within a natural setting then they are termed field experiments. The variety test carried out by United Fruits on their Gros Michel and Valery bananas is an example of a field experiment. The researcher obviously has less control over variables likely to have an effect upon the experimental variable but will strive to exert whatever control is possible.

Basic concepts in experimentation
Dependency: Experiments allow marketing researchers to study the effects of an independent variable on a dependent variable. The researcher is able to manipulate the independent variable (i.e. he/she is able to change the value of the independent variable) and observe what effect, if any, this has upon the value of the dependent variable. Put another way, an independent variable is one which can be manipulated independently of other variables. Independent variables are selected for inclusion in an experiment on the basis of an assumption that they are in some way related to the dependent variable being studied. It is for this reason that independent variables are on occasion referred to as explanatory variables. The dependent variable is the one under study. The researcher begins from the premise that changes in the value of the dependent variable are at least in part caused by changes in the independent variable. The experiment is designed to determine whether or not this cause and effect relationship actually exists.
Causality: A causal relationship is said to exist where the value of one variable is known to determine or influence the value of another. Green et al. 3 draw a distinction between two types of causation: deterministic and probabilistic.
Where the independent variable (X) wholly explains changes in the value of the dependent variable (Y) and the researcher is able to establish the functional relationship between the two variables, then this can be expressed as follows:

Y = f(X)
In this case, it is said that X is both a necessary and a sufficient condition for Y to occur. The value of Y is determined by X, and X alone. Thus it can be said, in these circumstances, that X is a deterministic cause of Y. An illustrative example would be where the demand for agricultural commodities, say sugar, is dependent upon the world price. Further suppose that the functional relationship between sugar demand and world prices is known, then the formula becomes:
Changes in demand for sugar (grade No. 6) = f(World Price)
Whilst this example serves to illustrate the point it is rare to find such relationships when studying marketing problems. In most instances, the value of the dependent variable will be a function of several variables. For instance, only in exceptional cases would the demand for a product, even a commodity, depend solely upon price movements. Factors such as the reputation of the supplier, terms of sale, promotional activities, packaging etc. are likely to have an impact on demand as well. A more common causal model is one where the value of the dependent variable is a function of several independent variables.
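The contrast between a deterministic and a probabilistic cause can be illustrated with a toy simulation. The coefficients and the disturbance term below are invented purely for illustration.

```python
import random

random.seed(0)

def deterministic_y(x):
    # Deterministic causation: x alone fixes y exactly
    return 2.0 * x

def probabilistic_y(x1, x2):
    # Probabilistic causation: y depends on x1, x2 AND an unobserved
    # disturbance, so x1 is necessary but not sufficient to determine y
    return 2.0 * x1 + 3.0 * x2 + random.gauss(0, 1)

# Same x gives the same y every time in the deterministic case
assert deterministic_y(5) == deterministic_y(5)

# Identical inputs give different y on each draw in the probabilistic case
ys = {round(probabilistic_y(5, 1), 6) for _ in range(10)}
print(f"distinct y values from identical inputs: {len(ys)}")
```

The set of distinct y values is what an experimenter confronts in practice: repeated observations at the same settings of the independent variables still scatter, which is why marketing relationships are modelled probabilistically.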
Marketing problems are more often multivariate than univariate and so the relationship between dependent and independent variables is more often probabilistic than deterministic. A probabilistic relationship could be expressed as:

y = f(x1, x2, ..., xn)
What is depicted here is a situation where the dependent variable (y) is a function of several variables (x1, x2, ..., xn). If marketing research can establish the form of the relationship (f) among the independent variables and also between the independent and dependent variables, then the value of y can be predicted. In this instance x1, for example, is a necessary but not sufficient condition for y to occur. The same is true of each of the other independent variables. Rather, each individual independent variable is said to be a probabilistic cause of the value of y.

Inferring causal relationships
The evidence for drawing inferences about causal relationships can take three forms: associative variation, consistent ordering of events and the absence of alternative causes.
Causality cannot be established unless there is associative, or concomitant, variation. That is, the data must show that a change in one variable is almost always accompanied by a change in the other.
experimental setting elicits responses that differ from those which would be obtained in the real world.

Internal validity
History: The term 'history' has been used to describe events that happen whilst the experiment is underway and serve to distort experimental results. A common occurrence is when a commercial organisation is testing a new product within a small geographical area, prior to launching the product nationwide, and competitors intentionally set out to distort test results by giving additional promotional support to their own competing product and/or by cutting the price of their product.
Pretest effect: It is sometimes considered necessary to take some preliminary measures before the main experiment is carried out. For instance, a company wishing to promote monogerm sugar beet seed in Pakistan wanted to first establish how much farmers already knew about the different types of seed available. A particular district was chosen as a test area and a pretest was undertaken where a sample of farmers from that area were asked to list the types of seed of which they were aware. The farmers were also asked to list the brands of sugar beet seed with which they were familiar. This constituted the 'before' measure. A little later a promotional campaign was launched within the test area and after a period of time the sample of farmers were again visited and asked to identify the brands of seed with which they were familiar. It is likely that any increase in awareness of the company's brand was due, in part at least, to the heightened awareness of issues relating to seeds caused by the pretest activity. That is, the pretest is likely to increase interest in matters relating to seeds and therefore make farmers more attentive to the brand promotion than they otherwise might be.
Maturation: Maturation refers to biological and/or psychological changes to respondents that occur in the period between the 'before' and 'after' measurements and consequently affect the information which they provide. Experiments requiring the cooperation of respondents over a substantial period of time are most likely to suffer from maturation effects. Consumer and farmer panels are examples of experimental instruments that demand longer term participation by panel members. Suppose that a farmer panel were established to measure the level of adoption of new marketing practices or technologies promoted by agricultural extension officers. As the years pass, the marketing extension officer notes that farmers on the panel appear to be adopting fewer of the innovations being proposed by the extension service. However, the lower rates of adoption may be explained neither by the marketing extension service becoming less effective in communicating the benefits of innovative marketing practices and technologies, nor by current innovations being somehow less appropriate or offering more marginal benefits. Rather, the explanation may be that the panel itself is aging, and as farmers get older they may become more resistant to change. Certainly as people get older their needs and attitudes are subject to change. In these circumstances the data drawn from the panel are a function of the maturation of the panel rather than the experimental variables (i.e. the efforts of the marketing extension officers and the characteristics of the marketing innovations).
Whilst it is not always possible to adjust the experimental design so as to eliminate each of these potential threats to the validity of results, it is always possible to measure their impact upon results. The chief device for doing so is to include a 'control group'.
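One way to see what a control group buys is the difference-in-differences arithmetic sketched below; all four mean scores are invented for illustration.

```python
# Hypothetical mean outcome scores, before and after the treatment period
treat_before, treat_after = 40.0, 55.0   # group exposed to the treatment
ctrl_before,  ctrl_after  = 41.0, 48.0   # control group, no treatment

# The control group's change estimates history and maturation effects --
# roughly what would have happened to the treated group anyway.
background_change = ctrl_after - ctrl_before    # 7.0
raw_change        = treat_after - treat_before  # 15.0

# Difference-in-differences: the treatment effect net of background change
treatment_effect = raw_change - background_change
print(f"estimated treatment effect: {treatment_effect}")  # 8.0
```

Without the control group, the naive estimate would be the raw change of 15.0; the control group reveals that nearly half of that change would likely have occurred regardless of the treatment.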
Instrumentation: From time to time, measurement instruments have to be recalibrated or their readings become suspect. Marketing research does make use of a wide range of mechanical, electrical and electronic instruments that clearly require periodic readjustment (e.g. tachistoscopes, pupilometers, audiometers), but other, more commonly used marketing research test instruments also need to be checked for consistency, such as questionnaires, interviewers and interviewing procedures.
Questionnaires may contain standardised questions with the challenge to consistency coming from the interpretation of the meaning of the question. Consider the apparently straight-forward question, "How big is your farm?" There are several equally valid responses to this question that could combine to give a totally misleading set of data. The variation is due to farmers' interpretation of what the researcher really wants to know. Some farmers will include only the land area that they had under crop in the year of the survey whilst others will include both productive and nonproductive land. In other instances, farmers may understand the question to mean the area of land they actually own. Some farmers may believe it is the farmland that they own and/or rent.
Another aspect is that of consistency in the conduct of interviews. There can be variation in the data collected during an experiment if either different interviewers are used to collect data after the experiment from those who conducted interviews before the experiment; or interviewers change the way questions are put to participants as they become more familiar with the content of the questionnaire.
Mortality: Over time there is a danger that some participants will drop out of an experiment. This can happen when people literally die or decide to withdraw from an experimental group for one reason or another. This obviously changes the composition of the experimental group. Where the effects of a marketing variable are being studied by comparing data drawn either from two groups that have been matched to ensure that their composition is identical, or from the same group at different points in time, then mortality can confound the results.
Sampling bias: Sampling bias occurs when the method of assigning participants to experimental groups results in groups whose behaviour cannot be compared to one another because they differ in some important respect(s). Consider the task of evaluating the implementation of new weighing and grading practices within a municipal grain market. It could be that it is easier for larger grain traders to adopt the new practices since they are better able to afford the grading and weighing equipment required. If, during a field experiment conducted to study the rate of adoption, two groups are established with a view to comparing the rate of adoption within them, and one of those groups is predominantly comprised of larger (or smaller) traders, then this is likely to distort the results.

External validity
Interactive effects of testing: The design of the experiment itself may give rise to measurement variations between the "before" and "after" phases of the research. Consider a test of consumer acceptance involving two exotic rice varieties being evaluated as possible replacements for a popular indigenous variety which is suffering from a disease and is therefore in short supply. The experimental design involves leaving a trial pack of rice A with a sample of households and returning a few weeks later to interview members of the household about rice A and to deliver a second trial pack containing rice B. A third visit is subsequently made during which household members are asked questions about rice B. Respondents' assessment of rice B is not made under the same conditions as their assessment of rice A. When trying rice A the respondents are likely to have made comparisons, perhaps only subconsciously, with existing rice varieties that they already use. However, when evaluating rice B the respondents will also be making comparisons with rice A. This problem can be overcome, to some extent, by splitting the sample so that half are given the trial varieties in the order rice A then rice B; the remaining half are given the two varieties in the reverse order of rice B then rice A. A more difficult problem to overcome is that whatever the sequence of presentation, by the time household members are asked about the second trial variety, they have become more 'experienced' interviewees and respond differently simply because they feel they better understand what the interviewer wants and how to answer the questions. By the same measure, the interviewer becomes more experienced the second time around, having become more familiar with the product, the interviewing process, and the questionnaire (or interview schedule), and may pose the questions in a different way. 
As a result, the interviewer may elicit different information on the third call from that which was obtained on the second visit.
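The split-sample counterbalancing described above can be sketched as follows; the household list is hypothetical.

```python
import random

random.seed(1)

# Hypothetical sample of participating households
households = [f"household_{i:02d}" for i in range(40)]
random.shuffle(households)

half = len(households) // 2
order_ab = households[:half]   # try rice A first, then rice B
order_ba = households[half:]   # try rice B first, then rice A

# Each presentation order is seen by half the sample, so order effects
# on the varieties themselves cancel when the two halves are pooled.
print(len(order_ab), len(order_ba))
```

Note that counterbalancing only addresses the order in which the varieties are tried; as the text points out, it cannot remove the effect of respondents and interviewers simply becoming more experienced by the second visit.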
Interactive effects of sampling bias: It can happen that participants are assigned to an experimental group without due concern for possible bias and this then interacts with the experimental treatment producing a spurious outcome. Such an interactive sampling bias would result from unknowingly assigning heavy users of a particular product category to one experimental group and using favourable responses to a new formulation within the category as the basis for projecting national demand.
Contrived situations: Any laboratory experiment is, by definition, unlike the real world. Typically, the researcher manipulates the situation so that only those variables in which he/she is immediately interested are allowed to operate as they would in the real world. On occasion this leads to experimental results which are not replicated in the real world. An outstanding example of this set of circumstances is that of Coca Cola's infamous blind taste panels. Coca Cola was concerned about the creeping increase in market share of its main competitor, Pepsi. Coca Cola decided to conduct sensory analysis tests where participants were asked to score two colas on taste preference. The participants were given the colas in unmarked cups (i.e. a 'blind' tasting) before being asked which they preferred. On balance, the preference was for Pepsi's slightly sweeter cola. Coca Cola reacted in a way seldom seen anywhere in the world. The brand leader was removed from the market and a new, slightly sweeter formulation was launched under the Coca Cola brand name. It was to prove a costly mistake. Coca Cola were inundated by calls from consumers who were irate over the company's tampering with a product that had almost become a national institution. Most Americans had grown up with Coca Cola and could not accept that it could be changed. The company was forced to reintroduce the original formulation under the title of Coke Classic.
Coca Cola's taste panels were conducted in an artificial environment in which such variables as the brand name, the packaging and all the associations which go along with these were not allowed to operate. The research focused only on the taste characteristics of the product and a particular result was obtained. However, in the real world people consume Coca Cola for many reasons, many of them having little to do with the taste.

Experimental designs
The process of experimentation is one of subjecting participants (e.g. target consumers, farmers, distributors etc.) to an independent variable such as an advertisement, a packaging design or a new product, and measuring the effect on a dependent variable (e.g. level of recall, sales or attitude scores).
After-only designs
As the name suggests, with after-only experimental designs measures of the dependent variable are taken only after the experimental subjects have been exposed to the independent variable. This is a common approach in advertising research where a sample of target customers is interviewed following exposure to an advertisement and their recall of the product, brand, or sales features is measured. The advertisement could be one appearing on national television and/or radio or may appear in magazines, newspapers or some other publication. The amount of information recalled by the sample is taken as an indication of the effectiveness of the advertisement.
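As a minimal sketch (all figures hypothetical, not taken from the text), the after-only measurement reduces to a single post-exposure statistic, with no baseline to compare it against:

```python
# After-only design: only a post-exposure measure is taken.
# All figures below are hypothetical, for illustration only.

def after_only_effect(post_exposure_recalls, sample_size):
    """Return the post-exposure recall rate.

    There is no pre-exposure baseline, so this number alone cannot
    say how much of the recall the advertisement itself produced.
    """
    return post_exposure_recalls / sample_size

# e.g. 120 of 300 interviewed customers recalled the brand after the campaign
rate = after_only_effect(120, 300)
print(f"Post-exposure recall rate: {rate:.0%}")  # 40%
```

The weakness the next paragraphs describe is visible in the code: nothing distinguishes recall created by the advertisement from recall that existed beforehand.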
Figure 6.2 An example of an after-only design
The chief problem with after-only designs is that they do not afford any control over extraneous factors that could have influenced the post-exposure measurements. For example, marketing extension personnel might have completed a trial campaign to persuade small-scale poultry producers, in a localised area, to make use of better quality feeds in order to improve the marketability and price of the end product. The decision to extend the campaign to other districts will depend on the results of this trial. After-only measures are taken, following the campaign, by checking poultry feed sales with merchants operating within the area. Suppose a rise in sales of good quality poultry feed mixes occurs four weeks after the campaign ends. It would be dangerous to assume that this sales increase is wholly due to the work of the marketing extension officers. A large part of the increase may be due to other factors such as promotional activity on the part of feed manufacturers and merchants who took advantage of the campaign, of which they were forewarned, and timed their marketing programme to coincide with the extension campaign. If the extension service erroneously drew the conclusion that the sales increase was entirely due to their own promotional activity, then they might be misled into repeating the same campaign in other areas where there would not necessarily be the same response from feed manufacturers and merchants.
After-only designs are not true experiments since little or no control is exercised over any of the variables by the researcher. However, their inclusion here serves to underline the need for more complex designs.
Before-after designs
A before-after design requires the researcher to measure the dependent variable both before and after the participants have been exposed to the independent variable.
The before-after design is an improvement upon the after-only design, in that the effect of the independent variable, if any, is established by observing differences between the value of the dependent variable before and after the experiment. Nonetheless, before-after designs still have a number of weaknesses.
Consider the case of the vegetable packer who is thinking about sending his/her produce to the wholesale market in more expensive, but more protective, plastic crates, instead of cardboard boxes. The packer is considering doing so in response to complaints from commissioning agents that the present packaging affords little protection to produce from handling damage. The packer wants to be sure that the economics of switching to plastic crates makes sense. Therefore, the packer introduces the plastic crates for a trial period. Before introducing these crates, the packer records the prices received for his/her top grade produce. Unless prices increase by more than the additional cost of plastic crates then there is no economic advantage to using the more expensive packaging.
Figure 6.3 Before-after designs
Suppose, for instance, that the packer was receiving $15 per crate, when these were of the cardboard type, but that the price after the introduction of plastic crates had risen to $17 per crate. The $2 difference would be attributed to better quality produce reaching the market as a result of the protection afforded by the plastic crates. However, there are several equally plausible explanations for the upward drift in produce prices, including a shortfall in supply, a fall in the quality of produce supplied by competitors who operate in areas suffering adverse weather conditions, random fluctuation in prices, etc.
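The packer's decision rule can be sketched in a few lines. The $1.50 extra cost per plastic crate is an assumed figure, not one given in the text:

```python
# Before-after design applied to the packer example.
# The extra cost of $1.50 per plastic crate is an assumed figure.

def before_after_effect(price_before, price_after):
    """Observed change in the dependent variable (price per crate)."""
    return price_after - price_before

def worth_switching(price_before, price_after, extra_cost_per_crate):
    """Switch only if the price rise exceeds the added packaging cost."""
    return before_after_effect(price_before, price_after) > extra_cost_per_crate

effect = before_after_effect(15.0, 17.0)
print(f"Apparent effect: ${effect:.2f} per crate")              # $2.00
print("Switch to plastic crates:", worth_switching(15.0, 17.0, 1.50))  # True

# Caution: as the text explains, the $2 difference may reflect supply
# shortfalls or other confounds rather than the crates themselves.
```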
"Before-after with control group" design
This design involves establishing two samples or groups of respondents: an experimental group that would be exposed to the marketing variable and a control group which would not be subjected to the marketing variable under study. The two groups would be matched. That is, the two samples would be identical in all important respects. The idea is that any confounding factors would impact equally on both groups and therefore any differences in the data drawn from the two groups can be attributed to the experimental variable.
Study Figure 6.4, which depicts how an experiment involving the measurement of the impact of a sugar beet seed promotional campaign on brand awareness might be configured with a control group.
Figure 6.4 An example of a before-after with control group design
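The computation underlying Figure 6.4 is a difference-in-differences: the control group's change estimates what would have happened anyway and is netted out. The awareness percentages below are assumed for illustration, chosen only to match the changes described in the text (a one-point fall for the control group and a nine-point rise for the experimental group):

```python
# Before-after with control group: the control group's change estimates
# the effect of maturation, pretesting, etc., and is subtracted out.
# The awareness percentages are illustrative, not from the text.

def treatment_effect(exp_before, exp_after, ctrl_before, ctrl_after):
    """Difference-in-differences estimate of the experimental effect."""
    experimental_change = exp_after - exp_before   # treatment plus confounds
    control_change = ctrl_after - ctrl_before      # confounds only
    return experimental_change - control_change

effect = treatment_effect(exp_before=20, exp_after=29,    # +9 points
                          ctrl_before=20, ctrl_after=19)  # -1 point
print(f"Estimated campaign effect: {effect} percentage points")  # 10
```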
If a "before and after with control group" experiment is properly designed and executed then the effects of maturation, pretesting and measurement variability should be the same for the experimental group as for the control group. In this case. these factors appear to have had a negative effect on awareness of one percent. Had it not been for the experimental variable, the experimental group would have shown a similar fall in awareness over the period of the test. Instead of recording a fall in the level of awareness of the sugar beet brand, the experimental group actually showed a nine percent increase in brand awareness. However, the design is not guaranteed to be unflawed. The accurate matching of the two groups is a difficult, some would say impossible, task. Moreover, over time the rate and extent of mortality, or drop out, is likely to vary between the groups and create additional problems in maintaining a close match between groups.The "After-only with control group" experimental design
Again, this design involves establishing two matched samples or groups of respondents. There is no measurement taken from either group before the experimental variable is introduced and the control group is not subsequently subjected to the experimental variable. Afterwards, measures are taken from both groups and the effect of the experimental variable is established by deducting the control group measure from the experimental group measure. An illustrative example will help clarify the procedures followed.
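The deduction just described can be sketched as a simple difference of post-exposure group means; the attitude scores below are hypothetical:

```python
# After-only with control group: the effect is the experimental group's
# post-exposure measure minus the control group's post-exposure measure.
# All scores are hypothetical.

def after_only_control_effect(experimental_scores, control_scores):
    """Estimated effect: difference of post-exposure group means."""
    def mean(xs):
        return sum(xs) / len(xs)
    return mean(experimental_scores) - mean(control_scores)

exposed = [7, 8, 6, 9, 7]      # post-exposure attitude scores, experimental group
not_exposed = [6, 7, 6, 7, 6]  # post-exposure attitude scores, control group
print(f"Estimated effect: {after_only_control_effect(exposed, not_exposed):.1f}")
```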
A Sri Lankan food technology research institute was trying to convince small-scale food processors to adopt solar dryers to produce dried plantain and other dehydrated vegetables. Much of the initial resistance to the adoption of this technology was due to the belief that the taste characteristics of this snack food would be altered from those of traditional sun-dried plantain. The research institute was able to convince the food manufacturers that there would be no perceptible changes in the taste characteristics by carrying out an "after-only with control group" experiment. Sensory analysis experiments conclusively showed that almost none of the participants was able to discriminate between plantain dehydrated by means of the solar powered dryer and that which was sun-dried.
Many product tests are of the "after-only with control group" type. This design escapes the problems of pretesting, history and maturation. However, this form of "after-only design" does not facilitate an analysis of the process of change, whereas a comparable "before-after design" would: there, the attitudes, opinions and/or behaviour of individual participants can be recorded both before and afterwards and changes noted. For instance, the post-exposure attitudes of participants who held unfavourable attitudes in the "before" measurement can be compared with those they originally held. Changes among those who held favourable attitudes in the "before" measurement can likewise be assessed after exposure to the experimental variable.
Ex post facto design
The ex post facto design is a variation of the "after-only with control group" experimental design. The chief difference is that both the experimental and control groups are selected after the experimental variable is introduced rather than before. This approach eliminates the possibility that participants will be influenced by an awareness that they are being tested.
Following market liberalisation in Zimbabwe a number of maize meal producers, using hammer mill technology, came into the industry to compete against millers using roller mill technology. The hammer milled product was much coarser than the highly refined roller milled maize meal to which most urban consumers had grown accustomed. The hammer milled product, however, had superior nutritional benefits since meal produced in this way retained a much larger amount of the germ, bran and endosperm. One production miller sought to communicate the nutritional advantages of hammer milled meal through point-of-sale material in stores and provisions merchants. A sample of consumers who claimed to have seen the point-of-sale material was subsequently assigned to an experimental group and a matching selection of consumers who denied having seen the point-of-sale material comprised the control group. It was hypothesised that those who had seen the point-of-sale material would suggest that hammer milled maize meal had superior nutritional properties to that of roller meal to a far greater extent than would those who had not seen the point-of-sale aids.
The results supported the hypothesis inasmuch as 68 percent of those recalling having seen the point-of-sale promotional aids reported hammer milled meal as nutritionally superior whilst only 43 percent of those unaware of the point-of-sale aids said that hammer milled meal was more nutritious than roller meal. However, some care has to be taken in concluding that the point-of-sale campaign was an unqualified success. It is to be remembered that participants were assigned to the two groups on the basis of self-selection. Those reporting having seen the promotional material were probably those on whom the campaign had made most impression. It is quite likely that some of those in the control group also saw the material but do not recall having done so.
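For readers who want to check whether the reported 68 percent versus 43 percent gap is larger than sampling variation alone would produce, a two-proportion z-test can be sketched as below. The group sizes of 200 are assumed (the text reports only percentages), and note that a significant z statistic would not remove the self-selection bias just described:

```python
import math

# Two-proportion z-test for the ex post facto comparison.
# Group sizes of 200 each are assumed; the text gives only percentages.

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two sample proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / standard_error

z = two_proportion_z(0.68, 200, 0.43, 200)
print(f"z = {z:.2f}")  # well above 1.96, so significant at the 5% level
```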
Where exposure to the experimental variable can be determined objectively, on an ex post facto basis, the bias introduced by self-selection can be eliminated and the design, in essence, becomes identical to the "after-only with control group" design. In these circumstances, the ex post facto design is an improvement upon the "after-only with control group" design since the experimental variable would have its impact in a natural situation. Suppose, for example, that government has been using radio to communicate the benefits of giving vitamin supplements to children under two years of age and that these are available in tablet form, free-of-charge, in local clinics. Ownership of, and access to, a radio can be established objectively.
Chapter Summary
Experimentation offers the possibility of establishing a cause-and-effect relationship between variables and this makes it an attractive methodology to marketing researchers. An experiment is a contrived situation that allows a researcher to manipulate one or more variables whilst controlling all of the others and measuring the resultant effects on some dependent variable.
Experiments are of two types: those conducted in a laboratory setting and those executed in natural settings; the latter are referred to as field experiments. Laboratory experiments give the researcher direct control over most, if not all, of the variables that could affect the outcome of the experiment. The evidence for drawing inferences about causal relationships takes three forms: associative variation, consistent ordering of events and the absence of alternative causes.
There are a number of potential impediments to obtaining valid results from experiments. These may be categorised according to whether a given confounding factor threatens internal validity, external validity, or both. Internal validity is called into question when there is doubt that the experimental treatment is actually responsible for changes in the value of the dependent variable. External validity becomes an issue when there is uncertainty as to whether experimental findings can be generalised to a defined population. The impediments to internal validity are history, pre-testing, maturation, instrumentation, sampling bias and mortality. Impediments to external validity are the interactive effects of testing, the interactive effects of sampling bias and errors arising from making use of contrived situations.
The main forms of experimental design differ according to whether or not a measure is taken both before and after the introduction of the experimental variable or treatment, and whether or not a control group is used alongside the experimental group. The designs are: after-only, before-after, before-after with control group, after-only with control group and ex post facto designs.
Key Terms
Ex post facto measures
Review Questions
1. Give the alternative name for 'the independent variable'.
2. Name four threats to the internal validity of experimental results.
3. What is the main device for controlling the effects of maturation in experimental groups?
4. In what way does the ex post facto experimental design differ from the "after-only with control group" design?
5. Define the term 'deterministic causation'.
6. What is meant by the term 'external validity'?
7. What are the three conditions necessary in order to be able to infer causation?
8. Why is it said that after-only designs are not true experiments?