more complicated when they use the Taguchi Method to analyze the results of multiple tests.)
The simplicity of these studies in both collecting the data and analyzing the results is also why it's so much easier to explain the results to people who don't like thinking about things like heteroskedasticity and BLUE estimators. The fancier statistical regressions are much harder for non-statisticians to understand and trust. In effect, the statistician at some point has to say, "Trust me. I did the multivariate regression correctly." It's easier to trust a randomized experiment. The audience still has to trust that the researcher flipped the coin correctly, but that's basically it. If the only difference between the two groups is how they're treated, then it's pretty clear that the treatment is the only thing that can be causing a difference in outcome.
Randomization also frees the researcher to take control of the questions being asked and to create the information that she wants. Data mining on the historic record is limited by what people have actually done. Historic data can't tell you whether teaching statistics in junior high school increases math scores if no junior high has ever offered this subject. Super Crunchers who run randomized experiments, however, can create information to answer this question by randomly assigning some students to take the class (and seeing if they do better than those who were not chosen).
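The logic of "create the information by flipping a coin" can be sketched in a few lines of code. This is a hypothetical simulation, not data from the book: the effect size, score distribution, and sample sizes are all assumptions made up for illustration. Half the students are randomly assigned to the statistics class, and the difference in mean scores between the two groups estimates the class's effect.

```python
import random

random.seed(0)

# Hypothetical illustration (not the book's data): randomly assign
# students to a junior-high statistics class, then compare math scores.
TRUE_EFFECT = 5.0  # assumed score boost from taking the class

students = list(range(200))
random.shuffle(students)          # the "coin flip": random assignment
treated = set(students[:100])     # half take the class

def score(student_id):
    base = random.gauss(70, 10)   # simulated baseline math score
    return base + (TRUE_EFFECT if student_id in treated else 0.0)

scores = {s: score(s) for s in students}
control = [s for s in students if s not in treated]

treat_mean = sum(scores[s] for s in treated) / len(treated)
control_mean = sum(scores[s] for s in control) / len(control)

print(f"treatment mean:   {treat_mean:.1f}")
print(f"control mean:     {control_mean:.1f}")
print(f"estimated effect: {treat_mean - control_mean:.1f}")
```

Because assignment is random, nothing else systematically differs between the groups, so the difference in means is an unbiased estimate of the effect; no regression controls are needed.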
Firms historically have been willing to create more qualitative data. Focus groups are one way to explore what the average "man on the street" thinks about a new or old product. But the marketer of the future will not just adopt the social science methods of multivariate regression and the mining of historical databases; she will also start exploiting the randomized trials of science.
Businesses realize that information has value. Your databases not only help you make better decisions; database information is also a commodity that can be sold to others. So it's natural that firms are keeping better track of what they and their customers are doing. But firms should more proactively figure out what pieces of information are missing and take action to fill the gaps in their data. And there's no better way to create information about what causes what than via randomized testing.
Why aren't more firms doing this already? Of course, it may be because traditional experts are just defending their turf. They don't want to have to put their pet policies to a definitive test, because they don't want to take the chance that they might fail. But in part, the relative foot-dragging may have to do with timing. Randomized trials require firms to hypothesize in advance before the test starts. My editor and I had a devil of a time deciding what book titles we really wanted to test. Running regressions, in contrast, lets the researcher sit back and decide what to test after the fact. Randomizers need to take more initiative than people who run after-the-fact regressions, and this difference might explain the slower diffusion of randomized trials in corporate America.
As we move to a world in which quantitative information is increasingly a commodity to be hoarded, or bought, or sold, we will increasingly find firms that start to run randomized trials on advertisements, on price, on product attributes, on employment policies. Of course not all decisions can be pre-tested. Some decisions are all-or-nothing, like the first moon launch, or whether to invest $100 million in a new technology. Still, for many, many different decisions, powerful new information on the wellsprings of human action is just waiting to be created.
This book is about the leakage of social science methods from the academy to the world of on-the-ground decision making. Usually and unsurprisingly, business has been way ahead of government in picking up useful technologies. And the same is