
Burning Down the House

Nowhere is the challenge of getting people to understand how to use data better illustrated than in the methodology wars being fought in the discipline of Psychology. If you haven’t heard of the methodology wars, be assured that the battlefields – studies in psychological research – are being fought over like blocks of Stalingrad; and, like that famous battle, not much is left standing in the aftermath.

I’m not sure exactly how the methodology wars started. Somehow, somewhere, someone decided to actually re-test a “classic” study in psychology. A study that’s been accepted into the core of the discipline – that established somebody’s reputation, made somebody a career. Only it didn’t replicate. They re-did the experiment as carefully as they could and it didn’t show the same result. Didn’t, usually, show any result at all. Increase the sample size to fix the problem and the signal becomes even clearer. Alas, the signal always seems to be that there is no signal.

Pretty soon people started calling into question nearly every Psych study done over the last fifty years and testing them. And many – and I mean many – have failed.

Slate’s article (and it’s really good – giving a great overview of the issue – so give it a read) recounts the latest block to burn down in psychology’s methodology wars. The research in question centered on the idea that our facial states feed back into our emotions. If we smile (even inadvertently), we will feel happier.

It’s an interesting idea – intuitively plausible – and apparently widely supported by a huge variety of studies in the field. It’s an idea which strikes me as perfectly reasonable and in which I have zero vested interest one way or another.

But when it was submitted to rigorous simultaneous validation in a number of different labs, it failed. Completely.

The original test involved 32 participants and a difference in average subjective scores between the two groups of 4.4 versus 5.5. That means each group had sixteen participants. The improved second test had 92 total participants and showed a scoring difference of 4.3 versus 5.1.
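
For a sense of just how little sixteen participants per group buys you, here’s a back-of-the-envelope power calculation (a sketch in Python with statsmodels; the two-sample t-test framing and the conventional 0.05 significance / 80% power thresholds are my assumptions, not details from the study):

  # What effect size can a two-group study with 16 participants per
  # group reliably detect? Assumes a two-sided, two-sample t-test at
  # alpha = 0.05 and 80% power -- illustrative conventions only.
  from statsmodels.stats.power import TTestIndPower

  detectable_d = TTestIndPower().solve_power(
      nobs1=16, alpha=0.05, power=0.8, ratio=1.0, alternative='two-sided')
  print(f"Smallest reliably detectable effect (Cohen's d): {detectable_d:.2f}")
  # Prints roughly d = 1.0 -- the groups would need to differ by a full
  # standard deviation, an enormous effect by behavioral-science standards.

Anything smaller than that is likely to be missed entirely or to show up only as an inflated fluke.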

That’s a pretty small sample.

Especially for something that became received wisdom. A classic.

So how would people react if it turned out to be wrong?

Well, the Slate article answers that question pretty definitively. Because when the multi-lab tests came back, here’s what happened. Seventeen different labs replicated the experiment with nearly 2000 subjects. In half the participating labs, participants who smiled recorded a slightly higher average on the resulting happiness test (though the difference was much smaller than in the original experiment). In the other half, it went the other way.

Net, net, there was no correlation at all. Zero.

Okay, so far you have just another sad story of a small sample size failure.

That’s not what really attracted my attention. Nope. What really made me laugh in utter disbelief was the comment of the “scientist” who had done the original research. Here it is, and I quote in full lest you think I’m about to exaggerate:

“Fritz Strack has no regrets about the RRR, but then again, he doesn’t take its findings all that seriously. “I don’t see what we’ve learned,” he said.

Two years ago, while the replication of his work was underway, Strack wrote a takedown of the skeptics’ project with the social psychologist Wolfgang Stroebe. Their piece, called “The Alleged Crisis and the Illusion of Exact Replication,” argued that efforts like the RRR reflect an “epistemological misunderstanding,” since it’s impossible to make a perfect copy of an old experiment. People change, times change, and cultures change, they said. No social psychologist ever steps in the same river twice. Even if a study could be reproduced, they added, a negative result wouldn’t be that interesting, because it wouldn’t explain why the replication didn’t work.

So when Strack looks at the recent data he sees not a total failure but a set of mixed results. Nine labs found the pen-in-mouth effect going in the right direction. Eight labs found the opposite. Instead of averaging these together to get a zero effect, why not try to figure out how the two groups might have differed? Maybe there’s a reason why half the labs could not elicit the effect.”

[Bolding is mine]

So here’s a “scientist” who, despite presumably being familiar with the extensive literature on statistics and the methodology wars, somehow believes that because half the labs reported a number slightly above average, the key thing to look at is why the other half didn’t. Apparently, the only thing that would satisfy him is if all the labs had reported an exactly opposite result. Which, presumably, would have produced a new classic paper showing that frowning makes you happier!

Ring, Ring!

Clue Phone.

It’s random variation calling for you, Dr. Strack!

Cause here’s the thing…you’d expect about half the labs to show a positive result when there is no correlation at all. If hardly any of the labs had reported a positive result, the correlation would pretty much have to be negative, right?

This isn’t, as he appears to believe, half-corroboration. It’s the way every null result ever found actually looks out here in the real world. I’d advise him to try flipping a coin 100 times, repeatedly, and see how often it comes out 50 heads and 50 tails. He might be surprised to learn that about half the time this test will yield more heads than tails. That does not mean that heads is more likely than tails, and it does not suggest that researchers should focus on why some trials yielded more heads and others yielded more tails.
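
Or save yourself the thumb cramp: a few lines of Python make the same point (a minimal simulation added purely for illustration; a fair coin is the only assumption):

  # Simulate many "experiments" of 100 fair coin flips each and count
  # how often heads comes out ahead. Illustrative sketch only.
  import numpy as np

  rng = np.random.default_rng(0)
  heads = rng.binomial(n=100, p=0.5, size=10_000)  # heads per 100-flip run

  print(f"More heads than tails: {np.mean(heads > 50):.1%}")
  print(f"Exactly 50/50:         {np.mean(heads == 50):.1%}")
  # Roughly 46% of runs tilt toward heads, 46% toward tails, and only
  # about 8% land on an exact 50/50 split, with a coin we know is fair.

Half the runs lean one way, half lean the other, and almost none come out exactly even. That is just what seventeen labs chasing a nonexistent effect should look like.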

 

Okay, I get it. You published a study. You made a career out of it. It’s embarrassing that it turns out to be wrong. But it’s hard to know in this case which lies behind the response – intellectual dishonesty or sheer stupidity. Frankly, I think it’s the latter. Because I don’t care how dishonest you are, some explanations should be too embarrassing to try on for size. And the idea that the right interpretation of these results would be to look for why some labs had slightly different results than others clearly belongs in that category.

I find the defense based on the difficulties of true replication more respectable. And yet, what are we to make of an experiment so delicate that it can’t be replicated AT ALL even with the most careful controls? How important can any inference we make from such an experiment plausibly be? By definition, it could only fit the most narrow range of cases imaginable. And the idea that replication of an experiment doesn’t matter seems…you know…a tad unscientific.

From my perspective, it isn’t the original study that illustrates the extraordinary problem we have getting people to use data well. Yes, over-reliance on small sample sizes is all too common and all too easy. That’s unfortunate, not shameful. But the deeper problem is that even when data is used well, a lethal combination of self-interest and a near total lack of understanding of basic statistics makes it all too possible for people to ignore the data whenever they wish.

As Simon and Garfunkel plaintively observed, “A man hears what he wants to hear, and disregards the rest”.

If it wasn’t so sad, it would be funny.

Dammit. It is funny.

For it’s easy to see that in this version of the psych methodology wars, the defenders have their own unique version of a foxhole – with posterior high in the air and head firmly planted in the sand.

[Getting close to the Digital Analytics Hub. If you love talking analytics, check it out. Would be great to see you there!]

Is Data Science a Science?

I got a fair amount of feedback through various channels around my argument that data science isn’t a science and that the scientific method isn’t a method (or at least much of one). I wouldn’t consider either of these claims particularly important in the life of a business analyst, and I think I’ve written pieces that are far more significant in terms of actual practice, but I’ve written few pieces about topics which are evidently more fun to argue about. Well, I’m not opposed to a fun argument now and again, so here’s a redux on some of the commentary and my thoughts in response.

There were two claims in that post:

  1. I was somewhat skeptical that data science was correctly described as a science
  2. I was extremely skeptical that the scientific method was a good description of the scientific endeavor

The comment that most engaged me came from Adam Gitzes and really focused on the first claim:

Science is the distillation of evidence into a causal understanding of the world (my definition anyway). In business analytics, we use surveys, data analysis techniques, and experimental design to also understand causal relationships that can be used to drive our business.

On re-reading my initial post, I realized that while I had argued that business analytics wasn’t science (#1 above), I hadn’t really put many reasons on the table for that view – partly because I was too busy demolishing the “Scientific Method” and partly because I think it’s the less important of the two claims and also the more likely to be correct. Mostly, I just said I was skeptical of the idea. So I think Adam’s right to push out a more specific description of science and ask why data science might not be reasonably described as a kind of scientific endeavor.

I’m not going to get into the thicket of trying to define science. Really. I’m not. That’s the work of a different career. If I got nothing else out of my time studying Philosophy, I got an appreciation for how incredibly hard it is to answer seemingly simple questions like “what is science?” For the most part, we know it when we see it. Physics is science. Philosophy isn’t. But knowing it when you see it is precisely what fails when it comes to edge cases like data science or sociology.

When it comes to business analytics and data science, however, there are a couple of things that make me skeptical of applying the term science – things I think we might actually agree on, and that rely only on our shared, working understanding of the scientific endeavor.

In business analytics, our main purpose isn’t to understand the world. It’s to improve a specific part of it. Science has no such objective.

Does that seem like a small difference? I don’t think it is. Part of what makes the scientific endeavor unique is that there is no axe to grind. Understanding is the goal. This isn’t to say that people don’t get attached to their ideas or that their careers don’t benefit if they are successful advocates for them – it’s done by humans after all. It would be no more accurate to suggest that the goal of a business is always profit. External forces can and often do set the agenda for researchers. But these are corruptions of the process, not the process itself. Business analytics starts (appropriately) with an axe to grind, and true science doesn’t.

To see why this makes a difference, consider my own domain – digital analytics. If our goal was just to understand the digital world, we’d have a very different research program than we do. If knowledge was our only goal, we’d spend as much time analyzing why people create certain kinds of digital worlds as how people consume them. That’s not the way it works. In reality, our research program is entirely focused on why and how people use a digital property and what will get more of them to take specific actions – not why and how it was created.

We are, rightly I believe, skeptical of the idea that research sponsored by tobacco companies into lung cancer is, properly speaking, science. That’s not because those researchers don’t follow the general outline of the scientific endeavor – it’s because they have an axe to grind and their research program is determined by factors outside the community of science. When it comes to business analytics, we are all tobacco scientists.

Perhaps we’re not so biased as to the findings of our experiments – good analytics is neutral as to what will work – but we’re every bit as biased when it comes to the outcomes desired and the shape of the research program.

Here’s another crucial difference. I think it’s fair to suggest that in data science we sometimes have no interest in causality. If I’m building a forecast model and I can find variables that are predictive, I may have little interest in whether those variables are also causal. If I’m building a look-alike targeting model, for example, it doesn’t matter one whit whether the variables are causal. Now it’s true that philosophers of science hotly debate the role and necessity of causality in science, but I tend to agree with Adam that there is something in the scientific endeavor that makes the demand for causality a part of the process. But in business analytics, we may demand causality for some problems but be entirely and correctly unconcerned with it in others. In business analytics, causality is a tool not a requirement.
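
To make that concrete, here’s what a toy look-alike scorer might look like (my own illustrative sketch in Python with scikit-learn and synthetic data, not anything from a real project). It ranks prospects by how closely they resemble existing converters, and it works exactly the same whether or not any of the input features actually cause anyone to convert:

  # Toy look-alike model: score prospects by similarity to known
  # converters. Features only need to be predictive; no causal claim
  # is made or needed. Synthetic data, purely for illustration.
  from sklearn.datasets import make_classification
  from sklearn.linear_model import LogisticRegression
  from sklearn.model_selection import train_test_split

  # Synthetic stand-in for behavioral features (visits, recency, etc.)
  X, converted = make_classification(n_samples=5000, n_features=10,
                                     n_informative=4, random_state=0)
  X_seed, X_prospects, y_seed, _ = train_test_split(
      X, converted, test_size=0.5, random_state=0)

  model = LogisticRegression(max_iter=1000).fit(X_seed, y_seed)

  # Rank prospects by how much they "look like" past converters.
  scores = model.predict_proba(X_prospects)[:, 1]
  top_100 = scores.argsort()[::-1][:100]
  print("Top prospect scores:", scores[top_100][:5].round(3))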

There is, also, the nature of the analytics problem – at least in my field (digital). Science is typically concerned with studying natural phenomena. The digital world is not a natural world; it’s an engineered world. It’s created and adapted with intention. Perhaps even worse, it responds to and changes with the measurements we make, and those measurements influence our intentions in subsequent building (which is the whole point, after all).

This is Heisenberg’s Uncertainty Principle with a vengeance! When we measure the digital world, we mean to change it based on the measurement. What’s more, once we change it, we can never go back to the same world. We could restore the HTML, but not a user base that had never seen the alternative experience. In digital, every test we run changes the world in a fundamental way because it changes the users of that world. There is no possibility of conducting a digital test that doesn’t alter the reality we’re measuring – and while this might be true at the quantum level in physics, at the macro level where the scientific endeavor really lives, it seems like a huge difference.

What’s more, each digital property lives in the context of a larger digital world that is being constantly changed with intention by a host of other people. When new Apps like Uber change our expectations of how things like payment should work or alter the design paradigm on the Web, these exogenous and intentional changes can have a dramatic impact on our internal measurement. There is, then, little or no possibility of a true controlled experiment in digital. In digital analytics, our goal is to optimize one part of a giant machine for a specific purpose while millions of other people are optimizing other, inter-related parts of the same machine for entirely different and often opposed purposes.

This doesn’t seem like science to me.

There are disciplines that seem clearly scientific that cannot do controlled experiments. However, no field where the results of an experiment change the measured reality in a clearly significant fashion and are used to intentionally shape the resulting reality is currently described as scientific.

So why don’t I think data science is a science – at least in the realm of digital analytics? It differs from the scientific endeavor in several aspects that seem to me to be critical. Unlike science, business analytics and data science start with an agenda that isn’t just understanding and this fundamentally shapes the research program. Unlike science, business analytics and data science have no fixed commitment to causal explanations – just a commitment to working explanations. Finally, unlike science, business analytics and data science change the world they measure in a clearly significant fashion and do so intentionally with respect to the measurement.

Given that we have no fixed and entirely adequate definition of science, none of this is proof. I can’t demonstrate to you, with the certainty of a logical proof, that the definition of science requires X, that data science is not X, and that therefore data science is not a science.

However, I think I have shown that, at least by many of the core principles we associate with the scientific endeavor, business analytics (which I take to be a proxy in this conversation for data science) is not well described as a science.

This isn’t a huge deal. I’ve done business analytics for many years and never once thought of myself as a scientist. What’s more, once we realize that being scientists doesn’t attach a powerful new methodology to business analytics – which was the rather more important point of my last post – it’s much less clear why anyone would think it makes a difference.

Agree?

 

A few other notes on the comments I received. With regard to Nikolaos’ question “why should we care?” I’m obviously largely in agreement. There is intellectual interest in these questions (at least for me), but I won’t pretend that they are likely to matter in actual practice or will determine ‘what works’. I’m also very much in agreement with Ake’s point about qualitative data. The truth is that nothing in the scientific endeavor precludes the use of qualitative data in addition to behavioral data. But even though there’s no determinate tie between the two, I certainly think that advocates for data science as a science are particularly likely to shun qualitative data (which is a shame). As far as Patrick’s comment goes, I think it dodges the essential question. He’s right to suggest that the term data science is contentless because data is not the subject of science; the data is always about something, and that something is the subject of science. But I take the deeper claim to be the one I have tackled here: namely, that business analytics is a scientific endeavor. That claim isn’t contentless, just wrong. I remain, still, deeply unconvinced of the utility of CRISP-DM.

 

Now is as good a time as any (how’s that for a powerful call to action?) to pre-order my book, ‘Measuring the Digital World’ on Amazon.