Analytics with a Strategic Edge

The Role of Voice of Customer in Enterprise Analytics

The vast majority of analytics effort is expended on problems that are tactical in nature. That’s not necessarily wrong. Tactics gets a bad rap sometimes, but the truth is that the vast majority of decisions we make in almost any context are tactical. The problem isn’t that too much analytics is weighted toward tactical issues; it’s that strategic decisions don’t use analytics at all. The biggest, most important decisions in the digital enterprise nearly always lack a foundation in data or analysis.

I’ve always disliked the idea behind “HIPPOs” – with its Dilbertian assumption that executives are idiots. That isn’t (mostly) my experience at all. But analytics does suffer from what might be described as “virtue” syndrome – the idea that something (say taxes or abstinence) is good for everyone else but not necessarily for me. Just as creative folks tend to think that what they do can’t be driven by analytics, so too is there a perception that strategic decisions must inevitably be more imaginative and intuitive and less number-driven than many decisions further down in the enterprise.

This isn’t completely wrong, though it probably short-sells those mid-level decisions. Building good creative takes…creativity. It can’t be churned out by machine. Ditto for strategic decisions. There is NEVER enough information to fully determine a complex strategic decision at the enterprise level.

This doesn’t mean that data isn’t useful or should not be a driver for strategic decisions (and for creative content too). Instinct only works when it’s deeply informed about reality. Nobody has instincts in the abstract. To make a good strategic decision, a decision-maker MUST have certain kinds of data to hand and without that data, there’s nothing on which intuition, knowledge and experience can operate.

What data does a digital decision-maker need for driving strategy?

Key audiences. Customer Journey. Drivers of decision. Competitive choices.

You need to know who your audiences are and what makes them distinct. You need (as described in the last post) to understand the different journeys those audiences take and what journeys they like to take. You need to understand why they make the choices they make – what drives them to choose one product or service or another. Things like demand elasticity, brand awareness, and drivers of choice at each journey stage are critical. And, of course, you need to understand when and why those choices might favor the competition.

None of this stuff will make a strategic decision for you. It won’t tell you how much to invest in digital. Whether or not to build a mobile app. Whether personalization will provide high returns.

But without fully understanding audience, journey, drivers of decision and competitive choices, how can ANY digital decision-maker possibly arrive at an informed strategy? They can’t. And, in fact, they don’t. Because for the vast majority of enterprises, none of this information is part-and-parcel of the information environment.

I’ve seen plenty of executive dashboards that are supposed to help people run their business. They don’t have any of this stuff. I’ve seen the “four personas” puffery that’s supposed to help decision-makers understand their audience. I’ve seen how limited executives’ exposure to journey mapping is and how little it’s deployed on a day-to-day basis. Worst of all, I’ve seen how absolutely pathetic the use of voice of customer (online and offline) is when it comes to helping decision-makers understand why customers make the choices they do.

Voice of customer as it exists today is almost exclusively concerned with measuring customer satisfaction. There’s nothing wrong with measuring NPS or satisfaction. But these measures tell you nothing that will help define a strategy. They are, at best (and they are often deeply flawed here too), scoreboard measures – indicators of whether or not you are succeeding in a strategy.

I’m sure that people will object that knowing whether or not a strategy is succeeding is important. It is. It’s even a core part of ongoing strategy development. However, when divorced from particular customer journeys, NPS is essentially meaningless and uninterpretable. And while it truly is critical to measure whether or not a strategy is succeeding, it’s even more important to have data to help shape that strategy in the first place.

Executives just don’t get that context from their analytics teams. At best, they get little pieces of it in dribs and drabs. It is never – as it ought to be – the constant ongoing lifeblood of decision-making.

I subtitled this post “The Role of Voice of Customer in Enterprise Analytics” because of all the different types of information that can help make strategic decisions better, VoC is by far the most important. A good VoC program collects information from every channel: online and offline surveys, call-center, site feedback, social media, etc. It provides a continuing, detailed and sliceable view of audience, journey distribution and (partly) success. It’s by far the best way to help decision-makers understand why customers are making the choices they are, whether those choices are evolving, and how those choices are playing out across the competitive set. In short, it answers the majority of the questions that ought to be on the minds of decision-makers crafting a digital strategy.

This is a very different sort of executive dashboard than we typically see. It’s a true customer insights dashboard. It’s also fundamentally different than almost ANY VoC dashboard we see at any level. The vast majority of VoC reporting doesn’t provide slice-and-dice by audience and use-case – a capability which is absolutely essential to useful VoC reporting. VoC reporting is almost never based on and tied into a journey model so that the customer insights data is immediately reflective of journey stage and actionable arena. And VoC reporting almost never includes a continuous focus on exploring customer decision-making and tying that into the performance of actual initiatives.
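
For what it’s worth, here’s a minimal sketch of the kind of slice-and-dice I’m describing – satisfaction and task success reported per audience and use-case rather than as one site-wide number. It’s written in pandas purely for illustration; the data and column names are invented, not any particular survey tool’s schema.

```python
import pandas as pd

# Hypothetical VoC survey extract: one row per respondent.
# All column names are illustrative, not tied to any particular survey tool.
voc = pd.DataFrame({
    "audience_segment": ["new_parent", "new_parent", "retiree", "retiree", "student", "student"],
    "use_case":         ["research", "purchase", "research", "support", "research", "purchase"],
    "recommend_0_10":   [9, 10, 6, 3, 8, 9],   # likelihood-to-recommend question
    "task_completed":   [1, 1, 0, 0, 1, 1],    # did the visitor accomplish the visit's purpose?
})

def nps(scores: pd.Series) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    return 100 * ((scores >= 9).mean() - (scores <= 6).mean())

# Slice by audience and use-case instead of reporting one site-wide number.
report = voc.groupby(["audience_segment", "use_case"]).agg(
    nps=("recommend_0_10", nps),
    task_completion_rate=("task_completed", "mean"),
    respondents=("recommend_0_10", "size"),
)
print(report)
```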

It isn’t just a matter of a dashboard. One of the most distinctive and powerful aspects of digital voice-of-customer is the flexibility it provides to tackle new problems rapidly, efficiently and at very little cost. VoC should be a core part of executive decision-making, with a constant cadence of research, analysis, discussion and reporting driven by specific business questions. This open and continuing dialog, where VoC is a tool for decision-making, is critical to integrating analytics into decisioning. If senior folks aren’t asking for new VoC research on a constant basis, you aren’t doing it right. The single best indicator of a robust VoC program in digital is the speed with which it changes.

Sadly, what decision-makers mostly get right now (if they get anything at all) is a high-level, non-segmented view of audience demographics, an occasional glimpse into high-level decision-factors that is totally divorced from both segment and journey stage, and an overweening focus on a scoreboard metric like NPS.

It’s no wonder, given such thin gruel, that decision-makers aren’t making better use of data for strategic decisions. If our executives mostly aren’t Dilbertian, they aren’t miracle workers either. They can’t make wine out of information water. If we want analytics to support strategy – and I assume we all do – then building a completely different sort of VoC program is the single best place to start. It isn’t everything. There are other types of data (behavioral, benchmark, econometric, etc.) that can be hugely helpful in shaping digital strategies. But a good VoC program is a huge step forward – a step forward that, if well executed, has the power to immediately transform how the digital enterprise thinks and works.

 

This is probably my last post of the year – so see you in 2016! In the meantime, my book Measuring the Digital World is now available. Could be a great way to spend your holiday down time (ideally while you’re resting up from time on the slopes)! Have a great holiday…

Is Data Science a Science?

I got a fair amount of feedback through various channels around my argument that data science isn’t a science and that the scientific method isn’t a method (or at least much of one). I wouldn’t consider either of these claims particularly important in the life of a business analyst, and I think I’ve written pieces that are far more significant in terms of actual practice, but I’ve written few pieces about topics which are evidently more fun to argue about. Well, I’m not opposed to a fun argument now and again, so here’s a redux on some of the commentary and my thoughts in response.

There were two claims in that post:

  1. I was somewhat skeptical that data science was correctly described as a science
  2. I was extremely skeptical that the scientific method was a good description of the scientific endeavor

The comment that most engaged me came from Adam Gitzes and really focused on the first claim:

Science is the distillation of evidence into a causal understanding of the world (my definition anyway). In business analytics, we use surveys, data analysis techniques, and experimental design to also understand causal relationships that can be used to drive our business.

On re-reading my initial post, I realized that while I had argued that business analytics wasn’t science (#1 above), I hadn’t really put many reasons on the table for that view – partly because I was too busy demolishing the “Scientific Method” and partly because I think it’s the less important of the two claims and also the more likely to be correct. Mostly, I just said I was skeptical of the idea. So I think Adam’s right to push out a more specific description of science and ask why data science might not be reasonably described as a kind of scientific endeavor.

I’m not going to get into the thicket of trying to define science. Really. I’m not. That’s the work of a different career. If I got nothing else out of my time studying Philosophy, I got an appreciation for how incredibly hard it is to answer seemingly simple questions like “what is science?” For the most part, we know it when we see it. Physics is science. Philosophy isn’t. But knowing it when you see it is precisely what fails when it comes to edge cases like data science or sociology.

When it comes to business analytics and data science, however, there are a couple of things that make me skeptical of applying the term science – things I think we might actually agree on and that rely on our shared, working understanding of the scientific endeavor.

In business analytics, our main purpose isn’t to understand the world. It’s to improve a specific part of it. Science has no such objective.

Does that seem like a small difference? I don’t think it is. Part of what makes the scientific endeavor unique is that there is no axe to grind. Understanding is the goal. This isn’t to say that people don’t get attached to their ideas or that their careers don’t benefit if they are successful advocates for them – it’s done by humans after all. (It would be no more accurate to suggest that the goal of a business is always profit.) External forces can and often do set the agenda for researchers. But these are corruptions of the process, not the process itself. Business analytics starts (appropriately) with an axe to grind and true science doesn’t.

To see why this makes a difference, consider my own domain – digital analytics. If our goal was just to understand the digital world, we’d have a very different research program than we do. If knowledge was our only goal, we’d spend as much time analyzing why people create certain kinds of digital worlds as how people consume them. That’s not the way it works. In reality, our research program is entirely focused on why and how people use a digital property and what will get more of them to take specific actions – not why and how it was created.

We are, rightly I believe, skeptical of the idea that research sponsored by tobacco companies into lung cancer is, properly speaking, science. That’s not because those researchers don’t follow the general outline of the scientific endeavor – it’s because they have an axe to grind and their research program is determined by factors outside the community of science. When it comes to business analytics, we are all tobacco scientists.

Perhaps we’re not so biased as to the findings of our experiments – good analytics is neutral as to what will work – but we’re every bit as biased when it comes to the outcomes desired and the shape of the research program.

Here’s another crucial difference. I think it’s fair to suggest that in data science we sometimes have no interest in causality. If I’m building a forecast model and I can find variables that are predictive, I may have little interest in whether those variables are also causal. If I’m building a look-alike targeting model, for example, it doesn’t matter one whit whether the variables are causal. Now it’s true that philosophers of science hotly debate the role and necessity of causality in science, but I tend to agree with Adam that there is something in the scientific endeavor that makes the demand for causality a part of the process. But in business analytics, we may demand causality for some problems but be entirely and correctly unconcerned with it in others. In business analytics, causality is a tool not a requirement.

There is, also, the nature of the analytics problem – at least in my field (digital). Science is typically concerned with studying natural phenomena. The digital world is not a natural world, it’s an engineered world. It’s created and adapted with intention. Perhaps even worse, it responds to and changes with the measurements we make and those measurements influence our intentions in subsequent building (which is the whole point after all).

This is the observer effect with a vengeance! When we measure the digital world, we mean to change it based on the measurement. What’s more, once we change it, we can never go back to the same world. We could restore the HTML, but not the absence of users with an alternative experience. In digital, every test we run changes the world in a fundamental way because it changes the users of that world. There is no possibility of conducting a digital test that doesn’t alter the reality we’re measuring – and while this might be true at the quantum level in physics, at the macro level where the scientific endeavor really lives, it seems like a huge difference.

What’s more, each digital property lives in the context of a larger digital world that is being constantly changed with intention by a host of other people. When new Apps like Uber change our expectations of how things like payment should work or alter the design paradigm on the Web, these exogenous and intentional changes can have a dramatic impact on our internal measurement. There is, then, little or no possibility of a true controlled experiment in digital. In digital analytics, our goal is to optimize one part of a giant machine for a specific purpose while millions of other people are optimizing other, inter-related parts of the same machine for entirely different and often opposed purposes.

This doesn’t seem like science to me.

There are disciplines that seem clearly scientific that cannot do controlled experiments. However, no field where the results of an experiment change the measured reality in a clearly significant fashion and are used to intentionally shape the resulting reality is currently described as scientific.

So why don’t I think data science is a science – at least in the realm of digital analytics? It differs from the scientific endeavor in several aspects that seem to me to be critical. Unlike science, business analytics and data science start with an agenda that isn’t just understanding and this fundamentally shapes the research program. Unlike science, business analytics and data science have no fixed commitment to causal explanations – just a commitment to working explanations. Finally, unlike science, business analytics and data science change the world they measure in a clearly significant fashion and do so intentionally with respect to the measurement.

Given that we have no fixed and entirely adequate definition of science, none of this is proof. I can’t demonstrate to you with the certainty of a logical proof that the definition of science requires X, data science is not X, so data science is not a science.

However, I think I have shown that, at least by many of the core principles we associate with the scientific endeavor, business analytics (which I take to be a proxy in this conversation for data science) is not well described as a science.

This isn’t a huge deal. I’ve done business analytics for many years and never once thought of myself as a scientist. What’s more, once we realize that being scientists doesn’t attach a powerful new methodology to business analytics – which was the rather more important point of my last post – it’s much less clear why anyone would think it makes a difference.

Agree?

 

A few other notes on the comments I received. With regards to Nikolaos’ question “why should we care?” I’m obviously largely in agreement. There is intellectual interest in these questions (at least for me), but I won’t pretend that they are likely to matter in actual practice or will determine ‘what works’. I’m also very much in agreement with Ake’s point about qualitative data. The truth is that nothing in the scientific endeavor precludes the use of qualitative data in addition to behavioral data. But even though there’s no determinate tie between the two, I certainly think that advocates for data science as a science are particularly likely to shun qualitative data (which is a shame). As far as Patrick’s comment goes, I think it dodges the essential question. He’s right to suggest that the term data science is contentless because data is not the subject of science; the data is always about something, and that something is the subject of science. But I take the deeper claim to be what I have tackled here; namely, that business analytics is a scientific endeavor. That claim isn’t contentless, just wrong. I remain, still, deeply unconvinced of the utility of CRISP-DM.

 

Now is as good a time as any (how’s that for a powerful call to action?) to pre-order my book, ‘Measuring the Digital World’ on Amazon.

What is Data Science and (closely related) what is a Data Scientist?

I came across an interesting read recently on the definition of both data scientist and data science. Now, even though I’m about to disagree with almost everything in the article, that doesn’t mean I think it’s wrong-headed or not worth a read. It’s a fairly conventional, industry standard view of the world and provides a common-sense and reasonable set of definitions for both data scientist and data science. I’d encourage you to take a look if you’re interested in this type of question.

Meanwhile, if you’re willing to rely on my summary, here’s what I take to be the gist of the article:

  1. Data Science is about finding insights in data to make better decisions
  2. Data Scientists bring to bear three primary skills: subject matter expertise, programming and data manipulation skills, and statistical knowledge to find those insights.
  3. Using survey techniques and asking data professionals to classify their skills, there are four major styles of data scientist. Three styles (business management professionals, developers, and researchers) map directly to the three key skills elaborated above (subject matter expertise, programming and statistics). Then there’s a fourth category appropriately titled “Creatives” who aren’t good at any of these skills…okay I jest…perhaps it’s more fair to say they are balanced fairly equally across the skill sets.
  4. Popular analytics methods (SMART and CRISP-DM) are essentially no more than variants of the “Scientific Method” and, when you get right down to it, data science is nothing more (or less since the diminutive is not meant to imply anything) than the application of that method to whatever problem a data professional is trying to solve. In other words, and here I quote directly, “data science just is science”.
  5. Science works via the “Scientific Method” described as:
    1. Formulate a question or problem statement
    2. Generate a hypothesis that is testable
    3. Gather/Generate data
    4. Analyze data to test the hypotheses / Draw conclusions
    5. Communicate results to interested parties or take action

That’s it. And you’re probably wondering how or why I would disagree with any of this since it’s pretty innocuous stuff. Yes, I’ve written in the past about my suspicions around the whole ‘data science’ term – though heaven knows I use it myself since the market seems to reward it. Taken as it generally is, it’s either a cunning replacement for the label statistician (since we all “know” statisticians aren’t much use when it comes to driving business value) or a demand that analysts should have “full-stack” skills. I don’t necessarily buy the idea that full-stack skills are critical or that there’s a huge benefit in combining them in a single person instead of spreading them across a team, but it’s not something I lose sleep over.

What’s more, once you start flavoring data scientists based on their real proficiencies inside that three-part set, you’re really just back to having analysts (the subject matter expertise folks), programmers, and statisticians. The same people you always had, except now they call themselves data scientists and charge you quite a bit more for doing the same stuff they’ve always done. Since I’m one of those people, I’m not deeply opposed to the whole trend. Here’s a way to think about all this that I think is a little more useful.

None of which is really worth bothering to disagree about though. It’s semantics of a fairly uninteresting sort.

No, what really bothers me about this conventional view is encapsulated in the last two claims:  #4 and #5. The idea that data science is science and that the scientific method is applicable to business analytics. I’m not at all sure that business analytics is or should aspire to be science and I’m quite sure that the scientific method won’t save us.

On the other hand, I agree with the first part of the claim in #4. Namely, that methodologies like CRISP-DM are just faintly warmed over versions of the scientific method.

Despite what most people would assume, that’s not a good thing and here I’m going to go all “philosophy guy” on you to explain why, and also why I think this is actually a pretty important point.

 

Debunking the Scientific Method

In the past five hundred years, the dominant theme in Western culture has been the continuing and astonishing success of the scientific endeavor. Only the most hardened skeptic could doubt the importance and success of scientific disciplines like physics, chemistry and biology in dramatically improving our understanding of the natural world. When it comes to the success of the scientific endeavor, I’m not skeptical at all. It’s worked and it’s worked amazingly well.

But why is that?

The popular conception is that science works because scientists apply the scientific method – testing theories experimentally and proving or refuting them. It’s the five step process enumerated above.

And it just isn’t right. Since way back in the day when I was studying philosophy of science, there’s been a broad consensus that the “scientific method” is a deeply flawed account of the scientific endeavor. Karl Popper provided the best and most influential account of the traditional scientific method and the importance of refutation as opposed to proof. Thomas Kuhn pretty much debunked that explanation as an historical account of how science actually works (despite having his own deeply unsuccessful explanation) and Quine absolutely destroyed it as an intellectual model. It turns out that it’s basically impossible to refute a single hypothesis in isolation with an experiment. Quine actually influenced my thinking on why KPIs, taken in isolation, are always useless. Depending on the background assumptions, any change of a KPI (and in any direction) can have diametrically opposed meanings. It’s pretty much the same thing with a hypothesis. You can rescue any hypothesis from experimental refutation by changing the background assumptions. What’s more, Kuhn showed that this happens all the time in science – punctuated by dramatic cases where it doesn’t.

I doubt there is a single working historian or philosopher of science who would accept the “scientific method” as a reasonable explanation for how science works from either an historical or intellectual perspective.

What’s more, the scientific method as popularly elaborated is almost contentless. Strip away the fancy language and it translates into something like this:

  1. Decide what problem you want to solve
  2. Think about the problem until you have an idea of how it might be solved
  3. Try it out and see if it works
  4. Repeat until you solve the problem

Does this feel action guiding and powerful?

It feels to me like the sort of thing you might sell on late-night TV. Available now, limited time only – a one stop absolutely foolproof method for solving any problem of any sort in any field! The Scientific Method! Buy!

The only part of the scientific method that feels significant in any respect is the requirement that your idea should be capable of specific refutation (testable) via experiment. Sadly, that’s exactly the concept that Quine showed to be impossible. So the scientific method as popularly understood is pretty much a bunch of boilerplate with one mistaken idea bolted on.

The idea that this type of general problem solving procedure is the explanation for the success of science seems implausible on its face and is contradicted by experience.

Implausible because the method as described is so contentless. How do I pick which problems to tackle from the infinite set available? The method is silent. How do I generate hypotheses? The method is silent. How do I know they are testable? The method is silent. How do I test them? The method is silent. How do I know what to do when a test doesn’t refute a hypothesis? The method is silent. How many failures to refute a hypothesis is enough to prove it? The method is silent. How do I communicate the results? The method is silent.

If what we want in a methodology is a massively generalized process that provides zero guidance on how to accomplish the tasks it lays out and has one impossible to meet demand, then the scientific method is great.

Hence the implausibility of the claim that the scientific method is a reasonable explanation for why science works. The scientific endeavor is neither defined, nor described, by the scientific method.

On a less important note, I’m not at all sure that it’s correct to think of data science as even potentially a scientific endeavor – at least when it comes to business analytics. The belief that the scientific endeavor works in general is broadly contradicted by experience – it doesn’t work for everything. Yes, the scientific endeavor has worked extraordinarily well in physics and biology. But smart people have tried to emulate the scientific approach in lots of other places too. Fields like history, sociology, philosophy and psychology (and lots of other disciplines as well) have all drunk the “scientific method” moonshine with a conspicuous absence of success. Clearly something about the scientific endeavor makes it very effective for some types of problems and not effective at all for others. That seems to me a pretty important fact to keep in mind when we claim that business analytics and data science are “just science”. It’s comforting to think we can re-cast business as science, but it’s not clear why we should think that’s true. I’ve never thought of business analytics as a truly scientific enterprise and renaming it data science doesn’t make it seem any more  likely to be so.

 

Why CRISP-DM and most other generalized analytics models are the scientific method…and LESS

Unfortunately, methods specific to analytics like CRISP-DM are worse not better. They lack even the idea of specific testability which, though incorrect, at least made some sense as a driver of a method. CRISP-DM lays out a process for analytics that essentially says it works like this: figure out what your problem is, figure out what data you need, setup your data, build your model, check your model, deploy your model.

Wow. That’s very helpful.

Here’s a CRISP-DM like method for becoming President of the United States.

  1. Decide which political party to join
  2. Register as a candidate for president
  3. Create lots of positive press about yourself and your positions
  4. Raise a lot of money
  5. Convince people to vote for you

Armed with a cutting-edge method like this, your path to power is assured. Donald Trump beware!

Really, how different is CRISP-DM from this? It adds a few little flourishes and some academic language but it lives at the same level of empty generality. I suppose it’s good to know that you deploy models only after you build them, but I’m thinking a formal methodology should give us a little more utility than that.

Methodologies like Six Sigma or SPEED (which I laid out last week and which is why this topic is much on my mind and seems important) provide something real and essential – they provide enough guidance to actually drive a process.

As a side note, I’d point out that successful methodologies are nearly always domain specific (SPEED is entirely specific to digital analytics and Six Sigma has been mostly successful in a very specific range of manufacturing production problems) for the simple reason that generality destroys utility when it comes to method.

 

So is Business Analytics a “Science”?

It’s a real question, then, whether business analytics can reasonably be considered a science and, in fact, it’s a much more ambitious claim than most people would realize (at least when it’s cloaked in the idea that data science is a science – after all, it says science right there in the title). I’m highly skeptical of the idea that data science is science because I’m highly skeptical that business analytics problems are scientific problems.

They don’t seem like it to me. Business analytics problems map very poorly indeed to the natural sciences and only very partially to the social sciences where the track record of the scientific endeavor is, to say the least, mixed.

So claiming that data science is about using the scientific method on data problems might seem like a “Mom and Apple Pie” kind of thing, but I think it’s wrong on two counts.

It’s wrong because business analytics problems are not obviously the types of problems that are scientific. I can’t say for sure that they aren’t – and I might be persuaded otherwise – but at first glance I think there are strong reasons for skepticism and little reason to think that advocates of this view really understand what they are saying or have good reasons to back their claim.

It’s especially wrong because the scientific method as popularly understood is neither meaningful nor a method. This is important. In fact, this is the one really important thing you really should take away from this post. If you think hiring data scientists ensures you have a method (and not just a method but a “scientific” one), you’re going to be sadly disappointed. Data scientists don’t arrive at your doorstep complete with a real method for continuous improvement in digital.  It doesn’t matter how data sciencey they are. And if you believe that telling your analysts to use the “scientific method” is going to make your analytics more successful…well that, my friend, is even more absurd.

I have strong reasons for thinking that Six Sigma (for example) isn’t an appropriate methodology for digital analytics. But at least it’s a real method. Flawed as it is when applied to digital analytics, it’s rather more likely to drive results than the “scientific” method. And, of course, I have my own axe to grind. The methodology I described in SPEED is purpose-built for digital and is action-guiding. I’d love to have people adopt and use it. But even if you don’t like SPEED, the importance of having a real method and using that method to drive continuous improvement shouldn’t be discounted.

Go ahead, build your own. Just make sure it’s not of the “figure out your problem, then solve your problem, then iterate” variety; unless, of course, you want an analytics method to sell on late-night TV.

 

I promise there’s no (well…very little) philosophy in ‘Measuring the Digital World’ – but I do think there is some good method! It’s available for pre-order now on Amazon.

SPEED: A Process for Continuous Improvement in Digital

Everyone always wants to get better. But without a formal process to drive performance, continuous improvement is more likely to be an empty platitude than a reality in the enterprise. Building that formal process isn’t trivial. Existing methodologies like Six Sigma illustrate the depth and the advantages of a true improvement process versus an ad hoc “let’s get better” attitude, but those methodologies (largely birthed in manufacturing) aren’t directly applicable to digital. In my last post, I laid out six grounding principles that underlie continuous improvement in digital. I’ll summarize them here as:

  • Small is measurable. Big changes (like website redesigns) alter too much to make optimization practical
  • Controlled Experiments are essential to measure any complex change
  • Continuous improvement will broadly target reduction in friction or improvement in segmentation
  • Acquisition and Experience (Content) are inter-related and inter-dependent
  • Audience, use-case, prequalification and target content all drive marketing performance
  • Most content changes shift behavior rather than drive clear positive or negative outcomes

Having guiding principles isn’t the same thing as having a method, but a real methodology can be fashioned from this sub-structure that will drive true continuous improvement. A full methodology needs a way to identify the right areas to work on and a process for improving those areas. At minimum, that process should include techniques for figuring out what to change and for evaluating the direction and impact of those changes. If you have that, you can drive continuous improvement.

I’ll start where I always start: segmentation. Specifically, 2-tiered segmentation. 2-tiered segmentation is a uniquely digital approach to segmentation that slices audiences by who they are (traditional segmentation) and what they are trying to accomplish (this is the second tier) in the digital channel. This matrixed segmentation scheme is the perfect table-set for continuous improvement. In fact, I don’t think it’s possible to drive continuous improvement without this type of segmentation. Real digital improvement is always relative to an audience and a use-case.
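
A toy sketch of what that matrix looks like as data, if it helps to see it concretely – audience (tier one) down the rows, use-case (tier two) across the columns, with visit volume and success rate per cell. The segment labels and the success flag here are invented for illustration.

```python
import pandas as pd

# Hypothetical visit-level data: "who" (tier 1) and "what they're trying to do" (tier 2).
visits = pd.DataFrame({
    "visitor_type": ["prospect", "prospect", "customer", "customer", "prospect", "customer"],
    "use_case":     ["research", "buy", "support", "buy", "research", "support"],
    "success":      [0, 1, 1, 1, 0, 0],  # did the visit reach its use-case's success event?
})

# The 2-tiered matrix: audience down the rows, use-case across the columns.
volume = pd.crosstab(visits["visitor_type"], visits["use_case"])
success_rate = pd.crosstab(visits["visitor_type"], visits["use_case"],
                           values=visits["success"], aggfunc="mean")
print(volume, success_rate, sep="\n\n")
```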

But segmentation on its own isn’t a method for continuous improvement. 2-tiered segmentation gives us a powerful framework for understanding where and why improvement might be focused, but it doesn’t tell us where to target improvements or what those improvements might be. To have a real method, we need that.

Here’s where pre-qualification comes in. One of the core principles is that acquisition and experience are inter-related and inter-dependent. This means that if you want to understand whether or not content is working (creating lift of some kind), then you have to understand the pre-existing state of the audience that consumes that content. Content with a 100% success rate may suck. Content with a 0% success rate may be outstanding. It all depends on the population you give them. Every single person in line at the DMV will stay there to get their license. That doesn’t mean the experience is a good one. It just means that the self-selected audience is determined to finish the process. We need that license! Similarly, if you direct garbage traffic to even the best content, it won’t perform at all. Acquisition and content are deeply interdependent. It’s impossible to measure the latter without understanding the former.

Fortunately, there’s a simple technique for measuring the quality of the audience sourced for any given content area that we call pre-qualification. To understand the pre-qualification level of an audience at a given content point, we use a very short (typically no more than 3-4 questions) pop-up survey. The pre-qualification survey explores what use-case visitors are in, where they are in the buying cycle, and how committed they are to the brand. That’s it.
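
Here’s a minimal sketch of what such a survey and its scoring might look like. The question wording, answer options and point weights are purely illustrative – a real pre-qualification survey should be tuned to your audience and journey model.

```python
# Hypothetical pre-qualification survey: three questions covering use-case,
# buying stage and brand commitment, plus a crude qualification score.
PREQUAL_SURVEY = {
    "use_case":         ["research", "compare", "purchase", "support", "other"],
    "buying_stage":     ["just_looking", "shortlisting", "ready_to_buy"],
    "brand_commitment": ["first_visit", "considering_you", "prefer_you"],
}

STAGE_POINTS = {"just_looking": 0, "shortlisting": 1, "ready_to_buy": 2}
COMMITMENT_POINTS = {"first_visit": 0, "considering_you": 1, "prefer_you": 2}

def qualification_level(buying_stage: str, brand_commitment: str) -> str:
    """Bucket a respondent into low / medium / high qualification (illustrative weights)."""
    points = STAGE_POINTS[buying_stage] + COMMITMENT_POINTS[brand_commitment]
    return ["low", "low", "medium", "medium", "high"][points]

# Example respondent: shortlisting and considering the brand -> "medium"
print(qualification_level("shortlisting", "considering_you"))
```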

It may be simple, but pre-qualification is one of the most powerful tools in the digital analytics arsenal and it’s the key to a successful continuous improvement methodology.

First we segment. Then we measure pre-qualification. With these two pieces we can measure content performance by visitor type, use-case and visitor quality. That’s enough to establish which content and which marketing campaigns are truly underperforming.

How?

Hold the population, use-case and pre-qualification level constant and measure the effectiveness of content pieces and sequences in creating successful outcomes. You can’t effectively measure content performance unless you hold these three variables constant, but when you control for these three variables you open up the power of digital analytics.
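
In code, that control amounts to grouping by visitor type, use-case and qualification level before you ever compare content. A toy sketch, with invented data and column names:

```python
import pandas as pd

# Hypothetical visit records joined to the pre-qualification survey.
visits = pd.DataFrame({
    "visitor_type":  ["prospect"] * 6 + ["customer"] * 6,
    "use_case":      ["research"] * 12,
    "qualification": ["low", "low", "high", "high", "low", "high"] * 2,
    "content_id":    ["A", "B", "A", "B", "A", "B"] * 2,
    "success":       [0, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 1],
})

# Success rate per content item *within* each (visitor type, use-case, qualification)
# cell -- i.e., holding all three constant.
cell_cols = ["visitor_type", "use_case", "qualification"]
perf = (visits.groupby(cell_cols + ["content_id"])["success"]
              .agg(["mean", "size"])
              .rename(columns={"mean": "success_rate", "size": "visits"}))

# The worst performer in each cell is the first candidate for improvement.
worst = perf.sort_values("success_rate").groupby(cell_cols).head(1)
print(worst)
```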

We now have a way to target potential improvement areas – just pick the content with the worst performance in each cell (visitor type x visit type x qualification level).

But there is much more that we can do with these essential pieces in place. By evaluating whether content underperforms across all pre-qualification levels equally or is much worse for less qualified visitors, you can determine if the content problem is because of friction (see guiding principle #3).

Friction problems tend to impact less qualified visitors disproportionately. So if less qualified visitors within each visitor type perform even worse than expected after consuming a piece of content, then some type of friction is likely the culprit.

Further, by evaluating content performance across visitor type (within use-case and with pre-qualification held constant), you have strong clues as to whether or not there are personalization opportunities to drive segmentation improvement.

Finally, where content performs well for qualified audiences but receives a disproportionate share of unqualified visitors, you know that you have to go upstream to fix the marketing campaigns sourcing the visits and targeting the content.
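
Pulled together, the evaluation step is really a small piece of classification logic. Here’s one hedged way it might look; the summary columns and thresholds are invented for illustration, not a recommended calibration.

```python
import pandas as pd

# Hypothetical per-content summary, aggregated from the cell-level table above.
content = pd.DataFrame({
    "content_id":             ["A", "B", "C"],
    "success_low_qual":       [0.05, 0.30, 0.40],  # success rate, low-qualification visitors
    "success_high_qual":      [0.45, 0.35, 0.50],  # success rate, high-qualification visitors
    "share_unqualified":      [0.30, 0.35, 0.75],  # share of traffic arriving unqualified
    "segment_success_spread": [0.05, 0.30, 0.04],  # success-rate gap across visitor types
})

def diagnose(row, friction_gap=0.25, spread_gap=0.20, unqualified_share=0.60):
    """Rough classification of why a content item underperforms (illustrative thresholds)."""
    if row.success_high_qual - row.success_low_qual > friction_gap:
        return "friction: low-qualified visitors drop off disproportionately"
    if row.segment_success_spread > spread_gap:
        return "personalization: performance diverges sharply by visitor type"
    if row.share_unqualified > unqualified_share:
        return "acquisition: fix the campaigns sourcing and targeting this traffic"
    return "no obvious problem"

content["diagnosis"] = content.apply(diagnose, axis=1)
print(content[["content_id", "diagnosis"]])
```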

Segment. Pre-Qualify. Evaluate by qualification for friction and acquisition, and by visitor type for personalization.

Step four is to explore what to change. How do you do that? Often, the best method is to ask. This is yet another area for targeted VoC, where you can explore what content people are looking for, how they make decisions, what they need to know, and how that differs by segment. A rich series of choice/decision questions should create the necessary material to craft alternative approaches to test.

You can also break up the content into discrete chunks (each with a specific meta-data purpose or role) and then create a controlled experiment that tests which content chunks are most important and deliver the most lift. This is a sub-process for testing within the larger continuous improvement process. Analytically, it should also be possible to do a form of conjoint analysis on either behavior or preferences captured in VoC.

Segment. Pre-Qualify. Evaluate. Explore.

Now you’re ready to decide on the next round of tests and experiments based on a formal process for finding where problems are, why they exist, and how they can be tackled.

Segment. Pre-Qualify. Evaluate. Explore. Decide.

SPEED.

Sure, it’s just another consulting acronym. But underneath that acronym is a real method. Not squishy and not contentless. It’s a formal procedure for identifying where problems exist, what class of problems they are, what type of solution might be a fit (friction reduction or personalization), and what that solution might consist of. All wrapped together in a process that can be endlessly repeated to drive measurable, discrete improvement for every type of visitor and every type of visit across any digital channel. It’s also specifically designed to be responsive to the guiding principles enumerated above that define digital.

If you’re looking for a real continuous improvement process in digital, there’s SPEED and then there’s…

Well, as far as I know, that’s pretty much it.

 

Interested in knowing more about 2-Tiered Segmentation and Pre-Qualification, the key ingredients to SPEED? “Measuring the Digital World” provides the most detailed descriptions I’ve ever written of how to do both and is now available for pre-order on Amazon.

Continuous Improvement

Is it a Method or a Platitude?

What does it take to be good at digital? The ability to make good decisions, of course. If you run a pro football team and you make consistently good decisions about players and about coaches, and they, in turn, make consistently good decisions about preparation and plays, you’ll be successful. Most organizations aren’t setup to make good decisions in digital. They don’t have the right information to drive strategic decisions and they often lack the right processes to make good tactical decisions. I’ve highlighted four capabilities that must be knitted together to drive consistently good decisions in the digital realm: comprehensive customer journey mapping, analytics support at every level of the organization, aggressive controlled experimentation targeted to decision-support, and constant voice of customer research. For most organizations, none of these capabilities are well-baked and it’s rare that even a very good organization is excellent at more than two of these capabilities.

[Image: The Essentials for Digital Transformation]

There’s a fifth spoke of this wheel, however, that isn’t so much a capability as an approach. That’s not so completely different from the others as it might seem. After all, almost every enterprise I see has a digital analytics department, a VoC capability, a customer journey map, and an A/B Testing team. In previous posts, I’ve highlighted how those capabilities are mis-used, mis-deployed or simply misunderstood. Which makes for a pretty big miss. So it’s very much true that a better approach must underlie all of these capabilities. When I talk about continuous improvement, it’s not a capability at all. There’s no there, there. It’s just an approach. Yet it’s an approach that, taken seriously, can help weld these other four capabilities into a coherent whole.

The doctrine of continuous improvement is not new – in digital or elsewhere. It has a long and proven track record and it’s one of the few industry best practices with which I am in whole-hearted agreement. Too often, however, continuous improvement is treated as an empty platitude, not a method. It’s interpreted as a squishy injunction that we should always try to get better. Rah! Rah!

No.

Taken this way, it’s as contentless as interpreting evolutionary theory as survival of the fittest. Those most likely to survive are…those most likely to survive. It is the mechanism of natural selection coupled with genetic variation and mutation that gives content to evolutionary doctrine. In other words, without a process for deciding what’s fittest and a method of transmitting that fitness across generations, evolutionary theory would be a contentless tautology. The idea of continuous improvement, too, needs a method to be interesting. Everybody wants to get better all the time. There has to be a real process to make it interesting.

There are such processes, of course. Techniques like Six Sigma famously elaborate a specific method to drive continuous improvement in manufacturing processes. Unfortunately, Six Sigma isn’t directly transferable to digital analytics. We lack the critical optimization variable (defects) against which these methods work. Nor does it work to simply substitute a variable like conversion rate for defects because we lack the controlled environment necessary to believe that every customer should convert.

If Six Sigma doesn’t translate directly into digital analytics, that doesn’t mean we can’t learn from it and cadge some good ideas, though. Here are the core ideas that drive continuous improvement in digital, many of which are rooted in formal continuous improvement methodologies:

  1. It’s much easier to measure a single, specific change than a huge number of simultaneous changes. A website or mobile app is a complex set of interconnecting pieces. If you change your home page, for example, you change the dynamics of every use-case on the site. This may benefit some users and disadvantage others; it may improve one page’s performance and harm another’s. When you change an entire website at once, it’s incredibly difficult to isolate which elements improved and which didn’t. Only the holistic performance of the system can be measured on a before and after basis – and even that can be challenging if new functionality has been introduced. The more discrete and isolated a change, the easier it is to measure its true impact on the system.
  2. Where changes are specific and local, micro-conversion analytics can generally be used to assess improvement. Where changes are numerous or the impact non-local, a controlled environment is necessary to measure improvement. A true controlled environment in digital is generally impossible but can be effectively replicated via controlled experimentation (such as A/B testing or hold-outs) – see the sketch after this list.
  3. Continuous improvement can be driven on a segmented or site-wide basis. Improvements that are site-wide are typically focused on reducing friction. Segmentation improvements are focused on optimizing the conversation with specific populations. Both types of improvement cycles must be addressed in any comprehensive program.
  4. Digital performance is driven by two different systems (acquisition of traffic and content performance). Despite the fact that these two systems function independently, it’s impossible to measure performance of either without measuring their interdependencies. Content performance is ALWAYS relative to the mix of audience created by the acquisition systems. This dependency is even tighter in closed loop systems like Search Engine Optimization – where the content of the page heavily determines the nature of the traffic sent AND the performance of that traffic once sourced (though the two can function quite differently with the best SEO optimized page being a very poor content performer even though it’s sourcing its own traffic).
  5. Marketing performance is a function of four things: the type of audience sourced, the use-case of the audience sourced, the pre-qualification of the audience sourced and the target content to which the audience is sourced. Continuous improvement must target all four factors to be effective.
  6. Content performance is relative to function, audience and use-case. Some content changes will be directly negative or positive (friction causing or reducing), but most will shift the distribution of behaviors. Because most impacts are shifts in the distribution of use-cases or journeys, it’s essential that the relative value of alternative paths be understood when applying continuous improvement.
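
As a concrete illustration of the second idea, here’s a minimal sketch of how the result of a controlled experiment might be read – a two-proportion z-test on conversion for a control group versus a variant (or hold-out). The numbers are invented, and in practice you’d also want to look at lift within segments rather than just overall.

```python
from statistics import NormalDist

# Hypothetical results from a controlled experiment (A/B test or hold-out).
control_conv, control_n = 230, 5_000
variant_conv, variant_n = 275, 5_000

p1, p2 = control_conv / control_n, variant_conv / variant_n
pooled = (control_conv + variant_conv) / (control_n + variant_n)
se = (pooled * (1 - pooled) * (1 / control_n + 1 / variant_n)) ** 0.5

# Two-proportion z-test: is the lift larger than random noise would explain?
z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"control {p1:.2%}, variant {p2:.2%}, lift {p2 - p1:+.2%}")
print(f"z = {z:.2f}, two-sided p-value = {p_value:.3f}")
```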

These are core ideas, not a formal process. In my next post, I’ll take a shot at translating them into a formal process for digital improvement. I’m not really confident how tightly I can describe that process, but I am confident that it will capture something rather different than any current approach to digital analytics.

 

With Thanksgiving upon us now is the time to think about the perfect stocking stuffer for the digital analyst you like best. Pre-order “Measuring the Digital World” now!

Analytics for a (Good) Purpose

I imagine that anyone reading my posts can tell that I love doing analytics. I mean real, hands-on, getting your cuticles data-dirty analytics. But if I have a complaint about the analytics part of what I do, it’s that so often it’s for purposes that just aren’t gripping. There’s nothing wrong with selling more insurance, getting people to view higher-value ads, or cutting a few seconds off the time it takes to complete a process. Making commerce better is a perfectly good thing to do. Commerce matters to all of us. But if there’s nothing wrong with improving commerce, neither is it food for the soul. I’ve been re-reading Tobias Wolff’s wonderful novel “Old School”, and in it, one of the professors says something like this – “Essays? We could live without essays. The world would be a little poorer – like a world without chess – but stories…stories we can’t live without.”

That’s why I’ve always loved the rare occasions when we get to turn an analytics eye on a problem that means something more. Part of my team at EY got that chance a little more than a week back when we hosted an “Analytics Hackathon” for the Earthwatch Institute.

You can check out Earthwatch here at Earthwatch.org – it’s a very cool organization. I love everything about what they do and the way they approach it. I love the science part, which is fascinating. The nature part, which is just something I happen to enjoy – my daughters will attest that I am “crazy hiker guy”. And I love the approach, that assumes we are at our best when we do good not from ideology, which is often cold and artificial, but from passion. Even more, that worthwhile commitment comes from passion tempered by knowledge. We all realize that knowledge without passion achieves little. But passion without knowledge more often does harm than good in our complex society. Building that rare combination of passion for and knowledge of the natural world strikes me as what Earthwatch is all about, and I can’t think of a more rewarding mission.

So Earthwatch provided us six years of data on their expeditioners (folks who volunteer to take field trips to support their scientific endeavors), their donors, and the intersection of the two, and let us have at it for a day. They asked three big questions: what can you tell us about donors and donor patterns, how do donors and expeditioners intersect, and are there things we should know to improve the marketing of expeditions to attract volunteers?

[Earthwatch Image 1]

Great questions all, but a lot to ask of a five-hour day.

We pre-loaded their data into Tableau, and after a brief context-setting presentation from the Earthwatch folks, we split up into groups with each group drawing a single question. Each group produced a full-on dashboard and spent some time answering the questions.

One of the great challenges for many non-profits is the split between what you do and those who pay. In the traditional enterprise, good products and service make your customers happy and willing to pay. At Earthwatch, as with many a non-profit, their mission doesn’t directly serve their donors (those who pay). So the challenge (and the opportunity) is how to connect donors to the mission.

The mechanism for doing that at Earthwatch is the expedition. Hands-on participation in an Earthwatch expedition is by far the best spur to giving they have. So one of our groups focused specifically on the relationship between expeditions and giving – and what they found was fascinating and unexpected. But it’s also fair to ask what other factors might drive giving – are there demographic, lifestage, or proclivity variables that can be used to direct social advertising, inform partnerships or target messaging?

Unfortunately, like many an enterprise (and not just non-profits), Earthwatch hasn’t done the greatest job building out their knowledge of their customers – in this case their donors. With only age, gender and zip code to work with (and even that data is spotty, with null values dominating each demographic category), the options for look-alike or advanced targeting are fairly minimal.

However, even with such thin gruel, there are findings to be had and analysis to be done. If you graph Earthwatch’s expeditioners by age, you get a big horseshoe-like graph. Lots of teenagers. Lots of seniors. Not much in-between. That’s no surprise and probably not changeable. Graph donors, and the left-hand side of the horseshoe (the teenagers) goes away. That’s no surprise either. You can’t squeeze much water from a rock. What is surprising is that the middle part of the graph doesn’t fill in. Aren’t the parents of those teens natural donors? Your children’s connection to an activity ought to be a powerful motivator to giving. I think there’s potentially a missed strategic opportunity here.

There were two other points that emerged from simple graphs of donations by age and donation amount by age. Earthwatch gets lots of donations from seniors. But there’s a big spike right at sixty. And there’s a pretty significant spike in donation amount right around forty. Think about that. Forty and sixty are big inflection points. They are times when almost all of us step outside the lines for at least a short while and think about the shape and nature of our life. That’s a good time to think about an Earthwatch expedition or a donation, right? This is a case where there’s no need to target a broad demographic. The combination of some key interest variables and a big birthday might be enough. It’s at least worth testing. Targeted marketers know the importance of magic moments, and the finer-grained you can make them, the more efficient you can be. For a non-profit like Earthwatch with tiny marketing dollars, the tighter you can draw the boundaries around a magic-moment, the more likely you are to be able to use it effectively.

Thinking about that donor curve also makes plain how important both patience and a long-term strategy are to a non-profit like Earthwatch (and maybe to a lot of for-profits as well). Earthwatch has been around for a long time. That means some of their early expeditioners are retirees now. If you can keep track of people for twenty, thirty or forty years, you have an opportunity to re-ignite those connections. When they have teenagers themselves, they are the right audience to target for expeditions and donations.

This long-term view seems hard. But it’s exactly what great schools and universities do. They know their 25-year-old graduates aren’t giving them money. But if they can create mechanisms to stay in touch till those graduates hit forty, fifty and sixty, that is worth a lot. Social media is, of course, a great way to do this. And facilitating social media connections with volunteers ought to be a long-term strategic goal for any non-profit that engages with young people.

And what about all those folks who took expeditions back in the 80’s and 90’s? Track them down on LinkedIn and Facebook – that’s what interns are for – and send them something to get them back in the fold!

In my recent posts, I’ve been arguing that analytics is under-used in strategy. Mostly, this type of analytics isn’t advanced modelling or big data stuff. It’s macroeconomics not microeconomics. Just looking at the shape of the donor and expeditioner curves can help inform strategic thinking.

From a more tactical standpoint, we also looked at the relationship between their new membership program and repeat giving. Earthwatch has bounced back and forth a bit on membership, but they currently are focused on it. We found that members tended to be smaller donors (their biggest donors weren’t always members). More interesting, however, was the impact of membership on donation pattern and stability. We tracked donors who gave in ’14 before the membership program and then became members in ’15. Did they give less or more? We didn’t have the time or the tools to do this analysis properly, but it looked as if membership, on average, tended to slightly depress average donation but increase frequency of giving resulting in a net positive. As I said, we didn’t have time to really prove this, but analytically, there’s a couple of key points here. If you’re a non-profit trying to assess the impact of something like membership, you need to make sure you break the problem down into analyzable segments. That means creating cohorts of previous donors and tracking their behavior (including whether their behavior tends to improve or deteriorate over time), tracking the impact on new donors and efforts, and, in most cases, using hold-outs and control groups to make sure you’re not fooling yourself about the numbers.
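
To make the analytical point concrete, here’s a toy sketch of that kind of cohort cut – donors who gave in ’14, split by whether they became members in ’15, with average gift and gifts-per-donor by year. The records and column names are invented (this is not Earthwatch’s schema), and a real version would add the hold-outs and control groups mentioned above.

```python
import pandas as pd

# Hypothetical donation records.
donations = pd.DataFrame({
    "donor_id": [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "year":     [2014, 2015, 2015, 2014, 2015, 2014, 2015, 2015, 2015],
    "amount":   [100, 60, 60, 200, 250, 50, 30, 30, 30],
})
members_2015 = {1, 3}  # donor_ids who joined the membership program in 2015

# Cohort: people who gave in 2014, split by whether they became members in 2015.
gave_2014 = set(donations.loc[donations.year == 2014, "donor_id"])
cohort = donations[donations.donor_id.isin(gave_2014)].copy()
cohort["became_member"] = cohort.donor_id.isin(members_2015)

summary = cohort.groupby(["became_member", "year"]).agg(
    avg_gift=("amount", "mean"),
    gifts_per_donor=("donor_id", lambda s: len(s) / s.nunique()),
    total=("amount", "sum"),
)
print(summary)
```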

Going back to the shapes of curves, the team that looked into the relationship between giving and expeditions found something truly interesting. They linked the two tables (donors/expeditioners) to isolate just the population that had gone on an expedition and donated money. Then they created a calculated variable that tracked the difference between the donation date and the expedition date and laid it out on a chart (ain’t Tableau wonderful).

[Earthwatch Image 2]

What they found was kind of a shock. I would have expected a curve kind of like a camel’s hump after the expeditions. Not much giving ahead of time, a short latency period after the expedition, then a sharp hump followed by a quick decline and a long slow descent as the halo from the trip gradually dispersed. Much of that is exactly what they found. There isn’t much of a latency period, but there is a sharp hump followed by the quick decline and slow descent. The shocker was on the other side of the curve. It turns out that lots of expeditioners (not the teens but the adults) are quite likely to give BEFORE they travel. The team tackling this called it a “Packing Boost” (this is one of those things that makes me proud – not only did they find something interesting but they did the extra work to attach a business-useful name to the phenomenon – that’s good consulting). The pre-trip donation amounts were quite a bit smaller on average, but the number of donations was almost symmetrical.

I would never have expected that.

Apparently, when people are getting ready for an expedition they are also in the mood to make a donation. I can see that, but not only was it a surprise to me, it wasn’t received wisdom at Earthwatch either. Their donation solicitations are not at all focused on the pre-trip period.

That’s potentially a huge win and an easily testable addition to their solicitation marketing program.
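
The underlying calculation is simple enough to be worth sketching – the donation-minus-expedition date delta the team charted, bucketed into before and after the trip. The data and column names here are invented for illustration.

```python
import pandas as pd

# Hypothetical linked records: one row per donation from someone who also traveled.
linked = pd.DataFrame({
    "donor_id":        [1, 1, 2, 3, 3],
    "expedition_date": pd.to_datetime(["2015-06-01", "2015-06-01", "2015-07-15",
                                       "2015-08-01", "2015-08-01"]),
    "donation_date":   pd.to_datetime(["2015-05-20", "2015-09-10", "2015-07-01",
                                       "2015-07-25", "2016-01-05"]),
    "amount":          [40, 150, 60, 35, 120],
})

# The calculated field: days from expedition to donation (negative = gave before traveling).
linked["days_from_trip"] = (linked.donation_date - linked.expedition_date).dt.days

# Bucket into before / after the trip and compare counts and average gift size.
linked["timing"] = linked.days_from_trip.apply(lambda d: "before trip" if d < 0 else "after trip")
print(linked.groupby("timing").agg(gifts=("amount", "size"), avg_amount=("amount", "mean")))
```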

The third team looked at the behavior of expeditioners. Their initial analysis focused on when people book an expedition versus the type of expedition. It turns out that there are some pretty distinct types of trip. Expeditions to Africa are usually booked a long time in advance. Expeditions in the US and places like Costa Rica are more typically booked 2-3 months in advance. There are seasonal impacts as well, with most expeditions getting booked in the spring (to take place over summer).
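
Here’s a toy version of that cut – booking lead time by region, plus the seasonality of when bookings land. Again, the data and column names are invented for illustration.

```python
import pandas as pd

# Hypothetical bookings: when the trip was booked vs. when it departs.
bookings = pd.DataFrame({
    "region":    ["Africa", "Africa", "US", "Costa Rica", "US", "Africa"],
    "booked":    pd.to_datetime(["2015-01-10", "2015-02-01", "2015-04-20",
                                 "2015-05-05", "2015-03-15", "2014-12-01"]),
    "departure": pd.to_datetime(["2015-08-01", "2015-09-15", "2015-07-01",
                                 "2015-07-20", "2015-06-10", "2015-07-05"]),
})

bookings["lead_days"] = (bookings.departure - bookings.booked).dt.days
bookings["booking_month"] = bookings.booked.dt.month

# How far in advance does each type of trip get booked, and when do bookings land?
print(bookings.groupby("region")["lead_days"].describe()[["count", "mean", "50%"]])
print(bookings["booking_month"].value_counts().sort_index())
```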

Actionable? You bet it is. If you’re programming the hero section of the website (which happens to have a rotating set of expeditions), knowing the time-horizons for each type of trip can help you get your web marketing right. There’s also a planning element to this. If your Africa expedition isn’t largely staffed six months out, you’re in trouble. But that trip to Costa Rica still has plenty of runway.

Finally, that team looked at the impact of discounts on cancellation behavior and which expeditions were most cancelled (important from a planning perspective). They, too, ran out of time and had some tool limitations, but initial analysis seems to suggest that people are less likely to cancel trips when they’ve gotten a discount. Even more suggestive, it didn’t look like the amount of the discount was hugely significant. This might indicate that some discounting is economically beneficial – even if it drives no initial lift. It’s also possible that it’s no more than an artifact of self-selection, since the discounts may be offered to customer segments that are inherently less likely to cancel (previous expeditioners, for example). It’s an unexpected and potentially important finding but, like any exploratory finding, it needs testing and controls to see if it’s real.
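
For what it’s worth, here’s roughly what that cut – and the self-selection check it needs – might look like, with invented data and columns.

```python
import pandas as pd

# Hypothetical booking records with discount and cancellation flags.
bookings = pd.DataFrame({
    "discount_pct":       [0, 0, 10, 10, 25, 25, 0, 10, 25, 0],
    "prior_expeditioner": [0, 0, 1, 0, 1, 1, 0, 1, 0, 0],
    "cancelled":          [1, 0, 0, 0, 0, 0, 1, 0, 1, 0],
})

# Raw cut: cancellation rate by discount level.
print(bookings.groupby("discount_pct")["cancelled"].mean())

# Self-selection check: if discounts mostly go to groups that rarely cancel anyway
# (previous expeditioners, say), the raw cut overstates the discount's effect.
print(bookings.groupby(["prior_expeditioner", "discount_pct"])["cancelled"].mean())
```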

 

I’m pretty sure our five hours of time won’t change the world. Still, we had a lot of fun doing work we genuinely enjoy for an organization that truly matters. And there’s a chance we helped out a little. That’s good enough for me.

Are there some big takeaways about analytics from our one-day Hackathon? Most of them are things we all should know.

Earthwatch helped make the process more productive by coming to the table with three real and fairly concrete problems. We don’t always get as much from clients that are investing a lot of money. Knowing the questions you want to answer is the single most important step in any analysis.

Like a lot of organizations, Earthwatch hasn’t invested as much in data collection and data quality as is ideal. Limitations on the data place real boundaries on what you can do – not only with analysis but with the fruits of that analysis in targeting and personalization.

Being open to the unexpected is critical (and sometimes that’s easier for an outside consultant without a lot of preconceptions around the business). The team that started by focusing on the impact to donations after taking an expedition ended up talking much more about the impact to donations of planning for an expedition. It wasn’t that their initial hypothesis was wrong. People do donate after going on an expedition. But they had the imagination and sense to see that a more interesting hypothesis emerged from the data.

Tableau is a great tool for visualization and data exploration, but it can’t do everything. Problems like the cohort analysis of membership or the impact of cancellation really required statistical analysis tools with more horsepower and more data manipulation capabilities. Still, the ability to quickly explore a data set across many dimensions is wonderful and the utility of that ease in the right hands is hard to overestimate.

Finally, the biggest part of any analysis is the imagination to map the data to the business problem or opportunity. Strategic insights aren’t usually driven by fancy analysis. They are more often sparked by simple views and cuts of the data (line graphs or bar charts) that make obvious some fundamental fact about the business. Sometimes data can spark new insights; sometimes it’s just a confirmation (or refutation) of strategic thoughts or business intuitions that are already on the table. Either way, it makes for a better strategy and more confident decisions.

 

Finally, one last plug for Earthwatch. What they do is important and, often, very cool (check out that Barrier Reef diving expedition). Like our Hackathon, there’s nothing wrong and everything right with having fun doing something worthwhile. So even if you’re not coming up on forty or sixty, take a look!