
Practical Steps to Building an Analytics Culture

Building an analytics culture in the enterprise is incredibly important. It’s far more important than any single capability, technology or technique. But building culture isn’t easy. You can’t buy it. You can’t proclaim it. You can’t implement it.

There is, of course, a vast literature on building culture in the enterprise. But if the clumsy, heavy-handed, thoroughly useless attempts to “build culture” that I’ve witnessed over the course of my working life are any evidence, that body of literature is nearly useless.

Here’s one thing I know for sure: you don’t build culture by talk. I don’t care whether it’s getting teenagers to practice safe-sex or getting managers to use analytics, preaching virtue doesn’t work, has never worked and will never work. Telling people to be data-driven, proclaiming your commitment to analytics, touting your analytics capabilities: none of this builds analytics culture.

If there’s one thing that every young employee has learned in this era, it’s that fancy talk is cheap and meaningless. People are incredibly sophisticated about language these days. We can sit in front of the TV and recognize in a second whether we’re seeing a commercial or a program. Most of us can tell the difference between a TV show and movie almost at a glance. We can tune out advertising on a Website as effortlessly as we put on our pants. A bunch of glib words aren’t going to fool anyone. You want to know what the reaction is to your carefully crafted, strategic consultancy driven mission statement or that five year “vision” you spent millions on and just rolled out with a cool video at your Sales Conference? Complete indifference.

That’s if you’re lucky…if you didn’t do it really well, you got the eye-roll.

But it isn’t just that people are incredibly sensitive – probably too sensitive – to BS. It’s that even true, sincere, beautifully reasoned words will not build culture. Reading moral philosophy does not create moral students. Not because the words aren’t right or true, but because behaviors are, for the most part, not driven by those types of reasons.

That’s the whole thing about culture.

Culture is lived, not read or spoken. To create it, you have to ingrain it in people’s thinking. If you want a data-driven organization, you have to create good analytic habits. You have to make the organization (and you too) work right.

How do you do that?

You do it by creating certain kinds of process and behaviors that embed analytic thinking. Do enough of that, and you’ll have an analytic culture. I guarantee it. The whole thrust of this recent series of posts is that by changing the way you integrate analytics, voice-of-customer, journey-mapping and experimentation into the enterprise, you can drive better digital decision making. That’s building culture. It’s my big answer to the question of how you build analytics culture.

But I have some small answers as well. Here, in no particular order, are practical ways you can build genuinely good analytics habits in the enterprise.

Analytic Reporting

What it is: Changing your enterprise reporting strategy by moving from reports to tools. Analytic models and forecasting allow you to build tools that integrate historical reporting with forecasting and what-if capabilities. Static reporting is replaced by a set of interactive tools that allow users to see how different business strategies actually play out.

Why it builds analytics culture: With analytic reporting, you democratize knowledge, not data. It makes all the difference in the world. The analytic models capture your best insight into how a key business works and what levers drive performance. Building this into tools not only operationalizes the knowledge, it creates positive feedback loops to analytics. When the forecast isn’t right, everyone knows it and the business is incented to improve its understanding and predictive capabilities. This makes for better culture in analytics consumers and analytics producers.
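To make the reports-to-tools idea concrete, here is a minimal sketch of a what-if tool wrapped around a simple forecast model. The data, the linear response model and the names are all illustrative assumptions, not anything from a real engagement; the point is only that users interact with a model instead of reading a static report.

```python
import numpy as np

# Illustrative historical data: weekly media spend ($K) and site visits (K).
spend = np.array([50, 60, 55, 70, 80, 75, 90, 85])
visits = np.array([120, 138, 130, 155, 172, 165, 190, 182])

# Fit a simple linear response model (slope = incremental visits per $K of spend).
slope, intercept = np.polyfit(spend, visits, 1)

def what_if(planned_spend_k):
    """Forecast weekly visits (K) for a planned spend level."""
    return intercept + slope * planned_spend_k

# A business user can now play out alternative plans instead of reading history.
for plan in (70, 100, 120):
    print(f"Spend ${plan}K/week -> ~{what_if(plan):.0f}K visits forecast")
```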


Cadence of Communications

What it is: Setting up regular briefings between analytics and your senior team and decision-makers. This can include review of dashboards but should primarily focus on answers to previous business questions and discussion of new problems.

Why it builds analytics culture: This is actually one of the most important things you can do. It exposes decision-makers to analytics. It makes it easy for decision-makers to ask for new research and exposes them to the relevant techniques. Perhaps even more important, it lets decision-makers drive the analytics agenda, exposes analysts to real business problems, and forces analysts to develop better communication skills.


C-Suite Advisor

What it is: Create an Analytics Minister-without-portfolio whose sole job is to advise senior decision-makers on how to use, understand and evaluate the analytics, the data and the decisions they get.

Why it builds analytics culture: Most senior executives are fairly ignorant of the pitfalls in data interpretation and the ins-and-outs of KPIs and experimentation. You can’t send them back to get a modern MBA, but you can give them a trusted advisor with no axe to grind. This not only raises their analytics intelligence, it forces everyone feeding them information to up their game as well. This tactic is also critical because of the next strategy…


Walking the Walk

What it is: Senior Leaders can talk till they are blue in the face about data-driven decision-making. Nobody will care. But let a Senior Leader even once use data or demand data around a decision they are making and the whole organization will take notice.

Why it builds analytics culture: Senior leaders CAN and DO have a profound impact on culture but they do so by their behavior not their words. When the leaders at the top use and demand data for decisions, so will everyone else.


Tagging Standards

What it is: A clearly defined set of data collection specifications that ensure that every piece of content on every platform is appropriately tagged to collect a rich set of customer, content, and behavioral data.

Why it builds analytics culture: This ends the debate over whether tags and measurement are optional. They aren’t. This also, interestingly, makes measurement easier. Sometimes, people just need to be told what to do. This is like choosing which side of the road to drive on – it’s far more important that you have a standard than which side of the road you pick. Standards are necessary when an organization needs direction and coordination. Tagging is a perfect example.
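To make the idea of a standard tangible, here is a minimal sketch of a tagging spec expressed as something machine-checkable rather than a document. The required fields are hypothetical examples, not a recommended taxonomy.

```python
# Hypothetical tagging standard: every tracked event must carry these fields.
REQUIRED_TAG_FIELDS = {"page_type", "content_category", "audience", "platform", "campaign_id"}

def validate_tag(event: dict) -> list:
    """Return the list of standard fields missing from a collected event."""
    return sorted(REQUIRED_TAG_FIELDS - event.keys())

event = {"page_type": "product", "platform": "web", "campaign_id": "fall_promo"}
missing = validate_tag(event)
if missing:
    print("Tagging standard violation, missing:", missing)
```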


CMS and Campaign Meta-Data

What it is: The definition of and governance around the creation of campaign and content meta-data. Every piece of content and every campaign element should have detailed, rich meta-data around the audience, tone, approach, contents, and every other element that can be tuned and analyzed.

Why it builds analytics culture: Not only is meta-data the key to digital analytics (providing the meaning that makes content consumption understandable), but rich meta-data definition also guides useful thought. These are the categories people will think about when they analyze content and campaign performance. That’s as it should be, and by providing these pre-built, populated categorizations, you’ll greatly facilitate good analytics thinking.
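As an illustration, here is what one piece of content’s meta-data record might look like. The attribute names and values are invented for the example; the point is that every tunable element becomes an analyzable category.

```python
# Hypothetical meta-data record for one piece of content.
content_metadata = {
    "content_id": "article-1042",
    "audience": "early-stage-researcher",
    "journey_stage": "consideration",
    "tone": "educational",
    "approach": "comparison",
    "topics": ["pricing", "feature-overview"],
}

# Downstream, performance can be sliced by any of these categories,
# e.g. average conversion rate by tone or by journey_stage.
```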


Rapid VoC

What it is: The technical and organizational capability to rapidly create, deploy and analyze surveys and other voice-of-customer research instruments.

Why it builds analytics culture: This is the best capability I know for training senior decision-makers to use research. It’s so cheap, so easy, so flexible and so understandable that decision-makers will quickly get spoiled. They’ll use it over and over and over. Well – that’s the point. Nothing builds analytics muscle like use and getting this type of capability deeply embedded in the way your senior team thinks and works will truly change the decision-making culture of the enterprise.


SPEED and Formal Continuous Improvement Cycles

What it is: The use of a formal methodology for digital improvement. SPEED provides a way to identify the best opportunities for digital improvement, the ways to tackle those opportunities, and the ability to measure the impact of any changes. It’s the equivalent of Six Sigma for digital.

Why it builds analytics culture: Formal methods make it vastly easier for everyone in the organization to understand how to get better. Methods also help define a set of processes that the enterprise can build its organization around. This makes it easier to grow and scale. For large enterprises, in particular, it’s no surprise that formal methodologies like Six Sigma have been so successful. They make key cultural precepts manifest and attach processes to them so that organizational inertia is guided in positive directions.


Does this seem like an absurdly long list? In truth I’m only about half-way through. But this post is getting LONG. So I’m going to save the rest of my list for next week. Till then, here are some final thoughts on creating an analytics culture.

The secret to building culture is this: everything you do builds culture. Some things build the wrong kind of culture. Some things the right kind. But you are never not building culture. So if you want to build the right culture to be good at digital and decision-making, there’s no magic elixir, no secret sauce. There is only the discipline of doing things right. Over and over.

That being said, not every action is equal. Some foods are empty of nutrition but empty, too, of harm. Others positively destroy your teeth or your waistline. Still others provide the right kind of fuel. The things I’ve described above are not just a random list of things done right, they are the small to medium things that, done right, have the biggest impacts I’ve seen on building a great digital and analytics culture. They are also targeted to places and decisions which, done poorly, will deeply damage your culture.

I’ll detail some more super-foods for analytics culture in my next post!


[Get your copy of Measuring the Digital World – the definitive guide to the discipline of digital analytics – to learn more].

Measuring the Digital World

After several months in pre-order purgatory, my book, Measuring the Digital World is now available. If you’re even an occasional reader of this blog, I hope you’ll find the time to read it.

I know that’s no small ask. Reading a professional book is a big investment of time. So is reading Measuring the Digital World worth it?

Well, if you’re invested in digital optimization and analytics, I think it is – and here’s why. We work in a field that is still very immature. It’s grown up, as it were, underneath our feet. And while that kind of organic growth is always the most exciting, it’s also the most unruly. I’m betting that most of us who have spent a few years or more in digital analytics have never really had a chance to reflect on what we do and how we do it. Worse, most of those who are trying to learn the field, have to do so almost entirely by mentored trial-and-error. That’s hard. Having a framework for how and why things work makes the inevitable trial-and-error learning far more productive.

My goal in Measuring the Digital World wasn’t so much to create a how-to book as to define a discipline. I believe digital analytics is a unique field. A field defined by a few key problems that we must solve if we are to do it well. In the book, I wanted to lay out those problems and show how they can be tackled – irrespective of the tools you use or the type of digital property you care about.

At the very heart of digital analytics is a problem of description. Measurement is basic to understanding. We are born with, and soon learn to speak and think in terms of, measurement categories that apply to the physical world. Dimensionality, weight, speed, direction and color are some of the core measurement categories that we use over and over and over again in understanding the world we live in. These things don’t exist in the digital world.

What replaces them?

Our digital analytics tools provide the eyes and ears into the digital world. But I think we should be very skeptical of the measurement categories they suggest. Having lived through the period when those tools were designed and took their present shape, I’ve seen how flawed the measurement conceptions that drove their form and function were.

It’s not original, but it’s still true to say that our digital analytics tools mostly live at the wrong level and have the wrong set of measurement categories – that they are far too focused on web assets and far too little on web visitors.

But if this is a mere truism, it nevertheless lays the groundwork for a real discipline. Because it suggests that the great challenge of digital is how to understand who people are and what they are doing using only their viewing behavior. We have to infer identity and intention from action. Probably 9 out of every 10 pages in Measuring the Digital World are concerned with how to do this.

The things that make it hard are precisely the things that define our discipline. First, to make the connection between action and both identity and intention, we have to find ways to generate meaning based on content consumption. This means understanding at a deep level what content is about – it also means making the implicit assumption that people self-select the things that interest them.

For the most part, that’s true.

But it’s also where things get tricky. Because digital properties don’t contain limitless possibilities and they impose a structure that tries to guide the user to specific actions. This creates a push-pull in every digital world. On the one hand, we’re using what people consume to understand their intention and, at the very same time, we’re constantly forcing their hand and trying to get them to do specific actions! Every digital property – no matter its purpose or design – embodies this push-pull. The result? A complex interplay between self-selection, intention and web design that makes understanding behavior in digital a constant struggle.

That’s the point – and the challenge – of digital analytics. We need to have techniques for moving from behavior to identity and intention. And we need to have techniques that control for the structure of digital properties and the presence or absence of content. These same challenges are played out on Websites, on mobile apps and, now, on omni-channel customer journeys.

This is all ground I’ve walked before, but Measuring the Digital World embodies an orderly and fairly comprehensive approach to describing these challenges and laying out the framework of our discipline. How it works. Why it’s hard. What challenges we still face. It’s all there.

So if you’re an experienced analyst and just want to reflect your intuitions and knowledge against a formal description of digital analytics and how it can be done, this book is for you. I’m pretty sure you’ll find at least a few new ideas and some new clarity around ideas you probably already have.

If you’re relatively new to the field and would like something that is intellectually a little more meaty than the “bag of tips-and-tricks” books that you’ve already read, then this book is for you. You’ll get a deep set of methods and techniques that can be applied to almost any digital property to drive better understanding and optimization. You’ll get a sense, maybe for the first time, of exactly what our discipline is – why it’s hard and why certain kinds of mistakes are ubiquitous and must be carefully guarded against.

And if you’re teaching a bunch of MBA or Business Students about digital analytics and want something that actually describes a discipline, this book is REALLY for you (well…for your students). Your students will get a true appreciation for a cutting edge analytics discipline, they’ll also get a sense of where the most interesting new problems in digital analytics are and what approaches might bear fruit. They’ll get a book that illuminates how the structure of a field – in this case digital – demands specific approaches, creates unique problems, and rewards certain types of analysis. That’s knowledge that cuts deeper than just understanding digital analytics – it goes right to the heart of what analytics is about and how it can work in any business discipline. Finally, I hope that the opportunity to tackle deep and interesting problems illuminated by the book’s framework, excites new analysts and inspires the next generation of digital analysts to go far beyond what we’ve been able to do.


Yes, even though I’m an inveterate reader, I know it’s no trivial thing to say “read this book”. After all, despite my copious consumption, I delve much less often into business or technical books. So many seem like fine ten-page articles stretched – I’m tempted to say distorted – into book form. You get their gist in the first five pages and the rest is just filler. That doesn’t make for a great investment of time.

And now that I’ve actually written a book, I can see why that happens. Who really has 250 pages worth of stuff to say? I’m not sure I do…actually I’m pretty sure there’s some filler tucked in there in a spot or two. But I think the ratio is pretty good.

With Measuring the Digital World I tried to do something very ambitious – define a discipline. To create the authoritative view of what digital analytics is, how it works, and why it’s different than any other field of analytics. Not to answer every question, lay out every technique or solve every problem. There are huge swaths of our field not even mentioned in the book. That doesn’t bother me. What we do is far too rich to describe in a single book or even a substantial collection. Digital is, as the title of the book suggests, a whole new world. My goal was not to explore every aspect of measuring that world, but only to show how that measurement, at its heart, must proceed. I’m surely not the right person to judge to what extent I succeeded. I hope you’ll do that.

Here’s the link to Measuring the Digital World on Amazon.

[By the way, if you’d like a signed copy of Measuring the Digital World, just let me know. You can buy a copy online and I’ll send you a book-plate. I know it’s a little silly, but I confess to extreme fondness for the few signed books I possess!]

Analytics with a Strategic Edge

The Role of Voice of Customer in Enterprise Analytics

The vast majority of analytics effort is expended on problems that are tactical in nature. That’s not necessarily wrong. Tactics gets a bad rap sometimes, but the truth is that the vast majority of decisions we make in almost any context are tactical. The problem isn’t that too much analytics is weighted toward tactical issues; it’s really that strategic decisions don’t use analytics at all. The biggest, most important decisions in the digital enterprise nearly always lack a foundation in data or analysis.

I’ve always disliked the idea behind “HIPPOs” – with its Dilbertian assumption that executives are idiots. That isn’t (mostly) my experience at all. But analytics does suffer from what might be described as “virtue” syndrome – the idea that something (say taxes or abstinence) is good for everyone else but not necessarily for me. Just as creative folks tend to think that what they do can’t be driven by analytics, so too is there a perception that strategic decisions must inevitably be more imaginative and intuitive and less number-driven than many decisions further down in the enterprise.

This isn’t completely wrong though it probably short-sells those mid-level decisions. Building good creative takes…creativity. It can’t be churned out by machine. Ditto for strategic decisions. There is NEVER enough information to fully determine a complex strategic decision at the enterprise level.

This doesn’t mean that data isn’t useful or should not be a driver for strategic decisions (and for creative content too). Instinct only works when it’s deeply informed about reality. Nobody has instincts in the abstract. To make a good strategic decision, a decision-maker MUST have certain kinds of data to hand and without that data, there’s nothing on which intuition, knowledge and experience can operate.

What data does a digital decision-maker need for driving strategy?

Key audiences. Customer Journey. Drivers of decision. Competitive choices.

You need to know who your audiences are and what makes them distinct. You need (as described in the last post) to understand the different journeys those audiences take and what journeys they like to take. You need to understand why they make the choices they make – what drives them to choose one product or service or another. Things like demand elasticity, brand awareness, and drivers of choice at each journey stage are critical. And, of course, you need to understand when and why those choices might favor the competition.

None of this stuff will make a strategic decision for you. It won’t tell you how much to invest in digital. Whether or not to build a mobile app. Whether personalization will provide high returns.

But without fully understanding audience, journey, drivers of decision and competitive choices, how can ANY digital decision-maker possibly arrive at an informed strategy? They can’t. And, in fact, they don’t. Because for the vast majority of enterprises, none of this information is part-and-parcel of the information environment.

I’ve seen plenty of executive dashboards that are supposed to help people run their business. They don’t have any of this stuff. I’ve seen the “four personas” puffery that’s supposed to help decision-makers understand their audience. I’ve seen how limited is the exposure executives have to journey mapping and how little it is deployed on a day-to-day basis. Worst of all, I’ve seen how absolutely pathetic is the use of voice of customer (online and offline) to help decision-makers understand why customers make the choices they do.

Voice of customer as it exists today is almost exclusively concerned with measuring customer satisfaction. There’s nothing wrong with measuring NPS or satisfaction. But these measures tell you nothing that will help define a strategy. They are at best (and they are often deeply flawed here too) measures of scoreboard – whether or not you are succeeding in a strategy.

I’m sure that people will object that knowing whether or not a strategy is succeeding is important. It is. It’s even a core part of ongoing strategy development. However, when divorced from particular customer journeys, NPS is essentially meaningless and uninterpretable. And while it truly is critical to measure whether or not a strategy is succeeding, it’s even more important to have data to help shape that strategy in the first place.

Executives just don’t get that context from their analytics teams. At best, they get little pieces of it in dribs and drabs. It is never – as it ought to be – the constant ongoing lifeblood of decision-making.

I subtitled this post “The Role of Voice of Customer in Enterprise Analytics” because of all the different types of information that can help make strategic decisions better, VoC is by far the most important. A good VoC program collects information from every channel: online and offline surveys, call-center, site feedback, social media, etc. It provides a continuing, detailed and sliceable view of audience, journey distribution and (partly) success. It’s by far the best way to help decision-makers understand why customers are making the choices they are, whether those choices are evolving, and how those choices are playing out across the competitive set. In short, it answers the majority of the questions that ought to be on the minds of decision-makers crafting a digital strategy.

This is a very different sort of executive dashboard than we typically see. It’s a true customer insights dashboard. It’s also fundamentally different than almost ANY VoC dashboard we see at any level. The vast majority of VoC reporting doesn’t provide slice-and-dice by audience and use-case – a capability which is absolutely essential to useful VoC reporting. VoC reporting is almost never based on and tied into a journey model so that the customer insights data is immediately reflective of journey stage and actionable arena. And VoC reporting almost never includes a continuous focus on exploring customer decision-making and tying that into the performance of actual initiatives.

It isn’t just a matter of a dashboard. One of the most unique and powerful aspects of digital voice-of-customer is the flexibility it provides to rapidly, efficiently and at very little cost tackle new problems. VoC should be a core part of executive decision-making with a constant cadence of research, analysis, discussion and reporting driven by specific business questions. This open and continuing dialog where VoC is a tool for decision-making is critical to integrating analytics into decisioning. If senior folks aren’t asking for new VoC research on a constant basis, you aren’t doing it right. The single best indicator of a robust VoC program in digital is the speed with which it changes.

Sadly, what decision-makers mostly get right now (if they get anything at all) is a high-level, non-segmented view of audience demographics, an occasional glimpse into high-level decision-factors that is totally divorced from both segment and journey stage, and an overweening focus on a scoreboard metric like NPS.

It’s no wonder, given such thin gruel, that decision-makers aren’t using data for strategic decisions better. If our executives mostly aren’t Dilbertian, they aren’t miracle workers either. They can’t make wine out of information water. If we want analytics to support strategy – and I assume we all do – then building a completely different sort of VoC program is the single best place to start. It isn’t everything. There are other types of data (behavioral, benchmark, econometric, etc.) that can be hugely helpful in shaping digital strategies. But a good VoC program is a huge step forward – a step forward that, if well executed – has the power to immediately transform how the digital enterprise thinks and works.


This is probably my last post of the year – so see you in 2016! In the meantime, my book Measuring the Digital World is now available. Could be a great way to spend your holiday down time (ideally while you’re resting up from time on the slopes)! Have a great holiday…

SPEED: A Process for Continuous Improvement in Digital

Everyone always wants to get better. But without a formal process to drive performance, continuous improvement is more likely to be an empty platitude than a reality in the enterprise. Building that formal process isn’t trivial. Existing methodologies like Six Sigma illustrate the depth and the advantages of a true improvement process versus an ad hoc “let’s get better” attitude, but those methodologies (largely birthed in manufacturing) aren’t directly applicable to digital. In my last post, I laid out six grounding principles that underlie continuous improvement in digital. I’ll summarize them here as:

  • Small is measurable. Big changes (like website redesigns) alter too much to make optimization practical
  • Controlled Experiments are essential to measure any complex change
  • Continuous improvement will broadly target reduction in friction or improvement in segmentation
  • Acquisition and Experience (Content) are inter-related and inter-dependent
  • Audience, use-case, prequalification and target content all drive marketing performance
  • Most content changes shift behavior rather than drive clear positive or negative outcomes

Having guiding principles isn’t the same thing as having a method, but a real methodology can be fashioned from this sub-structure that will drive true continuous improvement. A full methodology needs a way to identify the right areas to work on and a process for improving those areas. At minimum, that process should include techniques for figuring out what to change and for evaluating the direction and impact of those changes. If you have that, you can drive continuous improvement.

I’ll start where I always start: segmentation. Specifically, 2-tiered segmentation. 2-tiered segmentation is a uniquely digital approach to segmentation that slices audiences by who they are (traditional segmentation) and what they are trying to accomplish (this is the second tier) in the digital channel. This matrixed segmentation scheme is the perfect table-set for continuous improvement. In fact, I don’t think it’s possible to drive continuous improvement without this type of segmentation. Real digital improvement is always relative to an audience and a use-case.

But segmentation on its own isn’t a method for continuous improvement. 2-tiered segmentation gives us a powerful framework for understanding where and why improvement might be focused, but it doesn’t tell us where to target improvements or what those improvements might be. To have a real method, we need that.

Here’s where pre-qualification comes in. One of the core principles is that acquisition and experience are inter-related and inter-dependent. This means that if you want to understand whether or not content is working (creating lift of some kind), then you have to understand the pre-existing state of the audience that consumes that content. Content with a 100% success rate may suck. Content with a 0% success rate may be outstanding. It all depends on the population you give them. Every single person in line at the DMV will stay there to get their license. That doesn’t mean the experience is a good one. It just means that the self-selected audience is determined to finish the process. We need that license! Similarly, if you direct garbage traffic to even the best content, it won’t perform at all. Acquisition and content are deeply interdependent. It’s impossible to measure the latter without understanding the former.

Fortunately, there’s a simple technique for measuring the quality of the audience sourced for any given content area that we call pre-qualification. To understand the pre-qualification level of an audience at a given content point, we use a very short (typically nor more than 3-4 questions) pop-up survey. The pre-qualification survey explores what use-case visitors are in, where they are in the buying cycle, and how committed they are to the brand. That’s it.

It may be simple, but pre-qualification is one of the most powerful tools in the digital analytics arsenal and it’s the key to a successful continuous improvement methodology.

First we segment. Then we measure pre-qualification. With these two pieces we can measure content performance by visitor type, use-case and visitor quality. That’s enough to establish which content and which marketing campaigns are truly underperforming.


Hold the population, use-case and pre-qualification level constant and measure the effectiveness of content pieces and sequences in creating successful outcomes. You can’t effectively measure content performance unless you hold these three variables constant, but when you control for these three variables you open up the power of digital analytics.

We now have a way to target potential improvement areas – just pick the content with the worst performance in each cell (visitor type x visit type x qualification level).
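Here is a minimal sketch of that targeting step in pandas, with invented data and column names: compute success rates per content piece within each cell and flag the worst performer in each.

```python
import pandas as pd

# Illustrative visit-level data: one row per content consumption event.
visits = pd.DataFrame({
    "visitor_type":  ["researcher", "researcher", "buyer", "buyer", "buyer", "researcher"],
    "use_case":      ["compare", "compare", "purchase", "purchase", "purchase", "compare"],
    "prequal_level": ["low", "high", "high", "low", "high", "low"],
    "content_id":    ["A", "A", "B", "B", "C", "C"],
    "success":       [0, 1, 1, 0, 1, 0],
})

cells = ["visitor_type", "use_case", "prequal_level"]

# Success rate and volume per content piece within each
# (visitor type x use-case x qualification) cell.
perf = (visits.groupby(cells + ["content_id"])["success"]
              .agg(rate="mean", n="count")
              .reset_index())

# The worst-performing content within each cell is the first candidate for improvement.
worst = perf.loc[perf.groupby(cells)["rate"].idxmin()]
print(worst)
```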

But there is much more that we can do with these essential pieces in place. By evaluating whether content underperforms across all pre-qualification levels equally or is much worse for less qualified visitors, you can determine if the content problem is because of friction (see guiding principle #3).

Friction problems tend to impact less qualified visitors disproportionately. So if less qualified visitors within each visitor type perform even worse than expected after consuming a piece of content, then some type of friction is likely the culprit.

Further, by evaluating content performance across visitor type (within use-case and with pre-qualification held constant), you have strong clues as to whether or not there are personalization opportunities to drive segmentation improvement.

Finally, where content performs well for qualified audiences but receives a disproportionate share of unqualified visitors, you know that you have to go upstream to fix the marketing campaigns sourcing the visits and targeting the content.
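A rough sketch of the friction and acquisition checks might look like the function below. The thresholds and field names are invented for illustration, and the cross-visitor-type comparison for personalization would work the same way.

```python
def diagnose(rate_by_qual, qualified_share, gap_threshold=0.15, min_good_rate=0.30):
    """Rough diagnostic rules for one content piece.

    rate_by_qual: success rate by pre-qualification level, e.g. {"high": 0.42, "low": 0.08}
    qualified_share: share of the content's visitors who arrive qualified.
    All thresholds here are illustrative assumptions, not calibrated values.
    """
    findings = []
    # Friction tends to hit less qualified visitors disproportionately.
    if rate_by_qual["high"] - rate_by_qual["low"] > gap_threshold:
        findings.append("friction suspected: low-qualification visitors underperform disproportionately")
    # If the content works for qualified visitors but mostly receives unqualified ones,
    # the fix is upstream in the sourcing campaigns.
    if rate_by_qual["high"] >= min_good_rate and qualified_share < 0.25:
        findings.append("acquisition problem: revisit the campaigns targeting this content")
    if not findings:
        findings.append("compare performance across visitor types for personalization opportunities")
    return findings

print(diagnose({"high": 0.42, "low": 0.08}, qualified_share=0.20))
```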

Segment. Pre-Qualify. Evaluate by qualification for friction and acquisition, and by visitor type for personalization.

Step four is to explore what to change. How do you do that? Often, the best method is to ask. This is yet another area for targeted VoC, where you can explore what content people are looking for, how they make decisions, what they need to know, and how that differs by segment. A rich series of choice/decision questions should create the necessary material to craft alternative approaches to test.

You can also break up the content into discrete chunks (each with a specific meta-data purpose or role) and then create a controlled experiment that tests which content chunks are most important and deliver the most lift. This is a sub-process for testing within the larger continuous improvement process. Analytically, it should also be possible to do a form of conjoint analysis on either behavior or preferences captured in VoC.

Segment. Pre-Qualify. Evaluate. Explore.

Now you’re ready to decide on the next round of tests and experiments based on a formal process for finding where problems are, why they exist, and how they can be tackled.

Segment. Pre-Qualify. Evaluate. Explore. Decide.


Sure, it’s just another consulting acronym. But underneath that acronym is real method. Not squishy and not contentless. It’s a formal procedure for identifying where problems exist, what class of problems they are, what type of solution might be a fit (friction reduction or personalization), and what that solution might consist of. All wrapped together in a process that can be endlessly repeated to drive measurable, discrete improvement for every type of visitor and every type of visit across any digital channel. It’s also specifically designed to be responsive to the guiding principles enumerated above that define digital.

If you’re looking for a real continuous improvement process in digital, there’s SPEED and then there’s…

Well, as far as I know, that’s pretty much it.


Interested in knowing more about 2-Tiered Segmentation and Pre-Qualification, the key ingredients to SPEED? “Measuring the Digital World” provides the most detailed descriptions I’ve ever written of how to do both and is now available for pre-order on Amazon.

Digital Transformation – How to Get Started, Real KPIs, the Necessary Staff and So Much More!

In the last couple of months, I’ve been writing an extended series on digital transformation that reflects our current practice focus. At the center of this whole series is a simple thesis: if you want to be good at something you have to be able to make good decisions around it. Most enterprises can’t do that in digital. From the top on down, they are set up in ways that make it difficult or impossible for decision-makers to understand how digital systems work and act on that knowledge. It isn’t because people don’t understand what’s necessary to make good decisions. Enterprises have invested in exactly the capabilities that are necessary: analytics, Voice of Customer, customer journey mapping, agile development, and testing. What they haven’t done is changed their processes in ways that take advantage of those capabilities.

I’ve put together what I think is a really compelling presentation of how most organizations make decisions in the digital channel, why it’s ineffective, and what they need to do to get better. I’ve put a lot of time into it (because it’s at the core of our value proposition) and really, it’s one of the best presentations I’ve ever done. If you’re a member of the Digital Analytics Association, you can see a chunk of that presentation in the recent webinar I did on this topic. [Webinars are brutal – by far the hardest kind of speaking I do – because you are just sitting there talking into the phone for 50 minutes – but I think this one, especially the back-half, just went well] Seriously, if you’re a DAA member, I think you’ll find it worthwhile to replay the webinar.

If you’re not, and you really want to see it, drop me a line, I’m told we can get guest registrations setup by request.

At the end of that webinar I got quite a few questions. I didn’t get a chance to answer them all and I promised I would – so that’s what this post is. I think most of the questions have inherent interest and are easily understood without watching the webinar so do read on even if you didn’t catch it (but watch the darn webinar).

Q: Are metrics valuable to stakeholders even if they don’t tie in to revenues/cost savings?

Absolutely. In point of fact, revenue isn’t even the best metric on the positive side of the balance sheet. For many reasons, lifetime value metrics are generally a better choice than revenue. Regardless, not every useful metric has to, can or should tie back to dollars. There are whole classes of metrics that are important but won’t directly tie to dollars: satisfaction metrics, brand awareness metrics and task completion metrics. That being said, the most controversial type of non-revenue metric is the proxy for engagement, which is, in turn, a kind of proxy for revenue. These, too, can be useful but they are far more dangerous. My advice is to never use a proxy metric unless you’ve done the work to prove it’s a valid proxy. That means no metrics plucked from thin air because they seem reasonable. If you can’t close the loop on performance with behavioral data, use re-survey methods. It’s absolutely critical that the metrics you optimize with be the right ones – and that means spending the extra time to get them right. Finally, I’ve argued for a while that rather than metrics our focus should be on delivering models embedded in tools – this allows people to run their business, not just look at history.
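As a minimal sketch of the loop-closing I mean (with invented data and no claim about real thresholds): before optimizing to an engagement proxy, check that it actually tracks the dollar outcome you care about.

```python
import numpy as np

# Illustrative per-customer data: an engagement proxy vs. observed 12-month value.
engagement_score = np.array([2, 5, 1, 7, 4, 8, 3, 6, 9, 2])
twelve_month_value = np.array([20, 55, 10, 80, 35, 95, 25, 60, 110, 15])

r = np.corrcoef(engagement_score, twelve_month_value)[0, 1]
print(f"proxy vs. value correlation: {r:.2f}")
# Only treat the proxy as optimization-worthy if the relationship is strong
# and holds up out of sample; otherwise it's a metric plucked from thin air.
```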

Q: What is your favorite social advertising KPI? I have been using $ / Site Visit and $ / Conversion to measure our campaigns but there is some pushback from the social team that we are not capturing social reach.

A very related question – and it’s interesting because I actually didn’t talk much about KPIs in the webinar! I think the question boils down to this (in addition to everything I just said about metrics) – is reach a valid metric? It can be, but reach shouldn’t be taken as is. As per my answer above, the value of an impression is quite different on every channel. If you’re not doing the work to figure out the value of an impression in a channel then what’s the point of reporting an arbitrary reach number? How can people possibly assess whether any given reach number makes a buy good or bad once they realize that the value of an impression varies dramatically by channel? I also think a strong case can be made that it’s a mistake to try and optimize digital campaigns using reported metrics, even direct conversion and dollars. I just saw a tremendous presentation from Drexel’s Elea Feit at the Philadelphia DAA Symposium that echoed (and improved) what I’ve been saying for years. Namely that non-incremental attribution is garbage and that the best way to get true measures of lift is to use control groups. If your social media team thinks reach is important, then it’s worth trying to prove whether they are right – whether that’s because those campaigns generate hidden short-term lift or because they generate brand awareness that tracks to long-term lift.
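For what it’s worth, here is the basic incremental-lift arithmetic that a randomized holdout makes possible, with invented numbers.

```python
# Illustrative campaign results with a randomized holdout control group.
exposed = {"visitors": 50_000, "conversions": 1_200}   # saw the social campaign
holdout = {"visitors": 50_000, "conversions": 1_050}   # randomly withheld

exposed_rate = exposed["conversions"] / exposed["visitors"]
holdout_rate = holdout["conversions"] / holdout["visitors"]

incremental_conversions = (exposed_rate - holdout_rate) * exposed["visitors"]
lift = (exposed_rate - holdout_rate) / holdout_rate

print(f"incremental conversions: {incremental_conversions:.0f}, lift: {lift:.1%}")
# Attribution that simply credits all 1,200 conversions to the campaign
# overstates its effect by the holdout baseline (1,050 here).
```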

Q: For companies that are operating in the way you typically see, what is the one thing you would recommend to help get them started?

This is a tough one because it’s still somewhat dependent on the exact shape of the organization. Here are two things I commonly recommend. First, think about a much different kind of VoC program. Constant updating and targeting of surveys, regular socialization with key decision-makers where they drive the research, an enterprise-wide VoC dashboard in something like Tableau that focuses on customer decision-making not NPS. This is a great and relatively inexpensive way to bootstrap a true strategic decision support capability. Second, totally re-think your testing program as a controlled experimentation capability for decision-making. Almost every organization I work with should consider fundamental change in the nature, scope, and process around testing.

Q: How much does this change when there are no clear conversions (i.e., Non-Profit, B2B, etc)?

I don’t think anything changes. But, of course, everything does change. What I mean is that all of the fundamental precepts are identical. VoC, controlled experiments, customer journey mapping, agile analytics, integration of teams – it’s all exactly the same set of lessons regardless of whether or not you have clear conversions on your website. On the other hand, every single measurement is that much harder. I’d argue that the methods I argue for are even more important when you don’t have the relatively straightforward path to optimization that eCommerce provides. In particular, the absolute importance of closing the loop on important measurements simply can’t be understated when you don’t have a clear conversion to optimize to.

Q: What is the minimum size of analytics team to be able to successfully implement this at scale?

Another tricky question to answer but I’ll try not to weasel out of it. Think about it this way, to drive real transformation at enterprise scale, you need at least 1 analyst covering every significant function. That means an analyst for core digital reporting, digital analytics, experimentation, VoC, data science, customer journey, and implementation. For most large enterprises, that’s still an unrealistically small team. You might scrape by with a single analyst in VoC and customer journey, but you’re going to need at least small teams in core digital reporting, analytics, implementation and probably data science as well. If you’re at all successful, the number of analytics, experimentation and data science folks is going to grow larger – possibly much larger.  It’s not like a single person in a startup can’t drive real change, but that’s just not the way things work in the large enterprise. Large enterprise environments are complex in every respect and it takes a significant number of people to drive effective processes.

Q: Sometimes it feels like agile is just a subject line for the weekly meeting. Do you have any examples of organizations using agile well when it comes to digital?

Couldn’t agree more. My rule of thumb is this: if your organization is studying how to be innovative, it never will be. If your organization is meeting about agile, it isn’t. In the IT world, Agile has gone from a truly innovative approach to development to a ludicrous over-engineered process managed, often enough, by teams of consulting PMs. I do see some organizations that I think are actually quite agile when it comes to digital and doing it very well. They are almost all gaming companies, pure-play internet companies or startups. I’ll be honest – a lot of the ideas in my presentation and approach to digital transformation come from observing those types of companies. Whether I’m right that similar approaches can work for a large enterprise is, frankly, unclear.

Q: As a third party measurement company, what is the best way to approach or the best questions to ask customers to really get at and understand their strategic goals around their customer journeys?

This really is too big to answer inside a blog – maybe even too big to reasonably answer as a blog. I’ll say, too, that I’m increasingly skeptical of our ability to do this. As a consultant, I’m honor-bound to claim that as a group we can come in, ask a series of questions of people who have worked in an industry for 10 or 20 years and, in a few days’ time, understand their strategic goals. Okay…put this way, it’s obviously absurd. And, in fact, that’s really not how consulting companies work. Most of the people leading strategic engagements at top-tier consulting outfits have actually worked in an industry for a long time and many have worked on the enterprise side and made exactly those strategic decisions. That’s a huge advantage. Most good consultants in a strategic engagement know 90% of what they are going to recommend before they ask a single question.

Having said that, I’m often personally in a situation where I’m asked to do exactly what I’ve just said is absurd and chances are if you’re a third party measurement company you have the same problem. You have to get at something that’s very hard and very complex in a very short amount of time and your expertise (like mine) is in analytics or technology not insurance or plumbing or publishing or automotive.

Here’s a couple of things I’ve found helpful. First, take the journey’s yourself. It’s surprising how many executives have never bought an online policy from their own company, downloaded a whitepaper to generate a lead, or bought advertising on their own site. You may not be able to replicate every journey, but where you can get hands on, do it. Having a customer’s viewpoint on the journey never hurts and it can give you insight your customers should but often don’t have. Second, remember that the internet is your best friend. A little up-front research from analysts is a huge benefit when setting the table for those conversations. And I’m often frantically googling acronyms and keywords when I’m leading those executive conversations. Third, check out the competition. If you do a lead on the client’s website, try it on their top three competitors too. What you’ll see is often a great table-set for understanding where they are in digital and what their strategy needs to be. Finally, get specific on the journey. In my experience, the biggest failing in senior leaders is their tendency to generality. Big generalities are easy and they sound smart but they usually don’t mean much of anything. The very best leaders don’t ever retreat into useless generality, but most of us will fall into it all too easily.

Q: What are some engagement models where an enterprise engages 3rd party consulting? For how long?

The question every consultant loves to hear! There are three main ways we help drive this type of digital transformation. The first is as strategic planners. We do quite a bit of pure digital analytics strategy work, but for this type of work we typically expand the strategic team a bit (beyond our core digital analytics folks) to include subject matter experts in the industry, in customer journey, and in information management. The goal is to create a “deep” analytics strategy that drives toward enterprise transformation. The second model (which can follow the strategic phase) is to supplement enterprise resources with specific expertise to bootstrap capabilities. This can include things like tackling specific highly strategic analytics projects, providing embedded analysts as part of the team to increase capacity and maturity, building out controlled experiment teams, developing VoC systems, etc. We can also provide – and here’s where being part of a big practice really helps – PM and Change Management experts who can help drive a broader transformation strategy. Finally, we can help build the program soup-to-nuts. Mind you, that doesn’t mean we do everything. I’m a huge believer that a core part of this vision is transformation in the enterprise. Effectively, that means outsourcing to a consultancy is never the right answer. But in a soup-to-nuts model, we keep strategic people on the ground, helping to hire, train, and plan on an ongoing basis.

Obviously, the how-long depends on the model. Strategic planning exercises are typically 10-12 weeks. Specific projects are all over the map, and the soup-to-nuts model is sustained engagement though it usually starts out hot and then gets gradually smaller over time.

Q: Would really like to better understand how you can identify visitor segments in your 2-tier segmentation when we only know they came to the site and left (without any other info on what segment they might represent).  Do you have any examples or other papers that address how/if this can be done?

A couple years back I was on a panel at a conference in San Diego and one of the panelists started every response with “In my book…”. It didn’t seem to matter much what the question was. The answer (and not just the first three words) was always the same. I told my daughters about it when I got home, and the gentleman is forever immortalized in my household as the “book guy”. Now I’m going to go all book guy on you. The heart of my book, “Measuring the Digital World”, is an attempt to answer this exact question. It’s by far the most detailed explication I’ve ever given of the concepts behind 2-tiered segmentation and how to go from behavior to segmentation. That being said, you can only pre-order now. So I’m also going to point out that I have blogged fairly extensively on this topic over the years. Here are a couple of posts I dredged out that provide a good overview:



and – even more important – here’s the link to pre-order the book!

That’s it…a pretty darn good list of questions. I hope that’s genuinely reflective of the quality of the webinar. Next week I’m going to break out of this series for a week and write about our recent non-profit analytics hackathon – a very cool event that spurred some new thoughts on the analysis process and the tools we use for it.

Controlled Experimentation and Decision-Making

The key to effective digital transformation isn’t analytics, testing, customer journeys, or Voice of Customer. It’s how you blend these elements together in a fundamentally different kind of organization and process. In the DAA Webinar (link coming) I did this past week on Digital Transformation, I used this graphic to drive home that point:

I’ve already highlighted experience engineering and integrated analytics in this little series, and the truth is I wrote a post on constant customer research too. If you haven’t read it, don’t feel bad. Nobody has. I liked it so much I submitted it to the local PR machine to be published and it’s still grinding through that process. I was hoping to get that relatively quickly so I could push the link, but I’ve given up holding my breath. So while I wait for VoC to emerge into the light of day, let’s move on to controlled experimentation.

I’ll start with definitional stuff. By controlled experimentation I do mean testing, but I don’t just mean A/B testing or even MVT as we’ve come to think about it. I want it to be broader. Almost every analytics project is challenged by the complexity of the world. It’s hard to control for all the constantly changing external factors that drive or impact performance in our systems. What looks like a strong and interesting relationship in a statistical analysis is often no more than an artifact produced by external factors that aren’t being considered. Controlled experiments are the best tool there is for addressing those challenges.

In a controlled experiment, the goal is to create a test whereby the likelihood of external factors driving the results is minimized. In A/B testing, for example, random populations of site visitors are served alternative experiences and their subsequent performance is measured. Provided the selection of visitors into each variant of the test is random and there is sufficient volume, A/B tests make it very unlikely that external factors like campaign sourcing or day-time parting will impact the test results. How unlikely? Well, taking a random sample doesn’t guarantee a representative sample. You can flip a fair coin fifty times and get fifty heads, so even a sample collected in a fully random manner may come out quite biased; it’s just not very likely. The more times you flip, the more likely your sample will be representative.
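A quick, purely illustrative simulation of that point: the chance that a random 50/50 split comes out badly unbalanced shrinks rapidly as the sample grows.

```python
import numpy as np

rng = np.random.default_rng(7)

def share_badly_biased(n, trials=20_000, tolerance=0.10):
    """Share of simulated random 50/50 assignments where one variant's
    share of visitors ends up more than 10 points away from 50%."""
    splits = rng.binomial(n, 0.5, size=trials) / n
    return np.mean(np.abs(splits - 0.5) > tolerance)

for n in (50, 500, 5_000):
    print(f"n={n:>5}: {share_badly_biased(n):.1%} of random samples are off by >10 points")
```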

Controlled experiments aren’t just the domain of website testing though. They are a fundamental part of scientific method and are used extensively in every kind of research. The goal of a controlled experiment is to remove all the variables in an analysis but one. That makes it really easy to analyze.

In the past, I’ve written extensively on the relationship between analytics and website testing (Kelly Wortham and I did a whole series on the topic). In that series, I focused on testing as we think of it in the digital world – A/B and MV tests and the tools that drive those tests. I don’t want to do that here, because the role for controlled experimentation in the digital enterprise is much broader than website testing. In an omni-channel world, many of the most important questions – and most important experiments – can’t be done using website testing. They require experiments which involve the use, absence or role of an entire channel or the media that drives it. You can’t build those kinds of experiments in your CMS or your testing tool.

I also appreciate that controlled experimentation doesn’t carry with it some of the mental baggage of testing. When we talk testing, people start to think about Optimizely vs. SiteSpect, A/B vs. MVT, landing page optimization and other similar issues. And when people think about A/B tests, they tend to think about things like button colors, image A vs. image B and changing the language in a call-to-action. When it comes to digital transformation, that’s all irrelevant.

It’s not that changing the button colors on your website isn’t a controlled experiment. It is; it’s just not a very important one. It’s also representative of the kind of random “throw stuff at a wall” approach to experimentation that makes so many testing programs nearly useless.

One of the great benefits of controlled experimentation is that, done properly, the idea of learning something useful is baked into the process. When you change the button color on your Website, you’re essentially framing a research question like this:

Hypothesis: Changing the color of Button X on Page Y from Red to Yellow will result in more clicks of the button per page view

An A/B test will indeed answer that question. However, it won’t necessarily answer ANY other question of higher generality. Will changing the color of any other button on any other page result in more clicks? That’s not part of the test.

Even with something as inane as button colors, thinking in terms of a controlled experiment can help. A designer might generalize this hypothesis to something that’s a little more interesting. For example, the hypothesis might be:

Hypothesis: Given our standard color pallet, changing a call-to-action on the page to a higher contrast color will result in more clicks per view on the call-to-action

That’s a somewhat more interesting hypothesis and it can be tested with a range of colors with different contrasts. Some of those colors might produce garish or largely unreadable results. Some combinations might work well for click-rates but create negative brand impressions. That, too, can be tested and might perhaps yield a standardized design heuristic for the right level of contrast between the call-to-action and the rest of a page given a particular color palette.

The point is, by casting the test as a controlled experiment we are pushed to generalize the test in terms of some single variable (such as contrast and its impact on behavior). This makes the test a learning experience; something that can be applied to a whole set of cases.
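For the narrow button-color version of the hypothesis, reading out the test is just a two-proportion comparison. The sketch below uses invented counts and a normal-approximation z-test, which is one reasonable (not the only) way to evaluate such a result.

```python
from math import sqrt, erf

# Invented A/B results for the higher-contrast call-to-action test.
clicks_a, views_a = 410, 10_000    # A: current palette contrast
clicks_b, views_b = 465, 10_000    # B: higher-contrast call-to-action

p_a, p_b = clicks_a / views_a, clicks_b / views_b
p_pool = (clicks_a + clicks_b) / (views_a + views_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
z = (p_b - p_a) / se

# Two-sided p-value under the normal approximation.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"click rate A={p_a:.2%}, B={p_b:.2%}, z={z:.2f}, p={p_value:.3f}")
```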

This example could be read as an argument for generalizing isolated tests into generalized controlled experiments. That might be beneficial, but it’s not really ideal. Instead, every decision-maker in the organization should be thinking about controlled experimentation. They should be thinking about it as a way to answer questions analytics can’t AND as a way to assess whether the analytics they have are valid. Controlled experimentation, like analytics, is a tool to be used by the organization when it wants to answer questions. Both are most effective when used in a top-down, not a bottom-up, fashion.

As the sentence above makes clear, controlled experimentation is something you do, but it’s also a way you can think about analytics – a way to evaluate the data decision-makers already have. I’ve complained endlessly, for example, about how misleading online surveys can be when it comes to things like measuring sitewide NPS. My objection isn’t to the NPS metric, it’s to the lack of control in the sample. Every time you shift your marketing or site functionality, you shift the distribution of visitors to your website. That, in turn, will likely shift your average NPS score – irrespective of any other change or difference. You haven’t gotten better or worse. Your customers don’t like you less or more. You’ve simply sampled a somewhat different population of visitors.

That’s a perfect example of a metric/report which isn’t very controlled.  Something outside what you are trying to measure (your customer’s satisfaction or willingness to recommend you) is driving the observed changes.

When decision-makers begin to think in terms of controlled experiments, they have a much better chance of spotting the potential flaws in the analysis and reporting they have, and making more risk-informed decisions. No experiment can ever be perfectly controlled. No analysis can guarantee that outside factors aren’t driving the results. But when decision-makers think about what it would take to create a good experiment, they are much more likely to interpret analysis and reporting correctly.

I’ve framed this in terms of decision-makers, but it’s good advice for analysts too. Many an analyst has missed the mark by failing to control for obvious external drivers in their findings. A huge part of learning to “think like an analyst” is learning to evaluate every analysis in terms of how to best approximate a controlled experiment.

So if controlled experimentation is the best way to make decisions, why not just test everything? Why not, indeed? Controlled experimentation is tremendously underutilized in the enterprise. But having said as much, not every problem is amenable to or worth experimenting on. Sometimes, building a controlled experiment is very expensive compared to an analysis; sometimes it’s not. With an A/B testing tool, it’s often easier to deploy a simple test than try to conduct and analysis of a customer preference. But if you have an hypothesis that involves re-designing the entire website, building all that creative to run a true controlled experiment isn’t going to be cheap, fast or easy.

Media mix analysis is another example of how analysis/experimentation trade-offs come into play. If you do a lot of local advertising, then controlled experimentation is far more effective than mix modeling to determine the impact of media and to tune for the optimum channel blend. But if much of your media buy is national, then it’s pretty much impossible to create a fully controlled experiment that will allow you to test mix hypotheses. So for some kinds of marketing organizations, controlled experimentation is the best approach to mix decisions; for others, mix modelling (analysis in other words – though often supplemented by targeted experimentation) is the best approach.

This may all seem pretty theoretical, so I’ll boil it down to some specific recommendations for the enterprise:

  • Repurpose you’re A/B testing group as a controlled experimentation capability
  • Blend non-digital analytics resources into that group to make sure you aren’t thinking too narrowly – don’t just have a bunch of people who think in terms of A/B testing tools
  • Integrate controlled experimentation with analytics – they are two sides of the same coin and you need a single group that can decide which is appropriate for a given problem
  • Train your executives and decision-makers in experimentation and interpreting analysis – probably with a dedicated C-Suite resource
  • Create constant feedback loops in the organization so that decision-makers can request new survey questions, new analysis and new experiments at the same time and with the same group

I see lots of organizations that think they are doing a great job testing. Mostly they aren’t even close. You’re doing a great job testing when every decision maker at every level in the organization is thinking about whether a controlled experiment is possible when they have to make a significant decision. When those same decision-makers know how to interpret the data they have in terms of its ability to approximate a controlled experiment. And when building controlled experiments is deeply integrated into the analytics research team and deployed across digital and omni-channel problems.