
SPEED: A Process for Continuous Improvement in Digital

Everyone always wants to get better. But without a formal process to drive performance, continuous improvement is more likely to be an empty platitude than a reality in the enterprise. Building that formal process isn’t trivial. Existing methodologies like Six Sigma illustrate the depth and the advantages of a true improvement process versus an ad hoc “let’s get better” attitude, but those methodologies (largely birthed in manufacturing) aren’t directly applicable to digital. In my last post, I laid out six grounding principles that underlie continuous improvement in digital. I’ll summarize them here as:

  • Small is measurable. Big changes (like website redesigns) alter too much to make optimization practical
  • Controlled Experiments are essential to measure any complex change
  • Continuous improvement will broadly target reduction in friction or improvement in segmentation
  • Acquisition and Experience (Content) are inter-related and inter-dependent
  • Audience, use-case, prequalification and target content all drive marketing performance
  • Most content changes shift behavior rather than drive clear positive or negative outcomes

Having guiding principles isn’t the same thing as having a method, but a real methodology can be fashioned from this sub-structure that will drive true continuous improvement. A full methodology needs a way to identify the right areas to work on and a process for improving those areas. At minimum, that process should include techniques for figuring out what to change and for evaluating the direction and impact of those changes. If you have that, you can drive continuous improvement.

I’ll start where I always start: segmentation. Specifically, 2-tiered segmentation. 2-tiered segmentation is a uniquely digital approach to segmentation that slices audiences by who they are (traditional segmentation) and what they are trying to accomplish (this is the second tier) in the digital channel. This matrixed segmentation scheme sets the table perfectly for continuous improvement. In fact, I don’t think it’s possible to drive continuous improvement without this type of segmentation. Real digital improvement is always relative to an audience and a use-case.

But segmentation on its own isn’t a method for continuous improvement. 2-tiered segmentation gives us a powerful framework for understanding where and why improvement might be focused, but it doesn’t tell us where to target improvements or what those improvements might be. To have a real method, we need that.

Here’s where pre-qualification comes in. One of the core principles is that acquisition and experience are inter-related and inter-dependent. This means that if you want to understand whether or not content is working (creating lift of some kind), then you have to understand the pre-existing state of the audience that consumes that content. Content with a 100% success rate may suck. Content with a 0% success rate may be outstanding. It all depends on the population you give it. Every single person in line at the DMV will stay there to get their license. That doesn’t mean the experience is a good one. It just means that the self-selected audience is determined to finish the process. We need that license! Similarly, if you direct garbage traffic to even the best content, it won’t perform at all. Acquisition and content are deeply interdependent. It’s impossible to measure the latter without understanding the former.

Fortunately, there’s a simple technique for measuring the quality of the audience sourced for any given content area that we call pre-qualification. To understand the pre-qualification level of an audience at a given content point, we use a very short (typically no more than 3-4 questions) pop-up survey. The pre-qualification survey explores what use-case visitors are in, where they are in the buying cycle, and how committed they are to the brand. That’s it.
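To make the idea concrete, here’s a minimal sketch of how the answers to such a survey might be rolled up into a qualification level. The question wording, answer options, and scoring weights are hypothetical illustrations, not part of the methodology described above:

```python
# Hypothetical scoring for a short pre-qualification survey.
# The answer options and weights below are illustrative assumptions.

STAGE_SCORES = {"just browsing": 0, "comparing options": 1, "ready to buy": 2}
COMMITMENT_SCORES = {"never heard of you": 0, "considering you": 1, "strongly prefer you": 2}

def qualification_level(use_case: str, stage: str, commitment: str) -> dict:
    """Combine survey answers into a coarse qualification level."""
    score = STAGE_SCORES[stage] + COMMITMENT_SCORES[commitment]
    level = "low" if score <= 1 else "medium" if score <= 2 else "high"
    return {"use_case": use_case, "score": score, "level": level}
```

The point isn’t the particular weights; it’s that three quick questions are enough to bucket every respondent into a qualification tier that the rest of the process can hold constant.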

It may be simple, but pre-qualification is one of the most powerful tools in the digital analytics arsenal and it’s the key to a successful continuous improvement methodology.

First we segment. Then we measure pre-qualification. With these two pieces we can measure content performance by visitor type, use-case and visitor quality. That’s enough to establish which content and which marketing campaigns are truly underperforming.

How?

Hold the population, use-case and pre-qualification level constant and measure the effectiveness of content pieces and sequences in creating successful outcomes. You can’t effectively measure content performance unless you hold these three variables constant, but when you control for these three variables you open up the power of digital analytics.

We now have a way to target potential improvement areas – just pick the content with the worst performance in each cell (visitor type x use-case x qualification level).
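The cell-by-cell evaluation can be sketched in a few lines of Python. The record fields, segment labels, and content names here are illustrative assumptions, not from any real dataset:

```python
from collections import defaultdict

# Each record: one content exposure with its segmentation cell and outcome.
# Field names and values are illustrative.
visits = [
    {"visitor": "new", "use_case": "research", "qual": "high", "content": "A", "success": True},
    {"visitor": "new", "use_case": "research", "qual": "high", "content": "A", "success": False},
    {"visitor": "new", "use_case": "research", "qual": "high", "content": "B", "success": True},
    {"visitor": "new", "use_case": "research", "qual": "low",  "content": "B", "success": False},
]

def worst_content_per_cell(records):
    """Within each (visitor type, use-case, qualification) cell, find the
    content piece with the lowest success rate."""
    tallies = defaultdict(lambda: [0, 0])  # (cell, content) -> [successes, total]
    for r in records:
        key = ((r["visitor"], r["use_case"], r["qual"]), r["content"])
        tallies[key][0] += r["success"]
        tallies[key][1] += 1
    worst = {}
    for (cell, content), (wins, total) in tallies.items():
        rate = wins / total
        if cell not in worst or rate < worst[cell][1]:
            worst[cell] = (content, rate)
    return worst
```

Because the comparison happens inside each cell, content is only ever judged against content shown to the same kind of visitor, on the same kind of visit, at the same qualification level.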

But there is much more that we can do with these essential pieces in place. By evaluating whether content underperforms across all pre-qualification levels equally or is much worse for less qualified visitors, you can determine if the content problem is because of friction (see guiding principle #3).

Friction problems tend to impact less qualified visitors disproportionately. So if less qualified visitors within each visitor type perform even worse than expected after consuming a piece of content, then some type of friction is likely the culprit.
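One way to operationalize that friction signal is to compare each qualification tier’s shortfall against its expected (baseline) success rate. This is a sketch; the rates and the tolerance threshold are invented for illustration:

```python
def friction_signal(rates_by_qual, baseline_by_qual, tolerance=0.05):
    """Flag likely friction: low-qualification visitors fall short of their
    expected rate by materially more than high-qualification visitors do.
    The tolerance threshold is an illustrative assumption."""
    gap = {q: baseline_by_qual[q] - rates_by_qual[q] for q in rates_by_qual}
    return gap["low"] - gap["high"] > tolerance

# Hypothetical content piece: high-qual visitors roughly hit their baseline,
# while low-qual visitors fall well short of theirs -- a friction pattern.
observed = {"high": 0.30, "low": 0.05}
baseline = {"high": 0.32, "low": 0.15}
```

If instead both tiers miss their baselines by similar amounts, the problem is more likely the content’s substance than friction in the experience.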

Further, by evaluating content performance across visitor type (within use-case and with pre-qualification held constant), you have strong clues as to whether or not there are personalization opportunities to drive segmentation improvement.

Finally, where content performs well for qualified audiences but receives a disproportionate share of unqualified visitors, you know that you have to go upstream to fix the marketing campaigns sourcing the visits and targeting the content.

Segment. Pre-Qualify. Evaluate by qualification for friction and acquisition, and by visitor type for personalization.

Step four is to explore what to change. How do you do that? Often, the best method is to ask. This is yet another area for targeted VoC, where you can explore what content people are looking for, how they make decisions, what they need to know, and how that differs by segment. A rich series of choice/decision questions should create the necessary material to craft alternative approaches to test.

You can also break up the content into discrete chunks (each with a specific meta-data purpose or role) and then create a controlled experiment that tests which content chunks are most important and deliver the most lift. This is a sub-process for testing within the larger continuous improvement process. Analytically, it should also be possible to do a form of conjoint analysis on either behavior or preferences captured in VoC.
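A controlled experiment of this kind is typically read out with a two-proportion test on success rates between variants. Here’s a minimal, self-contained sketch using only the standard library; the sample counts are hypothetical:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z-test for a content A/B experiment.
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal CDF via the error function (math.erf is in the stdlib).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value
```

For example, 120 successes out of 1,000 for variant A against 90 out of 1,000 for variant B yields a p-value under 0.05 – enough, under conventional thresholds, to call the lift real rather than noise.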

Segment. Pre-Qualify. Evaluate. Explore.

Now you’re ready to decide on the next round of tests and experiments based on a formal process for finding where problems are, why they exist, and how they can be tackled.

Segment. Pre-Qualify. Evaluate. Explore. Decide.

SPEED.

Sure, it’s just another consulting acronym. But underneath that acronym is a real method. Not squishy and not contentless. It’s a formal procedure for identifying where problems exist, what class of problems they are, what type of solution might be a fit (friction reduction or personalization), and what that solution might consist of. All wrapped together in a process that can be endlessly repeated to drive measurable, discrete improvement for every type of visitor and every type of visit across any digital channel. It’s also specifically designed to be responsive to the guiding principles enumerated above that define digital.

If you’re looking for a real continuous improvement process in digital, there’s SPEED and then there’s…

Well, as far as I know, that’s pretty much it.

 

Interested in knowing more about 2-Tiered Segmentation and Pre-Qualification, the key ingredients to SPEED? “Measuring the Digital World” provides the most detailed descriptions I’ve ever written of how to do both and is now available for pre-order on Amazon.

Continuous Improvement

Is it a Method or a Platitude?

What does it take to be good at digital? The ability to make good decisions, of course. If you run a pro football team and you make consistently good decisions about players and about coaches, and they, in turn, make consistently good decisions about preparation and plays, you’ll be successful. Most organizations aren’t set up to make good decisions in digital. They don’t have the right information to drive strategic decisions and they often lack the right processes to make good tactical decisions. I’ve highlighted four capabilities that must be knitted together to drive consistently good decisions in the digital realm: comprehensive customer journey mapping, analytics support at every level of the organization, aggressive controlled experimentation targeted to decision-support, and constant voice of customer research. For most organizations, none of these capabilities are well-baked and it’s rare that even a very good organization is excellent at more than two of these capabilities.

The Essentials for Digital Transformation

There’s a fifth spoke of this wheel, however, that isn’t so much a capability as an approach. That’s not so completely different from the others as it might seem. After all, almost every enterprise I see has a digital analytics department, a VoC capability, a customer journey map, and an A/B Testing team. In previous posts, I’ve highlighted how those capabilities are mis-used, mis-deployed or simply misunderstood. Which makes for a pretty big miss. So it’s very much true that a better approach underlies all of these capabilities. When I talk about continuous improvement, it’s not a capability at all. There’s no there, there. It’s just an approach. Yet it’s an approach that, taken seriously, can help weld these other four capabilities into a coherent whole.

The doctrine of continuous improvement is not new – in digital or elsewhere. It has a long and proven track record and it’s one of the few industry best practices with which I am in whole-hearted agreement. Too often, however, continuous improvement is treated as an empty platitude, not a method. It’s interpreted as a squishy injunction that we should always try to get better. Rah! Rah!

No.

Taken this way, it’s as contentless as interpreting evolutionary theory as survival of the fittest. Those most likely to survive are…those most likely to survive. It is the mechanism of natural selection coupled with genetic variation and mutation that gives content to evolutionary doctrine. In other words, without a process for deciding what’s fittest and a method of transmitting that fitness across generations, evolutionary theory would be a contentless tautology. The idea of continuous improvement, too, needs a method to be interesting. Everybody wants to get better all the time. There has to be a real process to make it interesting.

There are such processes, of course. Techniques like Six Sigma famously elaborate a specific method to drive continuous improvement in manufacturing processes. Unfortunately, Six Sigma isn’t directly transferable to digital analytics. We lack the critical optimization variable (defects) against which these methods work. Nor does it work to simply substitute a variable like conversion rate for defects because we lack the controlled environment necessary to believe that every customer should convert.

If Six Sigma doesn’t translate directly into digital analytics, that doesn’t mean we can’t learn from it and cadge some good ideas, though. Here are the core ideas that drive continuous improvement in digital, many of which are rooted in formal continuous improvement methodologies:

  1. It’s much easier to measure a single, specific change than a huge number of simultaneous changes. A website or mobile app is a complex set of interconnecting pieces. If you change your home page, for example, you change the dynamics of every use-case on the site. This may benefit some users and disadvantage others; it may improve one page’s performance and harm another’s. When you change an entire website at once, it’s incredibly difficult to isolate which elements improved and which didn’t. Only the holistic performance of the system can be measured on a before and after basis – and even that can be challenging if new functionality has been introduced. The more discrete and isolated a change, the easier it is to measure its true impact on the system.
  2. Where changes are specific and local, micro-conversion analytics can generally be used to assess improvement. Where changes are numerous or the impact non-local, then a controlled environment is necessary to measure improvement. A true controlled environment in digital is generally impossible but can be effectively replicated via controlled experimentation (such as A/B testing or hold-outs).
  3. Continuous improvement can be driven on a segmented or site-wide basis. Improvements that are site-wide are typically focused on reducing friction. Segmentation improvements are focused on optimizing the conversation with specific populations. Both types of improvement cycles must be addressed in any comprehensive program.
  4. Digital performance is driven by two different systems (acquisition of traffic and content performance). Despite the fact that these two systems function independently, it’s impossible to measure performance of either without measuring their interdependencies. Content performance is ALWAYS relative to the mix of audience created by the acquisition systems. This dependency is even tighter in closed loop systems like Search Engine Optimization – where the content of the page heavily determines the nature of the traffic sent AND the performance of that traffic once sourced (though the two can function quite differently with the best SEO optimized page being a very poor content performer even though it’s sourcing its own traffic).
  5. Marketing performance is a function of four things: the type of audience sourced, the use-case of the audience sourced, the pre-qualification of the audience sourced and the target content to which the audience is sourced. Continuous improvement must target all four factors to be effective.
  6. Content performance is relative to function, audience and use-case. Some content changes will be directly negative or positive (friction causing or reducing), but most will shift the distribution of behaviors. Because most impacts are shifts in the distribution of use-cases or journeys, it’s essential that the relative value of alternative paths be understood when applying continuous improvement.
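Principle 6 implies measuring shifts in the distribution of behaviors, not just a single success metric. One way to sketch that is a chi-square statistic comparing follow-on behavior counts under two content variants. The behavior categories and counts below are invented for illustration:

```python
def chi_square_stat(observed_a, observed_b):
    """Chi-square statistic comparing the distribution of follow-on
    behaviors under two content variants. Both dicts must share the
    same behavior categories as keys."""
    total_a, total_b = sum(observed_a.values()), sum(observed_b.values())
    stat = 0.0
    for path in observed_a:
        expected_share = (observed_a[path] + observed_b[path]) / (total_a + total_b)
        for obs, total in ((observed_a[path], total_a), (observed_b[path], total_b)):
            expected = expected_share * total
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical follow-on behavior counts after each content variant.
variant_a = {"buy": 50, "browse_more": 300, "exit": 650}
variant_b = {"buy": 55, "browse_more": 420, "exit": 525}
# Degrees of freedom = categories - 1; for df=2 the 95% critical value is 5.991.
```

Here the statistic far exceeds the critical value: variant B barely moves purchases but shifts a large chunk of visitors from exiting to browsing further – exactly the kind of distribution shift that only matters once you know the relative value of those alternative paths.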

These are core ideas, not a formal process. In my next post, I’ll take a shot at translating them into a formal process for digital improvement. I’m not really confident how tightly I can describe that process, but I am confident that it will capture something rather different than any current approach to digital analytics.

 

With Thanksgiving upon us now is the time to think about the perfect stocking stuffer for the digital analyst you like best. Pre-order “Measuring the Digital World” now!

Engineering the Digital Journey

Near the end of my last post (describing the concept of analytics across the enterprise), I argued that full spectrum analytics would provide “a common understanding throughout the enterprise of who your customers are, what journeys they have, which journeys are easy and which a struggle for each type of customer, detailed and constantly improving profiles of those audiences and those journeys and the decision-making and attitudes that drive them, and a rich understanding of how initiatives and changes at every level of the enterprise have succeeded, failed, or changed those journeys over time.”

By my count, that admittedly too long sentence contains the word journey four times and clearly puts understanding the customer journey at the heart of analytics understanding in the enterprise.

I think that’s right.

If you think about what senior decision-makers in an organization should get from analytics, nothing seems more important than a good understanding of customers and their journeys. That same understanding is powerful and important at every level of the organization. And by creating that shared understanding, the enterprise gains something almost priceless – the ability to converse consistently and intelligently, top-to-bottom, about why programs are being implemented and what they are expected to accomplish.

This focus on the journey isn’t particularly new. It’s been almost five years since I began describing Two-Tiered Segmentation as fundamental to digital; it’s a topic I’ve returned to repeatedly and it’s the central theme of my book. In a Two-Tiered Segmentation, you segment along two dimensions: who visitors are and what they are trying to accomplish in a visit. It’s this second piece – the visit intent segmentation – that begins to capture and describe customer journey.

But if Two-Tiered Segmentation is the start of a measurement framework for customer journey, it isn’t a complete solution. It’s too digitally focused and too rooted in displayed behaviors – meaning it’s defined solely by the functionality provided by the enterprise, not by the journeys your customers might actually want to take. It’s also designed to capture the points in a journey – not necessarily to lay out the broader journey in a maximally intelligible fashion.

Traditional journey mapping works from the other end of the spectrum. Starting with customers and using higher-level interview techniques, it’s designed to capture the basic things customers want to accomplish and then map those into more detailed potential touchpoints. It’s exploratory and specifically geared toward identifying gaps in functionality where customers CAN’T do the things they want or can’t do them in the channels they’d prefer.

While traditional journey mapping may feel like the right solution to creating enterprise-wide journey maps, it, too, has some problems. Because the techniques used to create journey maps are very high-level, they provide virtually no ability to segment the audience. This leads to a “one-size-fits-all” mentality that simply isn’t correct. In the real world, different audiences have significantly different journey styles, preferences and maps, and it’s only through behavioral analysis that enough detail can be extracted about those segments to create accurate maps.

Similarly, this high-level journey mapping leads to a “golden-path” mentality that belies real world experience. When you talk to people in the abstract, it’s perfectly possible to create the ideal path to completion for any given task. But in the real world, customers will always surprise you. They start paths in odd places, go in unexpected directions, and choose channels that may not seem ideal. That doesn’t mean you can’t service them appropriately. It does mean that if you try to force every customer into a rigid “best” path you’ll likely create many bad experiences. This myth of the golden path is something we’ve seen repeatedly in traditional web analytics and it’s even more mistaken in omni-channel.

In an omni-channel world, the goal isn’t to create an ideal path to completion. It’s to understand where the customer is in their journey and adapt the immediate touchpoint to maximize their experience. That’s a fundamentally different mindset – a network approach not a golden-path – and it’s one that isn’t well captured or supported by traditional journey mapping.

There’s one final aspect to traditional journey mapping that I find particularly troublesome – customer experience teams have traditionally approached journey mapping as a one-time, static exercise.

Mistake.

The biggest change digital brings to the enterprise is the move away from traditional project methodologies. This isn’t only an IT issue. It’s not (just) about Agile development vs. Waterfall. It’s about recognizing that ALL projects, in nearly all their constituent pieces, need to work in iterative fashion. You don’t build once and move on. You build, measure, tune, rebuild, measure, and so on. Continuous improvement comes from iteration. And the implication is that analytics, design, testing, and, yes, development should all be set up to support continuous cycles of improvement.

In the well-designed digital organization, no project ever stops.

This goes for journey mapping too. Instead of one huge comprehensive journey map that never changes and covers every aspect of the enterprise, customer journeys need to be evolved iteratively as part of an experience factory approach. Yes, a high-level journey framework does need to exist to create the shared language and approach that the organization can use. But like branches on a tree, the journey map should constantly be evolved in increasingly fine-grained and detailed views of specific aspects of the journey. If you’ve commissioned a one-time customer experience journey mapping effort, congratulations; you’re already on the road to failure.

The right approach to journey mapping isn’t two-tiered segmentation or traditional customer experience maps; it’s a synthesis of the two that blends a high-level framework driven primarily by VoC and creative techniques with more detailed, measurement and channel-based approaches (like Two-Tiered Segmentation) that deliver highly segmented network-based views of the journey. The detailed approaches never stop developing, but even the high-level pieces should be continuously iterated. It’s not that you need to constantly re-work the whole framework; it’s that in a large enterprise, there are always new journeys, new content, and new opportunities evolving.

More than anything else, this need for continuous iteration is what’s changed in the world and it’s why digital is such a challenge to the large enterprise.

A great digital organization never stops measuring customer experience. It never stops designing customer experience. It never stops imagining customer experience.

That takes a factory, not a project.

Digital Transformation

With a full first draft of my book in the hands of the publishers, I’m hoping to get back to a more regular schedule of blogging. Frankly, I’m looking forward to it. It’s a lot less of a grind than the “everyday after work and all day on the weekends pace” that was needful for finishing “Measuring the Digital World”! I’ve also accumulated a fair number of ideas for things to talk about; some directly from the book and some from our ongoing practice.

The vast majority of “Measuring the Digital World” concerns topics I’ve blogged about many times: digital segmentation, functionalism, meta-data, voice-of-customer, and tracking user journeys. Essentially, the book proceeds by developing a framework for digital measurement that is independent of any particular tool, report or specific application. It’s an introduction, not a bible, so it’s not like I covered tons of new ground. But, as will happen any time you try to voice what you know, some new understandings did emerge. I spent most of a chapter trying to articulate how the impact of self-selection and site structure can be handled analytically; this isn’t new exactly, but some of the concepts I ended up using were. Sections on rolling your own experiments with analytics, not testing, and the idea of use-case demand elasticity and how to measure it, introduced concepts that crystallized for me only as I wrote them down. I’m looking forward to exploring those topics further.

At the same time, we’ve been making significant strides in our digital analytics practice that I’m eager to talk about. Writing a book on digital analytics has forced me to take stock not only of what I know, but also of where we are in our profession and industry. I really don’t know if “Measuring the Digital World” is any good or not (right now, at least, I am heartily sick of it), but I do know it’s ambitious. Its goal is nothing less than to establish a substantive methodology for digital analytics. That’s been needed for a long time. Far too often, analysts don’t understand how measurement in digital actually works and are oblivious to the very real methodological challenges it presents. Their ignorance results in a great deal of bad analysis; bad analysis that is either ignored or, worse, is used by the enterprise.

Even if we fixed all the bad analysis, however, the state of digital analytics in the enterprise would still be disappointing. Perhaps even worse, the state of digital in the enterprise is equally bad. And that’s really what matters. The vast majority of companies I observe, talk to, and work with, aren’t doing digital very well. Most of the digital experiences I study are poorly integrated with offline experiences, lack any useful personalization, have terribly inefficient marketing, are poorly optimized by channel and – if at all complex – harbor major usability flaws.

This isn’t because enterprises don’t invest in digital. They do. They spend on teams, tools and vendors for content development and deployment, for analytics, for testing, and for marketing. They spend millions and millions of dollars on all of these things. They just don’t do it very well.

Why is that?

Well, what happens is this:

Enterprises do analytics. They just don’t use analytics.

Enterprises have A/B testing tools and teams and they run lots of tests. They just don’t learn anything.

Enterprises talk about making data-driven decisions. They don’t really do it. And the people who do the most talking are the worst offenders.

Everyone has gone agile. But somehow nothing is.

Everyone says they are focused on the customer. Nobody really listens to them.

It isn’t about doing analytics or testing or voice of customer. It’s about finding ways to integrate them into the organization’s decision-making. In other words, to do digital well demands a fundamental transformation in the enterprise. It can’t be done on a business as usual basis. You can add an analytics team, build an A/B testing team, spend millions on attribution tools, Hadoop platforms, and every other fancy technology for content management and analytics out there. You can buy a great CMS with all the personalization capabilities you could ever demand. And almost nothing will change.

Analytics, testing, VoC, agile, customer-focus…these are the things you MUST do if you are going to do digital well. It isn’t that people don’t understand what’s necessary. Everyone knows what it takes. It’s that, by and large, these things aren’t being done in ways that drive actual change.

Having the right methodology for digital analytics is a (small) part of that. It’s a way to do digital analytics well. And digital analytics truly is essential to delivering great digital experiences. You can’t be great – or even pretty good – without it. But that’s clearly not enough. To do digital well requires a deeper transformation; it’s a transformation that forces the enterprise to blend analytics and testing into their DNA, and to use both at every level and around every decision in the digital channel.

That’s hard. But that’s what we’re focusing on this year. Not just on doing analytics, but on digital transformation. We’re figuring out how to use our team, our methods, and our processes to drive change at the most fundamental level in the enterprise – to do digital differently: to make decisions differently, to work differently, to deliver differently and, of course, to measure differently.

As we work through delivering on digital transformation, I plan to write about that journey as well: to describe the huge problems in the way most enterprises actually do digital, to describe how analytics and testing can be integrated deep into the organization, to show how measurement can be used to change the way organizations actually think about and understand their customers, and to show how method and process can be blended to create real change. We want to drive change in the digital experience and, equally, change in the controlling enterprise, for it is from the latter that the former must come if we are to deliver sustained success.