
Analytics with a Strategic Edge

The Role of Voice of Customer in Enterprise Analytics

The vast majority of analytics effort is expended on problems that are tactical in nature. That’s not necessarily wrong. Tactics gets a bad rap sometimes, but the truth is that the vast majority of decisions we make in almost any context are tactical. The problem isn’t that too much analytics is weighted toward tactical issues; it’s that strategic decisions don’t use analytics at all. The biggest, most important decisions in the digital enterprise nearly always lack a foundation in data or analysis.

I’ve always disliked the idea behind “HIPPOs” – with its Dilbertian assumption that executives are idiots. That isn’t (mostly) my experience at all. But analytics does suffer from what might be described as “virtue” syndrome – the idea that something (say taxes or abstinence) is good for everyone else but not necessarily for me. Just as creative folks tend to think that what they do can’t be driven by analytics, so too is there a perception that strategic decisions must inevitably be more imaginative and intuitive and less number-driven than many decisions further down in the enterprise.

This isn’t completely wrong though it probably short-sells those mid-level decisions. Building good creative takes…creativity. It can’t be churned out by machine. Ditto for strategic decisions. There is NEVER enough information to fully determine a complex strategic decision at the enterprise level.

This doesn’t mean that data isn’t useful or shouldn’t be a driver for strategic decisions (and for creative content too). Instinct only works when it’s deeply informed about reality. Nobody has instincts in the abstract. To make a good strategic decision, a decision-maker MUST have certain kinds of data to hand, and without that data there’s nothing on which intuition, knowledge and experience can operate.

What data does a digital decision-maker need for driving strategy?

Key audiences. Customer Journey. Drivers of decision. Competitive choices.

You need to know who your audiences are and what makes them distinct. You need (as described in the last post) to understand the different journeys those audiences take and what journeys they like to take. You need to understand why they make the choices they make – what drives them to choose one product or service or another. Things like demand elasticity, brand awareness, and drivers of choice at each journey stage are critical. And, of course, you need to understand when and why those choices might favor the competition.

None of this stuff will make a strategic decision for you. It won’t tell you how much to invest in digital. Whether or not to build a mobile app. Whether personalization will provide high returns.

But without fully understanding audience, journey, drivers of decision and competitive choices, how can ANY digital decision-maker possibly arrive at an informed strategy? They can’t. And, in fact, they don’t. Because for the vast majority of enterprises, none of this information is part-and-parcel of the information environment.

I’ve seen plenty of executive dashboards that are supposed to help people run their business. They don’t have any of this stuff. I’ve seen the “four personas” puffery that’s supposed to help decision-makers understand their audience. I’ve seen how limited executives’ exposure to journey mapping is and how rarely it’s used on a day-to-day basis. Worst of all, I’ve seen how pathetic the use of voice of customer (online and offline) is at helping decision-makers understand why customers make the choices they do.

Voice of customer as it exists today is almost exclusively concerned with measuring customer satisfaction. There’s nothing wrong with measuring NPS or satisfaction. But these measures tell you nothing that will help define a strategy. They are, at best (and they are often deeply flawed here too), scoreboard measures – indicators of whether or not you are succeeding in a strategy.

I’m sure that people will object that knowing whether or not a strategy is succeeding is important. It is. It’s even a core part of ongoing strategy development. However, when divorced from particular customer journeys, NPS is essentially meaningless and uninterpretable. And while it truly is critical to measure whether or not a strategy is succeeding, it’s even more important to have data to help shape that strategy in the first place.

Executives just don’t get that context from their analytics teams. At best, they get little pieces of it in dribs and drabs. It is never – as it ought to be – the constant ongoing lifeblood of decision-making.

I subtitled this post “The Role of Voice of Customer in Enterprise Analytics” because of all the different types of information that can help make strategic decisions better, VoC is by far the most important. A good VoC program collects information from every channel: online and offline surveys, call-center, site feedback, social media, etc. It provides a continuing, detailed and sliceable view of audience, journey distribution and (partly) success. It’s by far the best way to help decision-makers understand why customers are making the choices they are, whether those choices are evolving, and how those choices are playing out across the competitive set. In short, it answers the majority of the questions that ought to be on the minds of decision-makers crafting a digital strategy.

This is a very different sort of executive dashboard than we typically see. It’s a true customer insights dashboard. It’s also fundamentally different from almost ANY VoC dashboard we see at any level. The vast majority of VoC reporting doesn’t provide slice-and-dice by audience and use-case – a capability that is absolutely essential to useful VoC reporting. VoC reporting is almost never based on and tied into a journey model so that the customer insights data is immediately reflective of journey stage and actionable arena. And VoC reporting almost never includes a continuous focus on exploring customer decision-making and tying that into the performance of actual initiatives.
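To make the slice-and-dice requirement concrete, here is a minimal sketch (Python/pandas) of the kind of view it implies: a decision-driver measure cut by audience segment and journey stage instead of one site-wide score. The file and column names are illustrative assumptions, not a real schema.

```python
import pandas as pd

# Hypothetical export of VoC survey responses, one row per respondent.
voc = pd.read_csv("voc_responses.csv")

# Cut a decision-driver measure by audience and journey stage instead of
# reporting a single site-wide satisfaction number.
view = voc.pivot_table(index="audience_segment",
                       columns="journey_stage",
                       values="considered_competitor",  # assumed 0/1 flag
                       aggfunc="mean")
print(view.round(2))
```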

It isn’t just a matter of a dashboard. One of the most distinctive and powerful aspects of digital voice-of-customer is the flexibility it provides to tackle new problems rapidly, efficiently and at very little cost. VoC should be a core part of executive decision-making, with a constant cadence of research, analysis, discussion and reporting driven by specific business questions. This open and continuing dialog, where VoC is a tool for decision-making, is critical to integrating analytics into decisioning. If senior folks aren’t asking for new VoC research on a constant basis, you aren’t doing it right. The single best indicator of a robust VoC program in digital is the speed with which it changes.

Sadly, what decision-makers mostly get right now (if they get anything at all) is a high-level, non-segmented view of audience demographics, an occasional glimpse into high-level decision-factors that is totally divorced from both segment and journey stage, and an overweening focus on a scoreboard metric like NPS.

It’s no wonder, given such thin gruel, that decision-makers aren’t making better use of data for strategic decisions. If our executives mostly aren’t Dilbertian, they aren’t miracle workers either. They can’t make wine out of information water. If we want analytics to support strategy – and I assume we all do – then building a completely different sort of VoC program is the single best place to start. It isn’t everything. There are other types of data (behavioral, benchmark, econometric, etc.) that can be hugely helpful in shaping digital strategies. But a good VoC program is a huge step forward – a step forward that, if well executed, has the power to immediately transform how the digital enterprise thinks and works.


This is probably my last post of the year – so see you in 2016! In the meantime, my book Measuring the Digital World is now available. It could be a great way to spend your holiday downtime (ideally while you’re resting up from time on the slopes)! Have a great holiday…

SPEED: A Process for Continuous Improvement in Digital

Everyone always wants to get better. But without a formal process to drive performance, continuous improvement is more likely to be an empty platitude than a reality in the enterprise. Building that formal process isn’t trivial. Existing methodologies like Six Sigma illustrate the depth and the advantages of a true improvement process versus an ad hoc “let’s get better” attitude, but those methodologies (largely birthed in manufacturing) aren’t directly applicable to digital. In my last post, I laid out six grounding principles that underlie continuous improvement in digital. I’ll summarize them here as:

  • Small is measurable. Big changes (like website redesigns) alter too much to make optimization practical
  • Controlled Experiments are essential to measure any complex change
  • Continuous improvement will broadly target reduction in friction or improvement in segmentation
  • Acquisition and Experience (Content) are inter-related and inter-dependent
  • Audience, use-case, prequalification and target content all drive marketing performance
  • Most content changes shift behavior rather than drive clear positive or negative outcomes

Having guiding principles isn’t the same thing as having a method, but a real methodology can be fashioned from this sub-structure that will drive true continuous improvement. A full methodology needs a way to identify the right areas to work on and a process for improving those areas. At minimum, that process should include techniques for figuring out what to change and for evaluating the direction and impact of those changes. If you have that, you can drive continuous improvement.

I’ll start where I always start: segmentation. Specifically, 2-tiered segmentation. 2-tiered segmentation is a uniquely digital approach to segmentation that slices audiences by who they are (traditional segmentation) and what they are trying to accomplish (this is the second tier) in the digital channel. This matrixed segmentation scheme is the perfect table-set for continuous improvement. In fact, I don’t think it’s possible to drive continuous improvement without this type of segmentation. Real digital improvement is always relative to an audience and a use-case.
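As a concrete illustration, here is a minimal Python sketch of what a two-tier assignment might look like. All of the segment names, fields and thresholds are invented for illustration; a real scheme would be derived from your own audience and use-case analysis.

```python
# Illustrative two-tier segment assignment: who the visitor is (tier one)
# crossed with what the visit is trying to accomplish (tier two).
# All names and thresholds here are assumptions, not a prescribed taxonomy.

def assign_two_tier_segment(visitor: dict) -> tuple:
    """Return a (who, visit_intent) pair -- one cell in the matrix."""
    orders = visitor.get("orders", 0)
    who = ("loyal_customer" if orders > 3
           else "new_customer" if orders > 0
           else "prospect")

    # Visit intent inferred from behavior in the session.
    if visitor.get("viewed_support_pages"):
        intent = "support"
    elif visitor.get("added_to_cart"):
        intent = "purchase"
    else:
        intent = "research"
    return who, intent

# Example: a repeat buyer browsing support content.
print(assign_two_tier_segment({"orders": 5, "viewed_support_pages": True}))
```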

But segmentation on its own isn’t a method for continuous improvement. 2-tiered segmentation gives us a powerful framework for understanding where and why improvement might be focused, but it doesn’t tell us where to target improvements or what those improvements might be. To have a real method, we need that.

Here’s where pre-qualification comes in. One of the core principles is that acquisition and experience are inter-related and inter-dependent. This means that if you want to understand whether or not content is working (creating lift of some kind), then you have to understand the pre-existing state of the audience that consumes that content. Content with a 100% success rate may suck. Content with a 0% success rate may be outstanding. It all depends on the population you feed it. Every single person in line at the DMV will stay there to get their license. That doesn’t mean the experience is a good one. It just means that the self-selected audience is determined to finish the process. We need that license! Similarly, if you direct garbage traffic to even the best content, it won’t perform at all. Acquisition and content are deeply interdependent. It’s impossible to measure the latter without understanding the former.

Fortunately, there’s a simple technique for measuring the quality of the audience sourced for any given content area that we call pre-qualification. To understand the pre-qualification level of an audience at a given content point, we use a very short (typically no more than 3-4 questions) pop-up survey. The pre-qualification survey explores what use-case visitors are in, where they are in the buying cycle, and how committed they are to the brand. That’s it.
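Here is a hedged sketch of how those few answers might be collapsed into a usable qualification level. The question names, answer options and weights are all assumptions for illustration:

```python
# Illustrative scoring of a short pre-qualification survey.
# Question names, answer options and weights are all assumptions.

STAGE_SCORE = {"just_browsing": 0, "comparing_options": 1, "ready_to_buy": 2}
COMMITMENT_SCORE = {"unfamiliar": 0, "considering": 1, "committed": 2}

def prequalification_level(answers: dict) -> str:
    """Collapse survey answers into a low / medium / high level."""
    score = (STAGE_SCORE.get(answers.get("buying_stage"), 0)
             + COMMITMENT_SCORE.get(answers.get("brand_commitment"), 0))
    if score >= 3:
        return "high"
    return "medium" if score >= 1 else "low"

print(prequalification_level(
    {"buying_stage": "comparing_options", "brand_commitment": "committed"}))
```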

It may be simple, but pre-qualification is one of the most powerful tools in the digital analytics arsenal and it’s the key to a successful continuous improvement methodology.

First we segment. Then we measure pre-qualification. With these two pieces we can measure content performance by visitor type, use-case and visitor quality. That’s enough to establish which content and which marketing campaigns are truly underperforming.


Hold the population, use-case and pre-qualification level constant and measure the effectiveness of content pieces and sequences in creating successful outcomes. You can’t effectively measure content performance unless you hold these three variables constant, but when you control for these three variables you open up the power of digital analytics.

We now have a way to target potential improvement areas – just pick the content with the worst performance in each cell (visitor type x visit type x qualification level).
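That targeting step is mechanical once the data is in place. Here is a minimal Python/pandas sketch; the data layout (one row per content view with a success flag) and the column names are assumptions:

```python
import pandas as pd

# Hypothetical export: one row per content view with a success flag.
df = pd.read_csv("content_outcomes.csv")

cells = (df.groupby(["visitor_type", "use_case", "qual_level", "content_id"])
           ["success"].agg(["mean", "count"])
           .rename(columns={"mean": "success_rate"})
           .reset_index())

# Drop thin cells where the observed rate is mostly noise.
cells = cells[cells["count"] >= 200]

# Worst-performing content within each visitor type x use-case x
# qualification cell -- the candidates for improvement.
worst = (cells.sort_values("success_rate")
              .groupby(["visitor_type", "use_case", "qual_level"])
              .head(1))
print(worst)
```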

But there is much more that we can do with these essential pieces in place. By evaluating whether content underperforms across all pre-qualification levels equally or is much worse for less qualified visitors, you can determine if the content problem is because of friction (see guiding principle #3).

Friction problems tend to impact less qualified visitors disproportionately. So if less qualified visitors within each visitor type perform even worse than expected after consuming a piece of content, then some type of friction is likely the culprit.
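A crude version of that friction check might look like the sketch below. The baseline gap and the threshold multiplier are assumptions you would calibrate from your own site-wide data:

```python
# Sketch of a friction check for one piece of content: does performance
# fall off disproportionately for less-qualified visitors?
SITEWIDE_TYPICAL_GAP = 0.15  # assumed high-vs-low qualification gap norm

def shows_friction(success_by_qual: dict) -> bool:
    """success_by_qual maps 'low'/'medium'/'high' to observed success rates."""
    gap = success_by_qual["high"] - success_by_qual["low"]
    # Friction sheds less-committed visitors first, so a gap well beyond
    # the site-wide norm is the tell.
    return gap > 1.5 * SITEWIDE_TYPICAL_GAP

print(shows_friction({"low": 0.05, "medium": 0.20, "high": 0.40}))  # True
```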

Further, by evaluating content performance across visitor type (within use-case and with pre-qualification held constant), you have strong clues as to whether or not there are personalization opportunities to drive segmentation improvement.

Finally, where content performs well for qualified audiences but receives a disproportionate share of unqualified visitors, you know that you have to go upstream to fix the marketing campaigns sourcing the visits and targeting the content.

Segment. Pre-Qualify. Evaluate by qualification for friction and acquisition, and by visitor type for personalization.

Step four is to explore what to change. How do you do that? Often, the best method is to ask. This is yet another area for targeted VoC, where you can explore what content people are looking for, how they make decisions, what they need to know, and how that differs by segment. A rich series of choice/decision questions should create the necessary material to craft alternative approaches to test.

You can also break up the content into discrete chunks (each with a specific meta-data purpose or role) and then create a controlled experiment that tests which content chunks are most important and deliver the most lift. This is a sub-process for testing within the larger continuous improvement process. Analytically, it should also be possible to do a form of conjoint analysis on either behavior or preferences captured in VoC.
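As a sketch of the analytic side of that sub-process: if chunk presence is randomized across visits, a simple regression of outcomes on chunk flags estimates each chunk’s contribution to lift. Everything below (the chunk names, the simulated data) is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

chunks = ["hero_video", "spec_table", "reviews", "comparison_chart"]
rng = np.random.default_rng(0)

# Simulated controlled experiment: each visit sees a random subset of chunks.
X = rng.integers(0, 2, size=(5000, len(chunks)))
true_effect = np.array([0.2, 0.8, 0.5, -0.1])      # stand-in for reality
p = 1 / (1 + np.exp(-(X @ true_effect - 1.0)))
y = rng.random(5000) < p                           # simulated conversions

# The fitted coefficients recover each chunk's contribution to lift.
model = LogisticRegression().fit(X, y)
for name, coef in zip(chunks, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```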

Segment. Pre-Qualify. Evaluate. Explore.

Now you’re ready to decide on the next round of tests and experiments based on a formal process for finding where problems are, why they exist, and how they can be tackled.

Segment. Pre-Qualify. Evaluate. Explore. Decide.


Sure, it’s just another consulting acronym. But underneath that acronym is a real method. Not squishy and not contentless. It’s a formal procedure for identifying where problems exist, what class of problems they are, what type of solution might be a fit (friction reduction or personalization), and what that solution might consist of. All wrapped together in a process that can be endlessly repeated to drive measurable, discrete improvement for every type of visitor and every type of visit across any digital channel. It’s also specifically designed to be responsive to the guiding principles, enumerated above, that define digital.

If you’re looking for a real continuous improvement process in digital, there’s SPEED and then there’s…

Well, as far as I know, that’s pretty much it.


Interested in knowing more about 2-Tiered Segmentation and Pre-Qualification, the key ingredients to SPEED? “Measuring the Digital World” provides the most detailed descriptions I’ve ever written of how to do both and is now available for pre-order on Amazon.

Digital Transformation – How to Get Started, Real KPIs, the Necessary Staff and So Much More!

In the last couple of months, I’ve been writing an extended series on digital transformation that reflects our current practice focus. At the center of this whole series is a simple thesis: if you want to be good at something, you have to be able to make good decisions around it. Most enterprises can’t do that in digital. From the top on down, they are set up in ways that make it difficult or impossible for decision-makers to understand how digital systems work and act on that knowledge. It isn’t because people don’t understand what’s necessary to make good decisions. Enterprises have invested in exactly the capabilities that are necessary: analytics, Voice of Customer, customer journey mapping, agile development, and testing. What they haven’t done is change their processes in ways that take advantage of those capabilities.

I’ve put together what I think is a really compelling presentation of how most organizations make decisions in the digital channel, why it’s ineffective, and what they need to do to get better. I’ve put a lot of time into it (because it’s at the core of our value proposition) and, really, it’s one of the best presentations I’ve ever done. If you’re a member of the Digital Analytics Association, you can see a chunk of that presentation in the recent webinar I did on this topic. [Webinars are brutal – by far the hardest kind of speaking I do – because you are just sitting there talking into the phone for 50 minutes. But I think this one, especially the back half, went well.] Seriously, if you’re a DAA member, I think you’ll find it worthwhile to replay the webinar.

If you’re not, and you really want to see it, drop me a line; I’m told we can get guest registrations set up by request.

At the end of that webinar I got quite a few questions. I didn’t get a chance to answer them all and I promised I would – so that’s what this post is. I think most of the questions have inherent interest and are easily understood without watching the webinar so do read on even if you didn’t catch it (but watch the darn webinar).

Q: Are metrics valuable to stakeholders even if they don’t tie in to revenues/cost savings?

Absolutely. In point of fact, revenue isn’t even the best metric on the positive side of the balance sheet. For many reasons, lifetime value metrics are generally a better choice than revenue. Regardless, not every useful metric has to, can, or should tie back to dollars. There are whole classes of metrics that are important but won’t directly tie to dollars: satisfaction metrics, brand awareness metrics and task completion metrics. That being said, the most controversial type of non-revenue metric is the engagement proxy, which is, in turn, a kind of proxy for revenue. These, too, can be useful, but they are far more dangerous. My advice is to never use a proxy metric unless you’ve done the work to prove it’s a valid proxy. That means no metrics plucked from thin air because they seem reasonable. If you can’t close the loop on performance with behavioral data, use re-survey methods. It’s absolutely critical that the metrics you optimize with be the right ones – and that means spending the extra time to get them right. Finally, I’ve argued for a while that rather than metrics our focus should be on delivering models embedded in tools – this allows people to run their business, not just look at history.
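As a hedged sketch of what “proving a proxy” can look like in practice: join the proxy to a downstream value measure and check that the two actually move together before anyone optimizes to the proxy. The file and column names below are assumptions:

```python
import pandas as pd

# Hypothetical join of a per-visitor engagement proxy and later value.
df = pd.read_csv("visitor_value.csv")

# Rank correlation between the proxy and 12-month value.
rho = df["engagement_score"].corr(df["value_12mo"], method="spearman")
print(f"proxy vs. downstream value: rho = {rho:.2f}")
# A weak correlation means the proxy is unproven -- don't optimize to it.
```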

Q: What is your favorite social advertising KPI? I have been using $ / Site Visit and $ / Conversion to measure our campaigns but there is some pushback from the social team that we are not capturing social reach.

A very related question – and it’s interesting because I actually didn’t talk much about KPIs in the webinar! I think the question boils down to this (in addition to everything I just said about metrics): is reach a valid metric? It can be, but reach shouldn’t be taken as is. As per my answer above, the value of an impression is quite different on every channel. If you’re not doing the work to figure out the value of an impression in a channel, then what’s the point of reporting an arbitrary reach number? How can people possibly assess whether any given reach number makes a buy good or bad once they realize that the value of an impression varies dramatically by channel? I also think a strong case can be made that it’s a mistake to try and optimize digital campaigns using reported metrics, even direct conversions and dollars. I just saw a tremendous presentation from Drexel’s Elea Feit at the Philadelphia DAA Symposium that echoed (and improved on) what I’ve been saying for years: namely, that non-incremental attribution is garbage and that the best way to get true measures of lift is to use control groups. If your social media team thinks reach is important, then it’s worth trying to prove whether they are right – whether that’s because those campaigns generate hidden short-term lift or because they generate brand awareness that tracks to long-term lift.
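The control-group point reduces to simple arithmetic. Here is a minimal sketch (with invented numbers) of why incremental lift differs from the conversion count a report will happily attribute to the campaign:

```python
# Incremental lift from a holdout: conversions the campaign actually caused,
# not every conversion the tracking happens to attribute to it.
def incremental_conversions(exposed_conv, exposed_n, control_conv, control_n):
    exposed_rate = exposed_conv / exposed_n
    control_rate = control_conv / control_n
    return (exposed_rate - control_rate) * exposed_n

# E.g. 1,200 conversions among 100k exposed vs. 900 among a 100k holdout:
print(incremental_conversions(1200, 100_000, 900, 100_000))  # 300.0, not 1200
```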

Q: For companies that are operating in the way you typically see, what is the one thing you would recommend to help get them started?

This is a tough one because it’s still somewhat dependent on the exact shape of the organization. Here are two things I commonly recommend. First, think about a much different kind of VoC program. Constant updating and targeting of surveys, regular socialization with key decision-makers where they drive the research, an enterprise-wide VoC dashboard in something like Tableau that focuses on customer decision-making not NPS. This is a great and relatively inexpensive way to bootstrap a true strategic decision support capability. Second, totally re-think your testing program as a controlled experimentation capability for decision-making. Almost every organization I work with should consider fundamental change in the nature, scope, and process around testing.

Q: How much does this change when there are no clear conversions (i.e., Non-Profit, B2B, etc)?

I don’t think anything changes. But, of course, everything does change. What I mean is that all of the fundamental precepts are identical. VoC, controlled experiments, customer journey mapping, agile analytics, integration of teams – it’s all exactly the same set of lessons regardless of whether or not you have clear conversions on your website. On the other hand, every single measurement is that much harder. I’d argue that the methods I argue for are even more important when you don’t have the relatively straightforward path to optimization that eCommerce provides. In particular, the absolute importance of closing the loop on important measurements simply can’t be overstated when you don’t have a clear conversion to optimize to.

Q: What is the minimum size of analytics team to be able to successfully implement this at scale?

Another tricky question to answer but I’ll try not to weasel out of it. Think about it this way, to drive real transformation at enterprise scale, you need at least 1 analyst covering every significant function. That means an analyst for core digital reporting, digital analytics, experimentation, VoC, data science, customer journey, and implementation. For most large enterprises, that’s still an unrealistically small team. You might scrape by with a single analyst in VoC and customer journey, but you’re going to need at least small teams in core digital reporting, analytics, implementation and probably data science as well. If you’re at all successful, the number of analytics, experimentation and data science folks is going to grow larger – possibly much larger.  It’s not like a single person in a startup can’t drive real change, but that’s just not the way things work in the large enterprise. Large enterprise environments are complex in every respect and it takes a significant number of people to drive effective processes.

Q: Sometimes it feels like agile is just a subject line for the weekly meeting. Do you have any examples of organizations using agile well when it comes to digital?

Couldn’t agree more. My rule of thumb is this: if your organization is studying how to be innovative, it never will be. If your organization is meeting about agile, it isn’t. In the IT world, Agile has gone from a truly innovative approach to development to a ludicrously over-engineered process managed, often enough, by teams of consulting PMs. I do see some organizations that I think are actually quite agile when it comes to digital and doing it very well. They are almost all gaming companies, pure-play internet companies or startups. I’ll be honest – a lot of the ideas in my presentation and approach to digital transformation come from observing those types of companies. Whether I’m right that similar approaches can work for a large enterprise is, frankly, unclear.

Q: As a third party measurement company, what is the best way to approach or the best questions to ask customers to really get at and understand their strategic goals around their customer journeys?

This really is too big to answer inside a blog – maybe even too big to reasonably answer as a blog. I’ll say, too, that I’m increasingly skeptical of our ability to do this. As a consultant, I’m honor-bound to claim that as a group we can come in, ask a series of questions of people who have worked in an industry for 10 or 20 years and, in a few days’ time, understand their strategic goals. Okay…put this way, it’s obviously absurd. And, in fact, that’s really not how consulting companies work. Most of the people leading strategic engagements at top-tier consulting outfits have actually worked in an industry for a long time, and many have worked on the enterprise side and made exactly those strategic decisions. That’s a huge advantage. Most good consultants in a strategic engagement know 90% of what they are going to recommend before they ask a single question.

Having said that, I’m often personally in a situation where I’m asked to do exactly what I’ve just said is absurd and chances are if you’re a third party measurement company you have the same problem. You have to get at something that’s very hard and very complex in a very short amount of time and your expertise (like mine) is in analytics or technology not insurance or plumbing or publishing or automotive.

Here are a few things I’ve found helpful. First, take the journeys yourself. It’s surprising how many executives have never bought an online policy from their own company, downloaded a whitepaper to generate a lead, or bought advertising on their own site. You may not be able to replicate every journey, but where you can get hands-on, do it. Having a customer’s viewpoint on the journey never hurts, and it can give you insight your customers should but often don’t have. Second, remember that the internet is your best friend. A little up-front research from analysts is a huge benefit when setting the table for those conversations. And I’m often frantically googling acronyms and keywords when I’m leading those executive conversations. Third, check out the competition. If you submit a lead on the client’s website, try it on their top three competitors too. What you’ll see is often a great table-set for understanding where they are in digital and what their strategy needs to be. Finally, get specific on the journey. In my experience, the biggest failing in senior leaders is their tendency to generality. Big generalities are easy and they sound smart, but they usually don’t mean much of anything. The very best leaders don’t ever retreat into useless generality, but most of us will fall into it all too easily.

Q: What are some engagement models where an enterprise engages 3rd party consulting? For how long?

The question every consultant loves to hear! There are three main ways we help drive this type of digital transformation. The first is as strategic planners. We do quite a bit of pure digital analytics strategy work, but for this type of work we typically expand the strategic team a bit (beyond our core digital analytics folks) to include subject matter experts in the industry, in customer journey, and in information management. The goal is to create a “deep” analytics strategy that drives toward enterprise transformation. The second model (which can follow the strategic phase) is to supplement enterprise resources with specific expertise to bootstrap capabilities. This can include things like tackling specific highly strategic analytics projects, providing embedded analysts as part of the team to increase capacity and maturity, building out controlled experiment teams, developing VoC systems, etc. We can also provide – and here’s where being part of a big practice really helps – PM and Change Management experts who can help drive a broader transformation strategy. Finally, we can help build the program soup-to-nuts. Mind you, that doesn’t mean we do everything. I’m a huge believer that a core part of this vision is transformation in the enterprise. Effectively, that means outsourcing to a consultancy is never the right answer. But in a soup-to-nuts model, we keep strategic people on the ground, helping to hire, train, and plan on an ongoing basis.

Obviously, the how-long depends on the model. Strategic planning exercises are typically 10-12 weeks. Specific projects are all over the map, and the soup-to-nuts model is sustained engagement though it usually starts out hot and then gets gradually smaller over time.

Q: Would really like to better understand how you can identify visitor segments in your 2-tier segmentation when we only know they came to the site and left (without any other info on what segment they might represent).  Do you have any examples or other papers that address how/if this can be done?

A couple of years back I was on a panel at a conference in San Diego and one of the panelists started every response with “In my book…”. It didn’t seem to matter much what the question was. The answer (and not just the first three words) was always the same. I told my daughters about it when I got home, and the gentleman is forever immortalized in my household as the “book guy”. Now I’m going to go all book guy on you. The heart of my book, “Measuring the Digital World”, is an attempt to answer this exact question. It’s by far the most detailed explication I’ve ever given of the concepts behind 2-tiered segmentation and how to go from behavior to segmentation. That being said, you can only pre-order now. So I’m also going to point out that I have blogged fairly extensively on this topic over the years. Here are a couple of posts I dredged out that provide a good overview:



and – even more important – here’s the link to pre-order the book!

That’s it…a pretty darn good list of questions. I hope that’s genuinely reflective of the quality of the webinar. Next week I’m going to break out of this series for a week and write about our recent non-profit analytics hackathon – a very cool event that spurred some new thoughts on the analysis process and the tools we use for it.

Engineering the Digital Journey

Near the end of my last post (describing the concept of analytics across the enterprise), I argued that full spectrum analytics would provide “a common understanding throughout the enterprise of who your customers are, what journeys they have, which journeys are easy and which a struggle for each type of customer, detailed and constantly improving profiles of those audiences and those journeys and the decision-making and attitudes that drive them, and a rich understanding of how initiatives and changes at every level of the enterprise have succeeded, failed, or changed those journeys over time.”

By my count, that admittedly too long sentence contains the word journey four times and clearly puts understanding the customer journey at the heart of analytics understanding in the enterprise.

I think that’s right.

If you think about what senior decision-makers in an organization should get from analytics, nothing seems more important than a good understanding of customers and their journeys. That same understanding is powerful and important at every level of the organization. And by creating that shared understanding, the enterprise gains something almost priceless – the ability to converse consistently and intelligently, top-to-bottom, about why programs are being implemented and what they are expected to accomplish.

This focus on the journey isn’t particularly new. It’s been almost five years since I began describing Two-Tiered Segmentation as fundamental to digital; it’s a topic I’ve returned to repeatedly and it’s the central theme of my book. In a Two-Tiered Segmentation, you segment along two dimensions: who visitors are and what they are trying to accomplish in a visit. It’s this second piece – the visit intent segmentation – that begins to capture and describe customer journey.

But if Two-Tiered Segmentation is the start of a measurement framework for customer journey, it isn’t a complete solution. It’s too digitally focused and too rooted in displayed behaviors – meaning it’s defined solely by the functionality provided by the enterprise not by the journeys your customers might actually want to take. It’s also designed to capture the points in a journey – not necessarily to lay out the broader journey in a maximally intelligible fashion.

Traditional journey mapping works from the other end of the spectrum. Starting with customers and using higher-level interview techniques, it’s designed to capture the basic things customers want to accomplish and then map those into more detailed potential touchpoints. It’s exploratory and specifically geared toward identifying gaps in functionality where customers CAN’T do the things they want or can’t do them in the channels they’d prefer.

While traditional journey mapping may feel like the right solution to creating enterprise-wide journey maps, it, too, has some problems. Because the techniques used to create journey maps are very high-level, they provide virtually no ability to segment the audience. This leads to a “one-size-fits-all” mentality that simply isn’t correct. In the real world, different audiences have significantly different journey styles, preferences and maps, and it’s only through behavioral analysis that enough detail can be extracted about those segments to create accurate maps.

Similarly, this high-level journey mapping leads to a “golden-path” mentality that belies real world experience. When you talk to people in the abstract, it’s perfectly possible to create the ideal path to completion for any given task. But in the real world, customers will always surprise you. They start paths in odd places, go in unexpected directions, and choose channels that may not seem ideal. That doesn’t mean you can’t service them appropriately. It does mean that if you try to force every customer into a rigid “best” path you’ll likely create many bad experiences. This myth of the golden path is something we’ve seen repeatedly in traditional web analytics and it’s even more mistaken in omni-channel.

In an omni-channel world, the goal isn’t to create an ideal path to completion. It’s to understand where the customer is in their journey and adapt the immediate touchpoint to maximize their experience. That’s a fundamentally different mindset – a network approach, not a golden path – and it’s one that isn’t well captured or supported by traditional journey mapping.
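A toy sketch of the difference in mindset: rather than scoring progress along one fixed path, the system keys the current touchpoint off the visitor’s inferred journey state. The states, channels and actions are invented for illustration:

```python
# Network mindset: adapt the touchpoint to the journey state, wherever the
# customer happens to be. All states and actions are illustrative.
NEXT_BEST_ACTION = {
    ("research", "web"):         "surface comparison content",
    ("purchase", "web"):         "streamline the path to checkout",
    ("purchase", "call_center"): "skip the pitch, take the order",
    ("support", "web"):          "route to self-service help",
}

def adapt_touchpoint(journey_state: str, channel: str) -> str:
    return NEXT_BEST_ACTION.get((journey_state, channel),
                                "serve the default experience")

print(adapt_touchpoint("purchase", "call_center"))
```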

There’s one final aspect to traditional journey mapping that I find particularly troublesome – customer experience teams have traditionally approached journey mapping as a one-time, static exercise.


The biggest change digital brings to the enterprise is the move away from traditional project methodologies. This isn’t only an IT issue. It’s not (just) about Agile development vs. Waterfall. It’s about recognizing that ALL projects, in nearly all their constituent pieces, need to work in iterative fashion. You don’t build once and move on. You build, measure, tune, rebuild, measure, and so on. Continuous improvement comes from iteration. And the implication is that analytics, design, testing, and, yes, development should all be set up to support continuous cycles of improvement.

In the well-designed digital organization, no project ever stops.

This goes for journey mapping too. Instead of one huge comprehensive journey map that never changes and covers every aspect of the enterprise, customer journeys need to be evolved iteratively as part of an experience factory approach. Yes, a high-level journey framework does need to exist to create the shared language and approach that the organization can use. But like branches on a tree, the journey map should constantly be evolved in increasingly fine-grained and detailed views of specific aspects of the journey. If you’ve commissioned a one-time customer experience journey mapping effort, congratulations; you’re already on the road to failure.

The right approach to journey mapping isn’t two-tiered segmentation or traditional customer experience maps; it’s a synthesis of the two that blends a high-level framework driven primarily by VoC and creative techniques with more detailed, measurement and channel-based approaches (like Two-Tiered Segmentation) that deliver highly segmented network-based views of the journey. The detailed approaches never stop developing, but even the high-level pieces should be continuously iterated. It’s not that you need to constantly re-work the whole framework; it’s that in a large enterprise, there are always new journeys, new content, and new opportunities evolving.

More than anything else, this need for continuous iteration is what’s changed in the world and it’s why digital is such a challenge to the large enterprise.

A great digital organization never stops measuring customer experience. It never stops designing customer experience. It never stops imagining customer experience.

That takes a factory, not a project.

Digital Transformation

With a full first draft of my book in the hands of the publishers, I’m hoping to get back to a more regular schedule of blogging. Frankly, I’m looking forward to it. It’s a lot less of a grind than the “every day after work and all day on the weekends” pace that was needed to finish “Measuring the Digital World”! I’ve also accumulated a fair number of ideas for things to talk about; some directly from the book and some from our ongoing practice.

The vast majority of “Measuring the Digital World” concerns topics I’ve blogged about many times: digital segmentation, functionalism, meta-data, voice-of-customer, and tracking user journeys. Essentially, the book proceeds by developing a framework for digital measurement that is independent of any particular tool, report or specific application. It’s an introduction, not a bible, so it’s not like I covered tons of new ground. But, as will happen any time you try to voice what you know, some new understandings did emerge. I spent most of a chapter trying to articulate how the impact of self-selection and site structure can be handled analytically; this isn’t new exactly, but some of the concepts I ended up using were. Sections on rolling your own experiments with analytics (not testing), and on the idea of use-case demand elasticity and how to measure it, introduced concepts that crystallized for me only as I wrote them down. I’m looking forward to exploring those topics further.

At the same time, we’ve been making significant strides in our digital analytics practice that I’m eager to talk about. Writing a book on digital analytics has forced me to take stock not only of what I know, but also of where we are in our profession and industry. I really don’t know if “Measuring the Digital World” is any good or not (right now, at least, I am heartily sick of it), but I do know it’s ambitious. Its goal is nothing less than to establish a substantive methodology for digital analytics. That’s been needed for a long time. Far too often, analysts don’t understand how measurement in digital actually works and are oblivious to the very real methodological challenges it presents. Their ignorance results in a great deal of bad analysis; bad analysis that is either ignored or, worse, is used by the enterprise.

Even if we fixed all the bad analysis, however, the state of digital analytics in the enterprise would still be disappointing. Perhaps even worse, the state of digital in the enterprise is equally bad. And that’s really what matters. The vast majority of companies I observe, talk to, and work with, aren’t doing digital very well. Most of the digital experiences I study are poorly integrated with offline experiences, lack any useful personalization, have terribly inefficient marketing, are poorly optimized by channel and – if at all complex – harbor major usability flaws.

This isn’t because enterprises don’t invest in digital. They do. They spend on teams, tools and vendors for content development and deployment, for analytics, for testing, and for marketing. They spend millions and millions of dollars on all of these things. They just don’t do it very well.

Why is that?

Well, what happens is this:

Enterprises do analytics. They just don’t use analytics.

Enterprises have A/B testing tools and teams and they run lots of tests. They just don’t learn anything.

Enterprises talk about making data-driven decisions. They don’t really do it. And the people who do the most talking are the worst offenders.

Everyone has gone agile. But somehow nothing is.

Everyone says they are focused on the customer. Nobody really listens to them.

It isn’t about doing analytics or testing or voice of customer. It’s about finding ways to integrate them into the organization’s decision-making. In other words, to do digital well demands a fundamental transformation in the enterprise. It can’t be done on a business as usual basis. You can add an analytics team, build an A/B testing team, spend millions on attribution tools, Hadoop platforms, and every other fancy technology for content management and analytics out there. You can buy a great CMS with all the personalization capabilities you could ever demand. And almost nothing will change.

Analytics, testing, VoC, agile, customer-focus…these are the things you MUST do if you are going to do digital well. It isn’t that people don’t understand what’s necessary. Everyone knows what it takes. It’s that, by and large, these things aren’t being done in ways that drive actual change.

Having the right methodology for digital analytics is a (small) part of that. It’s a way to do digital analytics well. And digital analytics truly is essential to delivering great digital experiences. You can’t be great – or even pretty good – without it. But that’s clearly not enough. To do digital well requires a deeper transformation; it’s a transformation that forces the enterprise to blend analytics and testing into their DNA, and to use both at every level and around every decision in the digital channel.

That’s hard. But that’s what we’re focusing on this year. Not just on doing analytics, but on digital transformation. We’re figuring out how to use our team, our methods, and our processes to drive change at the most fundamental level in the enterprise – to do digital differently: to make decisions differently, to work differently, to deliver differently and, of course, to measure differently.

As we work through delivering on digital transformation, I plan to write about that journey as well: to describe the huge problems in the way most enterprises actually do digital, to describe how analytics and testing can be integrated deep into the organization, to show how measurement can be used to change the way organizations actually think about and understand their customers, and to show how method and process can be blended to create real change. We want to drive change in the digital experience and, equally, change in the controlling enterprise, for it is from the latter that the former must come if we are to deliver sustained success.