
SPEED: A Process for Continuous Improvement in Digital

Everyone always wants to get better. But without a formal process to drive performance, continuous improvement is more likely to be an empty platitude than a reality in the enterprise. Building that formal process isn’t trivial. Existing methodologies like Six Sigma illustrate the depth and the advantages of a true improvement process versus an ad hoc “let’s get better” attitude, but those methodologies (largely birthed in manufacturing) aren’t directly applicable to digital. In my last post, I laid out six grounding principles that underlie continuous improvement in digital. I’ll summarize them here as:

  • Small is measurable. Big changes (like website redesigns) alter too much to make optimization practical
  • Controlled Experiments are essential to measure any complex change
  • Continuous improvement will broadly target reduction in friction or improvement in segmentation
  • Acquisition and Experience (Content) are inter-related and inter-dependent
  • Audience, use-case, prequalification and target content all drive marketing performance
  • Most content changes shift behavior rather than drive clear positive or negative outcomes

Having guiding principles isn’t the same thing as having a method, but a real methodology can be fashioned from this sub-structure that will drive true continuous improvement. A full methodology needs a way to identify the right areas to work on and a process for improving those areas. At minimum, that process should include techniques for figuring out what to change and for evaluating the direction and impact of those changes. If you have that, you can drive continuous improvement.

I’ll start where I always start: segmentation. Specifically, 2-tiered segmentation. 2-tiered segmentation is a uniquely digital approach to segmentation that slices audiences by who they are (traditional segmentation) and what they are trying to accomplish (this is the second tier) in the digital channel. This matrixed segmentation scheme is the perfect table-set for continuous improvement. In fact, I don’t think it’s possible to drive continuous improvement without this type of segmentation. Real digital improvement is always relative to an audience and a use-case.

But segmentation on its own isn’t a method for continuous improvement. 2-tiered segmentation gives us a powerful framework for understanding where and why improvement might be focused, but it doesn’t tell us where to target improvements or what those improvements might be. To have a real method, we need that.

Here’s where pre-qualification comes in. One of the core principles is that acquisition and experience are inter-related and inter-dependent. This means that if you want to understand whether or not content is working (creating lift of some kind), then you have to understand the pre-existing state of the audience that consumes that content. Content with a 100% success rate may suck. Content with a 0% success rate may be outstanding. It all depends on the population you feed it. Every single person in line at the DMV will stay there to get their license. That doesn’t mean the experience is a good one. It just means that the self-selected audience is determined to finish the process. We need that license! Similarly, if you direct garbage traffic to even the best content, it won’t perform at all. Acquisition and content are deeply interdependent. It’s impossible to measure the latter without understanding the former.

Fortunately, there’s a simple technique for measuring the quality of the audience sourced for any given content area that we call pre-qualification. To understand the pre-qualification level of an audience at a given content point, we use a very short (typically no more than 3-4 questions) pop-up survey. The pre-qualification survey explores what use-case visitors are in, where they are in the buying cycle, and how committed they are to the brand. That’s it.
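For the analytics team, the output of that survey is just a handful of fields per respondent. Here’s a minimal sketch of what the record and a rollup into a qualification level might look like – the field names and scoring weights are mine, purely illustrative, not a prescription:

```python
# Minimal sketch (assumption: field names and scoring weights are illustrative).
# One pre-qualification survey response and a simple rollup into a bucket.
from dataclasses import dataclass

@dataclass
class PreQualResponse:
    visitor_id: str
    use_case: str          # e.g. "research", "purchase", "support"
    buying_stage: int      # 1 = just looking ... 4 = ready to buy
    brand_commitment: int  # 1 = no preference ... 4 = committed to this brand

def qualification_level(r: PreQualResponse) -> str:
    """Collapse buying stage + brand commitment into a coarse qualification bucket."""
    score = r.buying_stage + r.brand_commitment   # ranges 2..8
    if score >= 7:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# Example: a visitor in a purchase use-case, late stage, moderately committed
print(qualification_level(PreQualResponse("v123", "purchase", 4, 3)))  # -> "high"
```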

It may be simple, but pre-qualification is one of the most powerful tools in the digital analytics arsenal and it’s the key to a successful continuous improvement methodology.

First we segment. Then we measure pre-qualification. With these two pieces we can measure content performance by visitor type, use-case and visitor quality. That’s enough to establish which content and which marketing campaigns are truly underperforming.

How?

Hold the population, use-case and pre-qualification level constant and measure the effectiveness of content pieces and sequences in creating successful outcomes. You can’t effectively measure content performance unless you hold these three variables constant, but when you control for these three variables you open up the power of digital analytics.

We now have a way to target potential improvement areas – just pick the content with the worst performance in each cell (visitor type x visit type x qualification level).
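In code, that targeting step is nothing exotic. A rough sketch, assuming a visit-level table with illustrative column names:

```python
# Sketch (assumption: a visit-level table with these illustrative columns exists).
# Success rate per content piece within each visitor-type x use-case x
# qualification cell, then the worst performer in each cell.
import pandas as pd

visits = pd.DataFrame({
    "visitor_type": ["buyer", "buyer", "buyer", "researcher", "researcher", "researcher"],
    "use_case":     ["purchase"] * 3 + ["compare"] * 3,
    "qual_level":   ["high", "high", "low", "medium", "medium", "low"],
    "content_id":   ["A", "B", "A", "C", "D", "C"],
    "success":      [1, 0, 0, 1, 0, 0],   # 1 = successful outcome for that use-case
})

cell = ["visitor_type", "use_case", "qual_level"]
perf = (visits.groupby(cell + ["content_id"])["success"]
              .agg(success_rate="mean", visits="size")
              .reset_index())

# Worst-performing content per cell = the candidate list for improvement
worst = perf.loc[perf.groupby(cell)["success_rate"].idxmin()]
print(worst)
```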

But there is much more that we can do with these essential pieces in place. By evaluating whether content underperforms across all pre-qualification levels equally or is much worse for less qualified visitors, you can determine if the content problem is because of friction (see guiding principle #3).

Friction problems tend to impact less qualified visitors disproportionately. So if less qualified visitors within each visitor type perform even worse than expected after consuming a piece of content, then some type of friction is likely the culprit.
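Building on the frame from the previous sketch, the friction check is just a comparison of success rates across qualification levels within each cell – the 30-point threshold here is arbitrary, purely for illustration:

```python
# Sketch continuing the 'perf' frame from the example above.
# Pivot qualification levels side by side; content that does notably worse
# for low-qualified visitors than for high-qualified ones suggests friction.
gap = perf.pivot_table(index=["visitor_type", "use_case", "content_id"],
                       columns="qual_level", values="success_rate")

# Illustrative rule of thumb: flag content where low-qual success trails
# high-qual success by more than 30 points (threshold is arbitrary).
if {"low", "high"}.issubset(gap.columns):
    gap["friction_flag"] = (gap["high"] - gap["low"]) > 0.30
print(gap)
```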

Further, by evaluating content performance across visitor type (within use-case and with pre-qualification held constant), you have strong clues as to whether or not there are personalization opportunities to drive segmentation improvement.

Finally, where content performs well for qualified audiences but receives a disproportionate share of unqualified visitors, you know that you have to go upstream to fix the marketing campaigns sourcing the visits and targeting the content.

Segment. Pre-Qualify. Evaluate by qualification for friction and acquisition, and by visitor type for personalization.

Step four is to explore what to change. How do you do that? Often, the best method is to ask. This is yet another area for targeted VoC, where you can explore what content people are looking for, how they make decisions, what they need to know, and how that differs by segment. A rich series of choice/decision questions should create the necessary material to craft alternative approaches to test.

You can also break up the content into discrete chunks (each with a specific meta-data purpose or role) and then create a controlled experiment that tests which content chunks are most important and deliver the most lift. This is a sub-process for testing within the larger continuous improvement process. Analytically, it should also be possible to do a form of conjoint analysis on either behavior or preferences captured in VoC.
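As a sketch of that analytical route (not the testing-tool route), a simple logistic regression of outcomes on chunk exposure gives a rough, conjoint-like read on which chunks matter – the chunk names and data here are invented:

```python
# Sketch (assumption: exposure flags per content chunk and an outcome per visit).
# A regularized logistic regression on chunk exposure is a rough behavioral
# stand-in for conjoint analysis: larger coefficients suggest chunks that
# move the outcome more.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "saw_pricing":      [1, 0, 1, 1, 0, 1, 0, 0],
    "saw_testimonials": [0, 1, 1, 0, 1, 1, 0, 1],
    "saw_spec_sheet":   [1, 1, 0, 0, 1, 1, 1, 0],
    "converted":        [1, 0, 1, 0, 0, 1, 0, 0],
})

X = df[["saw_pricing", "saw_testimonials", "saw_spec_sheet"]]
y = df["converted"]
model = LogisticRegression().fit(X, y)

for chunk, weight in zip(X.columns, model.coef_[0]):
    print(f"{chunk}: {weight:+.2f}")   # relative pull of each chunk on conversion
```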

Segment. Pre-Qualify. Evaluate. Explore.

Now you’re ready to decide on the next round of tests and experiments based on a formal process for finding where problems are, why they exist, and how they can be tackled.

Segment, Pre-Qualify. Evaluate. Explore. Decide.

SPEED.

Sure, it’s just another consulting acronym. But underneath that acronym is a real method. Not squishy and not contentless. It’s a formal procedure for identifying where problems exist, what class of problems they are, what type of solution might be a fit (friction reduction or personalization), and what that solution might consist of. All wrapped together in a process that can be endlessly repeated to drive measurable, discrete improvement for every type of visitor and every type of visit across any digital channel. It’s also specifically designed to be responsive to the guiding principles enumerated above that define digital.

If you’re looking for a real continuous improvement process in digital, there’s SPEED and then there’s…

Well, as far as I know, that’s pretty much it.

 

Interested in knowing more about 2-Tiered Segmentation and Pre-Qualification, the key ingredients to SPEED? “Measuring the Digital World” provides the most detailed descriptions I’ve ever written of how to do both and is now available for pre-order on Amazon.

Continuous Improvement

Is it a Method or a Platitude?

What does it take to be good at digital? The ability to make good decisions, of course. If you run a pro football team and you make consistently good decisions about players and about coaches, and they, in turn, make consistently good decisions about preparation and plays, you’ll be successful. Most organizations aren’t set up to make good decisions in digital. They don’t have the right information to drive strategic decisions and they often lack the right processes to make good tactical decisions. I’ve highlighted four capabilities that must be knitted together to drive consistently good decisions in the digital realm: comprehensive customer journey mapping, analytics support at every level of the organization, aggressive controlled experimentation targeted to decision-support, and constant voice of customer research. For most organizations, none of these capabilities are well-baked and it’s rare that even a very good organization is excellent at more than two of these capabilities.

The Essentials for Digital Transformation

There’s a fifth spoke of this wheel, however, that isn’t so much a capability as an approach. That’s not as different from the others as it might seem. After all, almost every enterprise I see has a digital analytics department, a VoC capability, a customer journey map, and an A/B Testing team. In previous posts, I’ve highlighted how those capabilities are misused, misdeployed or simply misunderstood. Which makes for a pretty big miss. So it’s very much true that a better approach underlies all of these capabilities. When I talk about continuous improvement, it’s not a capability at all. There’s no there, there. It’s just an approach. Yet it’s an approach that, taken seriously, can help weld these other four capabilities into a coherent whole.

The doctrine of continuous improvement is not new – in digital or elsewhere. It has a long and proven track record and it’s one of the few industry best practices with which I am in whole-hearted agreement. Too often, however, continuous improvement is treated as an empty platitude, not a method. It’s interpreted as a squishy injunction that we should always try to get better. Rah! Rah!

No.

Taken this way, it’s as contentless as interpreting evolutionary theory as survival of the fittest. Those most likely to survive are…those most likely to survive. It is the mechanism of natural selection coupled with genetic variation and mutation that gives content to evolutionary doctrine. In other words, without a process for deciding what’s fittest and a method of transmitting that fitness across generations, evolutionary theory would be a contentless tautology. The idea of continuous improvement, too, needs a method to be interesting. Everybody wants to get better all the time. There has to be a real process to make it interesting.

There are such processes, of course. Techniques like Six Sigma famously elaborate a specific method to drive continuous improvement in manufacturing processes. Unfortunately, Six Sigma isn’t directly transferable to digital analytics. We lack the critical optimization variable (defects) against which these methods work. Nor does it work to simply substitute a variable like conversion rate for defects because we lack the controlled environment necessary to believe that every customer should convert.

If Six Sigma doesn’t translate directly into digital analytics, that doesn’t mean we can’t learn from it and cadge some good ideas, though. Here are the core ideas that drive continuous improvement in digital, many of which are rooted in formal continuous improvement methodologies:

  1. It’s much easier to measure a single, specific change than a huge number of simultaneous changes. A website or mobile app is a complex set of interconnecting pieces. If you change your home page, for example, you change the dynamics of every use-case on the site. This may benefit some users and disadvantage others; it may improve one page’s performance and harm another’s. When you change an entire website at once, it’s incredibly difficult to isolate which elements improved and which didn’t. Only the holistic performance of the system can be measured on a before and after basis – and even that can be challenging if new functionality has been introduced. The more discrete and isolated a change, the easier it is to measure its true impact on the system.
  2. Where changes are specific and local, micro-conversion analytics can generally be used to assess improvement. Where changes are numerous or the impact non-local, then a controlled environment is necessary to measure improvement. A true controlled environment in digital is generally impossible but can be effectively replicated via controlled experimentation (such as A/B testing or hold-outs).
  3. Continuous improvement can be driven on a segmented or site-wide basis. Improvements that are site-wide are typically focused on reducing friction. Segmentation improvements are focused on optimizing the conversation with specific populations. Both types of improvement cycles must be addressed in any comprehensive program.
  4. Digital performance is driven by two different systems (acquisition of traffic and content performance). Despite the fact that these two systems function independently, it’s impossible to measure performance of either without measuring their interdependencies. Content performance is ALWAYS relative to the mix of audience created by the acquisition systems. This dependency is even tighter in closed loop systems like Search Engine Optimization – where the content of the page heavily determines the nature of the traffic sent AND the performance of that traffic once sourced (though the two can function quite differently with the best SEO optimized page being a very poor content performer even though it’s sourcing its own traffic).
  5. Marketing performance is a function of four things: the type of audience sourced, the use-case of the audience sourced, the pre-qualification of the audience sourced and the target content to which the audience is sourced. Continuous improvement must target all four factors to be effective.
  6. Content performance is relative to function, audience and use-case. Some content changes will be directly negative or positive (friction causing or reducing), but most will shift the distribution of behaviors. Because most impacts are shifts in the distribution of use-cases or journeys, it’s essential that the relative value of alternative paths be understood when applying continuous improvement.

These are core ideas, not a formal process. In my next post, I’ll take a shot at translating them into a formal process for digital improvement. I’m not really confident how tightly I can describe that process, but I am confident that it will capture something rather different than any current approach to digital analytics.

 

With Thanksgiving upon us now is the time to think about the perfect stocking stuffer for the digital analyst you like best. Pre-order “Measuring the Digital World” now!

Analytics for a (Good) Purpose

I imagine that anyone reading my posts can tell that I love doing analytics. I mean real, hands-on, getting your cuticles data-dirty analytics. But if I have a complaint about the analytics part of what I do, it’s that so often it’s for purposes that just aren’t gripping. There’s nothing wrong with selling more insurance, getting people to view higher-value ads, or cutting a few seconds off the time it takes to complete a process. Making commerce better is a perfectly good thing to do. Commerce matters to all of us. But if there’s nothing wrong with improving commerce, neither is it food for the soul. I’ve been re-reading Tobias Wolff’s wonderful novel “Old School”, and in it, one of the professors says something like this – “Essays? We could live without essays. The world would be a little poorer – like a world without chess – but stories…stories we can’t live without.”

That’s why I’ve always loved the rare occasions when we get to turn an analytics eye on a problem that means something more. Part of my team at EY got that chance a little more than a week back when we hosted an “Analytics Hackathon” for the Earthwatch Institute.

You can check out Earthwatch here at Earthwatch.org – it’s a very cool organization. I love everything about what they do and the way they approach it. I love the science part, which is fascinating. The nature part, which is just something I happen to enjoy – my daughters will attest that I am “crazy hiker guy”. And I love the approach, that assumes we are at our best when we do good not from ideology, which is often cold and artificial, but from passion. Even more, that worthwhile commitment comes from passion tempered by knowledge. We all realize that knowledge without passion achieves little. But passion without knowledge more often does harm than good in our complex society. Building that rare combination of passion for and knowledge of the natural world strikes me as what Earthwatch is all about, and I can’t think of a more rewarding mission.

So Earthwatch provided us six years of data on their expeditioners (folks who volunteer to take field trips to support their scientific endeavors), their donors, and the intersection of the two, and let us have at it for a day. They asked three big questions: what can you tell us about donors and donor patterns, how do donors and expeditioners intersect, and are there things we should know to improve the marketing of expeditions to attract volunteers?

Great questions all, but a lot to ask of a five-hour day.

We pre-loaded their data into Tableau, and after a brief context-setting presentation from the Earthwatch folks, we split up into groups with each group drawing a single question. Each group produced a full-on dashboard and spent some time answering the questions.

One of the great challenges for many non-profits is the split between what you do and those who pay. In the traditional enterprise, good products and service make your customers happy and willing to pay. At Earthwatch, as with many a non-profit, their mission doesn’t directly serve their donors (those who pay). So the challenge (and the opportunity) is how to connect donors to the mission.

The mechanism for doing that at Earthwatch is the expedition. Hands-on participation in an Earthwatch expedition is by far the best spur to giving they have. So one of our groups focused specifically on the relationship between expeditions and giving – and what they found was fascinating and unexpected. But it’s also fair to ask what other factors might drive giving – are there demographic, lifestage, or proclivity variables that can be used to direct social advertising, inform partnerships or target messaging?

Unfortunately, like many an enterprise (and not just non-profits), Earthwatch hasn’t done the greatest job building out their knowledge of their customers – in this case their donors. With only age, gender and zip code to work with (and that data obviously spotty with null values dominating each demographic category), the options for look-alike or advanced targeting are fairly minimal.

However, even with such thin gruel, there are findings to be had and analysis to be done. If you graph Earthwatch’s expeditioners by age, you get a big horseshoe-like graph. Lots of teenagers. Lots of seniors. Not much in-between. That’s no surprise and probably not changeable. Graph donors, and the left-hand side of the horseshoe (the teenagers) goes away. That’s no surprise either. You can’t squeeze much water from a rock. What is surprising is that the middle part of the graph doesn’t fill in. Aren’t the parents of those teens natural donors? Your children’s connection to an activity ought to be a powerful motivator to giving. I think there’s potentially a missed strategic opportunity here.

There were two other points that emerged from simple graphs of donations by age and donation amount by age. Earthwatch gets lots of donations from seniors. But there’s a big spike right at sixty. And there’s a pretty significant spike in donation amount right around forty. Think about that. Forty and sixty are big inflection points. They are times when almost all of us step outside the lines for at least a short while and think about the shape and nature of our life. That’s a good time to think about an Earthwatch expedition or a donation, right? This is a case where there’s no need to target a broad demographic. The combination of some key interest variables and a big birthday might be enough. It’s at least worth testing. Targeted marketers know the importance of magic moments, and the finer-grained you can make them, the more efficient you can be. For a non-profit like Earthwatch with tiny marketing dollars, the tighter you can draw the boundaries around a magic-moment, the more likely you are to be able to use it effectively.

Thinking about that donor curve also makes plain how important both patience and a long-term strategy are to a non-profit like Earthwatch (and maybe to a lot of for-profits as well). Earthwatch has been around for a long time. That means some of their early expeditioners are retirees now. If you can keep track of people for twenty, thirty or forty years, you have an opportunity to re-ignite those connections. When they have teenagers themselves, they are the right audience to target for expeditions and donations.

This long-term view seems hard. But it’s exactly what great schools and universities do. They know their 25-year-old graduates aren’t giving them money. But if they can create mechanisms to stay in touch till those graduates hit forty, fifty and sixty, that is worth a lot. Social media is, of course, a great way to do this. And facilitating social media connections with volunteers ought to be a long-term strategic goal for any non-profit that engages with young people.

And what about all those folks who took expeditions back in the 80’s and 90’s? Track them down on LinkedIn and Facebook – that’s what interns are for – and send them something to get them back in the fold!

In my recent posts, I’ve been arguing that analytics is under-used in strategy. Mostly, this type of analytics isn’t advanced modelling or big data stuff. It’s macroeconomics not microeconomics. Just looking at the shape of the donor and expeditioner curves can help inform strategic thinking.

From a more tactical standpoint, we also looked at the relationship between their new membership program and repeat giving. Earthwatch has bounced back and forth a bit on membership, but they currently are focused on it. We found that members tended to be smaller donors (their biggest donors weren’t always members). More interesting, however, was the impact of membership on donation pattern and stability. We tracked donors who gave in ’14 before the membership program and then became members in ’15. Did they give less or more? We didn’t have the time or the tools to do this analysis properly, but it looked as if membership, on average, tended to slightly depress average donation but increase frequency of giving resulting in a net positive. As I said, we didn’t have time to really prove this, but analytically, there’s a couple of key points here. If you’re a non-profit trying to assess the impact of something like membership, you need to make sure you break the problem down into analyzable segments. That means creating cohorts of previous donors and tracking their behavior (including whether their behavior tends to improve or deteriorate over time), tracking the impact on new donors and efforts, and, in most cases, using hold-outs and control groups to make sure you’re not fooling yourself about the numbers.
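For anyone who wants to try the same thing, the cohort cut itself is straightforward – here’s a sketch with invented column names and numbers (the hold-out caveat above still applies):

```python
# Sketch (assumption: one row per donation with these illustrative columns).
# Cohort: donors who gave in 2014 and became members in 2015; compare their
# average gift and giving frequency before vs. after joining.
import pandas as pd

donations = pd.DataFrame({
    "donor_id":     [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "year":         [2014, 2015, 2015, 2014, 2015, 2014, 2014, 2015, 2015],
    "amount":       [100, 60, 60, 250, 200, 50, 50, 40, 40],
    "member_since": [2015, 2015, 2015, None, None, 2015, 2015, 2015, 2015],
})

cohort = donations[donations["member_since"] == 2015]
summary = (cohort.assign(period=lambda d: d["year"].map({2014: "pre", 2015: "post"}))
                 .groupby("period")
                 .agg(avg_gift=("amount", "mean"),
                      gifts_per_donor=("donor_id", lambda s: len(s) / s.nunique())))
print(summary)  # compare average gift size and giving frequency pre vs. post membership
```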

Going back to the shapes of curves, the team that looked into the relationship between giving and expeditions found something truly interesting. They linked the two tables (donors/expeditioners) to isolate just the population that had gone on an expedition and donated money. Then they created a calculated variable that tracked the difference between the donation date and the expedition date and laid it out on a chart (ain’t Tableau wonderful).

What they found was kind of a shock. I would have expected a curve kind of like a camel’s hump after the expeditions. Not much giving ahead of time, a short latency period after the expedition, then a sharp hump followed by a quick decline and a long slow descent as the halo from the trip gradually dispersed. Much of that is exactly what they found. There isn’t much of a latency period, but there is a sharp hump followed by the quick decline and slow descent. The shocker was on the other side of the curve. It turns out that lots of expeditioners (not the teens but the adults) are quite likely to give BEFORE they travel. The team tackling this called it a “Packing Boost” (this is one of those things that makes me proud – not only did they find something interesting but they did the extra work to attach a business-useful name to the phenomenon – that’s good consulting). The pre-trip donation amounts were quite a bit smaller on average, but the number of donations was almost symmetrical.

I would never have expected that.

Apparently, when people are getting ready for an expedition they are also in the mood to make a donation. I can see that, but not only was it a surprise to me, it wasn’t received wisdom at Earthwatch either. Their donation solicitations are not at all focused on the pre-trip period.

That’s potentially a huge win and an easily testable addition to their solicitation marketing program.
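If you want to look for the same pattern in your own donor data, the calculated variable is trivial to build. A sketch, with illustrative columns and dates:

```python
# Sketch (assumption: a merged donor/expeditioner table with these columns).
# The calculated variable described above: days between donation and expedition.
# Negative values are gifts made before travel (the "Packing Boost" side).
import pandas as pd

df = pd.DataFrame({
    "donation_date":   pd.to_datetime(["2015-03-01", "2015-08-15", "2015-06-20"]),
    "expedition_date": pd.to_datetime(["2015-07-04", "2015-07-04", "2015-07-04"]),
    "amount":          [40, 150, 25],
})

df["days_from_trip"] = (df["donation_date"] - df["expedition_date"]).dt.days
pre  = df[df["days_from_trip"] < 0]
post = df[df["days_from_trip"] >= 0]
print(len(pre), pre["amount"].mean())    # count and average size of pre-trip gifts
print(len(post), post["amount"].mean())  # count and average size of post-trip gifts
```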

The third team looked at the behavior of expeditioners. Their initial analysis focused on when people book an expedition versus the type of expedition. It turns out that there are some pretty distinct types of trip. Expeditions to Africa are usually booked a long time in advance. Expeditions in the US and places like Costa Rica are more typically booked 2-3 months in advance. There are seasonal impacts as well, with most expeditions getting booked in the spring (to take place over summer).

Actionable? You bet it is. If you’re programming the hero section of the website (which happens to have a rotating set of expeditions), knowing the time-horizons for each type of trip can help you get your web marketing right. There’s also a planning element to this. If your Africa expedition isn’t largely staffed six months out, you’re in trouble. But that trip to Costa Rica still has plenty of runway.

Finally, that team looked at the impact of discounts on cancellation behavior and which expeditions were most cancelled (important from a planning perspective). They, too, ran out of time and had some tool limitations, but initial analysis seems to suggest that people are less likely to cancel trips when they’ve gotten a discount. Even more suggestive, it didn’t look like the amount of the discount was hugely significant. This might indicate that some discounting is economically beneficial – even if it drives no initial lift. It’s also possible that it’s no more than an artifact of self-selection, since the discounts may be offered to customer segments that are inherently less likely to cancel (previous expeditioners, for example). It’s an unexpected and potentially important finding but like any exploratory finding, it needs testing and controls to see if it’s real.

 

I’m pretty sure our five hours of time won’t change the world. Still, we had a lot of fun doing work we genuinely enjoy for an organization that truly matters. And there’s a chance we helped out a little. That’s good enough for me.

Are there some big takeaways about analytics from our one-day Hackathon? Most of them are things we all should know.

Earthwatch helped make the process more productive by coming to the table with three real and fairly concrete problems. We don’t always get as much from clients that are investing a lot of money. Knowing the questions you want to answer is the single most important step in any analysis.

Like a lot of organizations, Earthwatch hasn’t invested as much in data collection and data quality as is ideal. Limitations on the data place real boundaries on what you can do – not only with analysis but with the fruits of that analysis in targeting and personalization.

Being open to the unexpected is critical (and sometimes that’s easier for an outside consultant without a lot of preconceptions around the business). The team that started by focusing on the impact to donations after taking an expedition ended up talking much more about the impact to donations of planning for an expedition. It wasn’t that their initial hypothesis was wrong. People do donate after going on an expedition. But they had the imagination and sense to see that a more interesting hypothesis emerged from the data.

Tableau is a great tool for visualization and data exploration, but it can’t do everything. Problems like the cohort analysis of membership or the impact of cancellation really required statistical analysis tools with more horsepower and more data manipulation capabilities. Still, the ability to quickly explore a data set across many dimensions is wonderful and the utility of that ease in the right hands is hard to overestimate.

Finally, the biggest part of any analysis is the imagination to map the data to the business problem or opportunity. Strategic insights aren’t usually driven by fancy analysis. They are more often sparked by simple views and cuts of the data (line graphs or bar charts) that make obvious some fundamental fact about the business. Sometimes data can spark new insights; sometimes it’s just a confirmation (or refutation) of strategic thoughts or business intuitions that are already on the table. Either way, it makes for a better strategy and more confident decisions.

 

Finally, one last plug for Earthwatch. What they do is important and, often, very cool (check out that Barrier Reef diving expedition). Like our Hackathon, there’s nothing wrong and everything right with having fun doing something worthwhile. So even if you’re not coming up on forty or sixty, take a look!

Digital Transformation – How to Get Started, Real KPIs, the Necessary Staff and So Much More!

In the last couple of months, I’ve been writing an extended series on digital transformation that reflects our current practice focus. At the center of this whole series is a simple thesis: if you want to be good at something you have to be able to make good decisions around it. Most enterprises can’t do that in digital. From the top on down, they are set up in ways that make it difficult or impossible for decision-makers to understand how digital systems work and act on that knowledge. It isn’t because people don’t understand what’s necessary to make good decisions. Enterprises have invested in exactly the capabilities that are necessary: analytics, Voice of Customer, customer journey mapping, agile development, and testing. What they haven’t done is change their processes in ways that take advantage of those capabilities.

I’ve put together what I think is a really compelling presentation of how most organizations make decisions in the digital channel, why it’s ineffective, and what they need to do to get better. I’ve put a lot of time into it (because it’s at the core of our value proposition) and really, it’s one of the best presentations I’ve ever done. If you’re a member of the Digital Analytics Association, you can see a chunk of that presentation in the recent webinar I did on this topic. [Webinars are brutal – by far the hardest kind of speaking I do – because you are just sitting there talking into the phone for 50 minutes – but I think this one, especially the back-half, just went well] Seriously, if you’re a DAA member, I think you’ll find it worthwhile to replay the webinar.

If you’re not, and you really want to see it, drop me a line, I’m told we can get guest registrations setup by request.

At the end of that webinar I got quite a few questions. I didn’t get a chance to answer them all and I promised I would – so that’s what this post is. I think most of the questions have inherent interest and are easily understood without watching the webinar so do read on even if you didn’t catch it (but watch the darn webinar).

Q: Are metrics valuable to stakeholders even if they don’t tie in to revenues/cost savings?

Absolutely. In point of fact, revenue isn’t even the best metric on the positive side of the balance sheet. For many reasons, lifetime value metrics are generally a better choice than revenue. Regardless, not every useful metric has to, can or should tie back to dollars. There are whole classes of metrics that are important but won’t directly tie to dollars: satisfaction metrics, brand awareness metrics and task completion metrics. That being said, the most controversial type of non-revenue metric is the engagement proxy, which is, in turn, a kind of proxy for revenue. These, too, can be useful but they are far more dangerous. My advice is to never use a proxy metric unless you’ve done the work to prove it’s a valid proxy. That means no metrics plucked from thin air because they seem reasonable. If you can’t close the loop on performance with behavioral data, use re-survey methods. It’s absolutely critical that the metrics you optimize with be the right ones – and that means spending the extra time to get them right. Finally, I’ve argued for a while that rather than metrics our focus should be on delivering models embedded in tools – this allows people to run their business, not just look at history.

Q: What is your favorite social advertising KPI? I have been using $ / Site Visit and $ / Conversion to measure our campaigns but there is some pushback from the social team that we are not capturing social reach.

A very related question – and it’s interesting because I actually didn’t talk much about KPIs in the webinar! I think the question boils down to this (in addition to everything I just said about metrics) – is reach a valid metric? It can be, but reach shouldn’t be taken as is. As per my answer above, the value of an impression is quite different on every channel. If you’re not doing the work to figure out the value of an impression in a channel, then what’s the point of reporting an arbitrary reach number? How can people possibly assess whether any given reach number makes a buy good or bad once they realize that the value of an impression varies dramatically by channel? I also think a strong case can be made that it’s a mistake to try and optimize digital campaigns using reported metrics – even direct conversions and dollars. I just saw a tremendous presentation from Drexel’s Elea Feit at the Philadelphia DAA Symposium that echoed (and improved) what I’ve been saying for years. Namely, that non-incremental attribution is garbage and that the best way to get true measures of lift is to use control groups. If your social media team thinks reach is important, then it’s worth trying to prove whether they are right – whether that’s because those campaigns generate hidden short-term lift or because they generate brand awareness that tracks to long-term lift.
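For what it’s worth, the hold-out arithmetic itself is simple – the hard part is actually running the control group. A sketch with invented numbers:

```python
# Sketch (numbers invented). Incremental lift measured against a hold-out:
# the exposed group's conversion rate minus the control group's rate,
# not the raw conversion total a campaign report would claim.
exposed_visitors, exposed_conversions = 50_000, 1_250   # saw the social campaign
control_visitors, control_conversions = 50_000, 1_000   # randomly held out

exposed_rate = exposed_conversions / exposed_visitors    # 2.5%
control_rate = control_conversions / control_visitors    # 2.0%
incremental_rate = exposed_rate - control_rate           # 0.5 points of true lift
incremental_conversions = incremental_rate * exposed_visitors

print(f"Reported conversions: {exposed_conversions}")
print(f"Incremental conversions vs. hold-out: {incremental_conversions:.0f}")
```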

Q: For companies that are operating in the way you typically see, what is the one thing you would recommend to help get them started?

This is a tough one because it’s still somewhat dependent on the exact shape of the organization. Here are two things I commonly recommend. First, think about a much different kind of VoC program. Constant updating and targeting of surveys, regular socialization with key decision-makers where they drive the research, an enterprise-wide VoC dashboard in something like Tableau that focuses on customer decision-making not NPS. This is a great and relatively inexpensive way to bootstrap a true strategic decision support capability. Second, totally re-think your testing program as a controlled experimentation capability for decision-making. Almost every organization I work with should consider fundamental change in the nature, scope, and process around testing.

Q: How much does this change when there are no clear conversions (i.e., Non-Profit, B2B, etc)?

I don’t think anything changes. But, of course, everything does change. What I mean is that all of the fundamental precepts are identical. VoC, controlled experiments, customer journey mapping, agile analytics, integration of teams – it’s all exactly the same set of lessons regardless of whether or not you have clear conversions on your website. On the other hand, every single measurement is that much harder. I’d argue that these methods are even more important when you don’t have the relatively straightforward path to optimization that eCommerce provides. In particular, the absolute importance of closing the loop on important measurements simply can’t be overstated when you don’t have a clear conversion to optimize to.

Q: What is the minimum size of analytics team to be able to successfully implement this at scale?

Another tricky question to answer, but I’ll try not to weasel out of it. Think about it this way: to drive real transformation at enterprise scale, you need at least one analyst covering every significant function. That means an analyst for core digital reporting, digital analytics, experimentation, VoC, data science, customer journey, and implementation. For most large enterprises, that’s still an unrealistically small team. You might scrape by with a single analyst in VoC and customer journey, but you’re going to need at least small teams in core digital reporting, analytics, implementation and probably data science as well. If you’re at all successful, the number of analytics, experimentation and data science folks is going to grow larger – possibly much larger. It’s not like a single person in a startup can’t drive real change, but that’s just not the way things work in the large enterprise. Large enterprise environments are complex in every respect and it takes a significant number of people to drive effective processes.

Q: Sometimes it feels like agile is just a subject line for the weekly meeting. Do you have any examples of organizations using agile well when it comes to digital?

Couldn’t agree more. My rule of thumb is this: if your organization is studying how to be innovative, it never will be. If your organization is meeting about agile, it isn’t. In the IT world, Agile has gone from a truly innovative approach to development to a ludicrous over-engineered process managed, often enough, by teams of consulting PMs. I do see some organizations that I think are actually quite agile when it comes to digital and doing it very well. They are almost all gaming companies, pure-play internet companies or startups. I’ll be honest – a lot of the ideas in my presentation and approach to digital transformation come from observing those types of companies. Whether I’m right that similar approaches can work for a large enterprise is, frankly, unclear.

Q: As a third party measurement company, what is the best way to approach or the best questions to ask customers to really get at and understand their strategic goals around their customer journeys?

This really is too big to answer inside a blog – maybe even too big to reasonably answer as a blog. I’ll say, too, that I’m increasingly skeptical of our ability to do this. As a consultant, I’m honor-bound to claim that as a group we can come in, ask a series of questions of people who have worked in an industry for 10 or 20 years and, in a few days’ time, understand their strategic goals. Okay…put this way, it’s obviously absurd. And, in fact, that’s really not how consulting companies work. Most of the people leading strategic engagements at top-tier consulting outfits have actually worked in an industry for a long time and many have worked on the enterprise side and made exactly those strategic decisions. That’s a huge advantage. Most good consultants in a strategic engagement know 90% of what they are going to recommend before they ask a single question.

Having said that, I’m often personally in a situation where I’m asked to do exactly what I’ve just said is absurd and chances are if you’re a third party measurement company you have the same problem. You have to get at something that’s very hard and very complex in a very short amount of time and your expertise (like mine) is in analytics or technology not insurance or plumbing or publishing or automotive.

Here are a couple of things I’ve found helpful. First, take the journeys yourself. It’s surprising how many executives have never bought an online policy from their own company, downloaded a whitepaper to generate a lead, or bought advertising on their own site. You may not be able to replicate every journey, but where you can get hands on, do it. Having a customer’s viewpoint on the journey never hurts and it can give you insight your customers should but often don’t have. Second, remember that the internet is your best friend. A little up-front research from analysts is a huge benefit when setting the table for those conversations. And I’m often frantically googling acronyms and keywords when I’m leading those executive conversations. Third, check out the competition. If you generate a lead on the client’s website, try it on their top three competitors too. What you’ll see is often a great table-set for understanding where they are in digital and what their strategy needs to be. Finally, get specific on the journey. In my experience, the biggest failing in senior leaders is their tendency to generality. Big generalities are easy and they sound smart, but they usually don’t mean much of anything. The very best leaders don’t ever retreat into useless generality, but most of us will fall into it all too easily.

Q: What are some engagement models where an enterprise engages 3rd party consulting? For how long?

The question every consultant loves to hear! There are three main ways we help drive this type of digital transformation. The first is as strategic planners. We do quite a bit of pure digital analytics strategy work, but for this type of work we typically expand the strategic team a bit (beyond our core digital analytics folks) to include subject matter experts in the industry, in customer journey, and in information management. The goal is to create a “deep” analytics strategy that drives toward enterprise transformation. The second model (which can follow the strategic phase) is to supplement enterprise resources with specific expertise to bootstrap capabilities. This can include things like tackling specific highly strategic analytics projects, providing embedded analysts as part of the team to increase capacity and maturity, building out controlled experiment teams, developing VoC systems, etc. We can also provide – and here’s where being part of a big practice really helps – PM and Change Management experts who can help drive a broader transformation strategy. Finally, we can help build the program soup-to-nuts. Mind you, that doesn’t mean we do everything. I’m a huge believer that a core part of this vision is transformation in the enterprise. Effectively, that means outsourcing to a consultancy is never the right answer. But in a soup-to-nuts model, we keep strategic people on the ground, helping to hire, train, and plan on an ongoing basis.

Obviously, the how-long depends on the model. Strategic planning exercises are typically 10-12 weeks. Specific projects are all over the map, and the soup-to-nuts model is sustained engagement though it usually starts out hot and then gets gradually smaller over time.

Q: Would really like to better understand how you can identify visitor segments in your 2-tier segmentation when we only know they came to the site and left (without any other info on what segment they might represent).  Do you have any examples or other papers that address how/if this can be done?

A couple years back I was on a panel at a conference in San Diego and one of the panelists started every response with “In my book…”. It didn’t seem to matter much what the question was. The answer (and not just the first three words) was always the same. I told my daughters about it when I got home, and the gentleman is forever immortalized in my household as the “book guy”. Now I’m going to go all book guy on you. The heart of my book, “Measuring the Digital World”, is an attempt to answer this exact question. It’s by far the most detailed explication I’ve ever given of the concepts behind 2-tiered segmentation and how to go from behavior to segmentation. That being said, you can only pre-order now. So I’m also going to point out that I have blogged fairly extensively on this topic over the years. Here are a couple of posts I dredged out that provide a good overview:

http://semphonic.blogs.com/semangel/2012/05/digital-segmentation.html

http://semphonic.blogs.com/semangel/2011/06/building-a-two-tiered-segmentation-semphonics-digital-segmentation-techniques.html

and – even more important – here’s the link to pre-order the book!

That’s it…a pretty darn good list of questions. I hope that’s genuinely reflective of the quality of the webinar. Next week I’m going to break out of this series for a week and write about our recent non-profit analytics hackathon – a very cool event that spurred some new thoughts on the analysis process and the tools we use for it.

Controlled Experimentation and Decision-Making

The key to effective digital transformation isn’t analytics, testing, customer journeys, or Voice of Customer. It’s how you blend these elements together in a fundamentally different kind of organization and process. In the DAA Webinar (link coming) I did this past week on Digital Transformation, I used this graphic to drive home that point:


I’ve already highlighted experience engineering and integrated analytics in this little series, and the truth is I wrote a post on constant customer research too. If you haven’t read it, don’t feel bad. Nobody has. I liked it so much I submitted it to the local PR machine to be published and it’s still grinding through that process. I was hoping to get that relatively quickly so I could push the link, but I’ve given up holding my breath. So while I wait for VoC to emerge into the light of day, let’s move on to controlled experimentation.

I’ll start with definitional stuff. By controlled experimentation I do mean testing, but I don’t just mean A/B testing or even MVT as we’ve come to think about it. I want it to be broader. Almost every analytics project is challenged by the complexity of the world. It’s hard to control for all the constantly changing external factors that drive or impact performance in our systems. What looks like a strong and interesting relationship in a statistical analysis is often no more than an artifact produced by external factors that aren’t being considered. Controlled experiments are the best tool there is for addressing those challenges.

In a controlled experiment, the goal is to create a test whereby the likelihood of external factors driving the results is minimized. In A/B testing, for example, random populations of site visitors are served alternative experiences and their subsequent performance is measured. Provided the selection of visitors into each variant of the test is random and there is sufficient volume, A/B tests make it very unlikely that external factors like campaign sourcing or day-time parting will impact the test results. How unlikely? Well, taking a random sample doesn’t guarantee randomness. You can flip a fair coin fifty times and get fifty heads so even a sample collected in a fully random manner may come out quite biased; it’s just not very likely. The more times you flip, the more likely your sample will be representative.
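A quick simulation makes the point – the spread of outcomes you’d see across many random samples shrinks fast as the sample grows:

```python
# Small simulation of the coin-flip point above: the share of heads in a fair
# sample can stray far from 50% at n=50, much less so at n=5,000.
import numpy as np

rng = np.random.default_rng(42)
for n in (50, 500, 5_000):
    proportions = rng.binomial(n, 0.5, size=10_000) / n   # 10,000 simulated samples
    print(f"n={n:>5}: observed range of 'heads' share ≈ "
          f"{proportions.min():.2f} to {proportions.max():.2f}")
```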

Controlled experiments aren’t just the domain of website testing though. They are a fundamental part of scientific method and are used extensively in every kind of research. The goal of a controlled experiment is to remove all the variables in an analysis but one. That makes it really easy to analyze.

In the past, I’ve written extensively on the relationship between analytics and website testing (Kelly Wortham and I did a whole series on the topic). In that series, I focused on testing as we think of it in the digital world – A/B and MV tests and the tools that drive those tests. I don’t want to do that here, because the role for controlled experimentation in the digital enterprise is much broader than website testing. In an omni-channel world, many of the most important questions – and most important experiments – can’t be done using website testing. They require experiments which involve the use, absence or role of an entire channel or the media that drives it. You can’t build those kinds of experiments in your CMS or your testing tool.

I also appreciate that controlled experimentation doesn’t carry with it some of the mental baggage of testing. When we talk testing, people start to think about Optimizely vs. SiteSpect, A/B vs. MVT, landing page optimization and other similar issues. And when people think about A/B tests, they tend to think about things like button colors, image A vs. image B and changing the language in a call-to-action. When it comes to digital transformation, that’s all irrelevant.

It’s not that changing the button colors on your website isn’t a controlled experiment. It is; it’s just not a very important one. It’s also representative of the kind of random “throw stuff at a wall” approach to experimentation that makes so many testing programs nearly useless.

One of the great benefits of controlled experimentation is that, done properly, the idea of learning something useful is baked into the process. When you change the button color on your Website, you’re essentially framing a research question like this:

Hypothesis: Changing the color of Button X on Page Y from Red to Yellow will result in more clicks of the button per page view

An A/B test will indeed answer that question. However, it won’t necessarily answer ANY other question of higher generality. Will changing the color of any other button on any other page result in more clicks? That’s not part of the test.

Even with something as inane as button colors, thinking in terms of a controlled experiment can help. A designer might generalize this hypothesis to something that’s a little more interesting. For example, the hypothesis might be:

Hypothesis: Given our standard color palette, changing a call-to-action on the page to a higher-contrast color will result in more clicks per view on the call-to-action

That’s a somewhat more interesting hypothesis and it can be tested with a range of colors with different contrasts. Some of those colors might produce garish or largely unreadable results. Some combinations might work well for click-rates but create negative brand impressions. That, too, can be tested and might perhaps yield a standardized design heuristic for the right level of contrast between the call-to-action and the rest of a page given a particular color palette.
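Evaluating any one comparison from an experiment like that is standard stuff. Here’s a sketch of the click-rate side using a hand-rolled two-proportion z-test, with invented counts:

```python
# Sketch (counts invented): comparing click-through for a low-contrast vs.
# high-contrast call-to-action with a two-proportion z-test.
from math import sqrt
from scipy.stats import norm

clicks_a, views_a = 310, 10_000   # low-contrast variant
clicks_b, views_b = 365, 10_000   # high-contrast variant

p_a, p_b = clicks_a / views_a, clicks_b / views_b
p_pool = (clicks_a + clicks_b) / (views_a + views_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
z = (p_b - p_a) / se
p_value = 2 * norm.sf(abs(z))     # two-sided

print(f"CTR A={p_a:.3%}, CTR B={p_b:.3%}, z={z:.2f}, p={p_value:.3f}")
```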

The point is, by casting the test as a controlled experiment we are pushed to generalize the test in terms of some single variable (such as contrast and its impact on behavior). This makes the test a learning experience; something that can be applied to a whole set of cases.

This example could be read as an argument for generalizing isolated tests into generalized controlled experiments. That might be beneficial, but it’s not really ideal. Instead, every decision-maker in the organization should be thinking about controlled experimentation. They should be thinking about it as way to answer questions analytics can’t AND as a way to assess whether the analytics they have are valid. Controlled experimentation, like analytics, is a tool to be used by the organization when it wants to answer questions. Both are most effective when used in a top-down not a bottom-up fashion.

As the sentence above makes clear, controlled experimentation is something you do, but it’s also a way you can think about analytics – a way to evaluate the data decision-makers already have. I’ve complained endlessly, for example, about how misleading online surveys can be when it comes to things like measuring sitewide NPS. My objection isn’t to the NPS metric, it’s to the lack of control in the sample. Every time you shift your marketing or site functionality, you shift the distribution of visitors to your website. That, in turn, will likely shift your average NPS score – irrespective of any other change or difference. You haven’t gotten better or worse. Your customers don’t like you less or more. You’ve simply sampled a somewhat different population of visitors.
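Here’s a toy illustration of that mix-shift problem – neither segment’s sentiment moves, but the sitewide score does (numbers invented):

```python
# Illustration of the mix-shift problem (numbers invented). Neither segment's
# NPS changes, but a different blend of respondents moves the sitewide score.
def nps(promoter_share, detractor_share):
    return 100 * (promoter_share - detractor_share)

loyal_nps = nps(0.60, 0.10)   # +50 among loyal customers, both periods
new_nps   = nps(0.25, 0.35)   # -10 among new visitors, both periods

before = 0.70 * loyal_nps + 0.30 * new_nps   # old campaign mix skews loyal
after  = 0.40 * loyal_nps + 0.60 * new_nps   # new campaign pulls in new visitors

print(f"Sitewide NPS before: {before:.0f}, after: {after:.0f}")
# Score drops ~18 points with zero change in either segment's actual sentiment.
```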

That’s a perfect example of a metric/report which isn’t very controlled.  Something outside what you are trying to measure (your customer’s satisfaction or willingness to recommend you) is driving the observed changes.

When decision-makers begin to think in terms of controlled experiments, they have a much better chance of spotting the potential flaws in the analysis and reporting they have, and making more risk-informed decisions. No experiment can ever be perfectly controlled. No analysis can guarantee that outside factors aren’t driving the results. But when decision-makers think about what it would take to create a good experiment, they are much more likely to interpret analysis and reporting correctly.

I’ve framed this in terms of decision-makers, but it’s good advice for analysts too. Many an analyst has missed the mark by failing to control for obvious external drivers in their findings. A huge part of learning to “think like an analyst” is learning to evaluate every analysis in terms of how to best approximate a controlled experiment.

So if controlled experimentation is the best way to make decisions, why not just test everything? Why not, indeed? Controlled experimentation is tremendously underutilized in the enterprise. But having said as much, not every problem is amenable to or worth experimenting on. Sometimes, building a controlled experiment is very expensive compared to an analysis; sometimes it’s not. With an A/B testing tool, it’s often easier to deploy a simple test than try to conduct and analysis of a customer preference. But if you have an hypothesis that involves re-designing the entire website, building all that creative to run a true controlled experiment isn’t going to be cheap, fast or easy.

Media mix analysis is another example of how analysis/experimentation trade-offs come into play. If you do a lot of local advertising, then controlled experimentation is far more effective than mix modeling to determine the impact of media and to tune for the optimum channel blend. But if much of your media buy is national, then it’s pretty much impossible to create a fully controlled experiment that will allow you to test mix hypotheses. So for some kinds of marketing organizations, controlled experimentation is the best approach to mix decisions; for others, mix modelling (analysis in other words – though often supplemented by targeted experimentation) is the best approach.

This may all seem pretty theoretical, so I’ll boil it down to some specific recommendations for the enterprise:

  • Repurpose your A/B testing group as a controlled experimentation capability
  • Blend non-digital analytics resources into that group to make sure you aren’t thinking too narrowly – don’t just have a bunch of people who think A/B testing tools
  • Integrate controlled experimentation with analytics – they are two sides of the same coin and you need a single group that can decide which is appropriate for a given problem
  • Train your executives and decision-makers in experimentation and interpreting analysis – probably with a dedicated C-Suite resource
  • Create constant feedback loops in the organization so that decision-makers can request new survey questions, new analysis and new experiments at the same time and with the same group

I see lots of organizations that think they are doing a great job testing. Mostly they aren’t even close. You’re doing a great job testing when every decision maker at every level in the organization is thinking about whether a controlled experiment is possible when they have to make a significant decision. When those same decision-makers know how to interpret the data they have in terms of its ability to approximate a controlled experiment. And when building controlled experiments is deeply integrated into the analytics research team and deployed across digital and omni-channel problems.

Engineering the Digital Journey

Near the end of my last post (describing the concept of analytics across the enterprise), I argued that full spectrum analytics would provide “a common understanding throughout the enterprise of who your customers are, what journeys they have, which journeys are easy and which a struggle for each type of customer, detailed and constantly improving profiles of those audiences and those journeys and the decision-making and attitudes that drive them, and a rich understanding of how initiatives and changes at every level of the enterprise have succeeded, failed, or changed those journeys over time.”

By my count, that admittedly too long sentence contains the word journey four times and clearly puts understanding the customer journey at the heart of analytics understanding in the enterprise.

I think that’s right.

If you think about what senior decision-makers in an organization should get from analytics, nothing seems more important than a good understanding of customers and their journeys. That same understanding is powerful and important at every level of the organization. And by creating that shared understanding, the enterprise gains something almost priceless – the ability to converse consistently and intelligently, top-to-bottom, about why programs are being implemented and what they are expected to accomplish.

This focus on the journey isn’t particularly new. It’s been almost five years since I began describing Two-Tiered Segmentation as fundamental to digital; it’s a topic I’ve returned to repeatedly and it’s the central theme of my book. In a Two-Tiered Segmentation, you segment along two dimensions: who visitors are and what they are trying to accomplish in a visit. It’s this second piece – the visit intent segmentation – that begins to capture and describe customer journey.
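
As an illustration only (the segments, intents and visit records below are invented), the output of a Two-Tiered Segmentation is essentially a matrix: who the visitor is on one axis, what they came to do on the other, with a performance metric in each cell.

```python
# A toy two-tiered segmentation view built from hypothetical visit records.
import pandas as pd

visits = pd.DataFrame({
    "visitor_segment": ["prospect", "prospect", "customer", "customer",
                        "customer", "prospect", "customer", "prospect"],
    "visit_intent":    ["research", "buy",      "support",  "buy",
                        "research", "research", "support",  "buy"],
    "succeeded":       [0, 1, 1, 1, 0, 1, 0, 1],
})

# Tier 1 (who they are) x Tier 2 (what they came to do), success rate per cell
matrix = visits.pivot_table(index="visitor_segment",
                            columns="visit_intent",
                            values="succeeded",
                            aggfunc="mean")
print(matrix)
```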

But if Two-Tiered Segmentation is the start of a measurement framework for customer journey, it isn’t a complete solution. It’s too digitally focused and too rooted in displayed behaviors – meaning it’s defined solely by the functionality the enterprise happens to provide, not by the journeys your customers might actually want to take. It’s also designed to capture the points in a journey – not necessarily to lay out the broader journey in a maximally intelligible fashion.

Traditional journey mapping works from the other end of the spectrum. Starting with customers and using higher-level interview techniques, it’s designed to capture the basic things customers want to accomplish and then map those into more detailed potential touchpoints. It’s exploratory and specifically geared toward identifying gaps in functionality where customers CAN’T do the things they want or can’t do them in the channels they’d prefer.

While traditional journey mapping may feel like the right solution to creating enterprise-wide journey maps, it, too, has some problems. Because the techniques used to create journey maps are very high-level, they provide virtually no ability to segment the audience. This leads to a “one-size-fits-all” mentality that simply isn’t correct. In the real world, different audiences have significantly different journey styles, preferences and maps, and it’s only through behavioral analysis that enough detail can be unearthed about those segments to create accurate maps.

Similarly, this high-level journey mapping leads to a “golden-path” mentality that belies real world experience. When you talk to people in the abstract, it’s perfectly possible to create the ideal path to completion for any given task. But in the real world, customers will always surprise you. They start paths in odd places, go in unexpected directions, and choose channels that may not seem ideal. That doesn’t mean you can’t service them appropriately. It does mean that if you try to force every customer into a rigid “best” path you’ll likely create many bad experiences. This myth of the golden path is something we’ve seen repeatedly in traditional web analytics and it’s even more mistaken in omni-channel.

In an omni-channel world, the goal isn’t to create an ideal path to completion. It’s to understand where the customer is in their journey and adapt the immediate touchpoint to maximize their experience. That’s a fundamentally different mindset – a network approach, not a golden path – and it’s one that isn’t well captured or supported by traditional journey mapping.
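
One way to picture the network mindset, sketched here with hypothetical touchpoint sequences: instead of forcing events onto a single funnel, you tally the transitions customers actually make and treat the journey as a weighted graph.

```python
# A minimal sketch of the "network" view: count observed transitions
# between touchpoints from (hypothetical) ordered journey data.
from collections import Counter

journeys = [
    ["email", "web_research", "web_buy"],
    ["search", "web_research", "call_center", "store_buy"],
    ["web_research", "email", "web_buy"],
    ["search", "web_research", "web_research", "web_buy"],
]

transitions = Counter()
for path in journeys:
    for frm, to in zip(path, path[1:]):
        transitions[(frm, to)] += 1

# Each edge weight shows where customers actually go next from a touchpoint
for (frm, to), count in transitions.most_common():
    print(f"{frm} -> {to}: {count}")
```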

There’s one final aspect to traditional journey mapping that I find particularly troublesome – customer experience teams have traditionally approached journey mapping as a one-time, static exercise.

Mistake.

The biggest change digital brings to the enterprise is the move away from traditional project methodologies. This isn’t only an IT issue. It’s not (just) about Agile development vs. Waterfall. It’s about recognizing that ALL projects, in nearly all their constituent pieces, need to work in an iterative fashion. You don’t build once and move on. You build, measure, tune, rebuild, measure, and so on. Continuous improvement comes from iteration. And the implication is that analytics, design, testing, and, yes, development should all be set up to support continuous cycles of improvement.

In the well-designed digital organization, no project ever stops.

This goes for journey mapping too. Instead of one huge comprehensive journey map that never changes and covers every aspect of the enterprise, customer journeys need to be evolved iteratively as part of an experience factory approach. Yes, a high-level journey framework does need to exist to create the shared language and approach that the organization can use. But like branches on a tree, the journey map should constantly be evolved into increasingly fine-grained and detailed views of specific aspects of the journey. If you’ve commissioned a one-time customer experience journey mapping effort, congratulations; you’re already on the road to failure.

The right approach to journey mapping isn’t two-tiered segmentation or traditional customer experience maps; it’s a synthesis of the two that blends a high-level framework driven primarily by VoC and creative techniques with more detailed, measurement- and channel-based approaches (like Two-Tiered Segmentation) that deliver highly segmented, network-based views of the journey. The detailed approaches never stop developing, but even the high-level pieces should be continuously iterated. It’s not that you need to constantly re-work the whole framework; it’s that in a large enterprise, there are always new journeys, new content, and new opportunities evolving.

More than anything else, this need for continuous iteration is what’s changed in the world and it’s why digital is such a challenge to the large enterprise.

A great digital organization never stops measuring customer experience. It never stops designing customer experience. It never stops imagining customer experience.

That takes a factory, not a project.

Full Spectrum Analytics

Enterprises do analytics. They just don’t use analytics.

That’s the first, and for me the most frustrating, of the litany of failures I listed in my last post that drive digital incompetence in the enterprise. Most readers will assume I mean by this assertion that organizations spend time analyzing the data but then do nothing to act on the implications of that analysis. That’s true, but it’s only a small part of what I mean when I say that enterprises don’t use analytics. Nearly every enterprise that I work with or talk to has a digital analytics team ranging in size from modest to substantial. Some of these teams are very strong, some aren’t. But good or not-so-good, in almost every case, their efforts are focused on a very narrow range of analysis. Reporting on and attributing digital marketing, reporting on digital consumption, and conversion rate optimization around the funnel account for nearly all of the work these organizations produce.

Is that really all there is to digital analytics?

Though I’ve been struggling to find the right term (I’ve called it full-stack, full-spectrum and top-down analytics), the core idea is the same – every decision about digital at every level in the enterprise should be analytically driven. C-Level decision-makers who are deciding how much to invest in digital and what types of products or big-initiatives might bear fruit, senior leaders who are allocating budget and fleshing out major campaigns and initiatives, program managers who are prioritizing audiences, features and functionality, designers who are building content or campaign creative; every level and every decision should be supported and driven by data.

That simply isn’t the case at any enterprise I know. It isn’t even close to the case. Not even at the very best of the best. And the problem almost always begins at the top.

How do really senior decision-makers decide which products to invest in and how to carve up budgets? From a marketing perspective, there are organizations that efficiently use mix-modeling to support high-level decisions around marketing spend. That’s a good thing, but it’s a very small part of the equation. Senior decision-makers ought to have constantly before them a comprehensive and data-driven understanding of their customer types and customer journeys. They ought to understand which of those journeys they, as a business, perform well at and which they lag behind on. They ought to understand which audiences they don’t do well with, and what the keys to success for those audiences are. They ought to have a deep understanding of how previous initiatives have impacted those audiences and journeys – which have been successful and which have failed.

This mostly just doesn’t exist.

Journey mapping in the organization is static, old-fashioned, non-segmented and mostly ignored. There’s no VoC surfaced to decision-makers except NPS – which is entirely useless for actually understanding your customers (it tells you what they think about you, not who they are or what they’re trying to do). There is no monitoring of journey success or failure – either overall or by audience. Where journey maps exist, they exist entirely independent of KPIs and measurement. There is no understanding of how initiatives have impacted either specific audiences or journeys. There is no interesting tracking of audiences in general, no detailed briefings about where the enterprise is failing, no deep-dives into potential target populations and what they care about. In short, C-Level decision-makers get almost no interesting or relevant data on which to base the types of decisions they actually need to make.

Given that complete absence of interesting data, what you typically get is the same old style of decision-making we’ve practiced forever. Raise digital budgets by 10% because it sounds about right. Invest in a mobile app because Gartner says mobile is the coming thing. Create a social media command center because company X has one. This isn’t transformation. It isn’t analytics. It isn’t right.

Things don’t get better as you descend the hierarchy of an organization. The senior leaders making those high-level decisions and fleshing out programs and initiatives lack all the same things the C-Level folks lack. They don’t get useful VoC, interesting and data-supported journey mapping, comprehensive segmented performance tracking, or interesting analysis of historical performance by initiative either. They need all that stuff too.

Worse, since they don’t have any of those things and aren’t basing their decisions on them, most initiatives are shaped without having a clear business purpose that will translate into decisions downstream around targeting, creative, functionality and, of course, measurement.

If you’re building a mobile app to have a mobile app, not because you need to improve key aspects of a universally understood and agreed upon set of customer journeys for specific audiences, how much less effective will all of the downstream decisions about that app be? From content development to campaign planning to measurement and testing, a huge number of enterprise digital initiatives are crippled from the get-go by the lack of a consistent and clear vision at the senior levels about what they are designed to accomplish.

That lack of vision is, of course, fueled by a gaping hole in enterprise measurement – the lack of a comprehensive, segmented customer journey framework that is the basis for performance measurement and customer research.

Yes, there are pockets in the enterprise where data is used. Digital campaigns do get attributed (sometimes) and optimized (sometimes). Funnels do get improved with CRO. But even these often ardent users of data work, almost always, without the big picture. They have no better framework or data around that big-picture than anyone else and, unlike their counterparts in the C-Suite, they tend to be focused almost entirely on channel level concerns. This leads, inevitably, to a host of sub-optimal but fully data-driven decisions based on a narrow view of the data, the customer, and the business function.

There are, too, vast swathes of the mid- and low-level digital enterprise where data is as foreign to day-to-day operations as Texas BBQ would be in Timbuktu. The agencies and internal teams that create campaigns, build content and develop tools live their lives gloriously unconstrained by data. They know almost nothing of the target audiences for which the content and campaigns are built; they have no historical tracking of creative or feature delivery correlated to journey or audience success; they get no VoC information about what those audiences lack, struggle with, or use to make decisions. They lack, in short, the basic data around which they might understand why they are building an experience, what it should consist of, and how it should address the specific target audiences. They generally have no idea, either, how what they build will be measured or which aspects of its usage will be chosen by the organization as Key Performance Indicators.

Take all this together and what it means is that even in the enterprise with a strong digital analytics department, the overwhelming majority of decisions about digital – including nearly all the most important choices – are made with little or no data.

This isn’t a worst-case picture. It’s almost a best-case picture. Most organizations aren’t even dimly aware of how much they lack when it comes to using data to drive digital decision-making. Their view of digital analytics is framed by a set of preconceptions that limit its application to evaluating campaign performance or optimizing funnels.

That’s not full-spectrum analytics. It’s one little ray of light – and that a sickly, purplish hue – cast on an otherwise empty gray void. To transform the enterprise around digital – to be really good at digital with all the competitive advantage that implies – it takes analytics. But by analytics I don’t mean this pale, restricted version of digital analytics that claims for its territory nothing but a small set of choices around which marketing campaign to invest in. I mean, instead, a form of analytics that provides support for decision-makers of every type and at every level in the organization. An analytics that provides a common understanding throughout the enterprise of who your customers are, what journeys they have, which journeys are easy and which a struggle for each type of customer, detailed and constantly improving profiles of those audiences and those journeys and the decision-making and attitudes that drive them, and a rich understanding of how initiatives and changes at every level of the enterprise have succeeded, failed, or changed those journeys over time.

You can’t be great, or even very good, at digital without all this.

A flat-out majority of the enterprises I talk to these days are going on about transforming themselves with digital and all that implies for customer-centricity and agility. I’m pretty sure I know what they mean. They mean creating a siloed testing program and adding five people to their digital analytics team. They mean tracking NPS with their online surveys. They mean the sort of “agile” development that has led the original creators of agile to abandon the term in despair. They mean creating a set of static journey maps which are used once by the web design team and which are never tied to any measurement. They mean, in short, to pursue the same old ways of doing business and of making decisions with a gloss of digital best practices that change almost nothing.

It’s all too easy to guess how transformative and effective these efforts will be.