
Competitive Advantage and Digital Transformation – Optimizing Retail and eCommerce

In my last posts before the DA Hub, I described the first two parts of an analytics-driven digital transformation. The first part covered the foundational activities that help an organization understand digital and think and decide about it intelligently. Things like customer journey mapping, 2-tiered segmentation, a comprehensive VoC system and a unified campaign measurement framework form the core of a great digital organization. Done well, they will transform the way your organization thinks about digital. But, of course, thinking isn’t enough. You don’t build culture by talking but by doing. In the beginning was the deed. That’s why my second post dealt with a whole set of techniques for making analytics a constant part of the organization’s processes. Experimentation driven by a comprehensive analytics-driven testing plan, attribution and mix modelling, analytic reporting, re-survey, and a regular cadence of analytics-driven briefings make continuous improvement a reality. If you take this seriously and execute fully on these first two phases, you will be good at digital. That’s a promise.

But as powerful, transformative and important as these first two phases are, they still represent only a fraction of what you can achieve with analytics-driven transformation. The third phase of analytics-driven transformation targets areas where analytics changes the way a business operates, prices its products, and communicates with and supports its customers.

The third phase of digital transformation is unique. In some ways, it’s easier than the first two phases. It involves much less organizational and cultural transformation. If you’ve done those first two phases, you’re already there when it comes to having an analytics culture. On the other hand, in this third phase the analytics projects themselves are often MUCH more complex. This is where we tackle big hard problems. Problems that require big data, advanced statistical analysis, and serious imagination. Well, that’s the fun stuff. Seriously, if you’ve gotten through the first two phases of an analytics transformation successfully, doing the projects in Phase Three is like taking a victory lap.

There isn’t one single blueprint for the third phase of an analytics driven transformation. The work that gets done in the first two phases is surprisingly similar almost regardless of the industry or specific business. I suppose it’s like laying the foundation for a building. No matter what the building looks like, the concrete block at the bottom is going to look pretty much the same. At this third level, however, we’re above the foundation and what you do will depend mightily on your specific business.

I know that “it depends on your business” is not much of an answer. As a consultant, it’s not unusual to get caught up in conversations like this:

“So how much would it cost?”

“Well, that depends.”

“What kind of things does it depend on?”

“Well, it depends on how deeply you want to go into it, who you want to have do it, and how you want to get it done.”

All of this is true, of course, but none of it is helpful. I usually try to short-circuit these conversations by presenting a couple of real world alternatives.

I think this is more helpful (though it’s also more dangerous). Similarly, when I present the third phase of an analytics-driven transformation I try to make it specific to the business in question. And the more I know about the business, the more pointed, interesting, and – I hope – convincing that third phase is going to look. But if I haven’t spent much time with a business, I still customize that third phase by industry – picking out high-level analytics projects that are broadly applicable to everyone in the sector.

That’s what I’m going to try to do here, with the added benefit of picking a couple different industries and showing how the differences play out in this third phase. Do keep in mind, though, that the description of this third phase – unlike that of the first two – is meant to be suggestive only. No real-world third phase (certainly no optimal one) is likely to mirror what I lay out here. It might not even be very close. What’s more, unlike the first phase (at least), which is closed-ended (when you’ve done the projects I suggest, you’re done with that phase), phase three is open-ended. You never stop doing analytics projects at this level. And that’s a good thing.

For the first example, I decided to start with a classic retail e-commerce view of the world. It’s a sector where we all have, at the very least, a consumer’s understanding of how it works. There are many, many possible projects to choose from, but here are five I often present as a typical starting point.

The first is an analytically driven personalization program. With journey-mapping, 2-tiered segmentation and a robust experimentation program, an enterprise should be in a good position to drive personalization. Most personalization programs bootstrap themselves by starting with fairly straightforward segmentations (already done) and rule-based personalization decisions targeted to “easy” problems like email offers and returning visitors to the website. That’s fine. The very best way to build a personalization program is organically – build it by doing it with increasing sophistication in more and more channels and at more and more touchpoints.
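To make that bootstrap concrete, here is a minimal sketch of what an early, rule-based pass might look like. The visitor fields and offer names are hypothetical placeholders for whatever your own 2-tiered segmentation and offer library actually produce; the point is only how simple the first iteration can be.

```python
# Minimal sketch of an early, rule-based personalization pass.
# The visitor fields (segment, is_returning, abandoned_cart) and offer ids
# are hypothetical -- substitute what your segmentation actually produces.

def pick_offer(visitor: dict) -> str:
    """Return an offer/experience id for the 'easy' cases: email and returning visitors."""
    if visitor.get("abandoned_cart"):
        return "email_cart_reminder"
    if visitor.get("is_returning") and visitor.get("segment") == "value_shopper":
        return "homepage_clearance_banner"
    if visitor.get("is_returning"):
        return "homepage_recently_viewed"
    return "default_experience"

print(pick_offer({"is_returning": True, "segment": "value_shopper"}))
```

The organic growth the paragraph describes is simply replacing rules like these, one at a time, with model-driven decisions as channels and touchpoints are added.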

Merchandising optimization is another very big opportunity. So much of the merchandising optimization I see is focused on product detail pages. That’s fine as far as it goes, but it misses the much larger opportunity to optimize merchandising on search and aisle pages via analytics. Traditional merchandising folks have been slow to understand how critical moving merchandising upstream is to effective digital performance. This turns out to be analytically both very challenging and very rich.

Assortment optimization (and I might be just as likely to pick pricing or demand signals here) has long been a domain of traditional retail analytics. As such, I have to admit I didn’t think much about it until the last few years. But I’ve come to believe that digital analytics can yield powerful preference information that is typically missing in this analysis. To do effective assortment optimization, you need to understand customers’ potential replacement options. In the offline world, this usually involves making simple guesses based on high-level product sales about which products will be substituted. Using online view data, we can do much, much better. This is a case where digital analytics doesn’t so much replace an existing technique as deepen and enrich it with data heretofore undreamed of. Assortment optimization with digital data gives you highly segmented, localized data about product substitution preferences. It’s a lot better.
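As a rough illustration of the idea (not a production recipe), here is a sketch that treats products viewed together in the same session as substitution candidates. The session structure and field names are assumptions; a real version would also segment by audience and store, and weight by purchase outcomes.

```python
# Rough sketch: estimate substitution candidates from session-level view data.
# Assumes each session record lists the products a shopper viewed and
# (optionally) the one purchased -- field names here are hypothetical.
from collections import Counter, defaultdict
from itertools import combinations

sessions = [
    {"viewed": ["sku_a", "sku_b", "sku_c"], "bought": "sku_a"},
    {"viewed": ["sku_a", "sku_b"],          "bought": "sku_b"},
    {"viewed": ["sku_b", "sku_d"],          "bought": None},
]

co_viewed = defaultdict(Counter)   # product -> other products considered alongside it
for s in sessions:
    for p, q in combinations(set(s["viewed"]), 2):
        co_viewed[p][q] += 1
        co_viewed[q][p] += 1

# Top substitution candidates for each product: what shoppers compared it against.
for product, counts in co_viewed.items():
    print(product, counts.most_common(2))
```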

I’ve become a strong advocate for a fundamental re-think of loyalty programs based on the idea that surprise-based loyalty with no formal earning system is the future of rewards programs. The advantages of surprise-based loyalty are considerable when stacked up against traditional loyalty programs. You can target rewards where you think they will create lift. You can take advantage of inventory problems or opportunities. You don’t incur ANY financial obligations. You create no customer resentment or class issues. You can scale them and localize them to work with a specially trained staff. And, of course, the biggest bonus of all – you actually create far more impact per dollar spent. Surprise-based loyalty is, inherently, analytic. You can’t really do it any other way. Where it’s an option, it’s always one of the biggest changes you can make in the way your business works.

Finally, I’ve picked digital/store integration as my fifth project for analytics-led transformation. There are a number of different ways to take this. The drives between store and site are complex, important and fruitful. Optimizing those drives should be one of the analytics priorities for any omni-channel retailer. And that optimization is a combination of testing and analytics. In this case, however, I’ve chosen to focus on measuring and optimizing digital in-store experiences. You’re surely familiar with endless-aisle retail, where digital is integrated into the in-store experience. The vast majority of these physical-digital experiences have been quite ineffective. Almost always, they’ve been executed from a retail perspective. By which I mean that they’ve been built once, dropped into the store, and left to fail. That’s just not doing it right. In-store experiences are getting more digital. Digital signage is growing rapidly. Physical-digital experiences are increasingly common. But if you want actual competitive advantage out of these experiences, you’d better tackle them from a digital test-and-learn/analytics perspective. Anything less is a prescription for failure.

[Graphic: Digital Transformation Phase Three – Retail]

So here’s my first round of Phase Three projects for an analytics-driven transformation in retail. Each is big, complex and hard. They are also important. These are the projects that will truly transform your digital business. They are rubber-meets-the-road stuff that drives competitive advantage. It would be a mistake to try to execute on projects like this without first creating a strong analytics foundation in the organization. Your chances of misfiring on doing or operationalizing the analytics are simply too great without that foundation. But if you don’t move past the first two phases into analytics like this, you’re missing the big stuff. You can churn out lots of incremental improvement in digital without ever touching projects like these. Those incremental improvements aren’t nothing. They may be valuable enough to justify your time and money. But if that’s all you ever do, you’ll likely find yourself wondering if it was all really worth it. Do any of these projects successfully, and you’ll never ask that question again.

Next week I’ll show a different (non-retail) set of projects and break down what the differences tell us about how to make analytics a strategic asset.

[Just a reminder that if you’re interested in the U.S. version of the Digital Analytics Hub you can register here!]

The Agile Organization

I’ve been meandering through an extended series on digital transformation: why it’s hard, where things go wrong, and what you need to be able to do to be successful. In this post, I intend to summarize some of that thinking and describe how the large enterprise should organize itself to be good at digital.

Throughout this series, I’ve emphasized the importance of being able to make good decisions in the digital realm. That is, of course, the function of analytics and it’s my own special concern when it comes to digital. But there are people who will point out that decision-making is not the be-all and end-all of digital excellence. They might suggest that being able to execute is important too.

If you’re a football fan, it’s easy to see the dramatic difference between Peyton Manning – possibly the finest on-field decision-maker in the history of the game – with a good arm and without. It’s one thing to know where to throw the ball on any given play, quite another to be able to get it there accurately. If that wasn’t the case, it’s probably true that many of my readers would be making millions in the NFL!

On the other hand, this divide between decision-making and execution tends to break down if you extend your view to the entire organization. If the GM is doing the job properly, then the decision about which quarterbacks to draft or sign will appropriately balance their physical and decision-making skills. That’s part of what’s involved in good GM decisioning. Meanwhile, the coach has an identical responsibility on a day-to-day basis. A foot injury may limit Peyton to the point where his backup becomes a better option. Then it may heal and the pendulum swings back. The organization makes a series of decisions and if it can make all of those decisions well, then it’s hard to see how execution doesn’t follow along.

If, as an organization, I can make good decisions about the strategy for digital, the technology to run it on, the agencies to build it, the people to optimize it, the way to organize it, and the tactics to drive it, then everything is likely to be pretty good.

Unfortunately, it’s simply not the case that the analytics, organization and capabilities necessary to make good decisions across all these areas are remotely similar. To return to my football analogy, it’s clear that very few organizations are set up to make good decisions in every aspect of their operations. Some organizations excel at particular functions (like game-planning) but are very poor at drafting. Indeed, sometimes success in one area breeds disaster in another. When a coach like Chip Kelly becomes very successful in his role, there is a tendency for the organization to expand that role so that the coach has increasing control over personnel. This almost always works badly in practice. Even knowing it will work badly doesn’t prevent the problem. Since the coach is so important, it may be that an organization will cede much control over personnel to a successful coach even when everyone (except the coach) believes it’s a bad idea.

If you don’t think similar situations arise constantly in corporate America, you aren’t paying attention.

In my posts in this series, I’ve mapped out the capabilities necessary to give decision-makers the information and capabilities they need to make good decisions about digital experiences. I haven’t touched on (and don’t really intend to touch on) broader themes like deciding who the right people to hire are or what kind of measurement, analysis or knowledge is necessary to make those sorts of meta-decisions.

There are two respects, however, in which I have tried to address at least some of these meta-concerns about execution. First, I’ve described why it is and how it comes to pass that most enterprises don’t use analytics to support strategic decision-making. This seems like a clear miss and a place where thoughtful implementation of good measurement, particularly voice-of-customer measurement of the type I’ve described, should yield high returns.

Second, I took a stab at describing how organizations can think about and work toward building an analytics culture. In these two posts, I argue that most attempts at culture-building approach the problem backwards. The most common culture-building activities in the enterprise are all about “talk”. We talk about diversity. We talk about ethics. We talk about being data-driven in our decision-making. I don’t think this talk adds up to much. I suggest that culture is formed far more through habit than talk; that if an organization wants to build an analytics culture, it needs to find ways to “do” analytics. The word may precede the deed, but it is only through the force of the deed (good habits) that the word becomes character/culture. This may seem somewhat obvious – no, it is obvious – but people somehow manage to miss the obvious far too often. Those posts don’t just formulate the obvious, they also suggest a set of activities that are particularly efficacious in creating good enterprise habits of decision-making. If you care about enterprise culture and you haven’t already done so, give them a read.

For some folks, however, all these analytics actions miss the key questions. They don’t want to know what the organization should do. They want to know how the organization should work. Who owns digital? Who owns analytics? What lives in a central organization? What lives in a business unit? Is digital a capability or a department?

In the context of the small company, most of these questions aren’t terribly important. In the large enterprise, they mean a lot. But acknowledging that they mean a lot isn’t to suggest that I can answer them – or at least most of them.

I’m skeptical that there is an answer for most of these questions. At least in the abstract, I doubt there is one right organization for digital or one right degree of centralization. I’ve had many conversations with wise folks who recognize that their organizations seem to be in constant motion – swinging like an enormous pendulum between extremes of centralization followed by extremes of decentralization.

Even this peripatetic motion – which can look so irrational from the inside – may make sense. If we assume that centralization and decentralization have distinct advantages, then not only might changing circumstances drive a change in the optimal configuration, but swinging the organization from one pole to the other might even help capture the benefits of each.

That seems unlikely, but you never know. There is sometimes more logic in the seemingly irrational movements of the crowd than we might first imagine.

Most questions about digital organization are deeply historical. They depend on what type of company you are, in what kind of market, with what culture and what strategic imperatives. All of which is, of course, Management 101. Obvious stuff that hardly needs to be stated.

However, there are some aspects of digital about which I am willing to be more directive. First, that some balance between centralization and decentralization is essential in analytics. The imperative for centralization is driven by these factors: the need for comparative metrics of success around digital, the need for consistent data collection, the imperatives of the latest generation of highly-complex IT systems, and the need/desire to address customers across the full spectrum of their engagement with the enterprise. Of these, the first and the last are primary. If you don’t need those two, then you may not care about consistent data collection or centralized data systems (this last is debatable).

On the other hand, there are powerful reasons for decentralization of which the biggest is simply that analytics is best done as close to the decision-making as possible. Before the advent of Hadoop, I would have suggested that the vast majority of analytics resources in the digital space be decentralized. Hadoop makes that much harder. The skills are much rarer, the demands for control and governance much higher, and the need for cross-domain expertise much greater in this new world.

That will change. As the open-source analytics stack matures and the market over-rewards skilled practitioners – drawing in more folks – it will become much easier to decentralize again. This isn’t the first time we’ve been down the IT path that goes from centralization to gradual diffusion as technologies become cheaper, easier, and better supported.

At an even more fundamental level than the question of centralization lives the location and nature of digital. Is digital treated as a thing? Is it part of Marketing? Or Operations? Or does each thing have a digital component?

I know I should have more of an opinion about this, but I’m afraid that the right answers seem to me, once again, to be local and historical. In a digital pure-play, to even speak of digital as a thing seems absurd. It’s the core of the company. In a gas company, on the other hand, digital might best be viewed as a customer service channel. In a manufacturer, digital might be a sub-function of brand marketing or, depending on the nature of the digital investment and its importance to the company, a unit unto itself.

Obviously, one of the huge disadvantages of thinking of digital as a unit unto itself is figuring out how it can then interact correctly with the non-digital functions that share the same purpose. If you have digital customer servicing and non-digital customer servicing, does it really make sense to have one in a digital department and the other in a customer-service department?

There is a case, however, for incubating digital capabilities within a small, compact, standalone entity that can protect and nourish the digital investment with a distinct culture and resourcing model. I get that. Ultimately, though, it seems to me that unless digital OWNS an entire function, separating that function across digital and non-digital lines is arbitrary and likely to be ineffective in an omni-channel world.

But here’s the flip side. If you have a single digital property and it shares marketing and customer support functions, how do you allocate real estate and who gets to determine key things like site structure? I’ve seen organizations where everything but the home page is owned by somebody and the home page is like Oliver Twist. “Home page for sale, does anybody want one?”

That’s not optimal.

So the more overlap there needs to be between the functions and your digital properties, the more incentive you have to build a purely digital organization.

No matter what structure you pick, there are some trade-offs you’re going to have to live with. That’s part of why there is no magic answer to the right organization.

But far more important than the precise balance you strike around centralization or even where you put digital is the way you organize the core capabilities that belong to digital. Here, the vast majority of enterprises organize along the same general lines. Digital comprises some rough set of capabilities including:

  • IT
  • Creative
  • Marketing
  • Customer
  • UX
  • Analytics
  • Testing
  • VoC

In almost every company I work with, each of these capabilities is instantiated as a separate team. In most organizations, the IT folks are in a completely different reporting structure all the way up. There is no unification till you hit the C-Suite. Often, Marketing and Creative are unified. In some organizations, all of the research functions are unified (VoC, analytics) – sometimes under Customer, sometimes not. UX and Testing can wind up almost anywhere. They typically live under the Marketing department, but they can also live under a Research or Customer function.

None of this, to me, makes any sense.

To do digital well requires a deep integration of these capabilities. What’s more, it requires that these teams work together on a consistent basis. That’s not the way it’s mostly done.

Almost every enterprise I see not only siloes these capabilities, but puts in place budgetary processes that fund each digital asset as a one-time investment and require pass-offs between teams.

That’s probably not entirely clear so let me give some concrete examples.

You want to launch a new website. You hire an agency to design the website. Then your internal IT team builds it. Now the agency goes away. The folks who designed the website no longer have anything to do with it. What’s more, the folks who built it get rotated onto the next project. Sometimes, that’s all that happens. The website just sits there – unimproved. Sometimes the measurement team will now pick it up. Keep in mind that the measurement team almost never had anything to do with the design of the site in the first place. They are just there to report on it. Still, they measure it and if they find some problem, who do they give it to?

Well, maybe they pass it on to the UX team or the testing team. Those teams, neither of which has ever worked with the website or had anything to do with its design, are now responsible for implementing changes on it. And, of course, they will be working with developers who had nothing to do with building it.

Meanwhile, on an entirely separate track, the customer team may be designing a broader experience that involves that website. They enlist the VoC team to survey the site’s users and find out what they don’t like about it. Neither team (of course) had anything to do with designing or building the site.

If they come to some conclusion about what they want the site to do, they work with another(!) team of developers to implement their changes. That these changes may be at cross-purposes to the UX team’s changes or the original design intent is neither here nor there.

Does any of this make sense?

If you take continuous improvement to heart (and you should because it is the key to digital excellence), you need to realize that almost everything about the way your digital organization functions is wrong. You budget wrong and you organize wrong.

[Check out my relatively short (20 min) video on digital transformation and analytics organization – it’s the perfect medium for distributing this message through your enterprise!]

Here’s my simple rule about building digital assets. If it’s worth doing, it’s worth improving. Nothing you build will ever be right the first time. Accept that. Embrace it. That means you budget digital teams to build AND improve something. Those teams don’t go away. They don’t rotate. And they include ALL of the capabilities you need to successfully deliver digital experiences. Your developers don’t rotate off, your designers don’t go away, your VoC folks aren’t living in a parallel universe.

When you do things this way, you embed a commitment to continuous improvement deeply in your core organizational processes. It almost forces you to do it right. All those folks in IT and creative will demand analytics and tests to run or they won’t have anything to do.

That’s a good thing.

This type of vertical integration of digital capabilities is far, far more important than the balance around centralization or even the home for digital. Yet it gets far less attention in most enterprise strategic discussions.

The existence or lack of this vertical integration is the single most important factor in driving analytics into digital. Do it right, and you’ll do it well. Do what everyone else does and…well…it won’t be so good.

Controlled Experimentation and Decision-Making

The key to effective digital transformation isn’t analytics, testing, customer journeys, or Voice of Customer. It’s how you blend these elements together in a fundamentally different kind of organization and process. In the DAA Webinar (link coming) I did this past week on Digital Transformation, I used this graphic to drive home that point:


I’ve already highlighted experience engineering and integrated analytics in this little series, and the truth is I wrote a post on constant customer research too. If you haven’t read it, don’t feel bad. Nobody has. I liked it so much I submitted it to the local PR machine to be published and it’s still grinding through that process. I was hoping to get that relatively quickly so I could push the link, but I’ve given up holding my breath. So while I wait for VoC to emerge into the light of day, let’s move on to controlled experimentation.

I’ll start with definitional stuff. By controlled experimentation I do mean testing, but I don’t just mean A/B testing or even MVT as we’ve come to think about it. I want it to be broader. Almost every analytics project is challenged by the complexity of the world. It’s hard to control for all the constantly changing external factors that drive or impact performance in our systems. What looks like a strong and interesting relationship in a statistical analysis is often no more than an artifact produced by external factors that aren’t being considered. Controlled experiments are the best tool there is for addressing those challenges.

In a controlled experiment, the goal is to create a test whereby the likelihood of external factors driving the results is minimized. In A/B testing, for example, random populations of site visitors are served alternative experiences and their subsequent performance is measured. Provided the selection of visitors into each variant of the test is random and there is sufficient volume, A/B tests make it very unlikely that external factors like campaign sourcing or day-parting will impact the test results. How unlikely? Well, taking a random sample doesn’t guarantee representativeness. You can flip a fair coin fifty times and get fifty heads, so even a sample collected in a fully random manner may come out quite biased; it’s just not very likely. The more times you flip, the more likely your sample will be representative.
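Here’s a small simulation of that coin-flip point, assuming a simple 50/50 random assignment. It just counts how often a fair random split comes out badly skewed at different sample sizes; the threshold and sizes are arbitrary choices for illustration.

```python
# Sketch: how often does a fair random assignment come out badly skewed?
# Randomness guarantees representativeness only in the long run; larger
# samples make bad luck increasingly unlikely.
import random

def prob_skewed(sample_size, trials=5_000, threshold=0.10):
    """Share of trials where the A-arm share drifts more than `threshold` from 50/50."""
    skewed = 0
    for _ in range(trials):
        a_count = sum(random.random() < 0.5 for _ in range(sample_size))
        if abs(a_count / sample_size - 0.5) > threshold:
            skewed += 1
    return skewed / trials

for n in (50, 200, 1000, 5000):
    print(f"n={n:>5}: P(split off by more than 10 points) ~ {prob_skewed(n):.3f}")
```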

Controlled experiments aren’t just the domain of website testing though. They are a fundamental part of scientific method and are used extensively in every kind of research. The goal of a controlled experiment is to remove all the variables in an analysis but one. That makes it really easy to analyze.

In the past, I’ve written extensively on the relationship between analytics and website testing (Kelly Wortham and I did a whole series on the topic). In that series, I focused on testing as we think of it in the digital world – A/B and MV tests and the tools that drive those tests. I don’t want to do that here, because the role for controlled experimentation in the digital enterprise is much broader than website testing. In an omni-channel world, many of the most important questions – and most important experiments – can’t be done using website testing. They require experiments which involve the use, absence or role of an entire channel or the media that drives it. You can’t build those kinds of experiments in your CMS or your testing tool.

I also appreciate that controlled experimentation doesn’t carry with it some of the mental baggage of testing. When we talk testing, people start to think about Optimizely vs. SiteSpect, A/B vs. MVT, landing page optimization and other similar issues. And when people think about A/B tests, they tend to think about things like button colors, image A vs. image B and changing the language in a call-to-action. When it comes to digital transformation, that’s all irrelevant.

It’s not that changing the button colors on your website isn’t a controlled experiment. It is; it’s just not a very important one. It’s also representative of the kind of random “throw stuff at a wall” approach to experimentation that makes so many testing programs nearly useless.

One of the great benefits of controlled experimentation is that, done properly, the idea of learning something useful is baked into the process. When you change the button color on your Website, you’re essentially framing a research question like this:

Hypothesis: Changing the color of Button X on Page Y from Red to Yellow will result in more clicks of the button per page view

An A/B test will indeed answer that question. However, it won’t necessarily answer ANY other question of higher generality. Will changing the color of any other button on any other page result in more clicks? That’s not part of the test.

Even with something as inane as button colors, thinking in terms of a controlled experiment can help. A designer might generalize this hypothesis to something that’s a little more interesting. For example, the hypothesis might be:

Hypothesis: Given our standard color palette, changing a call-to-action on the page to a higher contrast color will result in more clicks per view on the call-to-action

That’s a somewhat more interesting hypothesis and it can be tested with a range of colors with different contrasts. Some of those colors might produce garish or largely unreadable results. Some combinations might work well for click-rates but create negative brand impressions. That, too, can be tested and might perhaps yield a standardized design heuristic for the right level of contrast between the call-to-action and the rest of a page given a particular color palette.

The point is, by casting the test as a controlled experiment we are pushed to generalize the test in terms of some single variable (such as contrast and its impact on behavior). This makes the test a learning experience; something that can be applied to a whole set of cases.
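For illustration, here’s a sketch of how you might read out such an experiment, assuming invented click and view counts for a low-contrast control and two higher-contrast variants. It uses a standard pooled two-proportion z-test, computed by hand so no extra libraries are needed; the variant names and numbers are not real data.

```python
# Sketch: comparing click-through for contrast variants against a control.
# Counts are invented for illustration; the test is a pooled two-proportion z-test.
from math import sqrt, erf

def two_proportion_z(clicks_a, views_a, clicks_b, views_b):
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    pooled = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return p_b - p_a, z, p_value

control = ("low_contrast", 480, 12_000)          # variant, clicks, views
variants = [("medium_contrast", 560, 12_000),
            ("high_contrast",   655, 12_000)]

for name, clicks, views in variants:
    lift, z, p = two_proportion_z(control[1], control[2], clicks, views)
    print(f"{name}: lift={lift:+.3%}, z={z:.2f}, p={p:.4f}")
```

Because the hypothesis is framed around contrast rather than a single button, the same readout can be repeated across pages and palettes to see whether the effect generalizes.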

This example could be read as an argument for generalizing isolated tests into generalized controlled experiments. That might be beneficial, but it’s not really ideal. Instead, every decision-maker in the organization should be thinking about controlled experimentation. They should be thinking about it as a way to answer questions analytics can’t AND as a way to assess whether the analytics they have are valid. Controlled experimentation, like analytics, is a tool to be used by the organization when it wants to answer questions. Both are most effective when used in a top-down not a bottom-up fashion.

As the sentence above makes clear, controlled experimentation is something you do, but it’s also a way you can think about analytics – a way to evaluate the data decision-makers already have. I’ve complained endlessly, for example, about how misleading online surveys can be when it comes to things like measuring sitewide NPS. My objection isn’t to the NPS metric, it’s to the lack of control in the sample. Every time you shift your marketing or site functionality, you shift the distribution of visitors to your website. That, in turn, will likely shift your average NPS score – irrespective of any other change or difference. You haven’t gotten better or worse. Your customers don’t like you less or more. You’ve simply sampled a somewhat different population of visitors.

That’s a perfect example of a metric/report which isn’t very controlled.  Something outside what you are trying to measure (your customer’s satisfaction or willingness to recommend you) is driving the observed changes.
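A toy example makes the point. Assume (with invented numbers) three visitor segments whose underlying NPS never changes, and watch the sitewide score move when only the traffic mix shifts:

```python
# Sketch: sitewide NPS can move even when no segment's opinion changes.
# Segment-level NPS values and traffic mixes below are invented for illustration.

segment_nps = {"loyal_customers": 60, "deal_seekers": 10, "first_time_visitors": -5}

mix_before = {"loyal_customers": 0.50, "deal_seekers": 0.30, "first_time_visitors": 0.20}
mix_after  = {"loyal_customers": 0.30, "deal_seekers": 0.30, "first_time_visitors": 0.40}  # e.g. after a big acquisition push

def sitewide_nps(mix):
    return sum(share * segment_nps[seg] for seg, share in mix.items())

print(f"before campaign shift: {sitewide_nps(mix_before):.1f}")
print(f"after campaign shift:  {sitewide_nps(mix_after):.1f}")
# The score drops 13 points with zero change in any customer's actual attitude.
```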

When decision-makers begin to think in terms of controlled experiments, they have a much better chance of spotting the potential flaws in the analysis and reporting they have, and making more risk-informed decisions. No experiment can ever be perfectly controlled. No analysis can guarantee that outside factors aren’t driving the results. But when decision-makers think about what it would take to create a good experiment, they are much more likely to interpret analysis and reporting correctly.

I’ve framed this in terms of decision-makers, but it’s good advice for analysts too. Many an analyst has missed the mark by failing to control for obvious external drivers in their findings. A huge part of learning to “think like an analyst” is learning to evaluate every analysis in terms of how to best approximate a controlled experiment.

So if controlled experimentation is the best way to make decisions, why not just test everything? Why not, indeed? Controlled experimentation is tremendously underutilized in the enterprise. But having said as much, not every problem is amenable to or worth experimenting on. Sometimes, building a controlled experiment is very expensive compared to an analysis; sometimes it’s not. With an A/B testing tool, it’s often easier to deploy a simple test than to conduct an analysis of a customer preference. But if you have a hypothesis that involves re-designing the entire website, building all that creative to run a true controlled experiment isn’t going to be cheap, fast or easy.

Media mix analysis is another example of how analysis/experimentation trade-offs come into play. If you do a lot of local advertising, then controlled experimentation is far more effective than mix modelling to determine the impact of media and to tune for the optimum channel blend. But if much of your media buy is national, then it’s pretty much impossible to create a fully controlled experiment that will allow you to test mix hypotheses. So for some kinds of marketing organizations, controlled experimentation is the best approach to mix decisions; for others, mix modelling (analysis in other words – though often supplemented by targeted experimentation) is the best approach.
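As a sketch of what the local-advertising case can look like, here’s a toy matched-market readout. The market names and sales figures are invented, and a real test would also need pre-period matching and significance checks; the point is only the shape of the comparison.

```python
# Sketch: a simple matched-market readout for a local-media experiment.
# Test markets get the new channel mix, control markets keep the old one;
# market names and sales figures are invented for illustration.

markets = {
    # market: (group, sales_before, sales_during)
    "springfield":  ("test",    100_000, 118_000),
    "shelbyville":  ("test",     80_000,  93_000),
    "ogdenville":   ("control",  95_000, 101_000),
    "capital_city": ("control", 120_000, 127_000),
}

def avg_growth(group):
    rates = [(during / before) - 1
             for g, before, during in markets.values() if g == group]
    return sum(rates) / len(rates)

lift = avg_growth("test") - avg_growth("control")
print(f"test growth:    {avg_growth('test'):.1%}")
print(f"control growth: {avg_growth('control'):.1%}")
print(f"estimated incremental lift from the new mix: {lift:.1%}")
```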

This may all seem pretty theoretical, so I’ll boil it down to some specific recommendations for the enterprise:

  • Repurpose your A/B testing group as a controlled experimentation capability
  • Blend non-digital analytics resources into that group to make sure you aren’t thinking too narrowly – don’t just have a bunch of people who think only in terms of A/B testing tools
  • Integrate controlled experimentation with analytics – they are two sides of the same coin and you need a single group that can decide which is appropriate for a given problem
  • Train your executives and decision-makers in experimentation and interpreting analysis – probably with a dedicated C-Suite resource
  • Create constant feedback loops in the organization so that decision-makers can request new survey questions, new analysis and new experiments at the same time and with the same group

I see lots of organizations that think they are doing a great job testing. Mostly they aren’t even close. You’re doing a great job testing when every decision maker at every level in the organization is thinking about whether a controlled experiment is possible when they have to make a significant decision. When those same decision-makers know how to interpret the data they have in terms of its ability to approximate a controlled experiment. And when building controlled experiments is deeply integrated into the analytics research team and deployed across digital and omni-channel problems.

Full Spectrum Analytics

Enterprises do analytics. They just don’t use analytics.

That’s the first, and for me the most frustrating, of the litany of failures I listed in my last post that drive digital incompetence in the enterprise. Most readers will assume I mean by this assertion that organizations spend time analyzing the data but then do nothing to act on the implications of that analysis. That’s true, but it’s only a small part of what I mean when I say that enterprises don’t use analytics. Nearly every enterprise that I work with or talk to has a digital analytics team ranging in size from modest to substantial. Some of these teams are very strong, some aren’t. But good or not-so-good, in almost every case, their efforts are focused on a very narrow range of analysis. Reporting on and attributing digital marketing, reporting on digital consumption, and conversion rate optimization around the funnel account for nearly all of the work these organizations produce.

Is that really all there is to digital analytics?

Though I’ve been struggling to find the right term (I’ve called it full-stack, full-spectrum and top-down analytics), the core idea is the same – every decision about digital at every level in the enterprise should be analytically driven. C-Level decision-makers who are deciding how much to invest in digital and what types of products or big-initiatives might bear fruit, senior leaders who are allocating budget and fleshing out major campaigns and initiatives, program managers who are prioritizing audiences, features and functionality, designers who are building content or campaign creative; every level and every decision should be supported and driven by data.

That simply isn’t the case at any enterprise I know. It isn’t even close to the case. Not even at the very best of the best. And the problem almost always begins at the top.

How do really senior decision-makers decide which products to invest in and how to carve up budgets? From a marketing perspective, there are organizations that efficiently use mix-modeling to support high-level decisions around marketing spend. That’s a good thing, but it’s a very small part of the equation. Senior decision-makers ought to have constantly before them a comprehensive and data-driven understanding of their customer types and customer journeys. They ought to understand which of those journeys they as a business perform well at and at which they lag behind. They ought to understand what audiences they don’t do well with, and what the keys to success for that audience are. They ought to have a deep understanding of how previous initiatives have impacted those audiences and journeys – which have been successful and which have failed.

This mostly just doesn’t exist.

Journey mapping in the organization is static, old-fashioned, non-segmented and mostly ignored. There’s no VoC surfaced to decision-makers except NPS – which is entirely useless for actually understanding your customers (instead of understanding what they think about you). There is no monitoring of journey success or failure – either overall or by audience. Where journey maps exist, they exist entirely independent of KPIs and measurement. There is no understanding of how initiatives have impacted either specific audiences or journeys. There is no interesting tracking of audiences in general, no detailed briefings about where the enterprise is failing, no deep-dives into potential target populations and what they care about. In short, C-Level decision-makers get almost no interesting or relevant data on which to base the types of decisions they actually need to make.

Given that complete absence of interesting data, what you typically get is the same old style of decision-making we’ve been at forever. Raise digital budgets by 10% because it sounds about right.  Invest in a mobile app because Gartner says mobile is the coming thing. Create a social media command center because company X has one. This isn’t transformation. It isn’t analytics. It isn’t right.

Things don’t get better as you descend the hierarchy of an organization. The senior leaders taking those high-level decisions and fleshing out programs and initiatives lack all of those same things the C-Level folks lack. They don’t get useful VoC, interesting and data-supported journey mapping, comprehensive segmented performance tracking, or interesting analysis of historical performance by initiative either. They need all that stuff too.

Worse, since they don’t have any of those things and aren’t basing their decisions on them, most initiatives are shaped without having a clear business purpose that will translate into decisions downstream around targeting, creative, functionality and, of course, measurement.

If you’re building a mobile app to have a mobile app, not because you need to improve key aspects of a universally understood and agreed upon set of customer journeys for specific audiences, how much less effective will all of the downstream decisions about that app be? From content development to campaign planning to measurement and testing, a huge number of enterprise digital initiatives are crippled from the get-go by the lack of a consistent and clear vision at the senior levels about what they are designed to accomplish.

That lack of vision is, of course, fueled by a gaping hole in enterprise measurement – the lack of a comprehensive, segmented customer journey framework that is the basis for performance measurement and customer research.

Yes, there are pockets in the enterprise where data is used. Digital campaigns do get attributed (sometimes) and optimized (sometimes). Funnels do get improved with CRO. But even these often ardent users of data work, almost always, without the big picture. They have no better framework or data around that big-picture than anyone else and, unlike their counterparts in the C-Suite, they tend to be focused almost entirely on channel level concerns. This leads, inevitably, to a host of sub-optimal but fully data-driven decisions based on a narrow view of the data, the customer, and the business function.

There are, too, vast swathes of the mid and low level digital enterprise where data is as foreign to day-to-day operations as Texas BBQ would be in Timbuktu. The agencies and internal teams that create campaigns, build content and develop tools live their lives gloriously unconstrained by data. They know almost nothing of the target audiences for which the content and campaigns are built, they have no historical tracking of creative or feature delivery correlated to journey or audience success, they get no VoC information about what those audiences lack, struggle with or make decisions using. They lack, in short, the basic data around which they might understand why they are building an experience, what it should consist of, and how it should address the specific target audiences. They generally have no idea, either, how what they build will be measured or which aspects of its usage will be chosen by the organization as Key Performance Indicators.

Take all this together and what it means is that even in the enterprise with a strong digital analytics department, the overwhelming majority of decisions about digital – including nearly all the most important choices – are made with little or no data.

This isn’t a worst-case picture. It’s almost a best-case picture. Most organizations aren’t even dimly aware of how much they lack when it comes to using data to drive digital decision-making.  Their view of digital analytics is framed by a set of preconceptions that limit its application to evaluating campaign performance or optimizing funnels.

That’s not full-spectrum analytics. It’s one little ray of light – and that a sickly, purplish hue – cast on an otherwise empty gray void. To transform the enterprise around digital – to be really good at digital with all the competitive advantage that implies – it takes analytics. But by analytics I don’t mean this pale, restricted version of digital analytics that claims for its territory nothing but a small set of choices around which marketing campaign to invest in. I mean, instead, a form of analytics that provides support for decision-makers of every type and at every level in the organization. An analytics that provides a common understanding throughout the enterprise of who your customers are, what journeys they have, which journeys are easy and which a struggle for each type of customer, detailed and constantly improving profiles of those audiences and those journeys and the decision-making and attitudes that drive them, and a rich understanding of how initiatives and changes at every level of the enterprise have succeeded, failed, or changed those journeys over time.

You can’t be great, or even very good, at digital without all this.

A flat-out majority of the enterprises I talk to these days are going on about transforming themselves with digital and all that implies for customer-centricity and agility. I’m pretty sure I know what they mean. They mean creating a siloed testing program and adding five people to their digital analytics team. They mean tracking NPS with their online surveys. They mean the sort of “agile” development that has led the original creators of agile to abandon the term in despair. They mean creating a set of static journey maps which are used once by the web design team and which are never tied to any measurement. They mean, in short, to pursue the same old ways of doing business and of making decisions with a gloss of digital best practices that change almost nothing.

It’s all too easy to guess how transformative and effective these efforts will be.