
How to Drive Digital Transformation when You’re Not a Digital Expert: Addressing the Reverse Hierarchy of Understanding

In my last post I described some of the biggest challenges to a traditional enterprise trying to drive digital transformation. This isn’t just the usual “this stuff is hard” blather – there are real hurdles for the traditional large enterprise trying to do digital well. The pace of change and frictionless competition are maddening to organizations used to winning through “weight of metal,” not agility. The need for customer-centricity penalizes organizations set up in careful silos. And these very real hurdles are exacerbated by the way digital often creates poor decision-making in otherwise skilled organizations because of what I termed the reverse hierarchy of understanding.

The reverse hierarchy of understanding is a pretty simple concept. Organizations work best when the most senior folks know the most about the business – when, in other words, knowledge and seniority track. For the most part (and despite a penchant among folks lower down in the organization for thinking otherwise), I think they do track rather well in most companies. That, at least, has been my fairly consistent experience.

There are, of course, many pockets of specialized knowledge in a large company where knowledge and seniority don’t track. The CFO may not be able to drive TM1. The CTO probably doesn’t know Swift. That’s not a problem. However, when something is both strategic and core to the business, it’s critical that knowledge and seniority track appropriately. If they don’t, then it’s hard for the enterprise to make good decisions. The people who are usually empowered to make decisions aren’t as qualified as they typically are, and the folks who have the specific knowledge probably don’t have either the strategic skills or the business understanding to fill in. And, of course, they probably don’t have the power either.

Digital can create exactly this inversion in the appropriate hierarchy of decision-making in the traditional enterprise, and it does so at many levels in the organization. Digital has become strategic and core far more rapidly than most large organizations can adapt, creating reverse hierarchies of understanding that can cripple efforts to do digital better.

So if you want to transform a traditional business and you know your organization has a reverse hierarchy of understanding (or maybe just a complete lack of understanding at every level), what do you do?

There’s not one answer of course. No magic key to unlocking the secret to digital transformation. And I’ve written plenty of stuff previously on ways to do digital better – all of which still applies. But here are some strategies that I think might help – strategies geared toward tackling the specific problem created by reverse hierarchies of understanding.

 

Incubation

I’m sensitive to the many drawbacks to incubating digital inside a larger organization. If incubation succeeds, then it creates long-term integration challenges. It potentially retards the growth of digital expertise in the main business, and it may even cannibalize what digital knowledge there is in the organization. These are all real negatives. Despite that, I’ve seen incubation work fairly effectively as a strategy. Incubation creates a protected pocket in the organization that can be staffed and set up in a way that creates the desired knowledge hierarchy through most levels. Would I always recommend incubation? Absolutely not. In many organizations, years of at least partial learning and transfusions of outside talent have created enough digital savvy that incubation is unnecessary and probably undesirable. But if digital knowledge in your organization is still nascent, and particularly if you have layers of management still skeptical about or hostile to digital, then incubation is a strategy to consider.

 

Transfusion

And speaking of talent transfusions, the role of appropriate hiring in effectively transforming the organization can hardly be overstated. The best, simplest and most impactful way to address the reverse hierarchy of understanding is to…fix the problem. And the easiest way to fix the problem is by hiring folks with deep digital understanding at multiple levels of the organization. In some cases, of course, this means hiring someone to run digital. If you’re a traditional enterprise looking to hire a chief digital officer, the natural place to look is organizations that are great at digital – especially the companies that dominate the Web and that we all, rightly, admire. I tell my clients that’s a mistake. It’s not that those folks aren’t really good at digital; they are. What they aren’t good at is digital transformation. If you’ve grown up managing digital platforms and marketing for a digital pure-play, chances are you’re going to be massively frustrated trying to change a traditional enterprise. To drive transformation, you have to be a great coach. That isn’t at all the same as being a great player. In fact, not only isn’t it the same, it’s negatively correlated. The best coaches are almost NEVER the best players.

Getting the right person to lead digital isn’t where most organizations go wrong, though. If you’re committed to digital transformation, you need to look for digital savvy in every hiring decision that is at all related to your digital enterprise. You need digital savvy in HR, in accounting, in analytics, in customer experience, in supply chain, in branding and corporate communications. And so on. This is the long game, but it’s ultimately the most important game you’ll play in digital transformation – especially when you’re trying to drive transformation outside of massive disruption. In my last post, I mentioned FDR’s many efforts to prepare the U.S. for WWII before there was any political consensus for war. Every leader is constrained by the realities on the ground. Great leaders find ways to at least lay the essential groundwork for transformation BEFORE – not after – disaster strikes. You need to make sure that digital savvy becomes a basic qualifier for a wide range of positions in your organization.

 

Analytics

Dare I say that analytics has the potential to play a decisive role in solving the reverse hierarchy of understanding? Well, at the very least, it can be a powerful tool. In a normal hierarchy of understanding, seniority comes pre-loaded with better intuitions – intuitions born of both experience and selection. And those intuitions, naturally, drive better decisions. It’s darn hard to replace those intuitions, but analytics is a great leveler. A good analyst may not be quite the decision-maker that an experienced expert is – but at the very least, a good analyst equipped with relevant data will come much closer to that level of competent decision-making than would otherwise be possible.

Thankfully, this works both ways. Where senior decision-makers can’t rely on their experience and knowledge, they, too, benefit from analytics to close the gap. An executive willing to look at analytics and learn may not be quite in the league of an experienced digital expert, but they can come surprisingly close.

This works all up and down the organization.

So how do you get your team using analytics? I addressed this in depth in a series of posts on building analytic culture. Read this and this. It’s good stuff. But here’s a simple management technique that can help drive your whole team to start using analytics. Every time there’s an argument over something, instead of voicing an opinion, ask for the numbers. If your team is debating whether to deliver Feature X or Feature Y in digital, ask questions like “What do our customers say is more important?” or “Which do high-value customers say they’ll use more?”

Ask questions about what gets used more. About whether people like an experience. About whether people who do something are actually more likely to convert. If you keep asking questions, eventually people are going to start getting used to thinking this way and will start asking (and answering) the questions themselves.

Way back in the early days of Semphonic, I often had junior programmers ask me how to do some coding task. At the time, I was still a pretty solid programmer with years of experience writing commercial software in C++. But since I wasn’t actively programming and my memory tends to be a bit short-term, I almost never just knew the answer. Instead, I’d ask Google. Almost always, I could find some code that solved the problem with only a few minutes’ search. Usually, we’d do this together staring at my screen. Eventually, they got the message and bypassed me by looking for code directly on Google.

That’s a win.

Nowadays, programmers do this automatically. But back in the aughts, I had to teach programmers that the easiest way to solve most coding problems is to find examples on Google. In ten years, looking at digital analytics and voice of customer will be second nature throughout your organization. But for right now, if you can make your team do the analytics work to answer the types of questions I’ve outlined above, you’ll have dramatically raised the level of digital sophistication in your organization. This isn’t as foreign to most good enterprise leaders as I used to think. Sure, folks at the top of most companies are used to offering their opinions. But they’re also pretty experienced at having to make decisions in areas where they aren’t expert, and they know that asking questions is a powerful tool for pushing people to demonstrate (or arrive at) understanding. The key is knowing the right questions to ask. In digital, that usually means asking customer-focused questions like the ones I enumerated above.

 

Consulting

I’m probably too deeply involved in the sausage-making to give good advice on how organizations should use consulting to drive transformation. But here are a few pointers worth bearing in mind. Consulting is a tempting way to solve a reverse hierarchy of understanding. You can bring in hired guns to build a digital strategy or drive specific digital initiatives. And if you’re lucky or choose wisely, there’s no reason why consultants can’t provide real benefits – helping speed up digital initiatives and supplementing your organizational expertise. I genuinely believe we do this on a pretty consistent basis. Nevertheless, consultants don’t fix the problems created by a reverse hierarchy of understanding; they are, at best, a band-aid. Not only is it too expensive to pay consultants to make your decisions on a continuing basis, it just doesn’t work very well. There are so many reasons why it doesn’t work well that I can attempt only a very partial enumeration: outside of a specific project, your consultant’s KPIs are almost never well aligned with your KPIs (we’re measured by how much stuff we sell); it’s difficult to integrate consultants into a chain of command, and often damaging if you try too hard to do so; consultants can become a crutch for weaker managers; and consultants rarely understand your business well enough to make detailed tactical decisions.

Don’t get me wrong. Building talent internally takes time and there aren’t many traditional enterprises where I wouldn’t honestly recommend the thoughtful use of consulting services to help drive digital transformation. Just don’t lose sight of the fact that most of the work is always going to be yours.

 

That last sentence probably rings true across every kind of problem! And while digital transformation is legitimately hard and some of the challenges digital presents ARE different, it’s good to keep in mind that in many respects it is just another problem.

I’ve never believed in one “right” organization, and when it comes to digital transformation there are strong arguments both for and against incubation. I think a decision about incubation ultimately comes down to whether digital needs protection or just expertise. If the former, incubation is probably necessary. If the latter, it may not be.

Similarly, we’re all used to the idea that if we need new expertise in an organization we probably have to hire it. But digital introduces two twists. First, the best candidate to lead a digital transformation isn’t necessarily the best digital candidate. Second, real digital transformation doesn’t come just from having a leader or a digital organization. You should bake digital qualifications into hiring at almost every level of your organization. It’s the long game, but it will make a huge difference.

And when it comes to leveling the playing field against a reverse hierarchy of understanding, remember that analytics is your friend. Teaching the organization to use analytics doesn’t require you to be an analytics wizard. It mostly demands that you ask the right questions. Over and over.

Finally, and this really is no different in digital transformation than anywhere else, consulting is kind of like a cold medicine – it fixes symptoms but it doesn’t cure the disease. That doesn’t mean I don’t want my bottle of Nyquil handy when I have a cold! It just means I know I won’t wake up all better. The mere fact of a reverse hierarchy of understanding can make over-reliance on consulting a temptation. When you’re used to knowing better than everyone, it’s kind of scary when you don’t. Make sure your digital strategy includes some thought about how to use – and not abuse – your consulting partners (and no, don’t expect that to come from even the best consultants).

Keep these four lessons in mind, and you’re at least half-way to a real strategy for transformation.

Digital Transformation and the Reverse Hierarchy of Understanding

Why is it so hard for the traditional enterprise to do digital well? That’s the question that lurks at the heart of every digital transformation discussion. After all, there’s plenty of evidence that digital can be done well. No one looks at the myriad FinTech, social, and ecommerce companies that are born digital and says “Why can’t they do digital well?” When digital is in your DNA it seems perfectly manageable. Of course, mastering any complex and competitive field is going to be a challenge. But for companies born into digital, doing it well is just the age-old challenge of doing ANY business well. For most traditional enterprises, however, digital has been consistently hard.

So what is it that makes digital a particular challenge for the traditional enterprise?

That was the topic of my last conversational session at the Digital Analytics Hub this past week in Monterey (and if you didn’t go…well, sucks for you…great conference). And with a group that included analytics leaders in the traditional enterprise across almost every major industry and a couple of new tech and digital pure plays, we had the right people in the room to answer the question. What follows is, for the most part, a distillation of a discussion that was deep, probing, consistently engaging, and – believe it or not – pretty darn enlightening. Everything, in short, that a conversation is supposed to be but, like digital transformation itself, rarely succeeds in being.

There are some factors that make digital a particular challenge for everyone – from startup to omni-channel giant – and these aren’t peculiar to the large traditional enterprise.

Digital changes fast. The speed of change in digital greatly exceeds that in most other fields. It’s not that digital is entirely unique here. Digital isn’t the only discipline where, as one participant put it, organizations have to operate in chaos. But digital is at the upper-end of the curve when it comes to pace of change and that constant chaos means that organizations will have to work hard not just to get good at digital, but to stay good at digital.

The speed of change in digital is a contributing factor to and a consequence of the frictionless nature of digital competition and the resulting tendency toward natural monopoly. I recently wrote a detailed explanation of this phenomenon, beginning with the surprising tendency of digital verticals to tend toward monopoly. Why is it that many online verticals are dominated by a single company – even in places like retail that have traditionally resisted monopolization in the physical world? The answer seems to be that in a world with little or no friction, even small advantages can become decisive. The physical world, on the other hand, provides enough inherent friction that gas stations on opposite sides of the street can charge differently for an identical product and still survive.

This absence of friction means that every single digital property is competing against a set of competitors that is at least national in scope and sometimes global. Local markets and the protection they provide for a business to start, learn and grow are much harder to find and protect in the digital world.

That’s a big problem for businesses trying to learn to do digital well.

However, it’s not quite true that it’s an equal problem for every kind of company. In that article on digital monopoly, I argued for the importance of segmentation in combating the tendency toward frictionless monopoly. If you can find a small group of customers that you can serve better by customizing your digital efforts to their particular needs and interests, you may be able to carve out that protected niche that makes it possible to learn and grow.

Big enterprise – by its very (big) nature – loses that opportunity. Most big brands have to try to appeal to broad audience segments in digital. That means they often lack the opportunity to evolve organically in the digital world.

Still, the challenges posed by a frictionless, high-chaos environment are almost as daunting to a digital startup as they are to a traditional enterprise. The third big challenge – the demand in digital for customer centricity – is a little bit different.

Digital environments put a huge premium on the ability to understand who a customer is and provide them a personalized experience across multiple touches. It’s personalization that drives competitive advantage in digital, and the deeper and wider you can extend that personalization, the better. Almost every traditional enterprise is set up to silo each aspect of the customer journey. Call-Center owns one silo. Store another. Digital a third. That just doesn’t work very well.

Omni-channel enterprises not only have a harder challenge (more types of touches to handle and integrate), they are almost always set up in a fashion that makes it difficult to provide a consistent customer experience.

Customer-centricity, frictionless competition and rapidity of change are the high-level, big picture challenges that make digital hard for everyone and, in some respects, particularly hard for the large, traditional enterprise.

These top-level challenges result, inevitably, in a set of more tactical problems many of which are specific to the large traditional enterprise that wasn’t created specifically to address them. Looming large among these is the need to develop cross-functional teams (engineers, creative, analytics, etc.) that work together to drive continuous improvement over time. Rapidity of change, frictionless competition and the need for cross-silo customer-centricity make it impossible to compete using a traditional project mentality with large, one-time waterfall developments. That methodology simply doesn’t work.

The large, traditional enterprise is also plagued by conflicts between IT and Marketing, and by Brand departments that are extremely resistant to change and unwilling to submit to measurement discipline. This is all pretty familiar territory and material that I’ve explored before.

Adapting to an environment where IT and Marketing HAVE to work together is hard. A world where traditional budgeting doesn’t work requires fundamental change in organizational process. A system where continuous improvement is essential and where you can’t silo customer data, customer experience or customer thinking is simply foreign to most large enterprises.

This stuff is hard because big organizations are hard to change. To get the change you want, a burning platform may be essential. And, in fact, in our group the teams that had most successfully navigated large enterprise transformation came from places that had been massively disrupted.

No good leader wants to accept that. If you lead a large enterprise, you don’t want to have to wait till your company’s very existence is threatened to drive digital transformation. That sucks.

So the real trick is finding ways to drive change BEFORE massive disruption makes it a question of survival.

And here, a principle I’ve been thinking about – and discussed for the first time at the DA Hub – enjoyed considerable interest. I call it the reverse hierarchy of understanding.

Organizations work best when their management hierarchy generally matches their knowledge hierarchy. And believe it or not, my general experience is that that’s actually the case most of the time. We’re all used to specialized pieces of knowledge and specific expertise existing exclusively deep down in the organization. A financial planner may have deep knowledge of TM1 that the CFO lacks. But I’ve met a fair number of CFOs and a fair number of financial planners, and I can tell you there is usually a world (or perhaps two decades) of difference in their understanding of the business and its financial imperatives.

When that hierarchy doesn’t hold, it’s hard for a business to function effectively. When privates know more than sergeants, and sergeants know more than lieutenants and lieutenants know more than generals, the results aren’t pretty. Tactics and strategy get confused. The rank and file lose faith in their leaders. Leaders – and this may be even worse – tend to lose faith in themselves.

The thing about digital is that it does sometimes create a true reverse hierarchy of understanding in the large traditional enterprise. This doesn’t matter very much when digital is peripheral to the organization. Reverse hierarchies exist in all sorts of peripheral areas of the business and they don’t spell doom. But if digital becomes core to the organization, allowing a reverse hierarchy to persist is disastrous.

And here’s where digital transformation is incredibly tricky for the large traditional enterprise. You can’t invert the organization. Not only is it impossible, it’s stupid. Large traditional organizations can’t simply abandon what they are – which means that they have to figure out how to work with two separate knowledge hierarchies while they transform.

So the trick with digital transformation is building a digital knowledge hierarchy and finding ways to incorporate it in the existing management hierarchy of the business. It’s also where great leadership makes an enormous difference. Because most companies wait too long to begin that process – ultimately relying on a burning platform to drive the essential change. But while it’s hard to effect complete transformation without the pressure of massive disruption, it’s eminently possible to prepare for transformation by nurturing a digital knowledge hierarchy.

Think of it like FDR building out the U.S. military prior to WWII. He couldn’t fight the war, but he could prepare for it. We tend to define great leaders by what they do in crisis. But effecting change in crisis is relatively easy. The really great leaders have the vision to prepare for change before the onset of crisis.

So how can leadership address a reverse hierarchy of understanding in digital – especially since they are part of the problem? That’s the topic for my next post.

 

[A final thanks to all the great participants in my Digital Analytics Hub Conference session on this topic. You guys were brilliant and I hope this post does at least small justice to the conversation!]

Productivity is Our Business. And Business isn’t Good

A little while back there was a fascinating article on the lack of productivity growth in the U.S. in the past 4-5 years. I’ll try to summarize the key points below (and then tell you why I think they’re important) – but the full article is very much worth the read.

Productivity Growth

Let’s start with the facts. In the last year, the total number of hours worked in the U.S. rose by 1.9%. GDP growth in the last quarter exactly matched that rate – 1.9%. So we added hours and we got an exact match in output. That might sound okay, but it means that there was zero productivity growth. We didn’t get one whit more efficient at producing stuff. Nor is this just a short-term blip. In the last four years, we’ve recorded 0.4% annual growth in productivity. That’s not very good. Take a look at the chart above (from the New York Times article and originally from the Labor Department) – it looks bad. We’re in late ‘70s and early ‘80s territory. Those weren’t good years.
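To make the arithmetic explicit: labor productivity is just output divided by hours worked, so for small rates its growth is roughly output growth minus hours growth. Plugging in the figures above:

    \text{productivity growth} \approx \text{GDP growth} - \text{hours growth} \approx 1.9\% - 1.9\% = 0\%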

The Times article advances three theories about why productivity growth has been so tepid. They classify them as the “Depressing” theory, the “Neutral” theory and the “Happy” theory. Here’s a quick description of each.

Depressing Theory

The trend is real and will be sustained. Capex is down. The digital revolution is largely complete. Workers aren’t getting significantly more productive, and the people returning to the workforce post-recession are its least productive segment. On this view, we’re not getting richer anytime soon.

Neutral Theory

There’s a lot of imprecision in measuring productivity. With fundamental changes in the economy it may be that the imprecision is increasing – and we’re undercounting true productivity. As measurement professionals, we all know this one needs to be reckoned with.

Happy Theory

We’re in an “investment” period where companies are hiring and investing – resulting in a period of lower-productivity before that investment begins to show returns and productivity accelerates. Interestingly, this story played out in the late ‘90s when productivity slowed and then accelerated sharply in the 2000s.

 

Which theory is right? The Times article doesn’t really draw any firm conclusions – and that’s probably reasonable. When it comes to macro-economic trends, the answers are rarely simple and obvious. From my perspective, though, this lack of productivity growth is troubling. We live in a profession (analytics) that’s supposed to be the next great driver of productivity. Computers, internet, now analytics. We’re on the hook for the next great advance in productivity. From a macro-economic perspective, no one’s thinking about analytics. But out here in the field, analytics is THE thing companies are investing in to drive productivity.

And the bad news? We’re clearly not delivering.

Now I don’t take it as all bad news. There’s a pretty good chance that the Happy theory is dead-on. Analytics is a difficult transformation and one that many companies struggle with. And while they’re struggling with big data systems and advanced analytics, you have a lot of money getting poured into rather unproductive holes. Word processing was almost certainly more immediately productive than analytics (anybody out there remember Wang?) – but every sea change in how we do things is going to take time, effort and money. Analytics takes more than most.

Here’s the flip side, though. It’s easy to see how all that investment in analytics might turn out to be as unproductive as building nuclear missiles and parking them in the ground. If they were ever used, those missiles would produce a pretty big bang for the buck. But in the case of ICBMs, we’re all happiest when they don’t get used. That’s not what we hope for from analytics.

Of course, I’ve been doing this extended series on the challenges of digital transformation – most of which revolves around why we aren’t more productive with analytics. Those challenges are not, in my opinion, the exception. They’re the rule. The vast majority of enterprises aren’t doing analytics well and aren’t boosting their productivity with it. That doesn’t mean I don’t believe in the power of analytics to drive real productivity. I do. But before those productivity gains start to appear, we have to do better.

Doing better isn’t about one single thing. Heaven knows it’s not just about having the newest technologies. We have those aplenty. It’s about finding highly repeatable methods in analytics so that we can drive improvement without rock stars. It’s very much about re-thinking the way the organization is set up so that analytics is embedded and operationalized. It’s even more about finding ways to re-tool our thinking so that agile concepts and controlled experimentation are everywhere.

Most companies still need a blueprint for how to turn analytics into increased productivity. That’s what this series on digital transformation is all about.

If you haven’t yet had the opportunity to spin through my 20-minute presentation on transforming the organization with analytics – check it out.

After all, productivity is our business.

Building Analytics Culture – One Decision at a Time

In my last post, I argued that much of what passes for “building culture” in corporate America is worthless. It’s all about talk. And whether that talk is about diversity, ethics or analytics, it’s equally arid. Because you don’t build culture by talking. You build culture through actions. By doing things right (or wrong, if that’s the kind of culture you want). Not only are words not effective in building culture, they can be positively toxic. When words and actions don’t align, the dishonesty casts other – possibly more meaningful – words into disrepute. Think about which is worse – a culture where bribery is simply the accepted and normal way of getting things done (and is cheerfully acknowledged), or one where bribery is ubiquitous but cloaked behind constant protestations of disinterest and honesty? If you’re not sure about your answer, take it down to a personal level and ask yourself the same question. Do we not like an honest villain better than a hypocrite? If hypocrisy is the compliment vice pays to virtue, it is a particularly nasty form of flattery.

What this means is that you can’t build an analytics culture by telling people to be data driven. You can’t build an analytics culture by touting the virtues of analysis. You can’t even build an analytics culture by hiring analysts. You build an analytics culture by making good (data-driven) decisions.

That’s the only way.

But how do you get an organization to make data-driven decisions? That’s the art of building culture. And in that last post, I laid out seven (a baker’s half-dozen?) tactics for building good decision-making habits: analytic reporting, analytics briefing sessions, hiring a C-Suite analytics advisor, creating measurement standards, building a rich meta-data system for campaigns and content, creating a rapid VoC capability and embracing a continuous improvement methodology like SPEED.

These aren’t just random parts of making analytic decisions. They are tactics that seem to me particularly effective in driving good habits in the organization and building the right kind of culture. But seven tactics don’t nearly exhaust my list. Here’s another set of techniques that are equally important in helping drive good decision-making in the organization (my original list wasn’t in any particular order, so it’s not as if the previous list had all the important stuff):

Yearly Agency Performance Measurement and Reviews

What it is: Having an independent annual analysis of your agency’s performance. This should include review of goals and metrics, consideration of the appropriateness of KPIs and analysis of variation in campaign performance along three dimensions (inside the campaign by element, over time, and across campaigns). This must not be done by the agency itself (duh!) or by the owners of the relationship.

Why it builds culture: Most agencies work by building strong personal relationships. There are times and ways that this can work in your favor, but from a cultural perspective it both limits and discourages analytic thinking. I see many enterprises where the agency is so strongly entrenched you literally cannot criticize them. Not only does the resulting marketing nearly always suck, but this drains the life out of an analytics culture. This is one of many ways in which building an analytic culture can conflict with other goals, but here I definitely believe analytics should win. You don’t need a too cozy relationship with your agency. You do need objective measurement of their performance.

 

Analytics Annotation / Collaboration Tool like Insight Rocket

What it is: A tool that provides a method for rich data annotation and the creation and distribution of analytic stories across the analytics team and into the organization. In Analytic Reporting, I argued for a focus on democratizing knowledge not data. Tools like Insight Rocket are a part of that strategy, since they provide a way to create and rapidly disseminate a layer of meaning on top of powerful data exploration tools like Tableau.

Why it builds culture: There aren’t that many places where technology makes much difference to culture, but there are a few. As some of my other suggestions make clear, you get better analytics culture the more you drive analytics across and into the organization (analytic reporting, C-Suite advisor, SPEED, etc.). Tools like Insight Rocket have three virtues: they help disseminate analytics thinking, not just data; they boost analytics collaboration, making for better analytic teams; and they provide a repository of analytics, which increases long-term leverage in the enterprise. Oh, and here’s a fourth advantage: they force analysts to tell stories – meaning they have to engage with the business. That makes this piece of technology a really nice complement to my suggestion about a regular cadence of analytics briefings, and a rare instance of technology deepening culture.

 

In-sourcing

What it is: Building analytics expertise internally instead of hiring it out and, most especially, instead of off-shoring it.

Why it builds culture: I’d be the last person to tell you that consulting shouldn’t have a role in the large enterprise. I’ve been a consultant for most of my working life. But we routinely advise our clients to change the way they think about consulting – to use it not as a replacement for an internal capability but as a bootstrap and supplement to that capability. If analytics is core to digital (and it is) and if digital is core to your business (which it probably is), then you need analytics to be part of your internal capability. Having strong, capable, influential on-shore employees who are analysts is absolutely necessary to analytics culture. I’ll add that while off-shoring, too, has a role, it’s a far more effective culture killer than normal consulting. Off-shoring creates a sharp divide between the analyst and the business that is fatal to good performance and good culture on EITHER side.

 

Learning-based Testing Plan

What it is: Testing plans that include significant focus on developing best design practices and resolving political issues instead of on micro-optimizations of the funnel.

Why it works: Testing is a way to make decisions. But as long as its primary use is to decide whether to show image A or image B, or a button in this color or that color, it will never be used properly. To illustrate learning-based testing, I’ve used the example of video integration – testing different methods of on-page video integration, different lengths, different content types and different placements against each key segment and use-case to determine UI parameters for ALL future videos. When you test this way, you resolve hundreds of future questions and save endless future debate about what to do with this or that video. That’s learning-based testing. It’s also about picking key places in the organization where political battles determine design – things like home-page real estate and the amount of advertising load on a page – and resolving them with testing; that’s learning-based testing, too. Learning-based testing builds culture in two ways. First, in and of itself, it drives analytic decision-making. Almost as important, it demonstrates the proper role of experimentation and should help set the table for decision-makers to ask for more interesting tests.
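To make the video example a bit more concrete, here’s a minimal sketch of what a learning-based test plan looks like as a factorial design: enumerate the design factors you want durable answers about and cross them with your key segments. The factor names and levels below are purely illustrative assumptions, not anything from a real plan.

    # A minimal sketch of a learning-based test plan: a full factorial design
    # across video-integration factors and key segments, rather than a one-off
    # A/B variation. Factor names and levels are hypothetical.
    from itertools import product

    factors = {
        "placement":    ["above_fold", "below_fold", "in_line"],
        "length":       ["30_seconds", "2_minutes", "5_minutes"],
        "content_type": ["how_to", "testimonial", "product_demo"],
        "segment":      ["new_visitor", "returning_customer", "high_value"],
    }

    # Each combination is a test cell; the results become reusable UI rules
    # for ALL future videos instead of a single winner for a single page.
    design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    print(len(design), "test cells")   # 3 * 3 * 3 * 3 = 81
    print(design[0])

In practice you’d prune or prioritize cells based on available traffic, but the point stands: the output is a set of standing answers, not a one-time optimization.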

 

Control Groups

What it is: Use of control groups to measure effectiveness whenever new programs (operational or marketing) are implemented. Control groups use small population subsets, chosen randomly from a target population, who are given either no experience or a neutral (existing) experience instead. Nearly all tests feature a baseline control group as part of the test, but the use of control groups transcends A/B testing tools. Control groups are common in traditional direct-response marketing and can be used in a wide variety of online and offline contexts (most especially – as I recently saw Elea Feit of Drexel hammer home at the DAA Symposium – as a much more effective approach to attribution).

Why it works: One of the real barriers to building culture is a classic problem in education. When you first teach students something, they almost invariably use it poorly. That can sour others on the value of the knowledge itself. When people in an organization first start using analytics, they are, quite inevitably, going to fall into the correlation trap. Correlation is not causation. But in many cases, it sure looks like it is and this leads to many, many bad decisions. How to prevent the most common error in analytics? Control groups. Control groups build culture because they get decision-makers thinking the right way about measurement and because they protect the organization from mistakes that will otherwise sour the culture on analytics.
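As a concrete illustration, here’s a minimal sketch (with made-up numbers) of the basic control-group calculation: the incremental lift of a treated population over a randomly held-out control, with a simple significance check. It’s the control group, not the raw conversion trend, that tells you the program caused the change.

    # Minimal sketch: measure a new program against a randomly held-out control
    # group rather than a before/after correlation. All numbers are hypothetical.
    from math import sqrt
    from statistics import NormalDist

    def lift_vs_control(conv_treated, n_treated, conv_control, n_control):
        """Incremental conversion lift plus a two-proportion z-test p-value."""
        p_t = conv_treated / n_treated
        p_c = conv_control / n_control
        pooled = (conv_treated + conv_control) / (n_treated + n_control)
        se = sqrt(pooled * (1 - pooled) * (1 / n_treated + 1 / n_control))
        z = (p_t - p_c) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return p_t - p_c, p_value

    lift, p = lift_vs_control(conv_treated=540, n_treated=10_000,
                              conv_control=480, n_control=10_000)
    print(f"incremental lift: {lift:.2%}, p-value: {p:.3f}")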

 

Unified Success Framework

What it is: A standardized, pre-determined framework for content and campaign success measurement that includes definition of campaign types, description of key metrics for those types, and methods of comparing like campaigns on an apples-to-apples basis.

Why it works: You may not be able to make the horse drink, but leading it to water is a good start. A unified success framework puts rigor around success measurement – a critical part of building good analytics culture. On the producer side, it forces the analytics team to make real decisions about what matters and, one hopes, pushes them to prove that proxy measures (such as engagement) are real. On the consumer side, it prevents that most insidious destroyer of analytics culture, the post hoc success analysis. If you can pick your success after the game is over, you’ll always win.
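For what it’s worth, here’s a minimal sketch of what such a framework can look like in practice: campaign types, their pre-declared KPIs, and the comparison basis, all fixed before launch so success can’t be redefined after the fact. The types and metrics below are illustrative assumptions, not a standard.

    # Hypothetical unified success framework: success definitions are declared
    # per campaign type BEFORE launch.
    SUCCESS_FRAMEWORK = {
        "acquisition": {
            "primary_kpi": "cost_per_new_customer",
            "secondary_kpis": ["new_visitor_conversion_rate"],
            "compare_against": "trailing_12_months_of_acquisition_campaigns",
        },
        "retention": {
            "primary_kpi": "repeat_purchase_rate_90d",
            "secondary_kpis": ["unsubscribe_rate"],
            "compare_against": "matched_holdout_group",
        },
        "brand": {
            "primary_kpi": "aided_awareness_lift",
            "secondary_kpis": ["branded_search_volume"],
            "compare_against": "pre_period_baseline",
        },
    }

    def success_definition(campaign_type: str) -> dict:
        """Look up the pre-declared success definition for a campaign type."""
        return SUCCESS_FRAMEWORK[campaign_type]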

 

The Enterprise VoC Dashboard

What it is: An enterprise-wide state-of-the-customer dashboard that provides a snapshot and trended look at how customer attitudes are evolving. It should include built in segmentation so that attitudinal views are ALWAYS shown sliced by key customer types with additional segmentation possible.

Why it works: There are so many good things going on here that it’s hard to enumerate them all. First, this type of dashboard is one of the best ways to instill customer-first thinking in the organization. You can’t think customer-first until you know what the customer thinks. Second, this type of dashboard enforces a segmented view of the world. Segmentation is fundamental to critical thinking about digital problems, and this sets the table for better questions and better answers in the organization. Third, opinion data is easier to absorb and use than behavioral data, making this type of dashboard particularly valuable for encouraging decision-makers to use analytics.

 

Two-Tiered Segmentation

What it is: A method that creates two-levels of segmentation in the digital channel. The first level is the traditional “who” someone is – whether in terms of persona or business relationship or key demographics. The second level captures “what” they are trying to accomplish. Each customer touch-point can be described in this type of segmentation as the intersection of who a visitor is and what their visit was for.

Why it works: Much like the VoC Dashboard, Two-Tiered Segmentation makes for dramatically better clarity around digital channel decision-making and evaluation of success. Questions like “Is our Website successful?” get morphed into the much more tractable and analyzable question “Is our Website successful for this audience trying to do this task?” That’s a much better question, and a big part of building analytics culture is getting people to ask better questions. This also happens to be the main topic of my book “Measuring the Digital World,” in which you can get a full description of both the power and the methods behind Two-Tiered Segmentation.
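Here’s a minimal sketch of the idea in code: classify every visit by who the visitor is and what they came to do, then judge success per cell rather than for the site as a whole. The column names and values are hypothetical.

    # Two-tiered segmentation sketch: success rate for each (who, what) cell.
    import pandas as pd

    visits = pd.DataFrame({
        "visitor_type": ["prospect", "customer", "customer", "prospect", "customer"],
        "visit_intent": ["research", "support",  "purchase", "purchase", "support"],
        "succeeded":    [False,      True,       True,       False,      True],
    })

    # "Is our site successful for this audience trying to do this task?"
    success_by_cell = pd.crosstab(
        visits["visitor_type"], visits["visit_intent"],
        values=visits["succeeded"], aggfunc="mean",
    )
    print(success_by_cell)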

 

I have more, but I’m going to roll the rest into my next post on building an agile organization since they are all deeply related to the integration of capabilities in the organization. Still, that’s fifteen different tactics for building culture. None of which include mission statements, organizational alignment or C-Level support (okay, Walking the Walk is kind of that but not exactly and I didn’t include it in the fifteen) and none of which will take place in corporate retreats or all-hands conferences. That’s a good thing and makes me believe they might actually work.

Ask yourself this: is it possible to imagine an organization that does even half these things and doesn’t have a great analytics culture? I don’t think it is. Because culture just is the sum of the way your organization works and these are powerful drivers of good analytic thinking. You can imagine an organization that does these things and isn’t friendly, collaborative, responsible, flat, diverse, caring or even innovative. There are all kinds of culture, and good decision-making isn’t the only aspect of culture to care about*. But if you do these things, you will have an organization that makes consistently good decisions.

*Incidentally, if you want to build culture in any of these other ways, you have to think about similar approaches. Astronomers have a clever technique for seeing very faint objects called averted vision. The idea is that you look just to the side of the object if you want to get the most light-gathering power from your eyes. It’s the same with culture. You can’t tackle it head-on by talking about it. You have to build it just a little from the side!

Continuous Improvement

Is it a Method or a Platitude?

What does it take to be good at digital? The ability to make good decisions, of course. If you run a pro football team and you make consistently good decisions about players and about coaches, and they, in turn, make consistently good decisions about preparation and plays, you’ll be successful. Most organizations aren’t setup to make good decisions in digital. They don’t have the right information to drive strategic decisions and they often lack the right processes to make good tactical decisions. I’ve highlighted four capabilities that must be knitted together to drive consistently good decisions in the digital realm: comprehensive customer journey mapping, analytics support at every level of the organization, aggressive controlled experimentation targeted to decision-support, and constant voice of customer research. For most organizations, none of these capabilities are well-baked and it’s rare that even a very good organization is excellent at more than two of these capabilities.

The Essentials for Digital Transformation

There’s a fifth spoke of this wheel, however, that isn’t so much a capability as an approach. That’s not as different from the others as it might seem. After all, almost every enterprise I see has a digital analytics department, a VoC capability, a customer journey map, and an A/B testing team. In previous posts, I’ve highlighted how those capabilities are mis-used, mis-deployed or simply misunderstood. Which makes for a pretty big miss. So what’s really needed is a better approach underlying all of these capabilities. When I talk about continuous improvement, it’s not a capability at all. There’s no there, there. It’s just an approach. Yet it’s an approach that, taken seriously, can help weld these other four capabilities into a coherent whole.

The doctrine of continuous improvement is not new – in digital or elsewhere. It has a long and proven track record and it’s one of the few industry best practices with which I am in whole-hearted agreement. Too often, however, continuous improvement is treated as an empty platitude, not a method. It’s interpreted as a squishy injunction that we should always try to get better. Rah! Rah!

No.

Taken this way, it’s as contentless as interpreting evolutionary theory as survival of the fittest. Those most likely to survive are…those most likely to survive. It is the mechanism of natural selection coupled with genetic variation and mutation that gives content to evolutionary doctrine. In other words, without a process for deciding what’s fittest and a method of transmitting that fitness across generations, evolutionary theory would be a contentless tautology. The idea of continuous improvement, too, needs a method to be interesting. Everybody wants to get better all the time. There has to be a real process to make it interesting.

There are such processes, of course. Techniques like Six Sigma famously elaborate a specific method to drive continuous improvement in manufacturing processes. Unfortunately, Six Sigma isn’t directly transferable to digital analytics. We lack the critical optimization variable (defects) against which these methods work. Nor does it work to simply substitute a variable like conversion rate for defects because we lack the controlled environment necessary to believe that every customer should convert.

If Six Sigma doesn’t translate directly into digital analytics, that doesn’t mean we can’t learn from it and cadge some good ideas, though. Here are the core ideas that drive continuous improvement in digital, many of which are rooted in formal continuous improvement methodologies:

  1. It’s much easier to measure a single, specific change than a huge number of simultaneous changes. A website or mobile app is a complex set of interconnecting pieces. If you change your home page, for example, you change the dynamics of every use-case on the site. This may benefit some users and disadvantage others; it may improve one page’s performance and harm another’s. When you change an entire website at once, it’s incredibly difficult to isolate which elements improved and which didn’t. Only the holistic performance of the system can be measured on a before and after basis – and even that can be challenging if new functionality has been introduced. The more discrete and isolated a change, the easier it is to measure its true impact on the system.
  2. Where changes are specific and local, micro-conversion analytics can generally be used to assess improvement. Where changes are numerous or the impact non-local, then a controlled environment is necessary to measure improvement. A true controlled environment in digital is generally impossible but can be effectively replicated via controlled experimentation (such as A/B testing or hold-outs).
  3. Continuous improvement can be driven on a segmented or site-wide basis. Improvements that are site-wide are typically focused on reducing friction. Segmentation improvements are focused on optimizing the conversation with specific populations. Both types of improvement cycles must be addressed in any comprehensive program.
  4. Digital performance is driven by two different systems (acquisition of traffic and content performance). Despite the fact that these two systems function independently, it’s impossible to measure performance of either without measuring their interdependencies. Content performance is ALWAYS relative to the mix of audience created by the acquisition systems. This dependency is even tighter in closed loop systems like Search Engine Optimization – where the content of the page heavily determines the nature of the traffic sent AND the performance of that traffic once sourced (though the two can function quite differently with the best SEO optimized page being a very poor content performer even though it’s sourcing its own traffic).
  5. Marketing performance is a function of four things: the type of audience sourced, the use-case of the audience sourced, the pre-qualification of the audience sourced and the target content to which the audience is sourced. Continuous improvement must target all four factors to be effective.
  6. Content performance is relative to function, audience and use-case. Some content changes will be directly negative or positive (friction causing or reducing), but most will shift the distribution of behaviors. Because most impacts are shifts in the distribution of use-cases or journeys, it’s essential that the relative value of alternative paths be understood when applying continuous improvement.

These are core ideas, not a formal process. In my next post, I’ll take a shot at translating them into a formal process for digital improvement. I’m not really confident how tightly I can describe that process, but I am confident that it will capture something rather different than any current approach to digital analytics.

 

With Thanksgiving upon us, now is the time to think about the perfect stocking stuffer for the digital analyst you like best. Pre-order “Measuring the Digital World” now!

Analytics for a (Good) Purpose

I imagine that anyone reading my posts can tell that I love doing analytics. I mean real, hands-on, get-your-cuticles-data-dirty analytics. But if I have a complaint about the analytics part of what I do, it’s that so often it’s for purposes that just aren’t gripping. There’s nothing wrong with selling more insurance, getting people to view higher-value ads, or cutting a few seconds off the time it takes to complete a process. Making commerce better is a perfectly good thing to do. Commerce matters to all of us. But if there’s nothing wrong with improving commerce, neither is it food for the soul. I’ve been re-reading Tobias Wolff’s wonderful novel “Old School”, and in it one of the professors says something like this: “Essays? We could live without essays. The world would be a little poorer – like a world without chess – but stories…stories we can’t live without.”

That’s why I’ve always loved the rare occasions when we get to turn an analytics eye on a problem that means something more. Part of my team at EY got that chance a little more than a week back when we hosted an “Analytics Hackathon” for the Earthwatch Institute.

You can check out Earthwatch at Earthwatch.org – it’s a very cool organization. I love everything about what they do and the way they approach it. I love the science part, which is fascinating. The nature part, which is just something I happen to enjoy – my daughters will attest that I am “crazy hiker guy.” And I love the approach, which assumes we are at our best when we do good not from ideology, which is often cold and artificial, but from passion. Even more, that worthwhile commitment comes from passion tempered by knowledge. We all realize that knowledge without passion achieves little. But passion without knowledge more often does harm than good in our complex society. Building that rare combination of passion for and knowledge of the natural world strikes me as what Earthwatch is all about, and I can’t think of a more rewarding mission.

So Earthwatch provided us six years of data on their expeditioners (folks who volunteer to take field trips to support their scientific endeavors), their donors, and the intersection of the two, and let us have at it for a day. They asked three big questions: what can you tell us about donors and donor patterns, how do donors and expeditioners intersect, and are there things we should know to improve the marketing of expeditions to attract volunteers?

Great questions all, but a lot to ask of a five-hour day.

We pre-loaded their data into Tableau, and after a brief context-setting presentation from the Earthwatch folks, we split up into groups with each group drawing a single question. Each group produced a full-on dashboard and spent some time answering the questions.

One of the great challenges for many non-profits is the split between what you do and those who pay. In the traditional enterprise, good products and services make your customers happy and willing to pay. At Earthwatch, as with many a non-profit, the mission doesn’t directly serve the donors (those who pay). So the challenge (and the opportunity) is how to connect donors to the mission.

The mechanism for doing that at Earthwatch is the expedition. Hands-on participation in an Earthwatch expedition is by far the best spur to giving they have. So one of our groups focused specifically on the relationship between expeditions and giving – and what they found was fascinating and unexpected. But it’s also fair to ask what other factors might drive giving – are there demographics, life stages, or proclivities that can be used to direct social advertising, inform partnerships or target messaging?

Unfortunately, like many an enterprise (and not just non-profits), Earthwatch hasn’t done the greatest job building out their knowledge of their customers – in this case their donors. With only age, gender and zip code to work with (and that data obviously spotty, with null values dominating each demographic category), the options for look-alike or advanced targeting are fairly minimal.

However, even with such thin gruel, there are findings to be had and analysis to be done. If you graph Earthwatch’s expeditioners by age, you get a big horseshoe-like graph. Lots of teenagers. Lots of seniors. Not much in between. That’s no surprise and probably not changeable. Graph donors, and the left-hand side of the horseshoe (the teenagers) goes away. That’s no surprise either. You can’t squeeze much water from a rock. What is surprising is that the middle part of the graph doesn’t fill in. Aren’t the parents of those teens natural donors? Your children’s connection to an activity ought to be a powerful motivator for giving. I think there’s potentially a missed strategic opportunity here.

There were two other points that emerged from simple graphs of donations by age and donation amount by age. Earthwatch gets lots of donations from seniors. But there’s a big spike right at sixty. And there’s a pretty significant spike in donation amount right around forty. Think about that. Forty and sixty are big inflection points. They are times when almost all of us step outside the lines for at least a short while and think about the shape and nature of our life. That’s a good time to think about an Earthwatch expedition or a donation, right? This is a case where there’s no need to target a broad demographic. The combination of some key interest variables and a big birthday might be enough. It’s at least worth testing. Targeted marketers know the importance of magic moments, and the finer-grained you can make them, the more efficient you can be. For a non-profit like Earthwatch with tiny marketing dollars, the tighter you can draw the boundaries around a magic-moment, the more likely you are to be able to use it effectively.

Thinking about that donor curve also makes plain how important both patience and a long-term strategy are to a non-profit like Earthwatch (and maybe to a lot of for-profits as well). Earthwatch has been around for a long time. That means some of their early expeditioners are retirees now. If you can keep track of people for twenty, thirty or forty years, you have an opportunity to re-ignite those connections. When they have teenagers themselves, they are the right audience to target for expeditions and donations.

This long-term view seems hard. But it’s exactly what great schools and universities do. They know their 25-year-old graduates aren’t giving them money. But if they can create mechanisms to stay in touch until those graduates hit forty, fifty and sixty, that is worth a lot. Social media is, of course, a great way to do this. And facilitating social media connections with volunteers ought to be a long-term strategic goal for any non-profit that engages with young people.

And what about all those folks who took expeditions back in the 80’s and 90’s? Track them down on LinkedIn and Facebook – that’s what interns are for – and send them something to get them back in the fold!

In my recent posts, I've been arguing that analytics is under-used in strategy. Mostly, this type of analytics isn't advanced modelling or big data stuff. It's macroeconomics, not microeconomics. Just looking at the shape of the donor and expeditioner curves can help inform strategic thinking.

From a more tactical standpoint, we also looked at the relationship between their new membership program and repeat giving. Earthwatch has bounced back and forth a bit on membership, but they are currently focused on it. We found that members tended to be smaller donors (their biggest donors weren't always members). More interesting, however, was the impact of membership on donation pattern and stability. We tracked donors who gave in '14, before the membership program, and then became members in '15. Did they give less or more? We didn't have the time or the tools to do this analysis properly, but it looked as if membership, on average, tended to slightly depress average donation but increase frequency of giving, resulting in a net positive. As I said, we didn't have time to really prove this, but analytically there are a couple of key points here. If you're a non-profit trying to assess the impact of something like membership, you need to break the problem down into analyzable segments. That means creating cohorts of previous donors and tracking their behavior (including whether their behavior tends to improve or deteriorate over time), tracking the impact on new donors and efforts, and, in most cases, using hold-outs and control groups to make sure you're not fooling yourself about the numbers.
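For what it's worth, here's a minimal sketch of that kind of cohort cut. Everything in it is assumed for illustration: a donations table with donor_id, donation_date and amount, plus a separate list of donors who joined the membership program in 2015. It isn't the analysis we ran, just the shape of it:

```python
import pandas as pd

# Hypothetical extracts; file and column names are assumptions for illustration.
donations = pd.read_csv("donations.csv", parse_dates=["donation_date"])  # donor_id, donation_date, amount
members_2015 = set(pd.read_csv("members_2015.csv")["donor_id"])

donations["year"] = donations["donation_date"].dt.year
gave_2014 = set(donations.loc[donations["year"] == 2014, "donor_id"])

# Cohorts: 2014 donors who became members in 2015 vs. 2014 donors who never joined.
cohorts = {
    "became_members": gave_2014 & members_2015,
    "did_not_join": gave_2014 - members_2015,
}

rows = []
for name, ids in cohorts.items():
    for year in (2014, 2015):
        d = donations[donations["donor_id"].isin(ids) & (donations["year"] == year)]
        n_donors = d["donor_id"].nunique()
        rows.append({
            "cohort": name,
            "year": year,
            "donors": n_donors,
            "gifts_per_donor": round(len(d) / max(n_donors, 1), 2),
            "avg_gift": round(d["amount"].mean(), 2) if len(d) else None,
        })

# A real version would add hold-outs and control groups rather than this rough comparison.
print(pd.DataFrame(rows))
```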

Going back to the shapes of curves, the team that looked into the relationship between giving and expeditions found something truly interesting. They linked the two tables (donors/expeditioners) to isolate just the population that had gone on an expedition and donated money. Then they created a calculated variable that tracked the difference between the donation date and the expedition date and laid it out on a chart (ain’t Tableau wonderful).
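That calculated variable is easy to reproduce outside Tableau too. A sketch, assuming hypothetical donations and expeditions extracts that share a person_id (the column names are mine):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extracts; person_id, donation_date and expedition_date are assumed names.
donations = pd.read_csv("donations.csv", parse_dates=["donation_date"])
expeditions = pd.read_csv("expeditions.csv", parse_dates=["expedition_date"])

# Isolate people who both traveled and gave, then compute days from expedition to donation.
linked = donations.merge(expeditions, on="person_id", how="inner")
linked["days_from_expedition"] = (linked["donation_date"] - linked["expedition_date"]).dt.days

# Negative values are donations made before the trip.
ax = linked["days_from_expedition"].plot.hist(bins=60)
ax.axvline(0, color="black")
ax.set_xlabel("Days between expedition and donation (negative = gave before traveling)")
plt.show()
```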

What they found was kind of a shock. I would have expected a curve kind of like a camel's hump after the expeditions. Not much giving ahead of time, a short latency period after the expedition, then a sharp hump followed by a quick decline and a long slow descent as the halo from the trip gradually dispersed. Much of that is exactly what they found. There isn't much of a latency period, but there is a sharp hump followed by the quick decline and slow descent. The shocker was on the other side of the curve. It turns out that lots of expeditioners (not the teens but the adults) are quite likely to give BEFORE they travel. The team tackling this called it a "Packing Boost" (this is one of those things that makes me proud – not only did they find something interesting but they did the extra work to attach a useful business name to the phenomenon – that's good consulting). The pre-trip donation amounts were quite a bit smaller on average, but the number of donations was almost symmetrical on either side of the trip.

I would never have expected that.

Apparently, when people are getting ready for an expedition they are also in the mood to make a donation. I can see that, but not only was it a surprise to me, it wasn’t received wisdom at Earthwatch either. Their donation solicitations are not at all focused on the pre-trip period.

That’s potentially a huge win and an easily testable addition to their solicitation marketing program.

The third team looked at the behavior of expeditioners. Their initial analysis focused on when people book an expedition versus the type of expedition. It turns out that there are some pretty distinct types of trip. Expeditions to Africa are usually booked a long time in advance. Expeditions in the US and places like Costa Rica are more typically booked 2-3 months in advance. There are seasonal impacts as well, with most expeditions getting booked in the spring (to take place over summer).
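A minimal sketch of that lead-time cut, assuming a hypothetical bookings table with booking_date, departure_date and region columns:

```python
import pandas as pd

# Hypothetical extract; file and column names are assumptions for illustration.
bookings = pd.read_csv("bookings.csv", parse_dates=["booking_date", "departure_date"])
bookings["lead_days"] = (bookings["departure_date"] - bookings["booking_date"]).dt.days

# Typical booking horizon by destination, and the seasonal pattern of when bookings land.
print(bookings.groupby("region")["lead_days"].median().sort_values(ascending=False))
print(bookings["booking_date"].dt.month.value_counts().sort_index())
```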

Actionable? You bet it is. If you’re programming the hero section of the website (which happens to have a rotating set of expeditions), knowing the time-horizons for each type of trip can help you get your web marketing right. There’s also a planning element to this. If your Africa expedition isn’t largely staffed six months out, you’re in trouble. But that trip to Costa Rica still has plenty of runway.

Finally, that team looked at the impact of discounts on cancellation behavior and which expeditions were most often cancelled (important from a planning perspective). They, too, ran out of time and had some tool limitations, but initial analysis seems to suggest that people are less likely to cancel trips when they've gotten a discount. Even more suggestive, it didn't look like the amount of the discount was hugely significant. This might indicate that some discounting is economically beneficial – even if it drives no initial lift. It's also possible that it's no more than an artifact of self-selection, since the discounts may be offered to customer segments that are inherently less likely to cancel (previous expeditioners, for example). It's an unexpected and potentially important finding but, like any exploratory finding, it needs testing and controls to see if it's real.
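A first descriptive cut at that question might look like the sketch below, assuming a hypothetical bookings table with a discount_amount and a 0/1 cancelled flag. As the caveat above says, without randomized discounts this is suggestive, not proof:

```python
import pandas as pd

# Hypothetical extract; column names and discount bands are assumptions for illustration.
bookings = pd.read_csv("bookings.csv")  # discount_amount, cancelled (0/1)
bookings["discount_band"] = pd.cut(
    bookings["discount_amount"],
    bins=[-0.01, 0, 100, 500, float("inf")],
    labels=["none", "small", "medium", "large"],
)

# Cancellation rate by discount band (descriptive only; self-selection can easily explain a gap).
print(bookings.groupby("discount_band", observed=False)["cancelled"].mean().round(3))
```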

 

I’m pretty sure our five hours of time won’t change the world. Still, we had a lot of fun doing work we genuinely enjoy for an organization that truly matters. And there’s a chance we helped out a little. That’s good enough for me.

Are there some big takeaways about analytics from our one-day Hackathon? Most of them are things we all should know.

Earthwatch helped make the process more productive by coming to the table with three real and fairly concrete problems. We don't always get that much even from clients who are investing a lot of money. Knowing the questions you want to answer is the single most important step in any analysis.

Like a lot of organizations, Earthwatch hasn’t invested as much in data collection and data quality as is ideal. Limitations on the data place real boundaries on what you can do – not only with analysis but with the fruits of that analysis in targeting and personalization.

Being open to the unexpected is critical (and sometimes that's easier for an outside consultant without a lot of preconceptions about the business). The team that started by focusing on the impact on donations after taking an expedition ended up talking much more about the impact on donations of planning for an expedition. It wasn't that their initial hypothesis was wrong. People do donate after going on an expedition. But they had the imagination and sense to see that a more interesting hypothesis emerged from the data.

Tableau is a great tool for visualization and data exploration, but it can’t do everything. Problems like the cohort analysis of membership or the impact of cancellation really required statistical analysis tools with more horsepower and more data manipulation capabilities. Still, the ability to quickly explore a data set across many dimensions is wonderful and the utility of that ease in the right hands is hard to overestimate.

Finally, the biggest part of any analysis is the imagination to map the data to the business problem or opportunity. Strategic insights aren’t usually driven by fancy analysis. They are more often sparked by simple views and cuts of the data (line graphs or bar charts) that make obvious some fundamental fact about the business. Sometimes data can spark new insights; sometimes it’s just a confirmation (or refutation) of strategic thoughts or business intuitions that are already on the table. Either way, it makes for a better strategy and more confident decisions.

 

Finally, one last plug for Earthwatch. What they do is important and, often, very cool (check out that Barrier Reef diving expedition). Like our Hackathon, there’s nothing wrong and everything right with having fun doing something worthwhile. So even if you’re not coming up on forty or sixty, take a look!

Controlled Experimentation and Decision-Making

The key to effective digital transformation isn’t analytics, testing, customer journeys, or Voice of Customer. It’s how you blend these elements together in a fundamentally different kind of organization and process. In the DAA Webinar (link coming) I did this past week on Digital Transformation, I used this graphic to drive home that point:


I’ve already highlighted experience engineering and integrated analytics in this little series, and the truth is I wrote a post on constant customer research too. If you haven’t read it, don’t feel bad. Nobody has. I liked it so much I submitted it to the local PR machine to be published and it’s still grinding through that process. I was hoping to get that relatively quickly so I could push the link, but I’ve given up holding my breath. So while I wait for VoC to emerge into the light of day, let’s move on to controlled experimentation.

I’ll start with definitional stuff. By controlled experimentation I do mean testing, but I don’t just mean A/B testing or even MVT as we’ve come to think about it. I want it to be broader. Almost every analytics project is challenged by the complexity of the world. It’s hard to control for all the constantly changing external factors that drive or impact performance in our systems. What looks like a strong and interesting relationship in a statistical analysis is often no more than an artifact produced by external factors that aren’t being considered. Controlled experiments are the best tool there is for addressing those challenges.

In a controlled experiment, the goal is to create a test whereby the likelihood of external factors driving the results is minimized. In A/B testing, for example, random populations of site visitors are served alternative experiences and their subsequent performance is measured. Provided the selection of visitors into each variant of the test is random and there is sufficient volume, A/B tests make it very unlikely that external factors like campaign sourcing or day-parting will impact the test results. How unlikely? Well, taking a random sample doesn't guarantee a representative one. You can flip a fair coin fifty times and get fifty heads, so even a sample collected in a fully random manner may come out quite biased; it's just not very likely. The more times you flip, the more likely it is that your sample will be representative.
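That coin-flip point is easy to check for yourself. A quick simulation sketch (plain Python, nothing tool-specific) showing how the observed share of heads tightens up as the sample grows:

```python
import random

random.seed(42)

def middle_95(flips, trials=2000):
    """Middle 95% of the observed heads-share across many samples of a fair coin."""
    shares = sorted(
        sum(random.random() < 0.5 for _ in range(flips)) / flips
        for _ in range(trials)
    )
    return shares[int(0.025 * trials)], shares[int(0.975 * trials)]

for n in (50, 500, 5000):
    lo, hi = middle_95(n)
    print(f"{n:>5} flips: heads share usually lands between {lo:.3f} and {hi:.3f}")
```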

Controlled experiments aren’t just the domain of website testing though. They are a fundamental part of scientific method and are used extensively in every kind of research. The goal of a controlled experiment is to remove all the variables in an analysis but one. That makes it really easy to analyze.

In the past, I’ve written extensively on the relationship between analytics and website testing (Kelly Wortham and I did a whole series on the topic). In that series, I focused on testing as we think of it in the digital world – A/B and MV tests and the tools that drive those tests. I don’t want to do that here, because the role for controlled experimentation in the digital enterprise is much broader than website testing. In an omni-channel world, many of the most important questions – and most important experiments – can’t be done using website testing. They require experiments which involve the use, absence or role of an entire channel or the media that drives it. You can’t build those kinds of experiments in your CMS or your testing tool.

I also appreciate that controlled experimentation doesn’t carry with it some of the mental baggage of testing. When we talk testing, people start to think about Optimizely vs. SiteSpect, A/B vs. MVT, landing page optimization and other similar issues. And when people think about A/B tests, they tend to think about things like button colors, image A vs. image B and changing the language in a call-to-action. When it comes to digital transformation, that’s all irrelevant.

It’s not that changing the button colors on your website isn’t a controlled experiment. It is; it’s just not a very important one. It’s also representative of the kind of random “throw stuff at a wall” approach to experimentation that makes so many testing programs nearly useless.

One of the great benefits of controlled experimentation is that, done properly, the idea of learning something useful is baked into the process. When you change the button color on your Website, you’re essentially framing a research question like this:

Hypothesis: Changing the color of Button X on Page Y from Red to Yellow will result in more clicks of the button per page view

An A/B test will indeed answer that question. However, it won’t necessarily answer ANY other question of higher generality. Will changing the color of any other button on any other page result in more clicks? That’s not part of the test.
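Mechanically, answering even that narrow question is just a two-proportion comparison once the test has run. A sketch with made-up counts (the numbers are illustrative, not from any real test):

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up results: button clicks and page views for the red (control) and yellow (variant) buttons.
clicks = [460, 535]
views = [10_000, 10_000]

stat, p_value = proportions_ztest(count=clicks, nobs=views)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value supports the narrow hypothesis about THIS button on THIS page, and nothing more.
```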

Even with something as inane as button colors, thinking in terms of a controlled experiment can help. A designer might generalize this hypothesis to something that’s a little more interesting. For example, the hypothesis might be:

Hypothesis: Given our standard color palette, changing a call-to-action on the page to a higher contrast color will result in more clicks per view on the call-to-action

That’s a somewhat more interesting hypothesis and it can be tested with a range of colors with different contrasts. Some of those colors might produce garish or largely unreadable results. Some combinations might work well for click-rates but create negative brand impressions. That, too, can be tested and might perhaps yield a standardized design heuristic for the right level of contrast between the call-to-action and the rest of a page given a particular color palette.

The point is, by casting the test as a controlled experiment we are pushed to generalize the test in terms of some single variable (such as contrast and its impact on behavior). This makes the test a learning experience; something that can be applied to a whole set of cases.

This example could be read as an argument for generalizing isolated tests into broader controlled experiments. That might be beneficial, but it isn't the real point. Instead, every decision-maker in the organization should be thinking about controlled experimentation. They should be thinking about it as a way to answer questions analytics can't AND as a way to assess whether the analytics they have are valid. Controlled experimentation, like analytics, is a tool to be used by the organization when it wants to answer questions. Both are most effective when used in a top-down, not a bottom-up, fashion.

As the sentence above makes clear, controlled experimentation is something you do, but it’s also a way you can think about analytics – a way to evaluate the data decision-makers already have. I’ve complained endlessly, for example, about how misleading online surveys can be when it comes to things like measuring sitewide NPS. My objection isn’t to the NPS metric, it’s to the lack of control in the sample. Every time you shift your marketing or site functionality, you shift the distribution of visitors to your website. That, in turn, will likely shift your average NPS score – irrespective of any other change or difference. You haven’t gotten better or worse. Your customers don’t like you less or more. You’ve simply sampled a somewhat different population of visitors.

That’s a perfect example of a metric/report which isn’t very controlled.  Something outside what you are trying to measure (your customer’s satisfaction or willingness to recommend you) is driving the observed changes.

When decision-makers begin to think in terms of controlled experiments, they have a much better chance of spotting the potential flaws in the analysis and reporting they have, and making more risk-informed decisions. No experiment can ever be perfectly controlled. No analysis can guarantee that outside factors aren’t driving the results. But when decision-makers think about what it would take to create a good experiment, they are much more likely to interpret analysis and reporting correctly.

I’ve framed this in terms of decision-makers, but it’s good advice for analysts too. Many an analyst has missed the mark by failing to control for obvious external drivers in their findings. A huge part of learning to “think like an analyst” is learning to evaluate every analysis in terms of how to best approximate a controlled experiment.

So if controlled experimentation is the best way to make decisions, why not just test everything? Why not, indeed? Controlled experimentation is tremendously underutilized in the enterprise. But having said as much, not every problem is amenable to or worth experimenting on. Sometimes building a controlled experiment is very expensive compared to an analysis; sometimes it's not. With an A/B testing tool, it's often easier to deploy a simple test than to conduct an analysis of a customer preference. But if you have a hypothesis that involves re-designing the entire website, building all that creative to run a true controlled experiment isn't going to be cheap, fast or easy.

Media mix analysis is another example of how analysis/experimentation trade-offs come into play. If you do a lot of local advertising, then controlled experimentation is far more effective than mix modeling for determining the impact of media and tuning for the optimum channel blend. But if much of your media buy is national, then it's pretty much impossible to create a fully controlled experiment that will allow you to test mix hypotheses. So for some kinds of marketing organizations, controlled experimentation is the best approach to mix decisions; for others, mix modeling (analysis, in other words – though often supplemented by targeted experimentation) is the best approach.

This may all seem pretty theoretical, so I’ll boil it down to some specific recommendations for the enterprise:

  • Repurpose you’re A/B testing group as a controlled experimentation capability
  • Blend non-digital analytics resources into that group to make sure you aren't thinking too narrowly – don't just have a bunch of people who think in terms of A/B testing tools
  • Integrate controlled experimentation with analytics – they are two sides of the same coin and you need a single group that can decide which is appropriate for a given problem
  • Train your executives and decision-makers in experimentation and interpreting analysis – probably with a dedicated C-Suite resource
  • Create constant feedback loops in the organization so that decision-makers can request new survey questions, new analysis and new experiments at the same time and with the same group

I see lots of organizations that think they are doing a great job testing. Mostly they aren't even close. You're doing a great job testing when every decision-maker at every level in the organization is thinking about whether a controlled experiment is possible when they have to make a significant decision. When those same decision-makers know how to interpret the data they have in terms of its ability to approximate a controlled experiment. And when building controlled experiments is deeply integrated into the analytics research team and deployed across digital and omni-channel problems.