Tag Archives: digital

Competitive Advantage and Digital Transformation – Optimizing Retail and eCommerce

In my last posts before the DA Hub, I described the first two parts of an analytics driven digital transformation. The first part covered the foundational activities that help an organization understand digital and think and decide about it intelligently. Things like customer journey, 2-tiered segmentation, a comprehensive VoC system and a unified campaign measurement framework form the core of a great digital organization. Done well, they will transform the way your organization thinks about digital. But, of course, thinking isn’t enough. You don’t build culture by talking but by doing. In the beginning was the deed. That’s why my second post dealt with a whole set of techniques for making analytics a constant part of the organization’s processes. Experimentation driven by a comprehensive analytics-driven testing plan, attribution and mix modelling, analytic reporting, re-survey, and a regular cadence of analytics driven briefings make continuous improvement a reality. If you take this seriously and execute fully on these first two phases, you will be good at digital. That’s a promise.

But as powerful, transformative and important as these first two phases are, they still represent only a fraction of what you can achieve with analytics-driven transformation. The third phase of analytics-driven transformation targets areas where analytics changes the way a business operates, prices its products, and communicates with and supports its customers.

The third phase of digital transformation is unique. In some ways, it's easier than the first two phases. It involves much less organizational and cultural transformation. If you've done those first two phases, you're already there when it comes to having an analytics culture. On the other hand, in this third phase the analytics projects themselves are often MUCH more complex. This is where we tackle big, hard problems. Problems that require big data, advanced statistical analysis, and serious imagination. Well, that's the fun stuff. Seriously, if you've gotten through the first two phases of an analytics transformation successfully, doing the projects in Phase Three is like taking a victory lap.

There isn’t one single blueprint for the third phase of an analytics driven transformation. The work that gets done in the first two phases is surprisingly similar almost regardless of the industry or specific business. I suppose it’s like laying the foundation for a building. No matter what the building looks like, the concrete block at the bottom is going to look pretty much the same. At this third level, however, we’re above the foundation and what you do will depend mightily on your specific business.

I know that "it depends on your business" is not much of an answer. As a consultant, it's not unusual to get caught up in conversations like this:

“So how much would it cost?”

“Well, that depends.”

“What kind of things does it depend on?”

“Well, it depends on how deeply you want to go into it, who you want to have do it, and how you want to get it done.”

All of this is true, of course, but none of it is helpful. I usually try to short-circuit these conversations by presenting a couple of real world alternatives.

I think this is more helpful (though it's also more dangerous). Similarly, when I present the third phase of an analytics driven transformation I try to make it specific to the business in question. And the more I know about the business, the more pointed, interesting, and – I hope – convincing that third phase is going to look. But if I haven't spent much time with a business, I still customize that third phase by industry – picking out high-level analytics projects that are broadly applicable to everyone in the sector.

That’s what I’m going to try to do here, with the added benefit of picking a couple different industries and showing how the differences play out in this third phase. Do keep in mind, though, that the description of this third phase – unlike that of the first two – is meant to be suggestive only. No real-world third phase (certainly no optimal one) is likely to mirror what I lay out here. It might not even be very close. What’s more, unlike the first phase (at least) which is close-ended (when you’ve done the projects I suggest you’re done with that phase), phase three is open-ended. You never stop doing analytics projects at this level. And that’s a good thing.

For the first example, I decided to start with a classic retail e-commerce view of the world. It’s a sector where we all have, at the very least, a consumer’s understanding of how it works. There are many, many possible projects to choose from, but here are five I often present as a typical starting point.

The first is an analytically driven personalization program. With journey-mapping, 2-tiered segmentation and a robust experimentation program, an enterprise should be in a good position to drive personalization. Most personalization programs bootstrap themselves by starting with fairly straightforward segmentations (already done) and rule-based personalization decisions targeted to "easy" problems like email offers and returning visitors to the Website. That's fine. The very best way to build a personalization program is organically – build it by doing it with increasing sophistication in more and more channels and at more and more touchpoints.
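To make the "bootstrap with rules" idea concrete, here's a minimal sketch of what an early rule-based personalization decision might look like. The segment names, offer ids, and thresholds are all invented for illustration, not taken from any real program.

```python
# Minimal sketch of rule-based personalization bootstrapped from an
# existing segmentation. Segment names and offer ids are invented.

def choose_offer(visitor):
    """Return an offer id for a visitor dict with 'segment' and 'visits'."""
    if visitor.get("visits", 0) > 1:            # returning visitor: an "easy" win
        if visitor.get("segment") == "bargain_hunter":
            return "email_discount_10"
        return "welcome_back_banner"
    return "default_home"                        # no personalization yet

print(choose_offer({"segment": "bargain_hunter", "visits": 3}))
# -> email_discount_10
```

The point of the sketch is the shape, not the rules: a program like this grows organically by adding segments, touchpoints, and eventually model-driven decisions behind the same interface.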

Merchandising optimization is another very big opportunity. So much of the merchandising optimization I see is focused on product detail pages. That’s fine as far as it goes, but it misses the much larger opportunity to optimize merchandising on search and aisle pages via analytics. Traditional merchandising folks have been slow to understand how critical moving merchandising upstream is to effective digital performance. This turns out to be analytically both very challenging and very rich.

Assortment optimization (and I might be just as likely to pick pricing or demand signals here) has long been a domain of traditional retail analytics. As such, I have to admit I didn't think much about it until the last few years. But I've come to believe that digital analytics can yield powerful preference information that is typically missing in this analysis. To do effective assortment optimization, you need to understand customers' potential replacement options. In the offline world, this usually involves making simple guesses based on high-level product sales about which products will be substituted. Using online view data, we can do much, much better. This is a case where digital analytics doesn't so much replace an existing technique as deepen and enrich it with data heretofore undreamed of. Assortment optimization with digital data gives you highly segmented, localized data about product substitution preferences. It's a lot better.
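One simple way to turn online view data into a substitution signal is to count how often two products are considered in the same session. The sketch below is an illustration under that assumption only; the session data and scoring rule are invented, and a real analysis would control for popularity, segment, and locale.

```python
# Rough substitutability signal from co-view data: P(b viewed | a viewed).
# Sessions and product ids are invented for illustration.
from collections import Counter
from itertools import combinations

sessions = [
    ["shoe_a", "shoe_b"],
    ["shoe_a", "shoe_c"],
    ["shoe_a", "shoe_b", "shoe_c"],
    ["shoe_b", "shoe_c"],
]

views = Counter()      # sessions in which each product was viewed
co_views = Counter()   # sessions in which each pair was viewed together
for s in sessions:
    viewed = set(s)
    views.update(viewed)
    co_views.update(combinations(sorted(viewed), 2))

def substitution_score(a, b):
    """Share of a's sessions in which b was also considered."""
    pair = tuple(sorted((a, b)))
    return co_views[pair] / views[a]

print(round(substitution_score("shoe_a", "shoe_b"), 2))  # -> 0.67
```

Even this crude conditional-view rate is the kind of preference information the offline guesswork lacks: if a product is delisted, it suggests where its demand is likely to flow.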

I’ve become a strong advocate for a fundamental re-think of loyalty programs based on the idea that surprise-based loyalty with no formal earning system is the future of rewards programs. The advantages of surprise-based loyalty are considerable when stacked up against traditional loyalty programs. You can target rewards where you think they will create lift. You can take advantage of inventory problems or opportunities. You don’t incur ANY financial obligations. You create no customer resentment or class issues. You can scale them and localize them to work with a specially trained staff. And, of course, the biggest bonus of all – you actually create far more impact per dollar spent. Surprise-based loyalty is, inherently, analytic. You can’t really do it any other way. Where it’s an option, it’s always one of the biggest changes you can make in the way your business works.

Finally, I’ve picked digital/store integration as my fifth project for analytics-led transformation. There are a number of different ways to take this. The drives between store and site are complex, important and fruitful. Optimizing those drives should be one of the analytics priorities for any omni-channel retailer. And that optimization is a combination of testing and analytics. In this case, however, I’ve chosen to focus on measuring and optimizing digital in-store experiences. You’re surely familiar with endless-aisle retail, where digital is integrated into the in-store experience. The vast majority of these physical-digital experiences have been quite ineffective. Almost always, they’ve been executed from a retail perspective. By which I mean that they’ve been built once, dropped into the store, and left to fail. That’s just not doing it right. In-store experiences are getting more digital. Digital signage is growing rapidly. Physical-digital experiences are increasingly common. But if you want actual competitive advantage out of these experiences, you’d better tackle them from a digital test-and-learn/analytics perspective. Anything less is a prescription for failure.

[Figure: Digital Transformation Phase Three (Retail)]

So here’s my first round of Phase Three projects for an analytics driven transformation in retail. Each is big, complex and hard. They are also important. These are the projects that will truly transform your digital business. They are rubber-meets-the-road stuff that drives competitive advantage. It would be a mistake to try and execute on projects like this without first creating a strong analytics foundation in the organization. Your chances of misfiring on doing or operationalizing the analytics are simply too great without that foundation. But if you don’t move past the first two phases into analytics like this, you’re missing the big stuff. You can churn out lots of incremental improvement in digital without ever touching projects like these. Those incremental improvements aren’t nothing. They may be valuable enough to justify your time and money. But if that’s all you ever do, you’ll likely find yourself wondering if it was all really worth it. Do any of these projects successfully, and you’ll never ask that question again.

Next week I’ll show a different (non-retail) set of projects and break down what the differences tell us about how to make analytics a strategic asset.

[Just a reminder that if you’re interested in the U.S. version of the Digital Analytics Hub you can register here!]

Digital Transformation in the Enterprise – Creating Continuous Improvement

I’m writing this post as I fly to London for the Digital Analytics Hub. The Hub is in its fourth year now (two in Berlin and two in London) and I’ve managed to make it every time. Of course, doing these Conference/Vacations is a bit of a mixed blessing. I really enjoyed my time in Italy but that was more vacation than Conference. The Hub is more Conference than vacation – it’s filled with Europe’s top analytics practitioners in deep conversation on analytics. In fact, it’s my favorite analytics conference going right now. And here’s the good news: it’s coming to the States in September! So I have one more of these analytics vacations on my calendar and that should be the best one of all. If you’re looking for the ultimate analytics experience – an immersion in deep conversation with some of the best analytics practitioners around – you should check it out.

I’ve got three topics I’m bringing to the Hub. Machine Learning for digital analytics, digital analytics forecasting and, of course, the topic at hand today, enterprise digital transformation.

In my last post, I described five initiatives that lay the foundation for analytics driven digital transformation. Those projects focus on data collection, journey mapping, behavioral segmentation, enterprise Voice of Customer (VoC) and unified marketing measurement. Together, these five initiatives provide a way to think about digital from a customer perspective. The data piece is focused on making sure that data collection to support personalization and segmentation is in place. Journey mapping and behavioral segmentation provide the customer context for every digital touchpoint – why it exists and what it’s supposed to do. The VoC system provides a window into what customers want and need and how they make decisions at every touchpoint. Finally, the marketing framework ensures that digital spend is optimized on an apples-to-apples basis and is focused on the right customers and actions to drive the business.

In a way, these projects are all designed to help the enterprise think and talk intelligently about the digital business. The data collection piece is designed to get organizations thinking about personalization cues in the digital experience. Journey mapping is designed to expand and frame customer experience and place customer thinking at the center of the digital strategy. Two-tiered segmentation serves to get people talking about digital success in terms of customers and their intent. Instead of asking questions like whether a Website is successful, it gets people thinking about whether the Website is successful for a certain type of customer with a specific journey intent. That’s a much better way to think. Similarly, the VoC system is all about getting people to focus on the customer and to realize that analytics can serve decision-making on an ongoing basis. The marketing framework is all about making sure that campaigns and creative are measured to real business goals – set within the customer journey and the behavioral segmentation.

The foundational elements are also designed to help integrate analytics into different parts of the digital business. The data collection piece is targeted toward direct response optimization. Journey mapping is designed to help weld strategic decisions to line manager responsibilities. Behavioral segmentation is focused on line and product managers needing tactical experience optimization. VoC is targeted toward strategic thinking and decision-making, and, of course, the marketing framework is designed to support the campaign and creative teams.

If a way to think and talk intelligently about the digital enterprise and its operations is the first step, what comes next?

All five of the initiatives that I’ve slated into the next phase are about one thing – creating a discipline of continuous improvement in the enterprise. That discipline can’t be built on top of thin air – it only works if your foundation (data, metrics, framework) supports optimization. Once it does, however, the focus should be on taking advantage of that to create continuous improvement.

The first step is massive experimentation via an analytics driven testing plan. This is partly about doing lots of experiments, yes. But even more important is that the experimentation be done as part of an overall optimization plan with tests targeted by behavioral and VoC analytics to specific experiences where the opportunity for improvement is highest. If all you’re thinking about is how many experiments you run, you’re not doing it right. Every type of customer and every part of their journey should have tests targeted toward its improvement.
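The "targeted by analytics" idea above can be sketched as a simple prioritization: rank touchpoints by how much opportunity a successful test could unlock, rather than by how easy the test is to run. All the numbers, touchpoint names, and the benchmark-gap scoring rule below are invented for illustration.

```python
# Sketch of an analytics-driven test plan: rank candidate test areas by
# opportunity = traffic x (benchmark rate - current rate). Data invented.

touchpoints = [
    {"name": "search_page",  "visits": 50000, "rate": 0.020, "benchmark": 0.035},
    {"name": "product_page", "visits": 80000, "rate": 0.030, "benchmark": 0.033},
    {"name": "checkout",     "visits": 20000, "rate": 0.600, "benchmark": 0.700},
]

def opportunity(t):
    """Expected conversions gained if the touchpoint reached benchmark."""
    return t["visits"] * max(0.0, t["benchmark"] - t["rate"])

plan = sorted(touchpoints, key=opportunity, reverse=True)
for t in plan:
    print(t["name"], round(opportunity(t)))
# checkout ranks first despite its low traffic: the gap is where tests belong
```

In a real plan the "benchmark" would come from behavioral and VoC analytics per segment and journey stage; the point is that test volume alone is the wrong metric.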

Similarly, on the marketing side, Phase Two is about optimizing against the unified measurement framework with both mix and control group testing. Mix is a top-down approach that works against your overall spending – regardless of channel type or individual measurement. Control group testing is nothing more than experimentation in the marketing world. Control groups have been a key part of marketing since the early direct response days. They’re easier to implement and more accurate in establishing true lift and incrementality than mathematical attribution solutions.
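The arithmetic behind control-group lift is worth seeing, because it's what makes the method so much more direct than attribution modelling. The conversion counts below are invented; the calculation is the standard one.

```python
# Incremental lift from a marketing holdout: treated group saw the
# campaign, control group was held out. Counts are invented examples.

treated_n, treated_conv = 10000, 520   # exposed to the campaign
control_n, control_conv = 10000, 400   # holdout

treated_rate = treated_conv / treated_n      # 5.2%
control_rate = control_conv / control_n      # 4.0%

incremental_rate = treated_rate - control_rate
incremental_conversions = incremental_rate * treated_n
lift = incremental_rate / control_rate

print(f"incremental conversions: {incremental_conversions:.0f}")  # 120
print(f"lift over control: {lift:.0%}")                           # 30%
```

Note that of the 520 treated conversions, 400 would have happened anyway; a last-touch attribution report would happily credit the campaign with all 520.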

The drive toward continuous improvement doesn’t end there, however. I’m a big fan of tool-based reporting as a key part of the second phase of analytics driven transformation. The idea behind tool-based reporting is simple but profound. Instead of reports as static, historical tools to describe what happened, the idea is that reports contain embedded predictive models that transform them into tools that can be used to understand the levers of the business and test what might happen based on different business strategies. Building tool-based reports for marketing, for product launch, for conversion funnels and for other key digital systems is deeply transformative. I describe this as a shift in the organization from democratizing data to democratizing knowledge. Knowledge is better. But the advantages to tool-based reporting run even deeper. The models embedded in these reports are your best analytic thinking about how the business works. And guess what? They’ll be wrong a lot of the time and that’s a good thing. It’s a good thing because by making analytic thinking about how the business works explicit, you’ve created feedback mechanisms in the organization. When things don’t work out the way the model predicts, your analysts will hear about it and have to figure out why and how to do better. That drives continuous improvement in analytics.
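A tool-based report can be as simple as a report with a fitted model behind it, so a reader can try scenarios instead of just reading history. The sketch below uses an invented linear model with made-up coefficients; a real report would fit these to historical data and would rarely be this simple.

```python
# Sketch of a "tool-based report": an embedded predictive model lets the
# reader ask "what might happen" rather than only "what happened".
# Coefficients and spend figures are invented for illustration.

MODEL = {"intercept": 1000.0, "email": 0.8, "search": 1.5, "display": 0.3}

def predicted_orders(spend):
    """Predict weekly orders from a channel-spend dict (linear sketch)."""
    return MODEL["intercept"] + sum(MODEL[ch] * amt for ch, amt in spend.items())

baseline = {"email": 2000, "search": 5000, "display": 3000}
scenario = {"email": 2000, "search": 7000, "display": 1000}  # shift display -> search

print(predicted_orders(baseline))  # ~11000 orders
print(predicted_orders(scenario))  # ~13400 orders under the shifted mix
```

The feedback loop in the post comes for free: when actual orders diverge from `predicted_orders`, the embedded model is visibly wrong and the analysts have to revisit it.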

A fourth key part of creating the agile enterprise – at least for sites without direct ecommerce – is value-based optimization. One of the great sins in digital measurement is leaving gaps in your ability to measure customers across their journey; fixing those gaps is what I call “closing measurement loops”. If your digital properties are lead generating or brand focused or informational or designed to drive off-channel or off-property (to Amazon or to a Call-Center), it’s much harder to measure whether or not they’re successful. You can measure proxies like content consumption or site satisfaction, but unless these proxies actually track to real outcomes, you’re just fooling yourself. This is important. To be good at digital and to use measurement effectively, every important measurement gap needs to be closed. There’s no one tool or method for closing measurement gaps; instead, a whole lot of different techniques with a bunch of sweat is required. Some of the most common methods for closing measurement gaps include re-survey, panels, device binding and dynamic 800 numbers.

Lastly, a key part of this whole phase is training the organization to think in terms of continuous improvement. That doesn’t happen magically and while all of the initiatives described here support that transformation, they aren’t, by themselves, enough. In my two posts on building analytics culture, I laid out a fairly straightforward vision of culture. The basic idea is that you build analytics culture by using data and analytics. Not by talking about how important data is or how people should behave. In the beginning was the deed.

Creating a constant cadence of analytics-based briefings and discussions forces the organization to think analytically. It forces analysts to understand the questions that are meaningful to the business. It forces decision-makers to reckon with data and lets them experience the power of being able to ask questions and get real answers. Just the imperative of having to say something interesting is good discipline for driving continuous improvement.

[Figure: Foundational Transformation, Step 2]

That’s phase two of enterprise digital transformation. It’s all about baking continuous improvement into the organization and building, on top of each element of the foundation, the never-ending process of getting better.


You might think that’s pretty much all there is to the analytics side of the digital transformation equation. Not so. In my next post, I’ll cover the next phase of analytics transformation – driving big analytics wins. So far, most of what I’ve covered is valid for any enterprise in any industry. But in the next phase, initiatives tend to be quite different depending on your industry and business model.

See you after the Hub!

Getting Started with Digital Transformation

For most of this year I’ve been writing an extended series on digital transformation in the enterprise. Along the way, I’ve described why organizations (particularly large ones) struggle with digital, the core capabilities necessary to do digital well, and ways in which organizations can build a better, more analytic culture. I’ve even put together a series of videos that describe how enterprises are currently driving digital and how they can do better.

I think both the current-state (what we do wrong) and the end-state (doing digital right) are compelling. In the next few posts, I’m going to wrap this series up with a discussion around how you get from here to there.

I don’t suppose anyone thinks the journey from here to there is trivial. Doing digital the way I’ve described it (see the Agile Organization) involves some pretty fundamental change: change to the way enterprises budget, change to the way they organize, and change to the way they do digital at almost every level. It also involves, and this is totally unsurprising, investments in people and technology and more than a dollop of patience. It would actually be much easier to build a good digital organization from scratch than to adapt the pieces that exist in the typical enterprise.

Change is harder than creation. It has more friction and more fail points. But change is the reality for most enterprises.

So where do you start and how do you go about building a great digital organization?

I’m going to answer that question here from an analytics perspective. That’s the easy part. Once I’ve worked through the steps in building analytics maturity and digital decisioning, I’ll tackle the organizational component, wherein I expect to hazard a series of guesses, speculation and unlikely theory to paper over the fact that almost no one has done this transformation successfully and every organization has fundamentally unique structures and people that make its dynamics deeply specific.

The foundation of any analytics program is, of course, data. One of the most satisfying developments in digital analytics in the past 3-5 years has been the dramatic improvement in the state of data collection. It used to be that EVERY engagement we undertook began with a plodding slog through data auditing and clean-up. These days, that’s more the exception than the rule. Still, there are plenty of exceptions. So the first step in just about any analytics effort is to make sure the data foundation is solid. There’s a second aspect to this that’s worth pointing out. For a lot of my clients, basic data collection is no longer much of an issue. But even where that’s true, there are often significant gaps in digital analytics data collection for personalization. So many Adobe designs are predicated on meeting reporting requirements that it’s not at all unusual for key personalization elements like filtering selections, image expansions, sorting behaviors and DHTML exposures to go largely untracked. That’s true on both the Web and Mobile sides. Part of auditing your data collection should be a careful look at whether you’re capturing all the personalization cues you could – and that’s often a critical foundational element for the steps to follow.

Right along with auditing your data collection comes building a comprehensive customer journey framework. I’ve added the word “framework” here not to be all “consulty” but to emphasize that a customer journey isn’t built once as a static map. That’s the old way – and it’s wrong in every respect (so be careful what you buy). It’s wrong because it’s not segmented. It’s wrong because it’s too high-level. And most of all it’s wrong because it’s too static. So while a customer journey framework is more a capability and a process than a “thing”, it’s also true that you have to start somewhere. Getting that initial segmented journey map in place provides the high-level strategic framework for your digital strategy and for your analytics and testing. It’s the key strategic piece welding your operational capabilities to your strategic vision.

My third foundational building block is (Chorus sings refrain) “2-Tiered segmentation”. I’ve written voluminously on digital segmentation and how it works, so I won’t add much more here. But if journey mapping is the piece linking your strategic vision to your operational capabilities, 2-tiered segmentation is the equivalent piece linking at the tactical level. At every touchpoint in a customer journey there is the need to understand who somebody is and where in their journey they are. That’s what 2-tiered segmentation provides.
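The "who somebody is and where in their journey they are" pairing can be pictured as a two-key lookup at each touchpoint. The segment names, journey stages, and content ids below are invented for illustration; the real segmentation would of course be derived from behavioral data, not hard-coded.

```python
# Sketch of 2-tiered segmentation in action: content decisions key on
# BOTH the visitor segment and the journey stage. All names invented.

CONTENT = {
    ("loyalist", "research"): "new_arrivals_feature",
    ("loyalist", "purchase"): "one_click_reorder",
    ("newcomer", "research"): "category_guide",
    ("newcomer", "purchase"): "first_order_reassurance",
}

def pick_content(visitor_segment, journey_stage):
    """Fall back to generic content when either tier is unknown."""
    return CONTENT.get((visitor_segment, journey_stage), "generic_home")

print(pick_content("newcomer", "research"))   # -> category_guide
print(pick_content("loyalist", "unknown"))    # -> generic_home
```

The same two keys also reframe measurement: instead of "is this page successful?", you ask "is this page successful for newcomers in research mode?"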

Auditing your data, creating a journey map and tying that to a digital segmentation are truly foundational. They are all “you can’t get there from here without going through these” kinds of activities. Almost every significant report, analysis and decision that you make will rely on these three activities.

That’s not really true for my next two foundational activities. I chose building an integrated voice of customer (VoC) capability as my fourth key building block. If you’ve read my book, you know that one of the main uses for a VoC program is to refine and tune your journey map and segmentation. So in one sense, this capability may be prior to either of those. But you can do enough VoC to support those two activities without really building a full VoC program. And what I have in mind here is a full program. What do I mean by a full program? I mean an enterprise feedback management system that makes it easy to deploy surveys at any point in the journey across any device. I mean a set of organizational processes that ideate, design, deploy, interpret and socialize VoC information constantly. I mean an enterprise-wide reporting capability that integrates different VoC sources, classifies them, tracks them, and provides drill-down (and that’s important because VoC data is virtually useless without cross-tabulation) access to them across the organization. I also mean a culture where one of the natural and immediate parts of making a decision is looking at what customers think and – if that isn’t available – launching a survey to figure it out. I put VoC as part of this foundational set because I think it’s one of the easiest ways to deliver real wins to the organization. I also like the idea of driving a combination of tactical (data, segmentation) and strategic (journey, VoC) initiatives in your early phases. As I’ve pointed out elsewhere, we analytics folks tend to over-focus on the tactical.

Finally, I’ve included building a campaign measurement framework into the initial set of foundational activities. This might not be the right choice for every organization, but if you spend a significant amount of money on marketing, it’s a critical element in evolving your maturity. Like data audits, a lot of my clients are already pretty good at this. For many folks, campaigns are already measured using a pretty rich and well-thought out framework and the pain point tends to be deeper – around attribution and mix. But I also see organizations jumping right to questions of attribution before they’ve really done the work necessary to pick the right KPIs to optimize against. That’s a prescription for disaster. If you don’t put in the intellectual sweat equity to understand how campaigns should be measured (and it’s often surprisingly complicated in real-world businesses where conversion rate is rarely the be-all-and-end-all of optimization), then your attribution modelling is doomed to fail.

So here are the first five things to tackle in building out the analytics part of a digital transformation effort:

[Figure: Foundational Transformation, Step 1]

These five activities provide a rich foundation for analytics driven transformation along with some core strategic analytic capabilities. I’ll cover what comes after this in my next post.

Gelato was the word I meant

I spent most of the last week on holiday in Italy. But since the holiday was built around a speaking gig in Italy at the Be Wizard Digital Marketing conference I still spent a couple of days talking analytics and digital. A couple of days I thoroughly enjoyed. The conference closed with a Q&A for a small group of speakers and while I got a few real analytics questions it felt more like a meet and greet – with plenty of puff-ball questions like “what word would you use to describe the conference?” A question I failed miserably with the very pathetic answer “fun”.

I guess that’s why it’s better to ask me analytics questions.

The word I probably should have chosen is “gelato”.

And not just because I hogged down my usual totally ridiculous amount of fragola, melone, cioccolato, and pesca – scoop by scoop from Rimini to Venice.

Gelato because I had a series of rich conversations with Mat Sweezey from Salesforce (nee Pardot) who gave a terrific presentation on authenticity and what it means in this new digital marketing world. It’s easy to forget how dramatically digital has changed marketing and miss some of the really important lessons from those changes. Mat also showed me a presentation on agile that blends beautifully with the digital transformation story I’ve been trying to tell in the last six months. It’s a terrific deck with some slides that explain why test-and-learn and agile methods work so much better than traditional methods. It’s a presentation with the signal virtue of taking very difficult concepts and making them not just clear but compelling. That’s hard to do well.

Gelato because I also talked with and enjoyed a great presentation from Chris Anderson of Cornell. Chris led a two-hour workshop in the revenue management track (which happens to be a kind of side interest of mine). His presentation focused on how social media content on sites like TripAdvisor impacts hotel room-pricing strategies. He’s done several compelling research projects with OTAs (Online Travel Agents) looking at the influence of social media content on buying decisions. His research has looked at key variables that drive influence (number of reviews and rating), how sensitive demand is to those factors, and how that sensitivity plays out by hotel class (turns out that the riskier the lodging decision the more impactful social reviews are). He’s also looked at review response strategies on TripAdvisor and has some compelling research showing how review response can significantly improve ratings outcomes but how it’s also possible to over-respond. Respond to everything, and you actually do worse than if you respond to nothing.

That’s a fascinating finding and very much in keeping with Mat’s arguments around authenticity. If you make responding to every social media post a corporate policy, what you say is necessarily going to sound forced and artificial.

That’s why it doesn’t work.

If you’re in the hospitality industry, you should see this presentation. In fact, there are lessons here for any company interested in the impact of reviews and social content and interested in taking a more strategic view of social outreach and branding. I think Chris’ data suggest significant and largely unexplored opportunities for both better revenue management decisions around OTA pricing and better strategies around the review ask.

Gelato because there was one question I didn’t get to answer that I wanted to (and somehow no matter how much gelato I consume I always want a little more).

Since I had to have translations of the panel questions at the end, I didn’t always get a chance to respond. Sometimes the discussion had moved on by the time I understood the question! And one of the questions – how can companies compete with publishers when it comes to content creation – seemed to me deeply related to both Mat and Chris’ presentations.

Here’s the question as I remember it:

If you’re a manufacturer or a hotel chain or a retailer, all you ever hear in digital marketing is how content is king. But you’re not a content company. So how do you compete?

The old-fashioned way is to hire an agency to write some content for you. That’s not going to work. You won’t have enough content, you’ll have to pay a lot for it, and it won’t be any good. To Mat’s point around authenticity, you’re not going to fool people. You’re not going to convince them that your content isn’t corporate, mass-produced, ad agency hack-work. Because it is and because people aren’t stupid. Building a personalization strategy to make bad content more relevant isn’t going to help much either. That’s why you don’t make it a corporate policy to reply to every review and why you don’t write replies from a central team of ad writers.

Stop trying to play by the old rules.

Make sure your customer relations, desk folks, and managers understand how to build relationships through social media and give them the tools to do it. If you want authentic content, find your evangelists. People who actually make, design, support or use your products. Give them a forum. A real one. And turn them loose. Find ways to encourage them. Find ways to magnify their voice. But turn them loose.

You can’t have it both ways. You can’t be authentic while you try to wrap every message in Madison Avenue gift wrap bought from the clever folks at your ad agency. Check out Mat’s presentation (he’s a Slideshare phenom). Think about the implications of unlimited content and the ways we filter. Process the implications. The world has changed and the worst strategy in the world is to keep doing things the old way.

So gelato because the Be Wizard conference, like Italy in general, was rich, sweet, cool and left me wanting to hear (and say) a bit more!

And speaking of conferences, we’re not that far away from my second European holiday with analytics baked in – the Digital Analytics Hub in London (early June). I’ve been to the DA Hub several years running now – ever since two old friends of mine started it. It’s an all-conversational conference modeled on X Change and it’s always one of the highlights of my year. In addition to facilitating a couple of conversations, I’m also going to be leading a very deep-dive workshop on digital forecasting. I plan to walk through forecasting from the simplest sort of forecast (everything will stay the same) through increasingly advanced techniques that rely first on averages and smoothing, and then on models. If you’re thinking about forecasting, I really think this workshop will be worth the whole conference (and the Hub is always great anyway)…
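To give a flavor of that progression – from the naive "everything stays the same" forecast, through averages, to smoothing – here is a minimal sketch. The `daily_visits` numbers are invented for illustration; real forecasting work would of course go well beyond this.

```python
def naive_forecast(series):
    """Simplest possible forecast: tomorrow looks like today."""
    return series[-1]

def moving_average_forecast(series, window=3):
    """Forecast as the average of the last `window` observations."""
    return sum(series[-window:]) / window

def exp_smoothing_forecast(series, alpha=0.3):
    """Exponentially weight recent observations more heavily."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Illustrative dummy data: daily site visits.
daily_visits = [100, 120, 110, 130, 125, 140]
print(naive_forecast(daily_visits))           # 140
print(moving_average_forecast(daily_visits))  # mean of last three days
print(round(exp_smoothing_forecast(daily_visits), 1))
```

Each step trades a little simplicity for a little robustness: the naive forecast chases noise, the moving average damps it, and smoothing lets you tune how quickly the forecast reacts to change.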

If you’ve got a chance to be in London in early June, don’t miss the Hub.

The Agile Organization

I’ve been meandering through an extended series on digital transformation: why it’s hard, where things go wrong, and what you need to be able to do to be successful. In this post, I intend to summarize some of that thinking and describe how the large enterprise should organize itself to be good at digital.

Throughout this series, I’ve emphasized the importance of being able to make good decisions in the digital realm. That is, of course, the function of analytics and it’s my own special concern when it comes to digital. But there are people who will point out that decision-making is not the be-all and end-all of digital excellence. They might suggest that being able to execute is important too.

If you’re a football fan, it’s easy to see the dramatic difference between Peyton Manning – possibly the finest on-field decision-maker in the history of the game – with a good arm and without one. It’s one thing to know where to throw the ball on any given play, quite another to be able to get it there accurately. If that weren’t the case, many of my readers would probably be making millions in the NFL!

On the other hand, this divide between decision-making and execution tends to break down if you extend your view to the entire organization. If the GM is doing the job properly, then the decision about which quarterbacks to draft or sign will appropriately balance their physical and decision-making skills. That’s part of what’s involved in good GM decision-making. Meanwhile, the coach has an identical responsibility on a day-to-day basis. A foot injury may limit Peyton to the point where his backup becomes a better option. Then it may heal and the pendulum swings back. The organization makes a series of decisions, and if it can make all of those decisions well, it’s hard to see how execution doesn’t follow along.

If, as an organization, I can make good decisions about the strategy for digital, the technology to run it on, the agencies to build it, the people to optimize it, the way to organize it, and the tactics to drive it, then everything is likely to be pretty good.

Unfortunately, it’s simply not the case that the analytics, organization and capabilities necessary to make good decisions across all these areas are remotely similar. To return to my football analogy, it’s clear that very few organizations are set up to make good decisions in every aspect of their operations. Some organizations excel at particular functions (like game-planning) but are very poor at drafting. Indeed, sometimes success in one area breeds disaster in another. When a coach like Chip Kelly becomes very successful in his role, there is a tendency for the organization to expand that role so that the coach has increasing control over personnel. This almost always works badly in practice. Even knowing it will work badly doesn’t prevent the problem. Since the coach is so important, it may be that an organization will cede much control over personnel to a successful coach even when everyone (except the coach) believes it’s a bad idea.

If you don’t think similar situations arise constantly in corporate America, you aren’t paying attention.

In my posts in this series, I’ve mapped out the capabilities necessary to give decision-makers the information and capabilities they need to make good decisions about digital experiences. I haven’t touched on (and don’t really intend to touch on) broader themes like deciding who the right people to hire are or what kind of measurement, analysis or knowledge is necessary to make those sorts of meta-decisions.

There are two respects, however, in which I have tried to address at least some of these meta-concerns about execution. First, I’ve described why it is and how it comes to pass that most enterprises don’t use analytics to support strategic decision-making. This seems like a clear miss and a place where thoughtful implementation of good measurement, particularly voice-of-customer measurement of the type I’ve described, should yield high returns.

Second, I took a stab at describing how organizations can think about and work toward building an analytics culture. In these two posts, I argue that most attempts at culture-building approach the problem backwards. The most common culture-building activities in the enterprise are all about “talk”. We talk about diversity. We talk about ethics. We talk about being data-driven in our decision-making. I don’t think this talk adds up to much. I suggest that culture is formed far more through habit than talk; that if an organization wants to build an analytics culture, it needs to find ways to “do” analytics. The word may precede the deed, but it is only through the force of the deed (good habits) that the word becomes character/culture. This may seem somewhat obvious – no, it is obvious – but people somehow manage to miss the obvious far too often. Those posts don’t just formulate the obvious, they also suggest a set of activities that are particularly efficacious in creating good enterprise habits of decision-making. If you care about enterprise culture and you haven’t already done so, give them a read.

For some folks, however, all these analytics actions miss the key questions. They don’t want to know what the organization should do. They want to know how the organization should work. Who owns digital? Who owns analytics? What lives in a central organization? What lives in a business unit? Is digital a capability or a department?

In the context of the small company, most of these questions aren’t terribly important. In the large enterprise, they mean a lot. But acknowledging that they mean a lot isn’t to suggest that I can answer them – or at least most of them.

I’m skeptical that there is an answer for most of these questions. At least in the abstract, I doubt there is one right organization for digital or one right degree of centralization. I’ve had many conversations with wise folks who recognize that their organizations seem to be in constant motion – swinging like an enormous pendulum between extremes of centralization followed by extremes of decentralization.

Even this peripatetic motion – which can look so irrational from the inside – may make sense. If we assume that centralization and decentralization have distinct advantages, then not only might changing circumstances drive a change in the optimal configuration, but swinging the organization from one pole to the other might even help capture the benefits of each.

That seems unlikely, but you never know. There is sometimes more logic in the seemingly irrational movements of the crowd than we might first imagine.

Most questions about digital organization are deeply historical. They depend on what type of company you are, in what kind of market, with what culture and what strategic imperatives. All of which is, of course, Management 101. Obvious stuff that hardly needs to be stated.

However, there are some aspects of digital about which I am willing to be more directive. First, some balance between centralization and decentralization is essential in analytics. The imperative for centralization is driven by these factors: the need for comparative metrics of success around digital, the need for consistent data collection, the imperatives of the latest generation of highly complex IT systems, and the need/desire to address customers across the full spectrum of their engagement with the enterprise. Of these, the first and the last are primary. If you don’t need those two, then you may not care about consistent data collection or centralized data systems (this last is debatable).

On the other hand, there are powerful reasons for decentralization of which the biggest is simply that analytics is best done as close to the decision-making as possible. Before the advent of Hadoop, I would have suggested that the vast majority of analytics resources in the digital space be decentralized. Hadoop makes that much harder. The skills are much rarer, the demands for control and governance much higher, and the need for cross-domain expertise much greater in this new world.

That will change. As the open-source analytics stack matures and the market over-rewards skilled practitioners – drawing in more folks – it will become much easier to decentralize again. This isn’t the first time we’ve been down the IT path that goes from centralization to gradual diffusion as technologies become cheaper, easier, and better supported.

At an even more fundamental level than the question of centralization lives the location and nature of digital. Is digital treated as a thing? Is it part of Marketing? Or Operations? Or does each thing have a digital component?

I know I should have more of an opinion about this, but I’m afraid the right answers seem to me, once again, to be local and historical. In a digital pure-play, to even speak of digital as a thing seems absurd. It’s the core of the company. In a gas company, on the other hand, digital might best be viewed as a customer service channel. In a manufacturer, digital might be a sub-function of brand marketing or, depending on the nature of the digital investment and its importance to the company, a unit unto itself.

Obviously, one of the huge disadvantages of thinking of digital as a unit unto itself is getting it to interact correctly with the non-digital functions that share the same purpose. If you have digital customer servicing and non-digital customer servicing, does it really make sense to have one in a digital department and the other in a customer-service department?

There is a case, however, for incubating digital capabilities within a small, compact, standalone entity that can protect and nourish the digital investment with a distinct culture and resourcing model. I get that. Ultimately, though, it seems to me that unless digital OWNS an entire function, separating that function across digital and non-digital lines is arbitrary and likely to be ineffective in an omni-channel world.

But here’s the flip side. If you have a single digital property and it shares marketing and customer support functions, how do you allocate real-estate and who gets to determine key things like site structure? I’ve seen organizations where everything but the homepage is owned by somebody and the home page is like Oliver Twist. “Home page for sale, does anybody want one?”

That’s not optimal.

So the more overlap there needs to be between the functions and your digital properties, the more incentive you have to build a purely digital organization.

No matter what structure you pick, there are some trade-offs you’re going to have to live with. That’s part of why there is no magic answer to the right organization.

But far more important than the precise balance you strike around centralization or even where you put digital is the way you organize the core capabilities that belong to digital. Here, the vast majority of enterprises organize along the same general lines. Digital comprises some rough set of capabilities including:

  • IT
  • Creative
  • Marketing
  • Customer
  • UX
  • Analytics
  • Testing
  • VoC

In almost every company I work with, each of these capabilities is instantiated as a separate team. In most organizations, the IT folks are in a completely different reporting structure all the way up. There is no unification till you hit the C-Suite. Often, Marketing and Creative are unified. In some organizations, all of the research functions are unified (VoC, analytics) – sometimes under Customer, sometimes not. UX and Testing can wind up almost anywhere. They typically live under the Marketing department, but they can also live under a Research or Customer function.

None of this, to me, makes any sense.

To do digital well requires a deep integration of these capabilities. What’s more, it requires that these teams work together on a consistent basis. That’s not the way it’s mostly done.

Almost every enterprise I see not only siloes these capabilities but also puts in place budgetary processes that fund each digital asset as a one-time investment and require pass-offs between teams.

That’s probably not entirely clear so let me give some concrete examples.

You want to launch a new website. You hire an agency to design the Website. Then your internal IT team builds it. Now the agency goes away. The folks who designed the website no longer have anything to do with it. What’s more, the folks who built it get rotated onto the next project. Sometimes, that’s all that happens. The website just sits there – unimproved. Sometimes the measurement team will now pick it up. Keep in mind that the measurement team almost never had anything to do with the design of the site in the first place. They are just there to report on it. Still, they measure it and if they find some problem, who do they give it to?

Well, maybe they pass it on to the UX team or the testing team. Those teams, neither of which has ever worked with the website or had anything to do with its design, are now responsible for implementing changes to it. And, of course, they will be working with developers who had nothing to do with building it.

Meanwhile, on an entirely separate track, the customer team may be designing a broader experience that involves that website. They enlist the VoC team to survey the site’s users and find out what they don’t like about it. Neither team (of course) had anything to do with designing or building the site.

If they come to some conclusion about what they want the site to do, they work with another(!) team of developers to implement their changes. That these changes may be at cross-purposes to the UX team’s changes or the original design intent is neither here nor there.

Does any of this make sense?

If you take continuous improvement to heart (and you should because it is the key to digital excellence), you need to realize that almost everything about the way your digital organization functions is wrong. You budget wrong and you organize wrong.

[Check out my relatively short (20 min) video on digital transformation and analytics organization – it’s the perfect medium for distributing this message through your enterprise!]

Here’s my simple rule about building digital assets. If it’s worth doing, it’s worth improving. Nothing you build will ever be right the first time. Accept that. Embrace it. That means you budget digital teams to build AND improve something. Those teams don’t go away. They don’t rotate. And they include ALL of the capabilities you need to successfully deliver digital experiences. Your developers don’t rotate off, your designers don’t go away, your VoC folks aren’t living in a parallel universe.

When you do things this way, you embody a commitment to continuous improvement deeply into your core organizational processes. It almost forces you to do it right. All those folks in IT and creative will demand analytics and tests to run or they won’t have anything to do.

That’s a good thing.

This type of vertical integration of digital capabilities is far, far more important than the balance around centralization or even the home for digital. Yet it gets far less attention in most enterprise strategic discussions.

The existence or lack of this vertical integration is the single most important factor in driving analytics into digital. Do it right, and you’ll do it well. Do what everyone else does and…well…it won’t be so good.

Building Analytics Culture – One Decision at a Time

In my last post, I argued that much of what passes for “building culture” in corporate America is worthless. It’s all about talk. And whether that talk is about diversity, ethics or analytics, it’s equally arid. Because you don’t build culture by talking. You build culture through actions. By doing things right (or wrong if that’s the kind of culture you want). Not only are words not effective in building culture, they can be positively toxic. When words and actions don’t align, the dishonesty casts other – possibly more meaningful – words into disrepute. Think about which is worse – a culture where bribery is simply the accepted and normal way of getting things done (and is cheerfully acknowledged) or one where bribery is ubiquitous but cloaked behind constant protestations of disinterest and honesty? If you’re not sure about your answer, take it down to a personal level and ask yourself the same question. Do we not like an honest villain better than a hypocrite? If hypocrisy is the compliment vice pays to virtue, it is a particularly nasty form of flattery.

What this means is that you can’t build an analytics culture by telling people to be data driven. You can’t build an analytics culture by touting the virtues of analysis. You can’t even build an analytics culture by hiring analysts. You build an analytics culture by making good (data-driven) decisions.

That’s the only way.

But how do you get an organization to make data-driven decisions? That’s the art of building culture. And in that last post, I laid out seven (a baker’s half-dozen?) tactics for building good decision-making habits: analytic reporting, analytics briefing sessions, hiring a C-Suite analytics advisor, creating measurement standards, building a rich meta-data system for campaigns and content, creating a rapid VoC capability and embracing a continuous improvement methodology like SPEED.

These aren’t just random parts of making analytic decisions. They are tactics that seem to me particularly effective in driving good habits in the organization and building the right kind of culture. But seven tactics don’t nearly exhaust my list. Here’s another set of techniques that are equally important in helping drive good decision-making in the organization (my original list wasn’t in any particular order, so it’s not like the previous list had all the important stuff):

Yearly Agency Performance Measurement and Reviews

What it is: Having an independent annual analysis of your agency’s performance. This should include review of goals and metrics, consideration of the appropriateness of KPIs and analysis of variation in campaign performance along three dimensions (inside the campaign by element, over time, and across campaigns). This must not be done by the agency itself (duh!) or by the owners of the relationship.

Why it builds culture: Most agencies work by building strong personal relationships. There are times and ways that this can work in your favor, but from a cultural perspective it both limits and discourages analytic thinking. I see many enterprises where the agency is so strongly entrenched you literally cannot criticize them. Not only does the resulting marketing nearly always suck, but this drains the life out of an analytics culture. This is one of many ways in which building an analytic culture can conflict with other goals, but here I definitely believe analytics should win. You don’t need a too cozy relationship with your agency. You do need objective measurement of their performance.

 

Analytics Annotation / Collaboration Tool like Insight Rocket

What it is: A tool that provides a method for rich data annotation and the creation and distribution of analytic stories across the analytics team and into the organization. In Analytic Reporting, I argued for a focus on democratizing knowledge not data. Tools like Insight Rocket are a part of that strategy, since they provide a way to create and rapidly disseminate a layer of meaning on top of powerful data exploration tools like Tableau.

Why it builds culture: There aren’t that many places where technology makes much difference to culture, but there are a few. As some of my other suggestions make clear, you get better analytics culture the more you drive analytics across and into the organization (analytic reporting, C-Suite Advisor, SPEED, etc.). Tools like Insight Rocket have three virtues: they help disseminate analytics thinking not just data, they boost analytics collaboration making for better analytic teams, and they provide a repository of analytics which increases long-term leverage in the enterprise. Oh, and here’s a fourth advantage: they force analysts to tell stories – meaning they have to engage with the business. That makes this piece of technology a really nice complement to my suggestion about a regular cadence of analytics briefings and a rare instance of technology deepening culture.

 

In-sourcing

What it is: Building analytics expertise internally instead of hiring it out and, most especially, instead of off-shoring it.

Why it builds culture: I’d be the last person to tell you that consulting shouldn’t have a role in the large enterprise. I’ve been a consultant for most of my working life. But we routinely advise our clients to change the way they think about consulting – to use it not as a replacement for an internal capability but as a bootstrap and supplement to that capability. If analytics is core to digital (and it is) and if digital is core to your business (which it probably is), then you need analytics to be part of your internal capability. Having strong, capable, influential on-shore employees who are analysts is absolutely necessary to analytics culture. I’ll add that while off-shoring, too, has a role, it’s a far more effective culture killer than normal consulting. Off-shoring creates a sharp divide between the analyst and the business that is fatal to good performance and good culture on EITHER side.

 

Learning-based Testing Plan

What it is: Testing plans that include significant focus on developing best design practices and resolving political issues instead of on micro-optimizations of the funnel.

Why it works: Testing is a way to make decisions. But as long as its primary use is to decide whether to show image A or image B or a button in this color or that color, it will never be used properly. To illustrate learning-based testing, I’ve used the example of video integration – testing different methods of on-page video integration, different lengths, different content types and different placements against each key segment and use-case to determine UI parameters for ALL future videos. When you test this way, you resolve hundreds of future questions and save endless future debate about what to do with this or that video. That’s learning-based testing. It’s also about picking key places in the organization where political battles determine design – things like home page real-estate and the amount of advertising load on a page – and resolving them with testing; that’s learning-based testing, too. Learning-based testing builds culture in two ways. First, in and of itself, it drives analytic decision-making. Almost as important, it demonstrates the proper role of experimentation and should help set the table for decision-makers to ask for more interesting tests.
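One way to picture a learning-based plan is as a full factor-by-segment test matrix laid out up front, so every cell answers a reusable design question rather than a one-off. The factor levels below are purely illustrative assumptions, not a recommendation:

```python
from itertools import product

# Hypothetical video-integration factors and audience segments.
lengths = ["30s", "2min"]
placements = ["above_fold", "below_fold"]
segments = ["new_visitor", "returning_customer"]

# Every combination becomes a test cell whose result sets a design
# rule for ALL future videos, not just the one being tested.
test_cells = [
    {"length": ln, "placement": pl, "segment": seg}
    for ln, pl, seg in product(lengths, placements, segments)
]
print(len(test_cells))  # 8 cells: 2 lengths x 2 placements x 2 segments
```

The point isn’t the enumeration itself but the discipline it imposes: the plan commits you to learning general UI parameters instead of micro-optimizing a single page.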

 

Control Groups

What it is: Use of control groups to measure effectiveness whenever new programs (operational or marketing) are implemented. Control groups use small population subsets chosen randomly from a target population who are given either no experience or a neutral (existing) experience instead. Nearly all tests feature a baseline control group as part of the test, but the use of control groups transcends A/B testing tools. Control groups are common in traditional direct-response marketing and can be used in a wide variety of on- and offline contexts (most especially – as I recently saw Elea Feit of Drexel hammer home at the DAA Symposium – as a much more effective approach to attribution).

Why it works: One of the real barriers to building culture is a classic problem in education. When you first teach students something, they almost invariably use it poorly. That can sour others on the value of the knowledge itself. When people in an organization first start using analytics, they are, quite inevitably, going to fall into the correlation trap. Correlation is not causation. But in many cases, it sure looks like it is and this leads to many, many bad decisions. How to prevent the most common error in analytics? Control groups. Control groups build culture because they get decision-makers thinking the right way about measurement and because they protect the organization from mistakes that will otherwise sour the culture on analytics.
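The mechanics are simple enough to sketch in a few lines: randomly hold out a slice of the target population, run the program against everyone else, and read effectiveness as the difference in outcome rates. The population size, holdout rate and conversion rates below are invented for illustration:

```python
import random

def assign_holdout(customers, holdout_rate=0.1, seed=42):
    """Randomly split a target population into treatment and control."""
    rng = random.Random(seed)
    treatment, control = [], []
    for c in customers:
        (control if rng.random() < holdout_rate else treatment).append(c)
    return treatment, control

def lift(treat_conversions, treat_n, ctrl_conversions, ctrl_n):
    """Incremental lift: treatment rate minus control (baseline) rate."""
    return treat_conversions / treat_n - ctrl_conversions / ctrl_n

customers = list(range(10_000))
treatment, control = assign_holdout(customers)
# Suppose the campaign group converts at 5.5% and the holdout at 4.0%;
# the campaign's incremental effect is the 1.5-point difference, not 5.5%.
print(round(lift(0.055 * len(treatment), len(treatment),
                 0.040 * len(control), len(control)), 3))
```

That last comment is the cultural payload: without the holdout, the organization credits the campaign with all 5.5% and falls straight into the correlation trap.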

 

Unified Success Framework

What it is: A standardized, pre-determined framework for content and campaign success measurement that includes definition of campaign types, description of key metrics for those types, and methods of comparing like campaigns on an apples-to-apples basis.

Why it works: You may not be able to make the horse drink, but leading it to water is a good start. A unified success framework puts rigor around success measurement – a critical part of building good analytics culture. On the producer side, it forces the analytics team to make real decisions about what matters and, one hopes, pushes them to prove that proxy measures (such as engagement) are real. On the consumer side, it prevents that most insidious destroyer of analytics culture, the post hoc success analysis. If you can pick your success after the game is over, you’ll always win.
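One way to picture a unified framework is as a pre-registered mapping from campaign type to KPIs that the measurement code refuses to step outside of, making post hoc success-picking mechanically impossible. The campaign types and metric names below are hypothetical examples:

```python
# Hypothetical pre-declared success framework: each campaign type's
# KPIs are fixed before any campaign of that type runs.
FRAMEWORK = {
    "acquisition": {"primary": "cost_per_new_customer", "secondary": ["visit_rate"]},
    "retention":   {"primary": "repeat_purchase_rate",  "secondary": ["email_ctr"]},
}

def score(campaign_type, results):
    """Report the pre-declared primary KPI; anything else is post hoc."""
    spec = FRAMEWORK.get(campaign_type)
    if spec is None:
        raise ValueError(f"No success definition registered for {campaign_type!r}")
    return results[spec["primary"]]

# Apples-to-apples: every acquisition campaign is judged on the same number.
print(score("acquisition", {"cost_per_new_customer": 42.0, "visit_rate": 0.12}))
```

Because like campaigns share a primary KPI, comparisons across campaigns of the same type come for free, and a campaign that "wins" on an undeclared metric simply doesn’t win.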

 

The Enterprise VoC Dashboard

What it is: An enterprise-wide state-of-the-customer dashboard that provides a snapshot and trended look at how customer attitudes are evolving. It should include built in segmentation so that attitudinal views are ALWAYS shown sliced by key customer types with additional segmentation possible.

Why it works: There are so many good things going on here that it’s hard to enumerate them all. First, this type of dashboard is one of the best ways to instill customer-first thinking in the organization. You can’t think customer-first until you know what the customer thinks. Second, this type of dashboard enforces a segmented view of the world. Segmentation is fundamental to critical thinking about digital problems and this sets the table for better questions and better answers in the organization. Third, opinion data is easier to absorb and use than behavioral data, making this type of dashboard particularly valuable for encouraging decision-makers to use analytics.

 

Two-Tiered Segmentation

What it is: A method that creates two-levels of segmentation in the digital channel. The first level is the traditional “who” someone is – whether in terms of persona or business relationship or key demographics. The second level captures “what” they are trying to accomplish. Each customer touch-point can be described in this type of segmentation as the intersection of who a visitor is and what their visit was for.

Why it works: Much like the VoC Dashboard, Two-Tiered Segmentation makes for dramatically better clarity around digital channel decision-making and evaluation of success. Questions like ‘Is our Website successful?’ get morphed into the much more tractable and analyzable question ‘Is our Website successful for this audience trying to do this task?’. That’s a much better question, and a big part of building analytics culture is getting people to ask better questions. This also happens to be the main topic of my book “Measuring the Digital World” and in it you can get a full description of both the power and the methods behind Two-Tiered Segmentation.
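A minimal sketch of what the two-tiered view looks like in practice: every visit is classified by WHO the visitor is and WHAT they came to do, and success is computed per (who, what) cell rather than site-wide. The personas, visit intents and success flags below are illustrative assumptions:

```python
from collections import Counter

# Illustrative visit records: each carries the two segmentation tiers
# plus whether the visit accomplished its task.
visits = [
    {"who": "prospect", "what": "research_product", "succeeded": True},
    {"who": "prospect", "what": "research_product", "succeeded": False},
    {"who": "customer", "what": "get_support",      "succeeded": True},
]

def success_by_segment(visits):
    """Success rate per (who, what) cell instead of one site-wide number."""
    total, wins = Counter(), Counter()
    for v in visits:
        cell = (v["who"], v["what"])
        total[cell] += 1
        wins[cell] += v["succeeded"]
    return {cell: wins[cell] / total[cell] for cell in total}

print(success_by_segment(visits))
# {('prospect', 'research_product'): 0.5, ('customer', 'get_support'): 1.0}
```

The single "is the site successful?" number disappears, replaced by answers to the better question: successful for whom, doing what?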

 

I have more, but I’m going to roll the rest into my next post on building an agile organization since they are all deeply related to the integration of capabilities in the organization. Still, that’s fifteen different tactics for building culture. None of which include mission statements, organizational alignment or C-Level support (okay, Walking the Walk is kind of that but not exactly and I didn’t include it in the fifteen) and none of which will take place in corporate retreats or all-hands conferences. That’s a good thing and makes me believe they might actually work.

Ask yourself this: is it possible to imagine an organization that does even half these things and doesn’t have a great analytics culture? I don’t think it is. Because culture just is the sum of the way your organization works and these are powerful drivers of good analytic thinking. You can imagine an organization that does these things and isn’t friendly, collaborative, responsible, flat, diverse, caring or even innovative. There are all kinds of culture, and good decision-making isn’t the only aspect of culture to care about*. But if you do these things, you will have an organization that makes consistently good decisions.

*Incidentally, if you want to build culture in any of these other ways, you have to think about similar approaches. Astronomers have a clever technique for seeing very faint objects called averted vision. The idea is that you look just to the side of the object if you want to get the most light-gathering power from your eyes. It’s the same with culture. You can’t tackle it head-on by talking about it. You have to build it just a little from the side!