
How to Drive Digital Transformation when You’re Not a Digital Expert: Addressing the Reverse Hierarchy of Understanding

In my last post I described some of the biggest challenges facing a traditional enterprise trying to drive digital transformation. This isn’t just the usual “this stuff is hard” blather – there are real hurdles for the traditional large enterprise trying to do digital well. The pace of change and frictionless competition drive organizations used to winning through “weight of metal,” not agility, crazy. The need for customer-centricity penalizes organizations set up in careful siloes. And these very real hurdles are exacerbated by the way digital often creates poor decision-making in otherwise skilled organizations because of what I termed the reverse hierarchy of understanding.

The reverse hierarchy of understanding is a pretty simple concept. Organizations work best when the most senior folks know the most about the business. When, in other words, knowledge and seniority track. For the most part (and despite the penchant of folks lower down in the organization for thinking otherwise), I think they do track rather well in most companies. That, at least, has been my fairly consistent experience.

There are, of course, many pockets of specialized knowledge in a large company where knowledge and seniority don’t track. The CFO may not be able to drive TM1. The CTO probably doesn’t know Swift. That’s not a problem. However, when something is both strategic and core to the business, it’s critical that knowledge and seniority track appropriately. If they don’t, then it’s hard for the enterprise to make good decisions. The people who are usually empowered to make decisions aren’t as qualified as they typically are, and the folks who have the specific knowledge probably don’t have either the strategic skills or business understanding to fill in. And, of course, they probably don’t have the power either.

Digital can create exactly this inversion in the appropriate hierarchy of decision-making in the traditional enterprise, and it does so at many levels in the organization. Digital has become strategic and core far more rapidly than most large organizations can adapt, creating reverse hierarchies of understanding that can cripple efforts to do digital better.

So if you want to transform a traditional business and you know your organization has a reverse hierarchy of understanding (or maybe just a complete lack of understanding at every level), what do you do?

There’s no one answer, of course. No magic key to unlocking the secret to digital transformation. And I’ve written plenty of stuff previously on ways to do digital better – all of which still applies. But here are some strategies that I think might help – strategies geared toward tackling the specific problem created by reverse hierarchies of understanding.


Incubation

I’m sensitive to the many drawbacks of incubating digital inside a larger organization. If incubation succeeds, it creates long-term integration challenges. It potentially retards the growth of digital expertise in the main business, and it may even cannibalize what digital knowledge there is in the organization. These are all real negatives. Despite that, I’ve seen incubation work fairly effectively as a strategy. Incubation creates a protected pocket in the organization that can be staffed and set up in a way that creates the desired knowledge hierarchy through most levels. Would I always recommend incubation? Absolutely not. In many organizations, years of at least partial learning and transfusions of outside talent have created enough digital savvy that incubation is unnecessary and probably undesirable. But if digital knowledge in your organization is still nascent, and particularly if you have layers of management still skeptical of or hostile to digital, then incubation is a strategy to consider.


Transfusion

And speaking of talent transfusions, the role of appropriate hiring in effectively transforming the organization can hardly be overstated. The best, simplest and most impactful way to address the reverse hierarchy of understanding is to…fix the problem. And the easiest way to fix the problem is by hiring folks with deep digital understanding at multiple levels of the organization. In some cases, of course, this means hiring someone to run digital. If you’re a traditional enterprise looking to hire a chief digital officer, the natural place to look is to organizations that are great at digital – especially the companies that dominate the Web and that we all, rightly, admire. I tell my clients that’s a mistake. It’s not that those folks aren’t really good at digital; they are. What they aren’t good at is digital transformation. If you’ve grown up managing digital platforms and marketing for a digital pure-play, chances are you’re going to be massively frustrated trying to change a traditional enterprise. To drive transformation, you have to be a great coach. That isn’t at all the same as being a great player. In fact, not only isn’t it the same, it’s negatively correlated. The best coaches are almost NEVER the best players.

Getting the right person to lead digital isn’t where most organizations go wrong, though. If you’re committed to digital transformation, you need to look for digital savvy in every hiring decision that is at all related to your digital enterprise. You need digital savvy in HR, in accounting, in analytics, in customer, in supply chain, in branding and corporate communication. Etc. Etc. This is the long game, but it’s ultimately the most important game you’ll play in digital transformation – especially when you’re trying to drive transformation outside of massive disruption. In my last post, I mentioned FDR’s many efforts to prepare the U.S. for WWII before there was any political consensus for war. Every leader is constrained by the realities on the ground. Great leaders find ways to at least lay the essential groundwork for transformation BEFORE – not after – disaster strikes. You need to make sure that digital savvy becomes a basic qualifier for a wide range of positions in your organization.


Analytics

Dare I say that analytics has the potential to play a decisive role in solving the reverse hierarchy of understanding? Well, at the very least, it can be a powerful tool. In a normal hierarchy of understanding, seniority comes pre-loaded with better intuitions – intuitions born of both experience and selection. And those intuitions, naturally, drive better decisions. It’s darn hard to replace those intuitions, but analytics is a great leveler. A good analyst may not be quite the decision-maker that an experienced expert is, but at the very least a good analyst equipped with relevant data will come much closer to that level of competent decisioning than would otherwise be possible.

Thankfully, this works both ways. Where senior decision-makers can’t rely on their experience and knowledge, they, too, benefit from analytics to close the gap. An executive willing to look at analytics and learn may not be quite in the league of an experienced digital expert, but they can come surprisingly close.

This works all up and down the organization.

So how do you get your team using analytics? I addressed this in depth in a series of posts on building analytic culture. Read this and this. It’s good stuff. But here’s a simple management technique that can help drive your whole team to start using analytics. Every time there’s an argument over something, instead of voicing an opinion, ask for the numbers. If your team is debating whether to deliver Feature X or Feature Y in digital, ask questions like “What do our customers say is more important?” or “Which do high-value customers say they’ll use more?”

Ask questions about what gets used more. About whether people like an experience. About whether people who do something are actually more likely to convert. If you keep asking questions, eventually people are going to start getting used to thinking this way and will start asking (and answering) the questions themselves.

Way back in the early days of Semphonic, I often had junior programmers ask me how to do some coding task. At the time, I was still a pretty solid programmer with years of experience writing commercial software in C++. But since I wasn’t actively programming and my memory tends to be a bit short-term, I almost never just knew the answer. Instead, I’d ask Google. Almost always, I could find some code that solved the problem with only a few minutes’ search. Usually, we’d do this together staring at my screen. Eventually, they got the message and bypassed me by looking for code directly on Google.

That’s a win.

Nowadays, programmers do this automatically. But back in the aughts, I had to teach programmers that the easiest way to solve most coding problems is to find examples on Google. In ten years, looking at digital analytics and voice of customer will be second nature throughout your organization. But for right now, if you can make your team do the analytics work to answer the types of questions I’ve outlined above, you’ll have dramatically raised the level of digital sophistication in your organization. This isn’t as foreign to most good enterprise leaders as I used to think. Sure, folks at the top of most companies are used to offering their opinions. But they’re also pretty experienced at having to make decisions in areas where they aren’t expert, and they know that asking questions is a powerful tool for pushing people to demonstrate (or arrive at) understanding. The key is knowing the right questions to ask. In digital, that usually means asking customer-focused questions like the ones I enumerated above.


Consulting

I’m probably too deeply involved in the sausage-making to give good advice on how organizations should use consulting to drive transformation. But here are a few pointers that I think are worth bearing in mind. Consulting is a tempting way to solve a reverse hierarchy of understanding. You can bring in hired guns to build a digital strategy or drive specific digital initiatives. And if you’re lucky or choose wisely, there’s no reason why consultants can’t provide real benefits – helping speed up digital initiatives and supplementing your organizational expertise. I genuinely believe we do this on a pretty consistent basis. Nevertheless, consultants don’t fix the problems created by a reverse hierarchy of understanding; they are, at best, a band-aid. Not only is it too expensive to pay consultants to make your decisions on a continuing basis, it just doesn’t work very well. There are so many reasons why it doesn’t work well that I can attempt only a very partial enumeration: outside of a specific project, your consultant’s KPIs are almost never well aligned with yours (we’re measured by how much stuff we sell); it’s difficult to integrate consultants into a chain of command, and often damaging if you try too hard to do so; consultants can become a crutch for weaker managers; and consultants rarely understand your business well enough to make detailed tactical decisions.

Don’t get me wrong. Building talent internally takes time and there aren’t many traditional enterprises where I wouldn’t honestly recommend the thoughtful use of consulting services to help drive digital transformation. Just don’t lose sight of the fact that most of the work is always going to be yours.


That last sentence probably rings true across every kind of problem! And while digital transformation is legitimately hard and some of the challenges digital presents ARE different, it’s good to keep in mind that in many respects it is just another problem.

I’ve never believed in one “right” organization, and when it comes to digital transformation there are strong arguments both for and against incubation. I think a decision around incubation ultimately comes down to whether digital needs protection or just expertise. If the former, incubation is probably necessary. If the latter, it may not be.

Similarly, we’re all used to the idea that if we need new expertise in an organization we probably have to hire it. But digital introduces two twists. First, the best candidate to lead a digital transformation isn’t necessarily the best digital candidate. Second, real digital transformation doesn’t come just from having a leader or a digital organization. You should bake digital qualifications into hiring at almost every level of your organization. It’s the long game, but it will make a huge difference.

And when it comes to leveling the playing field in the face of a reverse hierarchy of understanding, remember that analytics is your friend. Teaching the organization to use analytics doesn’t require you to be an analytics wizard. It mostly demands that you ask the right questions. Over and over.

Finally, and this really is no different in digital transformation than anywhere else, consulting is kind of like cold medicine – it fixes symptoms but it doesn’t cure the disease. That doesn’t mean I don’t want my bottle of Nyquil handy when I have a cold! It just means I know I won’t wake up all better. The mere fact of a reverse hierarchy of understanding can make over-reliance on consulting a temptation. When you’re used to knowing better than everyone, it’s kind of scary when you don’t. Make sure your digital strategy includes some thought about how to use – and not abuse – your consulting partners (and no, don’t expect that to come from even the best consultants).

Keep these four lessons in mind, and you’re at least halfway to a real strategy for transformation.

Digital Transformation in the Enterprise – Creating Continuous Improvement

I’m writing this post as I fly to London for the Digital Analytics Hub. The Hub is in its fourth year now (two in Berlin and two in London) and I’ve managed to make it every time. Of course, doing these conference/vacations is a bit of a mixed blessing. I really enjoyed my time in Italy, but that was more vacation than conference. The Hub is more conference than vacation – it’s filled with Europe’s top analytics practitioners in deep conversation on analytics. In fact, it’s my favorite analytics conference going right now. And here’s the good news: it’s coming to the States in September! So I have one more of these analytics vacations on my calendar, and that should be the best one of all. If you’re looking for the ultimate analytics experience – an immersion in deep conversation with some of the best analytics practitioners around – you should check it out.

I’ve got three topics I’m bringing to the Hub. Machine Learning for digital analytics, digital analytics forecasting and, of course, the topic at hand today, enterprise digital transformation.

In my last post, I described five initiatives that lay the foundation for analytics-driven digital transformation. Those projects focus on data collection, journey mapping, behavioral segmentation, enterprise Voice of Customer (VoC) and unified marketing measurement. Together, these five initiatives provide a way to think about digital from a customer perspective. The data piece is focused on making sure that data collection to support personalization and segmentation is in place. Journey mapping and behavioral segmentation provide the customer context for every digital touchpoint – why it exists and what it’s supposed to do. The VoC system provides a window into what customers want and need and how they make decisions at every touchpoint. Finally, the marketing framework ensures that digital spend is optimized on an apples-to-apples basis and is focused on the right customers and actions to drive the business.

In a way, these projects are all designed to help the enterprise think and talk intelligently about the digital business. The data collection piece is designed to get organizations thinking about personalization cues in the digital experience. Journey mapping is designed to expand and frame customer experience and place customer thinking at the center of the digital strategy. Two-tiered segmentation serves to get people talking about digital success in terms of customers and their intent. Instead of asking questions like whether a Website is successful, it gets people thinking about whether the Website is successful for a certain type of customer with a specific journey intent. That’s a much better way to think. Similarly, the VoC system is all about getting people to focus on the customer and to realize that analytics can serve decision-making on an ongoing basis. The marketing framework is all about making sure that campaigns and creative are measured against real business goals – set within the customer journey and the behavioral segmentation.

The foundational elements are also designed to help integrate analytics into different parts of the digital business. The data collection piece is targeted toward direct response optimization. Journey mapping is designed to help weld strategic decisions to line manager responsibilities. Behavioral segmentation is focused on line and product managers needing tactical experience optimization. VoC is targeted toward strategic thinking and decision-making, and, of course, the marketing framework is designed to support the campaign and creative teams.

If a way to think and talk intelligently about the digital enterprise and its operations is the first step, what comes next?

All five of the initiatives that I’ve slated into the next phase are about one thing – creating a discipline of continuous improvement in the enterprise. That discipline can’t be built on top of thin air – it only works if your foundation (data, metrics, framework) supports optimization. Once it does, however, the focus should be on taking advantage of that to create continuous improvement.

The first step is massive experimentation via an analytics driven testing plan. This is partly about doing lots of experiments, yes. But even more important is that the experimentation be done as part of an overall optimization plan with tests targeted by behavioral and VoC analytics to specific experiences where the opportunity for improvement is highest. If all you’re thinking about is how many experiments you run, you’re not doing it right. Every type of customer and every part of their journey should have tests targeted toward its improvement.
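
For a concrete flavor of what “targeted by behavioral and VoC analytics” might mean, here’s a minimal sketch that ranks touchpoints by a crude opportunity score before any tests are planned. The segments, numbers and scoring rule are invented purely for illustration:

```python
# Hypothetical opportunity sizing: target tests where traffic is high and
# success is low, rather than counting how many experiments you run.
touchpoints = [
    # (segment, journey stage, weekly visits, success rate)
    ("prospect", "research", 40_000, 0.22),
    ("prospect", "purchase", 12_000, 0.08),
    ("customer", "support",  25_000, 0.61),
    ("customer", "renewal",   6_000, 0.35),
]

def opportunity(visits, success_rate):
    """Crude opportunity score: how many failed visits might be rescued."""
    return visits * (1 - success_rate)

ranked = sorted(touchpoints, key=lambda t: opportunity(t[2], t[3]), reverse=True)
for seg, stage, visits, rate in ranked:
    print(f"{seg}/{stage}: opportunity = {opportunity(visits, rate):,.0f}")
```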

Similarly, on the marketing side, phase II is about optimizing against the unified measurement framework with both mix and control-group testing. Mix (marketing-mix modeling) is a top-down approach that works against your overall spending – regardless of channel type or individual measurement. Control-group testing is nothing more than experimentation in the marketing world. Control groups have been a key part of marketing since the early direct-response days. They’re easier to implement, and more accurate in establishing true lift and incrementality, than mathematical attribution solutions.
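
To make the lift-and-incrementality point concrete, here’s a minimal sketch of how a campaign might be read against a randomized holdout. The function name, group sizes and conversion counts are illustrative assumptions, not real campaign data:

```python
import math

def incremental_lift(treated_conv, treated_n, control_conv, control_n):
    """Estimate incremental lift of a campaign against a randomized holdout."""
    p_t = treated_conv / treated_n          # conversion rate, exposed group
    p_c = control_conv / control_n          # conversion rate, holdout group
    lift = (p_t - p_c) / p_c                # relative lift over baseline
    # Two-proportion z-test to check the lift isn't just noise
    p_pool = (treated_conv + control_conv) / (treated_n + control_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / treated_n + 1 / control_n))
    z = (p_t - p_c) / se
    return lift, z

# Illustrative numbers: 50,000 exposed customers, 5,000 held out
lift, z = incremental_lift(treated_conv=1200, treated_n=50_000,
                           control_conv=95, control_n=5_000)
print(f"incremental lift: {lift:.1%}, z = {z:.2f}")  # z > 1.96 ~ significant at 95%
```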

The drive toward continuous improvement doesn’t end there, however. I’m a big fan of tool-based reporting as a key part of the second phase of analytics-driven transformation. The idea behind tool-based reporting is simple but profound. Instead of reports serving as static, historical tools to describe what happened, the idea is that reports contain embedded predictive models that transform them into tools that can be used to understand the levers of the business and test what might happen under different business strategies. Building tool-based reports for marketing, for product launch, for conversion funnels and for other key digital systems is deeply transformative. I describe this as a shift in the organization from democratizing data to democratizing knowledge. Knowledge is better. But the advantages of tool-based reporting run even deeper. The models embedded in these reports are your best analytic thinking about how the business works. And guess what? They’ll be wrong a lot of the time – and that’s a good thing. It’s a good thing because by making your analytic thinking about how the business works explicit, you’ve created feedback mechanisms in the organization. When things don’t work out the way the model predicts, your analysts will hear about it and have to figure out why and how to do better. That drives continuous improvement in analytics.
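
As a toy illustration of the idea – assuming, purely for the sake of the sketch, a simple linear relationship between spend and signups – a tool-based report embeds the model so a reader can ask “what if?” instead of just reading history:

```python
import numpy as np

# Historical weekly data (illustrative): paid-media spend ($K) and signups
spend   = np.array([50, 60, 55, 70, 80, 75, 90, 100])
signups = np.array([510, 585, 540, 660, 730, 700, 810, 880])

# Embed a simple model in the "report": here, an ordinary least-squares fit
slope, intercept = np.polyfit(spend, signups, 1)

def what_if(planned_spend_k):
    """The report isn't just history -- it answers 'what might happen if...'"""
    return slope * planned_spend_k + intercept

print(f"model: signups ~ {slope:.1f} * spend + {intercept:.0f}")
print(f"predicted signups at $120K spend: {what_if(120):.0f}")
```

When the real week comes in far from the prediction, that gap is exactly the feedback mechanism described above: the analyst has to explain why the embedded model was wrong and improve it.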

A fourth key part of creating the agile enterprise – at least for sites without direct ecommerce – is value-based optimization. One of the great sins in digital measurement is leaving gaps in your ability to measure customers across their journey. I call fixing this “closing measurement loops”. If your digital properties are lead-generating or brand-focused or informational or designed to drive off-channel or off-property (to Amazon or to a call center), it’s much harder to measure whether or not they’re successful. You can measure proxies like content consumption or site satisfaction, but unless these proxies actually track to real outcomes, you’re just fooling yourself. This is important. To be good at digital and to use measurement effectively, every important measurement gap needs to be closed. There’s no one tool or method for closing measurement gaps; instead, a whole lot of different techniques and a bunch of sweat are required. Some of the most common methods for closing measurement gaps include re-survey, panels, device binding and dynamic 800 numbers.

Lastly, a key part of this whole phase is training the organization to think in terms of continuous improvement. That doesn’t happen magically, and while all of the initiatives described here support that transformation, they aren’t, by themselves, enough. In my two posts on building analytics culture, I laid out a fairly straightforward vision of culture. The basic idea is that you build analytics culture by using data and analytics – not by talking about how important data is or how people should behave. In the beginning was the deed.

Creating a constant cadence of analytics-based briefings and discussions forces the organization to think analytically. It forces analysts to understand the questions that are meaningful to the business. It forces decision-makers to reckon with data and lets them experience the power of being able to ask questions and get real answers. Just the imperative of having to say something interesting is good discipline for driving continuous improvement.

Foundational Transformation – Step 2

That’s phase two of enterprise digital transformation. It’s all about baking continuous improvement into the organization and building, on top of each element of the foundation, the never-ending process of getting better.


You might think that’s pretty much all there is to the analytics side of the digital transformation equation. Not so. In my next post, I’ll cover the next phase of analytics transformation – driving big analytics wins. So far, most of what I’ve covered is valid for any enterprise in any industry. But in the next phase, initiatives tend to be quite different depending on your industry and business model.

See you after the Hub!

Gelato was the word I meant

I spent most of the last week on holiday in Italy. But since the holiday was built around a speaking gig at the Be Wizard Digital Marketing conference, I still spent a couple of days talking analytics and digital – a couple of days I thoroughly enjoyed. The conference closed with a Q&A for a small group of speakers, and while I got a few real analytics questions, it felt more like a meet-and-greet – with plenty of puff-ball questions like “what word would you use to describe the conference?” A question I failed miserably, with the very pathetic answer “fun”.

I guess that’s why it’s better to ask me analytics questions.

The word I probably should have chosen is “gelato”.

And not just because I hogged down my usual totally ridiculous amount of fragola, melone, cioccolato, and pesca – scoop by scoop from Rimini to Venice.

Gelato because I had a series of rich conversations with Mat Sweezey from Salesforce (née Pardot), who gave a terrific presentation on authenticity and what it means in this new digital marketing world. It’s easy to forget how dramatically digital has changed marketing and to miss some of the really important lessons from those changes. Mat also showed me a presentation on agile that blends beautifully with the digital transformation story I’ve been trying to tell in the last six months. It’s a terrific deck, with some slides that explain why test-and-learn and agile methods work so much better than traditional methods. It’s a presentation with the signal virtue of taking very difficult concepts and making them not just clear but compelling. That’s hard to do well.

Gelato because I also talked with and enjoyed a great presentation from Chris Anderson of Cornell. Chris led a two-hour workshop in the revenue management track (which happens to be a kind of side interest of mine). His presentation focused on how social media content on sites like TripAdvisor affects room pricing strategies. He’s done several compelling research projects with OTAs (Online Travel Agents) looking at the influence of social media content on buying decisions. His research has looked at the key variables that drive influence (number of reviews and rating), how sensitive demand is to those factors, and how that sensitivity plays out by hotel class (it turns out that the riskier the lodging decision, the more impactful social reviews are). He’s also looked at review response strategies on TripAdvisor and has some compelling research showing how review response can significantly improve ratings outcomes – but also how it’s possible to over-respond. Respond to everything, and you actually do worse than if you respond to nothing.

That’s a fascinating finding and very much in keeping with Mat’s arguments around authenticity. If you make responding to every social media post a corporate policy, what you say is necessarily going to sound forced and artificial.

That’s why it doesn’t work.

If you’re in the hospitality industry, you should see this presentation. In fact, there are lessons here for any company interested in the impact of reviews and social content and interested in taking a more strategic view of social outreach and branding. I think Chris’ data suggest significant and largely unexplored opportunities for both better revenue management decisions around OTA pricing and better strategies around the review ask.

Gelato because there was one question I didn’t get to answer that I wanted to (and somehow no matter how much gelato I consume I always want a little more).

Since I had to have translations of the panel questions at the end, I didn’t always get a chance to respond. Sometimes the discussion had moved on by the time I understood the question! And one of the questions – how can companies compete with publishers when it comes to content creation – seemed to me deeply related to both Mat and Chris’ presentations.

Here’s the question as I remember it:

If you’re a manufacturer or a hotel chain or a retailer, all you ever hear in digital marketing is how content is king. But you’re not a content company. So how do you compete?

The old-fashioned way is to hire an agency to write some content for you. That’s not going to work. You won’t have enough content, you’ll have to pay a lot for it, and it won’t be any good. To Mat’s point around authenticity, you’re not going to fool people. You’re not going to convince them that your content isn’t corporate, mass-produced, ad agency hack-work. Because it is and because people aren’t stupid. Building a personalization strategy to make bad content more relevant isn’t going to help much either. That’s why you don’t make it a corporate policy to reply to every review and why you don’t write replies from a central team of ad writers.

Stop trying to play by the old rules.

Make sure your customer relations staff, front-desk folks, and managers understand how to build relationships through social media, and give them the tools to do it. If you want authentic content, find your evangelists – people who actually make, design, support or use your products. Give them a forum. A real one. And turn them loose. Find ways to encourage them. Find ways to magnify their voice. But turn them loose.

You can’t have it both ways. You can’t be authentic while you wrap every message in Madison Avenue gift wrapping bought from the clever folks at your ad agency. Check out Mat’s presentation (he’s a Slideshare phenom). Think about the implications of unlimited content and the ways we filter. Process the implications. The world has changed, and the worst strategy in the world is to keep doing things the old way.

So gelato because the Be Wizard conference, like Italy in general, was rich, sweet, cool and left me wanting to hear (and say) a bit more!

And speaking of conferences, we’re not that far away from my second European holiday with analytics baked in – the Digital Analytics Hub in London (early June). I’ve been to DA Hub several years running now – ever since two old friends of mine started it. It’s an all-conversational conference modeled on X Change, and it’s always one of the highlights of my year. In addition to facilitating a couple of conversations, I’m also going to be leading a very deep-dive workshop on digital forecasting. I plan to walk through forecasting from the simplest sort of forecast (everything will stay the same) to increasingly advanced techniques that rely first on averages and smoothing, and then on models. If you’re thinking about forecasting, I really think this workshop will be worth the whole conference (and the Hub is always great anyway)…
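
For a flavor of that ladder, here’s a compact sketch moving from a naive forecast to smoothing to a simple trend model. The traffic numbers and the smoothing constant are illustrative only, and real forecasting work would validate each step against holdout periods:

```python
import numpy as np

# Illustrative daily visit counts
visits = np.array([120, 132, 128, 141, 150, 147, 160, 158], dtype=float)

# Level 1 -- naive: tomorrow looks like today
naive = visits[-1]

# Level 2 -- averages: a moving average damps day-to-day noise
moving_avg = visits[-4:].mean()

# Level 2 (continued) -- exponential smoothing: recent days weigh more
alpha, level = 0.3, visits[0]
for v in visits[1:]:
    level = alpha * v + (1 - alpha) * level
smoothed = level

# Level 3 -- a model: fit a trend line and extrapolate one step ahead
t = np.arange(len(visits))
slope, intercept = np.polyfit(t, visits, 1)
trend = slope * len(visits) + intercept

print(naive, round(moving_avg, 1), round(smoothed, 1), round(trend, 1))
```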

If you’ve got a chance to be in London in early June, don’t miss the Hub.

The Agile Organization

I’ve been meandering through an extended series on digital transformation: why it’s hard, where things go wrong, and what you need to be able to do to be successful. In this post, I intend to summarize some of that thinking and describe how the large enterprise should organize itself to be good at digital.

Throughout this series, I’ve emphasized the importance of being able to make good decisions in the digital realm. That is, of course, the function of analytics, and it’s my own special concern when it comes to digital. But there are people who will point out that decision-making is not the be-all and end-all of digital excellence. They might suggest that being able to execute is important too.

If you’re a football fan, it’s easy to see the dramatic difference between Peyton Manning – possibly the finest on-field decision-maker in the history of the game – with a good arm and without. It’s one thing to know where to throw the ball on any given play, quite another to be able to get it there accurately. If that wasn’t the case, it’s probably true that many of my readers would be making millions in the NFL!

On the other hand, this divide between decision-making and execution tends to break down if you extend your view to the entire organization. If the GM is doing the job properly, then the decision about which quarterbacks to draft or sign will appropriately balance their physical and decision-making skills. That’s part of what’s involved in good GM decisioning. Meanwhile, the coach has an identical responsibility on a day-to-day basis. A foot injury may limit Peyton to the point where his backup becomes a better option. Then it may heal and the pendulum swings back. The organization makes a series of decisions and if it can make all of those decisions well, then it’s hard to see how execution doesn’t follow along.

If, as an organization, I can make good decisions about the strategy for digital, the technology to run it on, the agencies to build it, the people to optimize it, the way to organize it, and the tactics to drive it, then everything is likely to be pretty good.

Unfortunately, it’s simply not the case that the analytics, organization and capabilities necessary to make good decisions across all these areas are remotely similar. To return to my football analogy, it’s clear that very few organizations are set up to make good decisions in every aspect of their operations. Some organizations excel at particular functions (like game-planning) but are very poor at drafting. Indeed, sometimes success in one area breeds disaster in another. When a coach like Chip Kelly becomes very successful in his role, there is a tendency for the organization to expand that role so that the coach has increasing control over personnel. This almost always works badly in practice. Even knowing it will work badly doesn’t prevent the problem. Since the coach is so important, an organization may cede much control over personnel to a successful coach even when everyone (except the coach) believes it’s a bad idea.

If you don’t think similar situations arise constantly in corporate America, you aren’t paying attention.

In my posts in this series, I’ve mapped out the capabilities necessary to give decision-makers the information and capabilities they need to make good decisions about digital experiences. I haven’t touched on (and don’t really intend to touch on) broader themes like deciding who the right people to hire are or what kind of measurement, analysis or knowledge is necessary to make those sorts of meta-decisions.

There are two respects, however, in which I have tried to address at least some of these meta-concerns about execution. First, I’ve described why it is and how it comes to pass that most enterprises don’t use analytics to support strategic decision-making. This seems like a clear miss and a place where thoughtful implementation of good measurement, particularly voice-of-customer measurement of the type I’ve described, should yield high returns.

Second, I took a stab at describing how organizations can think about and work toward building an analytics culture. In these two posts, I argue that most attempts at culture-building approach the problem backwards. The most common culture-building activities in the enterprise are all about “talk”. We talk about diversity. We talk about ethics. We talk about being data-driven in our decision-making. I don’t think this talk adds up to much. I suggest that culture is formed far more through habit than talk; that if an organization wants to build an analytics culture, it needs to find ways to “do” analytics. The word may precede the deed, but it is only through the force of the deed (good habits) that the word becomes character/culture. This may seem somewhat obvious – no, it is obvious – but people somehow manage to miss the obvious far too often. Those posts don’t just formulate the obvious; they also suggest a set of activities that are particularly efficacious in creating good enterprise habits of decision-making. If you care about enterprise culture and you haven’t already done so, give them a read.

For some folks, however, all these analytics actions miss the key questions. They don’t want to know what the organization should do. They want to know how the organization should work. Who owns digital? Who owns analytics? What lives in a central organization? What lives in a business unit? Is digital a capability or a department?

In the context of the small company, most of these questions aren’t terribly important. In the large enterprise, they mean a lot. But acknowledging that they mean a lot isn’t to suggest that I can answer them – or at least most of them.

I’m skeptical that there is an answer for most of these questions. At least in the abstract, I doubt there is one right organization for digital or one right degree of centralization. I’ve had many conversations with wise folks who recognize that their organizations seem to be in constant motion – swinging like an enormous pendulum between extremes of centralization followed by extremes of decentralization.

Even this peripatetic motion – which can look so irrational from the inside – may make sense. If we assume that centralization and decentralization have distinct advantages, then changing circumstances might drive a change in the optimal configuration, and swinging the organization from one pole to the other might even help capture the benefits of each.

That seems unlikely, but you never know. There is sometimes more logic in the seemingly irrational movements of the crowd than we might first imagine.

Most questions about digital organization are deeply historical. They depend on what type of company you are, in what kind of market, with what culture and what strategic imperatives. All of which is, of course, Management 101 – obvious stuff that hardly needs to be stated.

However, there are some aspects of digital about which I am willing to be more directive. First, some balance between centralization and decentralization is essential in analytics. The imperative for centralization is driven by these factors: the need for comparative metrics of success around digital, the need for consistent data collection, the imperatives of the latest generation of highly complex IT systems, and the need/desire to address customers across the full spectrum of their engagement with the enterprise. Of these, the first and the last are primary. If you don’t need those two, then you may not care about consistent data collection or centralized data systems (this last is debatable).

On the other hand, there are powerful reasons for decentralization of which the biggest is simply that analytics is best done as close to the decision-making as possible. Before the advent of Hadoop, I would have suggested that the vast majority of analytics resources in the digital space be decentralized. Hadoop makes that much harder. The skills are much rarer, the demands for control and governance much higher, and the need for cross-domain expertise much greater in this new world.

That will change. As the open-source analytics stack matures and the market over-rewards skilled practitioners (drawing in more folks), it will become much easier to decentralize again. This isn’t the first time we’ve been down the IT path that runs from centralization to gradual diffusion as technologies become cheaper, easier, and better supported.

At an even more fundamental level than the question of centralization lives the location and nature of digital. Is digital treated as a thing? Is it part of Marketing? Or Operations? Or does each thing have a digital component?

I know I should have more of an opinion about this, but I’m afraid the right answers seem to me, once again, to be local and historical. In a digital pure-play, to even speak of digital as a thing seems absurd. It’s the core of the company. In a gas company, on the other hand, digital might best be viewed as a customer service channel. In a manufacturer, digital might be a sub-function of brand marketing or, depending on the nature of the digital investment and its importance to the company, a unit unto itself.

Obviously, one of the huge disadvantages of thinking of digital as a unit unto itself is the difficulty of interacting correctly with the non-digital functions that share the same purpose. If you have digital customer servicing and non-digital customer servicing, does it really make sense to have one in a digital department and the other in a customer-service department?

There is a case, however, for incubating digital capabilities within a small, compact, standalone entity that can protect and nourish the digital investment with a distinct culture and resourcing model. I get that. Ultimately, though, it seems to me that unless digital OWNS an entire function, separating that function across digital and non-digital lines is arbitrary and likely to be ineffective in an omni-channel world.

But here’s the flip side. If you have a single digital property and it shares marketing and customer support functions, how do you allocate real estate, and who gets to determine key things like site structure? I’ve seen organizations where everything but the homepage is owned by somebody and the home page is like Oliver Twist: “Home page for sale, does anybody want one?”

That’s not optimal.

So the more overlap there needs to be between the functions and your digital properties, the more incentive you have to build a purely digital organization.

No matter what structure you pick, there are some trade-offs you’re going to have to live with. That’s part of why there is no magic answer to the right organization.

But far more important than the precise balance you strike around centralization or even where you put digital is the way you organize the core capabilities that belong to digital. Here, the vast majority of enterprises organize along the same general lines. Digital comprises some rough set of capabilities including:

  • IT
  • Creative
  • Marketing
  • Customer
  • UX
  • Analytics
  • Testing
  • VoC

In almost every company I work with, each of these capabilities is instantiated as a separate team. In most organizations, the IT folks are in a completely different reporting structure all the way up. There is no unification till you hit the C-Suite. Often, Marketing and Creative are unified. In some organizations, all of the research functions are unified (VoC, analytics) – sometimes under Customer, sometimes not. UX and Testing can wind up almost anywhere. They typically live under the Marketing department, but they can also live under a Research or Customer function.

None of this, to me, makes any sense.

To do digital well requires a deep integration of these capabilities. What’s more, it requires that these teams work together on a consistent basis. That’s not the way it’s mostly done.

Almost every enterprise I see not only siloes these capabilities, but puts in place budgetary processes that fund each digital asset as a one-time investment and that require pass-offs between teams.

That’s probably not entirely clear so let me give some concrete examples.

You want to launch a new website. You hire an agency to design the Website. Then your internal IT team builds it. Now the agency goes away. The folks who designed the website no longer have anything to do with it. What’s more, the folks who built it get rotated onto the next project. Sometimes, that’s all that happens. The website just sits there – unimproved. Sometimes the measurement team will now pick it up. Keep in mind that the measurement team almost never had anything to do with the design of the site in the first place. They are just there to report on it. Still, they measure it and if they find some problem, who do they give it to?

Well, maybe they pass it on to the UX team or the testing team. Those teams, neither of which has ever worked with the website or had anything to do with its design, are now responsible for implementing changes to it. And, of course, they will be working with developers who had nothing to do with building it.

Meanwhile, on an entirely separate track, the customer team may be designing a broader experience that involves that website. They enlist the VoC team to survey the site’s users and find out what they don’t like about it. Neither team (of course) had anything to do with designing or building the site.

If they come to some conclusion about what they want the site to do, they work with another(!) team of developers to implement their changes. That these changes may be at cross-purposes to the UX team’s changes or the original design intent is neither here nor there.

Does any of this make sense?

If you take continuous improvement to heart (and you should because it is the key to digital excellence), you need to realize that almost everything about the way your digital organization functions is wrong. You budget wrong and you organize wrong.

[Check out my relatively short (20 min) video on digital transformation and analytics organization – it’s the perfect medium for distributing this message through your enterprise!]

Here’s my simple rule about building digital assets. If it’s worth doing, it’s worth improving. Nothing you build will ever be right the first time. Accept that. Embrace it. That means you budget digital teams to build AND improve something. Those teams don’t go away. They don’t rotate. And they include ALL of the capabilities you need to successfully deliver digital experiences. Your developers don’t rotate off, your designers don’t go away, your VoC folks aren’t living in a parallel universe.

When you do things this way, you embed a commitment to continuous improvement deep in your core organizational processes. It almost forces you to do things right. All those folks in IT and creative will demand analytics and tests to run, or they won’t have anything to do.

That’s a good thing.

This type of vertical integration of digital capabilities is far, far more important than the balance around centralization or even the home for digital. Yet it gets far less attention in most enterprise strategic discussions.

The existence or lack of this vertical integration is the single most important factor in driving analytics into digital. Do it right, and you’ll do it well. Do what everyone else does and…well…it won’t be so good.

Building Analytics Culture – One Decision at a Time

In my last post, I argued that much of what passes for “building culture” in corporate America is worthless. It’s all about talk. And whether that talk is about diversity, ethics or analytics, it’s equally arid. Because you don’t build culture by talking. You build culture through actions. By doing things right (or wrong, if that’s the kind of culture you want). Not only are words ineffective in building culture, they can be positively toxic. When words and actions don’t align, the dishonesty casts other – possibly more meaningful – words into disrepute. Think about which is worse: a culture where bribery is simply the accepted and normal way of getting things done (and is cheerfully acknowledged), or one where bribery is ubiquitous but cloaked behind constant protestations of disinterest and honesty? If you’re not sure about your answer, take it down to a personal level and ask yourself the same question. Do we not like an honest villain better than a hypocrite? If hypocrisy is the compliment vice pays to virtue, it is a particularly nasty form of flattery.

What this means is that you can’t build an analytics culture by telling people to be data driven. You can’t build an analytics culture by touting the virtues of analysis. You can’t even build an analytics culture by hiring analysts. You build an analytics culture by making good (data-driven) decisions.

That’s the only way.

But how do you get an organization to make data-driven decisions? That’s the art of building culture. And in that last post, I laid out seven (a baker’s half-dozen?) tactics for building good decision-making habits: analytic reporting, analytics briefing sessions, hiring a C-Suite analytics advisor, creating measurement standards, building a rich meta-data system for campaigns and content, creating a rapid VoC capability and embracing a continuous improvement methodology like SPEED.

These aren’t just random parts of making analytic decisions. They are tactics that seem to me particularly effective in driving good habits in the organization and building the right kind of culture. But seven tactics don’t nearly exhaust my list. Here’s another set of techniques that are equally important in helping drive good decision-making in the organization (my original list wasn’t in any particular order, so it’s not as if the previous list had all the important stuff):

Yearly Agency Performance Measurement and Reviews

What it is: Having an independent annual analysis of your agency’s performance. This should include review of goals and metrics, consideration of the appropriateness of KPIs and analysis of variation in campaign performance along three dimensions (inside the campaign by element, over time, and across campaigns). This must not be done by the agency itself (duh!) or by the owners of the relationship.

Why it builds culture: Most agencies work by building strong personal relationships. There are times and ways that this can work in your favor, but from a cultural perspective it both limits and discourages analytic thinking. I see many enterprises where the agency is so strongly entrenched you literally cannot criticize them. Not only does the resulting marketing nearly always suck, but this drains the life out of an analytics culture. This is one of many ways in which building an analytic culture can conflict with other goals, but here I definitely believe analytics should win. You don’t need a too cozy relationship with your agency. You do need objective measurement of their performance.


Analytics Annotation / Collaboration Tool like Insight Rocket

What it is: A tool that provides a method for rich data annotation and the creation and distribution of analytic stories across the analytics team and into the organization. In Analytic Reporting, I argued for a focus on democratizing knowledge not data. Tools like Insight Rocket are a part of that strategy, since they provide a way to create and rapidly disseminate a layer of meaning on top of powerful data exploration tools like Tableau.

Why it builds culture: There aren’t that many places where technology makes much difference to culture, but there are a few. As some of my other suggestions make clear, you get better analytics culture the more you drive analytics across and into the organization (analytic reporting, C-Suite advisor, SPEED, etc.). Tools like Insight Rocket have three virtues: they help disseminate analytic thinking, not just data; they boost analytics collaboration, making for better analytic teams; and they provide a repository of analytics, which increases long-term leverage in the enterprise. Oh, and here’s a fourth advantage: they force analysts to tell stories – meaning they have to engage with the business. That makes this piece of technology a really nice complement to my suggestion about a regular cadence of analytics briefings, and a rare instance of technology deepening culture.


In-sourcing

What it is: Building analytics expertise internally instead of hiring it out and, most especially, instead of off-shoring it.

Why it builds culture: I’d be the last person to tell you that consulting shouldn’t have a role in the large enterprise. I’ve been a consultant for most of my working life. But we routinely advise our clients to change the way they think about consulting – to use it not as a replacement for an internal capability but as a bootstrap and supplement to that capability. If analytics is core to digital (and it is) and if digital is core to your business (which it probably is), then you need analytics to be part of your internal capability. Having strong, capable, influential on-shore employees who are analysts is absolutely necessary to analytics culture. I’ll add that while off-shoring, too, has a role, it’s a far more effective culture killer than normal consulting. Off-shoring creates a sharp divide between the analyst and the business that is fatal to good performance and good culture on EITHER side.


Learning-based Testing Plan

What it is: Testing plans that include significant focus on developing best design practices and resolving political issues instead of on micro-optimizations of the funnel.

Why it works: Testing is a way to make decisions. But as long as its primary use is to decide whether to show image A or image B, or a button in this color or that color, it will never be used properly. To illustrate learning-based testing, I’ve used the example of video integration – testing different methods of on-page video integration, different lengths, different content types and different placements against each key segment and use-case to determine UI parameters for ALL future videos. When you test this way, you resolve hundreds of future questions and save endless future debate about what to do with this or that video. That’s learning-based testing. It’s also about picking key places in the organization where political battles determine design – things like home-page real estate and the amount of advertising load on a page – and resolving them with testing; that’s learning-based testing, too. Learning-based testing builds culture in two ways. First, in and of itself, it drives analytic decision-making. Almost as important, it demonstrates the proper role of experimentation and should set the table for decision-makers to ask for more interesting tests.
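
A sketch of what such a plan might look like as data. The dimensions and values here are hypothetical; the point is that design options are crossed with segments and use-cases, rather than tested one page element at a time:

```python
from itertools import product

# Dimensions of the video-integration question (values are illustrative)
segments   = ["new visitor", "returning customer", "researcher"]
placements = ["inline", "sidebar", "modal"]
lengths    = ["30s", "90s", "3min"]

# A learning-based plan crosses design options with segments and use-cases,
# so results generalize to ALL future videos -- not one page element.
test_plan = [
    {"segment": seg, "placement": pl, "length": ln}
    for seg, pl, ln in product(segments, placements, lengths)
]
print(f"{len(test_plan)} cells to cover")   # 27 cells; prioritize by traffic
print(test_plan[0])
```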


Control Groups

What it is: Use of control groups to measure effectiveness whenever new programs (operational or marketing) are implemented. Control groups use small population subsets, chosen randomly from a target population, who are given either no experience or a neutral (existing) experience instead. Nearly all tests feature a baseline control group as part of the test, but the use of control groups transcends A/B testing tools. Control groups are common in traditional direct response marketing and can be used in a wide variety of online and offline contexts (most especially, as I recently saw Elea Feit of Drexel hammer home at the DAA Symposium, as a much more effective approach to attribution).

Why it works: One of the real barriers to building culture is a classic problem in education. When you first teach students something, they almost invariably use it poorly. That can sour others on the value of the knowledge itself. When people in an organization first start using analytics, they are, quite inevitably, going to fall into the correlation trap. Correlation is not causation. But in many cases, it sure looks like it is and this leads to many, many bad decisions. How to prevent the most common error in analytics? Control groups. Control groups build culture because they get decision-makers thinking the right way about measurement and because they protect the organization from mistakes that will otherwise sour the culture on analytics.
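
Here’s a minimal sketch of the mechanism that makes control groups trustworthy: deterministic random assignment of a small holdout. The holdout percentage, function name and ids are illustrative assumptions:

```python
import hashlib

HOLDOUT_PCT = 5  # small random subset kept on the existing experience

def assign(customer_id: str, program: str) -> str:
    """Deterministic random assignment: hashing the id means a customer
    always lands in the same group for a given program."""
    digest = hashlib.sha256(f"{program}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 100
    return "control" if bucket < HOLDOUT_PCT else "treatment"

# Because assignment is random, a later difference in outcomes between the
# groups can be read causally -- no correlation trap.
print(assign("cust-000123", "loyalty-relaunch"))
```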


Unified Success Framework

What it is: A standardized, pre-determined framework for content and campaign success measurement that includes definition of campaign types, description of key metrics for those types, and methods of comparing like campaigns on an apples-to-apples basis.

Why it works: You may not be able to make the horse drink, but leading it to water is a good start. A unified success framework puts rigor around success measurement – a critical part of building good analytics culture. On the producer side, it forces the analytics team to make real decisions about what matters and, one hopes, pushes them to prove that proxy measures (such as engagement) are real. On the consumer side, it prevents that most insidious destroyer of analytics culture, the post hoc success analysis. If you can pick your success after the game is over, you’ll always win.
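
In code form, such a framework might be nothing fancier than a pre-declared mapping from campaign type to KPIs. The campaign types and metric names below are purely illustrative:

```python
# Illustrative skeleton: campaign types, their KPIs, declared before launch
SUCCESS_FRAMEWORK = {
    "acquisition": {
        "primary_kpi": "cost_per_new_customer",
        "supporting": ["qualified_visit_rate", "new_customer_90day_value"],
    },
    "retention": {
        "primary_kpi": "incremental_repeat_rate",   # vs. a control group
        "supporting": ["email_opt_out_rate"],
    },
    "brand": {
        "primary_kpi": "aided_awareness_lift",      # from survey / VoC
        "supporting": ["branded_search_volume"],
    },
}

def kpis_for(campaign_type: str) -> dict:
    """Success is declared before the game -- no post hoc cherry-picking."""
    return SUCCESS_FRAMEWORK[campaign_type]

print(kpis_for("retention")["primary_kpi"])
```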


The Enterprise VoC Dashboard

What it is: An enterprise-wide state-of-the-customer dashboard that provides a snapshot and trended look at how customer attitudes are evolving. It should include built-in segmentation so that attitudinal views are ALWAYS shown sliced by key customer types, with additional segmentation possible.

Why it works: There are so many good things going on here that it’s hard to enumerate them all. First, this type of dashboard is one of the best ways to instill customer-first thinking in the organization. You can’t think customer-first until you know what the customer thinks. Second, this type of dashboard enforces a segmented view of the world. Segmentation is fundamental to critical thinking about digital problems, and this sets the table for better questions and better answers in the organization. Third, opinion data is easier to absorb and use than behavioral data, making this type of dashboard particularly valuable for encouraging decision-makers to use analytics.


Two-Tiered Segmentation

What it is: A method that creates two levels of segmentation in the digital channel. The first level is the traditional “who” someone is – whether in terms of persona or business relationship or key demographics. The second level captures “what” they are trying to accomplish. Each customer touch-point can be described in this type of segmentation as the intersection of who a visitor is and what their visit was for.

Why it works: Much like the VoC Dashboard, Two-Tiered Segmentation makes for dramatically better clarity around digital-channel decision-making and evaluation of success. Questions like ‘Is our Website successful?’ get morphed into the much more tractable and analyzable question ‘Is our Website successful for this audience trying to do this task?’ That’s a much better question, and a big part of building analytics culture is getting people to ask better questions. This also happens to be the main topic of my book “Measuring the Digital World”, in which you can get a full description of both the power and the methods behind Two-Tiered Segmentation.
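
A tiny sketch of the idea with pandas, with segments and success flags invented for illustration: success is always read at the intersection of who the visitor is and what they came to do:

```python
import pandas as pd

# Illustrative visit-level data: who the visitor is x what they came to do
visits = pd.DataFrame({
    "who":  ["prospect", "prospect", "customer", "customer", "customer"],
    "what": ["research", "buy",      "buy",      "support",  "support"],
    "succeeded": [1, 0, 1, 1, 0],
})

# "Is the site successful?" becomes "successful for whom, doing what?"
success_by_segment = (
    visits.groupby(["who", "what"])["succeeded"]
          .agg(rate="mean", n="count")
)
print(success_by_segment)
```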

 

I have more, but I'm going to roll the rest into my next post on building an agile organization since they are all deeply related to the integration of capabilities in the organization. Still, that's fifteen different tactics for building culture – none of which include mission statements, organizational alignment or C-Level support (okay, Walking the Walk is kind of that, but not exactly, and I didn't include it in the fifteen), and none of which will take place in corporate retreats or all-hands conferences. That's a good thing and makes me believe they might actually work.

Ask yourself this: is it possible to imagine an organization that does even half these things and doesn't have a great analytics culture? I don't think it is. Because culture just is the sum of the way your organization works, and these are powerful drivers of good analytic thinking. You can imagine an organization that does these things and isn't friendly, collaborative, responsible, flat, diverse, caring or even innovative. There are all kinds of culture, and good decision-making isn't the only aspect of culture to care about*. But if you do these things, you will have an organization that makes consistently good decisions.

*Incidentally, if you want to build culture in any of these other ways, you have to think about similar approaches. Astronomers have a clever technique for seeing very faint objects called averted vision. The idea is that you look just to the side of the object if you want to get the most light-gathering power from your eyes. It’s the same with culture. You can’t tackle it head-on by talking about it. You have to build it just a little from the side!

Continuous Improvement

Is it a Method or a Platitude?

What does it take to be good at digital? The ability to make good decisions, of course. If you run a pro football team and you make consistently good decisions about players and about coaches, and they, in turn, make consistently good decisions about preparation and plays, you'll be successful. Most organizations aren't set up to make good decisions in digital. They don't have the right information to drive strategic decisions and they often lack the right processes to make good tactical decisions. I've highlighted four capabilities that must be knitted together to drive consistently good decisions in the digital realm: comprehensive customer journey mapping, analytics support at every level of the organization, aggressive controlled experimentation targeted to decision-support, and constant voice of customer research. For most organizations, none of these capabilities are well-baked and it's rare that even a very good organization is excellent at more than two of these capabilities.

The Essentials for Digital Transformation

There’s a fifth spoke of this wheel, however, that isn’t so much a capability as an approach. That’s not so completely different from the others as it might seem. After all, almost every enterprise I see has a digital analytics department, a VoC capability, a customer journey map, and an A/B Testing team. In previous posts, I’ve highlighted how those capabilities are mis-used, mis-deployed or simply misunderstood. Which makes for a pretty big miss. So it’s very much true that a better approach underlies all of these capabilities. When I talk about continuous improvement, it’s not a capability at all. There’s no there, there. It’s just an approach. Yet it’s an approach that, taken seriously, can help weld these other four capabilities into a coherent whole.

The doctrine of continuous improvement is not new – in digital or elsewhere. It has a long and proven track record and it’s one of the few industry best practices with which I am in whole-hearted agreement. Too often, however, continuous improvement is treated as an empty platitude, not a method. It’s interpreted as a squishy injunction that we should always try to get better. Rah! Rah!

No.

Taken this way, it's as contentless as interpreting evolutionary theory as survival of the fittest. Those most likely to survive are…those most likely to survive. It is the mechanism of natural selection coupled with genetic variation and mutation that gives content to evolutionary doctrine. In other words, without a process for deciding what's fittest and a method of transmitting that fitness across generations, evolutionary theory would be a contentless tautology. The idea of continuous improvement, too, needs a method. Everybody wants to get better all the time; there has to be a real process behind it to make that ambition meaningful.

There are such processes, of course. Techniques like Six Sigma famously elaborate a specific method to drive continuous improvement in manufacturing processes. Unfortunately, Six Sigma isn’t directly transferable to digital analytics. We lack the critical optimization variable (defects) against which these methods work. Nor does it work to simply substitute a variable like conversion rate for defects because we lack the controlled environment necessary to believe that every customer should convert.

If Six Sigma doesn’t translate directly into digital analytics, that doesn’t mean we can’t learn from it and cadge some good ideas, though. Here are the core ideas that drive continuous improvement in digital, many of which are rooted in formal continuous improvement methodologies:

  1. It’s much easier to measure a single, specific change than a huge number of simultaneous changes. A website or mobile app is a complex set of interconnecting pieces. If you change your home page, for example, you change the dynamics of every use-case on the site. This may benefit some users and disadvantage others; it may improve one page’s performance and harm another’s. When you change an entire website at once, it’s incredibly difficult to isolate which elements improved and which didn’t. Only the holistic performance of the system can be measured on a before and after basis – and even that can be challenging if new functionality has been introduced. The more discrete and isolated a change, the easier it is to measure its true impact on the system.
  2. Where changes are specific and local, micro-conversion analytics can generally be used to assess improvement. Where changes are numerous or the impact non-local, then a controlled environment is necessary to measure improvement. A true controlled environment in digital is generally impossible but can be effectively replicated via controlled experimentation (such as A/B testing or hold-outs).
  3. Continuous improvement can be driven on a segmented or site-wide basis. Improvements that are site-wide are typically focused on reducing friction. Segmentation improvements are focused on optimizing the conversation with specific populations. Both types of improvement cycles must be addressed in any comprehensive program.
  4. Digital performance is driven by two different systems (acquisition of traffic and content performance). Though the two systems can be operated independently, it's impossible to measure the performance of either without measuring their interdependencies. Content performance is ALWAYS relative to the mix of audience created by the acquisition systems. This dependency is even tighter in closed-loop systems like Search Engine Optimization – where the content of the page heavily determines the nature of the traffic sent AND the performance of that traffic once sourced (though the two can diverge: the best SEO-optimized page may be a very poor content performer even though it's sourcing its own traffic).
  5. Marketing performance is a function of four things: the type of audience sourced, the use-case of the audience sourced, the pre-qualification of the audience sourced and the target content to which the audience is sourced. Continuous improvement must target all four factors to be effective.
  6. Content performance is relative to function, audience and use-case. Some content changes will be directly negative or positive (friction-causing or friction-reducing), but most will shift the distribution of behaviors. Because most impacts are shifts in the distribution of use-cases or journeys, it's essential that the relative value of alternative paths be understood when applying continuous improvement (see the sketch just after this list).
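Here's a small sketch of that last idea: a content change that leaves conversion untouched but still wins because it shifts visitors from expensive paths to cheap ones. The path values and shares are invented for illustration:

```python
# Idea #6 in miniature: most content changes don't simply raise or lower
# conversion, they shift the distribution of journeys. To judge a change,
# each alternative path needs a value. Values and shares are invented.
path_value = {"self_service": 4.0, "call_center": -6.0, "purchase": 60.0,
              "exit": 0.0}

def expected_value(path_share: dict) -> float:
    """Value-weight a distribution of visit outcomes."""
    return sum(share * path_value[path] for path, share in path_share.items())

before = {"self_service": 0.30, "call_center": 0.15, "purchase": 0.05,
          "exit": 0.50}
after  = {"self_service": 0.40, "call_center": 0.08, "purchase": 0.05,
          "exit": 0.47}

# Conversion (purchase) is unchanged, but the change is still a win
# because it moved visitors from calls to self-service.
print(expected_value(after) - expected_value(before))
```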

These are core ideas, not a formal process. In my next post, I’ll take a shot at translating them into a formal process for digital improvement. I’m not really confident how tightly I can describe that process, but I am confident that it will capture something rather different than any current approach to digital analytics.

 

With Thanksgiving upon us, now is the time to think about the perfect stocking stuffer for the digital analyst you like best. Pre-order "Measuring the Digital World" now!

Digital Transformation – How to Get Started, Real KPIs, the Necessary Staff and So Much More!

In the last couple of months, I've been writing an extended series on digital transformation that reflects our current practice focus. At the center of this whole series is a simple thesis: if you want to be good at something, you have to be able to make good decisions around it. Most enterprises can't do that in digital. From the top on down, they are set up in ways that make it difficult or impossible for decision-makers to understand how digital systems work and act on that knowledge. It isn't because people don't understand what's necessary to make good decisions. Enterprises have invested in exactly the capabilities that are necessary: analytics, Voice of Customer, customer journey mapping, agile development, and testing. What they haven't done is change their processes in ways that take advantage of those capabilities.

I’ve put together what I think is a really compelling presentation of how most organizations make decisions in the digital channel, why it’s ineffective, and what they need to do to get better. I’ve put a lot of time into it (because it’s at the core of our value proposition) and really, it’s one of the best presentations I’ve ever done. If you’re a member of the Digital Analytics Association, you can see a chunk of that presentation in the recent webinar I did on this topic. [Webinars are brutal – by far the hardest kind of speaking I do – because you are just sitting there talking into the phone for 50 minutes – but I think this one, especially the back-half, just went well] Seriously, if you’re a DAA member, I think you’ll find it worthwhile to replay the webinar.

If you’re not, and you really want to see it, drop me a line, I’m told we can get guest registrations setup by request.

At the end of that webinar I got quite a few questions. I didn't get a chance to answer them all and I promised I would – so that's what this post is. I think most of the questions have inherent interest and are easily understood without watching the webinar, so do read on even if you didn't catch it (but watch the darn webinar).

Q: Are metrics valuable to stakeholders even if they don’t tie in to revenues/cost savings?

Absolutely. In point of fact, revenue isn't even the best metric on the positive side of the balance sheet. For many reasons, lifetime value metrics are generally a better choice than revenue. Regardless, not every useful metric has to, can, or should tie back to dollars. There are whole classes of metrics that are important but won't directly tie to dollars: satisfaction metrics, brand awareness metrics and task completion metrics. That being said, the most controversial type of non-revenue metric is the engagement proxy, which is, in turn, a kind of proxy for revenue. These, too, can be useful, but they are far more dangerous. My advice is to never use a proxy metric unless you've done the work to prove it's a valid proxy. That means no metrics plucked from thin air because they seem reasonable. If you can't close the loop on performance with behavioral data, use re-survey methods. It's absolutely critical that the metrics you optimize with be the right ones – and that means spending the extra time to get them right. Finally, I've argued for a while that rather than metrics our focus should be on delivering models embedded in tools – this allows people to run their business, not just look at history.
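For what "prove it's a valid proxy" might look like at its simplest, here's a sketch that just checks whether a per-user engagement metric actually tracks the downstream outcome you care about. The data is invented, and real validation would go well beyond a single correlation:

```python
# A minimal 'prove your proxy' sketch: before optimizing to an engagement
# metric, check that it tracks the real outcome (here, later revenue).
# Data is invented; in practice it comes from joined behavioral and
# outcome data, or from re-survey methods.
from statistics import correlation  # requires Python 3.10+

engagement  = [2.1, 5.4, 3.3, 8.0, 1.2, 6.7, 4.5]       # per-user proxy metric
revenue_90d = [0.0, 45.0, 12.0, 80.0, 0.0, 38.0, 20.0]  # downstream outcome

r = correlation(engagement, revenue_90d)
print(f"r = {r:.2f}")
# A proxy with weak correlation to real outcomes shouldn't be optimized to,
# no matter how reasonable it sounds.
```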

Q: What is your favorite social advertising KPI? I have been using $ / Site Visit and $ / Conversion to measure our campaigns but there is some pushback from the social team that we are not capturing social reach.

A very related question – and it's interesting because I actually didn't talk much about KPIs in the webinar! I think the question boils down to this (in addition to everything I just said about metrics): is reach a valid metric? It can be, but reach shouldn't be taken as is. As per my answer above, the value of an impression is quite different on every channel. If you're not doing the work to figure out the value of an impression in a channel, then what's the point of reporting an arbitrary reach number? How can people possibly assess whether any given reach number makes a buy good or bad once they realize that the value of an impression varies dramatically by channel? I also think a strong case can be made that it's a mistake to try and optimize digital campaigns using reported metrics – even direct conversion and dollars. I just saw a tremendous presentation from Drexel's Elea Feit at the Philadelphia DAA Symposium that echoed (and improved on) what I've been saying for years: namely, that non-incremental attribution is garbage and that the best way to get true measures of lift is to use control groups. If your social media team thinks reach is important, then it's worth trying to prove they are right – whether that's because those campaigns generate hidden short-term lift or because they generate brand awareness that tracks to long-term lift.
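Since the question mentions $ / Conversion, here's a small invented illustration of why incrementality (via a hold-out) changes the picture – naive cost-per-conversion can look great while incremental cost-per-conversion is awful:

```python
# $/conversion computed from raw conversions vs. incremental conversions
# measured against a randomized hold-out. All numbers are invented.
spend = 50_000.0
exposed_n, exposed_conv = 100_000, 2_300
holdout_n, holdout_conv = 20_000, 420   # randomized, un-exposed group

baseline_rate = holdout_conv / holdout_n        # what would've happened anyway
incremental = exposed_conv - baseline_rate * exposed_n

print(f"naive $/conversion:       {spend / exposed_conv:.2f}")
print(f"incremental $/conversion: {spend / incremental:.2f}")
```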

Q: For companies that are operating in the way you typically see, what is the one thing you would recommend to help get them started?

This is a tough one because it's still somewhat dependent on the exact shape of the organization. Here are two things I commonly recommend. First, think about a much different kind of VoC program: constant updating and targeting of surveys, regular socialization with key decision-makers where they drive the research, and an enterprise-wide VoC dashboard in something like Tableau that focuses on customer decision-making, not NPS. This is a great and relatively inexpensive way to bootstrap a true strategic decision-support capability. Second, totally re-think your testing program as a controlled experimentation capability for decision-making. Almost every organization I work with should consider fundamental change in the nature, scope, and process around testing.

Q: How much does this change when there are no clear conversions (i.e., Non-Profit, B2B, etc)?

I don’t think anything changes. But, of course, everything does change. What I mean is that all of the fundamental precepts are identical. VoC, controlled experiments, customer journey mapping, agile analytics, integration of teams – it’s all exactly the same set of lessons regardless of whether or not you have clear conversions on your website. On the other hand, every single measurement is that much harder. I’d argue that the methods I argue for are even more important when you don’t have the relatively straightforward path to optimization that eCommerce provides. In particular, the absolute importance of closing the loop on important measurements simply can’t be understated when you don’t have a clear conversion to optimize to.

Q: What is the minimum size of analytics team to be able to successfully implement this at scale?

Another tricky question to answer, but I'll try not to weasel out of it. Think about it this way: to drive real transformation at enterprise scale, you need at least one analyst covering every significant function. That means an analyst for core digital reporting, digital analytics, experimentation, VoC, data science, customer journey, and implementation. Even that is, for most large enterprises, an unrealistically small team. You might scrape by with a single analyst in VoC and customer journey, but you're going to need at least small teams in core digital reporting, analytics, implementation and probably data science as well. If you're at all successful, the number of analytics, experimentation and data science folks is going to grow larger – possibly much larger. It's not that a single person in a startup can't drive real change, but that's just not the way things work in the large enterprise. Large enterprise environments are complex in every respect, and it takes a significant number of people to drive effective processes.

Q: Sometimes it feels like agile is just a subject line for the weekly meeting. Do you have any examples of organizations using agile well when it comes to digital?

Couldn’t agree more. My rule of thumb is this: if your organization is studying how to be innovative, it never will be. If your organization is meeting about agile, it isn’t. In the IT world, Agile has gone from a truly innovative approach to development to a ludicrous over-engineered process managed, often enough, by teams of consulting PMs. I do see some organizations that I think are actually quite agile when it comes to digital and doing it very well. They are almost all gaming companies, pure-play internet companies or startups. I’ll be honest – a lot of the ideas in my presentation and approach to digital transformation come from observing those types of companies. Whether I’m right that similar approaches can work for a large enterprise is, frankly, unclear.

Q: As a third party measurement company, what is the best way to approach or the best questions to ask customers to really get at and understand their strategic goals around their customer journeys?

This really is too big to answer inside a blog – maybe even too big to reasonably answer as a blog. I'll say, too, that I'm increasingly skeptical of our ability to do this. As a consultant, I'm honor-bound to claim that as a group we can come in, ask a series of questions of people who have worked in an industry for 10 or 20 years and, in a few days' time, understand their strategic goals. Okay…put this way, it's obviously absurd. And, in fact, that's really not how consulting companies work. Most of the people leading strategic engagements at top-tier consulting outfits have actually worked in an industry for a long time, and many have worked on the enterprise side and made exactly those strategic decisions. That's a huge advantage. Most good consultants in a strategic engagement know 90% of what they are going to recommend before they ask a single question.

Having said that, I'm often personally in a situation where I'm asked to do exactly what I've just said is absurd, and chances are, if you're a third-party measurement company, you have the same problem. You have to get at something that's very hard and very complex in a very short amount of time, and your expertise (like mine) is in analytics or technology, not insurance or plumbing or publishing or automotive.

Here’s a couple of things I’ve found helpful. First, take the journey’s yourself. It’s surprising how many executives have never bought an online policy from their own company, downloaded a whitepaper to generate a lead, or bought advertising on their own site. You may not be able to replicate every journey, but where you can get hands on, do it. Having a customer’s viewpoint on the journey never hurts and it can give you insight your customers should but often don’t have. Second, remember that the internet is your best friend. A little up-front research from analysts is a huge benefit when setting the table for those conversations. And I’m often frantically googling acronyms and keywords when I’m leading those executive conversations. Third, check out the competition. If you do a lead on the client’s website, try it on their top three competitors too. What you’ll see is often a great table-set for understanding where they are in digital and what their strategy needs to be. Finally, get specific on the journey. In my experience, the biggest failing in senior leaders is their tendency to generality. Big generalities are easy and they sound smart but they usually don’t mean much of anything. The very best leaders don’t ever retreat into useless generality, but most of us will fall into it all too easily.

Q: What are some engagement models where an enterprise engages 3rd party consulting? For how long?

The question every consultant loves to hear! There are three main ways we help drive this type of digital transformation. The first is as strategic planners. We do quite a bit of pure digital analytics strategy work, but for this type of work we typically expand the strategic team a bit (beyond our core digital analytics folks) to include subject matter experts in the industry, in customer journey, and in information management. The goal is to create a "deep" analytics strategy that drives toward enterprise transformation. The second model (which can follow the strategic phase) is to supplement enterprise resources with specific expertise to bootstrap capabilities. This can include things like tackling specific highly strategic analytics projects, providing embedded analysts as part of the team to increase capacity and maturity, building out controlled experimentation teams, developing VoC systems, etc. We can also provide – and here's where being part of a big practice really helps – PM and Change Management experts who can help drive a broader transformation strategy. Finally, we can build the program soup-to-nuts. Mind you, that doesn't mean we do everything. I'm a huge believer that a core part of this vision is transformation in the enterprise. Effectively, that means outsourcing to a consultancy is never the right answer. But in a soup-to-nuts model, we keep strategic people on the ground, helping to hire, train, and plan on an ongoing basis.

Obviously, the how-long depends on the model. Strategic planning exercises are typically 10-12 weeks. Specific projects are all over the map, and the soup-to-nuts model is a sustained engagement, though it usually starts out hot and then gets gradually smaller over time.

Q: Would really like to better understand how you can identify visitor segments in your 2-tier segmentation when we only know they came to the site and left (without any other info on what segment they might represent).  Do you have any examples or other papers that address how/if this can be done?

A couple of years back I was on a panel at a conference in San Diego, and one of the panelists started every response with "In my book…". It didn't seem to matter much what the question was. The answer (and not just the first three words) was always the same. I told my daughters about it when I got home, and the gentleman is forever immortalized in my household as the "book guy". Now I'm going to go all book guy on you. The heart of my book, "Measuring the Digital World", is an attempt to answer this exact question. It's by far the most detailed explication I've ever given of the concepts behind 2-tiered segmentation and how to go from behavior to segmentation. That being said, you can only pre-order it now. So I'm also going to point out that I have blogged fairly extensively on this topic over the years. Here are a couple of posts I dredged out that provide a good overview:

http://semphonic.blogs.com/semangel/2012/05/digital-segmentation.html

http://semphonic.blogs.com/semangel/2011/06/building-a-two-tiered-segmentation-semphonics-digital-segmentation-techniques.html

and – even more important – here’s the link to pre-order the book!
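The book lays out the real method, but as a teaser, here's a deliberately crude sketch (not the book's approach) of the basic move – scoring a visit's page consumption against simple intent signatures to infer the "what" tier from behavior alone. The page categories and rules are invented:

```python
# A crude illustration of going from behavior to the 'what' tier: score
# each visit against simple content-consumption signatures. Real
# implementations use richer features and clustering; these rules are
# invented for illustration only.
from collections import Counter

INTENT_RULES = {
    "research": {"product_detail", "comparison", "reviews"},
    "purchase": {"cart", "checkout"},
    "support":  {"help", "faq", "contact"},
}

def infer_intent(pages_viewed: list) -> str:
    """Pick the intent whose page signature best matches the visit."""
    counts = Counter(pages_viewed)
    scores = {intent: sum(counts[p] for p in pages)
              for intent, pages in INTENT_RULES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unclassified"

print(infer_intent(["home", "product_detail", "comparison", "reviews"]))
print(infer_intent(["home"]))  # -> unclassified
```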

That’s it…a pretty darn good list of questions. I hope that’s genuinely reflective of the quality of the webinar. Next week I’m going to break out of this series for a week and write about our recent non-profit analytics hackathon – a very cool event that spurred some new thoughts on the analysis process and the tools we use for it.