
Seven Pillars of Statistical Wisdom

I don’t review a lot of business books on my blog…mostly because I don’t like a lot of business books. A ridiculous percentage of business books seem to me to be either one-trick ponies (a good idea that could have been expressed fully in a magazine article, padded out to book length) or thinly veiled self-help books (self-help books in ties, as described in this spot-on Slate article). I HATE self-help books. Grit, Courage, Indecisiveness. It’s all the same to me.

On the other hand, The Seven Pillars of Statistical Wisdom isn’t really a business book. It’s a short (200 small pages), crisp, philosophical exploration of what makes statistics interesting. Written by Stephen Stigler, a University of Chicago professor, and published by Harvard University Press, it’s the best quasi-business book I’ve read in a long time.

I say quasi-business book because I’m not really sure who the intended audience is. It’s not super technical (thank god, you can read it knowing very little math), but it sometimes veers into explanations that assume a fairly deep understanding of statistics. Deeper, at least, than I have, though I am most certainly not a formally trained statistician.

What Seven Pillars does extraordinarily well is examine a small core set of statistical ideas, explicate their history, and show why they are important, fundamental, and, in some cases, still controversial. In doing this, Seven Pillars provides a profound introduction to how to think statistically, not how to do statistics. Instead of focusing on how specific methods work, on definitions of statistical methods, or on specific issues in modern statistics (like big data), Seven Pillars tries to define what makes statistics an important way to think.

To give you a sense of this, here are the seven pillars:

Aggregation: Probably the core concept at the heart of all statistical thinking is the idea that you can sometimes GAIN insight while losing data. Stigler delves into basic concepts like the mean, shows how they evolved over the centuries (and the evolution really did take centuries), and explains why this fundamental insight is so important. It’s a brilliant discussion.
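
To make the idea concrete, here’s a minimal sketch (my own toy illustration, not Stigler’s): collapse hundreds of noisy simulated measurements into a single mean. We throw away every individual value, yet recover the underlying quantity far better than any single observation could.

```python
# Toy illustration of the aggregation pillar: discard the individual data,
# keep one summary number, and gain insight. Values are simulated.
import random

random.seed(7)
true_value = 100.0
observations = [true_value + random.gauss(0, 15) for _ in range(500)]

print(min(observations), max(observations))   # single readings scatter widely
print(sum(observations) / len(observations))  # the mean lands close to 100
```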

Information: If we gain information by losing data, how do we know how much information we’ve gained? Or how much data we need? With this pillar, Stigler lays out why more is sometimes less and how the value of observations usually declines sharply. Another terrific discussion around a fundamental insight that comes from statistics but is constantly under siege from folk common-sense.
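
The arithmetic usually behind this point is easy to sketch (my numbers, not Stigler’s): the standard error of a mean shrinks only with the square root of the sample size, so quadrupling your observations merely halves your uncertainty.

```python
# Sketch of the diminishing value of observations: precision grows with
# sqrt(n), so each additional data point buys less than the one before.
import math

sigma = 15.0  # assumed spread of a single observation
for n in (25, 100, 400, 1600):
    print(n, sigma / math.sqrt(n))  # standard error of the mean
```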

Likelihood: In this section, Stigler tackles how the concepts around confidence levels and estimation of likelihood evolved over time. This section contains an amusing and historically interesting discussion on arguments for and against the likelihood of miracles!

Intercomparison: Stigler’s fourth pillar is the idea that we can use interior measurements of the data (there’s an excellent discussion of the historical derivation of the standard deviation, for example) to understand it. This section includes a superb discussion of the pitfalls of purely internal comparison and the tendency of humans to find patterns and of data to exhibit patterns that are not meaningful.
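
A toy version of the idea (mine, not Stigler’s math): judge the gap between two groups purely against the variation inside the data itself, with no external yardstick, much as Student’s t-statistic does.

```python
# Intercomparison sketch: measure a difference in units of the data's own
# internal spread. Numbers are made up for illustration.
import statistics

batch_a = [12.1, 11.8, 12.4, 12.0, 12.3]
batch_b = [12.9, 13.1, 12.7, 13.0, 12.8]

within_sd = (statistics.stdev(batch_a) + statistics.stdev(batch_b)) / 2
gap = statistics.mean(batch_b) - statistics.mean(batch_a)
print(gap / within_sd)  # the gap expressed relative to internal variation
```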

Regression: The idea of regression to the mean is fundamental to statistical thinking. It’s an amazingly powerful but consistently non-intuitive concept. Stigler uses a genetics example (and a really cool Quincunx visualization) to help explain the concept. This is one of the best discussions in a very fine book. On the other hand, the last part of this section, which covers multivariate and Bayesian developments, is less wonderful. If you don’t already understand these concepts, I’m not sure Stigler’s discussion is going to help.
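
If the concept still feels slippery, a simulation in the spirit of Galton’s heights example (my sketch, with made-up parameters) shows it directly: children of unusually tall parents are still tall on average, but less tall than their parents, because the shared component regresses while the fresh noise does not.

```python
# Regression to the mean, Galton-style: parent and child share a "genetic"
# component plus independent noise. Parameters are illustrative.
import random

random.seed(7)
pairs = []
for _ in range(10_000):
    shared = random.gauss(0, 2)                # inches above/below average
    parent = 68 + shared + random.gauss(0, 1)
    child = 68 + shared + random.gauss(0, 1)   # same shared part, fresh noise
    pairs.append((parent, child))

children_of_tall = [c for p, c in pairs if p > 72]
# Well above the 68-inch average, but below their 72+ inch parents:
print(sum(children_of_tall) / len(children_of_tall))
```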

Design: The next pillar is all about experimental design – surely a concept that is fundamental not just to statistics but to our everyday practical application of it. I found the discussion of randomization in this section particularly interesting and thought-provoking.

Residual: Pillar seven is, appropriately enough, about what’s left over. Stigler’s concern here is to show how examining the unexplained part of an analysis leads to a great deal of productive thinking in science and elsewhere. The idea of nested models is introduced, and the section somehow transitions into a discussion of data visualization with illustrations from Florence Nightingale (apparently a mean hand with a chart). I’m not sure this transition made perfect sense in the context of the chapter, but the discussion is fascinating, enjoyable and pointed enough to generate some real insight.

Stigler concludes with some thoughts around whether and where an eighth pillar might arise. There’s some interesting stuff here that’s highly appropriate to anyone in digital trying to extend analytics into high-dimensional, machine-learning spaces. The discussion is (too) brief but I think intentionally so.

 

Seven Pillars isn’t quite a great book, and I mean that as high praise. I don’t read many books that I could plausibly describe as almost great. The quality of the explanations is extremely high. But it does a better job explicating the intellectual basis behind simpler statistical concepts than behind more complicated ones, and there are places where I think it’s insufficiently forceful in illuminating the underlying ways of thinking, not just the statistical methods. Perhaps that’s inevitable, but greatness isn’t easy!

I do think the book occasionally suffers from a certain ambiguity around its audience. Is it intended as a means to get deep practitioners thinking about more fundamental concepts? I don’t think so – too many of the explanations are historical and basic.

Is it intended for a lay audience? Please.

I think it fits two audiences very well, but perhaps neither perfectly.

First, there are folks like me who use statistics and statistical thinking on an everyday basis but are not formally trained. I’m assuming that’s also a pretty broad swath of my readers. I know I found it both useful and enlightening, with only a few spots where the discussion became obscure and overly professional.

The second audience is students and potential students of statistics who need something that pulls them away from the trenches (here’s how you do a regression) and gets them to think about what their discipline actually does. For that audience, I think the book is consistently brilliant.

If there’s a better short introduction to the intellectual basis and foundation of statistical thinking, I don’t know it. And for those who confuse statistical thinking with the ability to calculate a standard deviation or run a regression, Seven Pillars is a heady antidote.

Productivity is Our Business. And Business isn’t Good

A little while back there was a fascinating article on the lack of productivity growth in the U.S. in the past 4-5 years. I’ll try to summarize the key points below (and then tell you why I think they’re important) – but the full article is very much worth the read.

[Chart: U.S. productivity growth over time, from the New York Times article (data from the Labor Department)]

Let’s start with the facts. In the last year, the total number of hours worked in the U.S. rose by 1.9%. GDP growth in the last quarter exactly matched that rate – 1.9%. So we added hours and we got an exact match in output. That might sound okay, but it means that there was zero productivity growth. We didn’t get one whit more efficient at producing stuff. Nor is this just a short-term blip. In the last four years, we’ve recorded 0.4% annual growth in productivity. That’s not very good. Take a look at the chart above (from the New York Times article and originally from the Labor Department) – it looks bad. We’re in late ‘70s and early ‘80s territory. Those weren’t good years.
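
The arithmetic behind that claim is worth making explicit. Productivity is output per hour, so its growth rate is (roughly) output growth minus hours growth. Here’s the back-of-the-envelope version, using the article’s figures:

```python
# Back-of-the-envelope productivity arithmetic from the article's figures.
gdp_growth = 0.019    # output up 1.9%
hours_growth = 0.019  # total hours worked up 1.9%

# Output per hour: (1 + g_output) / (1 + g_hours) - 1
productivity_growth = (1 + gdp_growth) / (1 + hours_growth) - 1
print(f"{productivity_growth:.2%}")  # 0.00% – no gain in output per hour
```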

The Times article advances three theories about why productivity growth has been so tepid. They classify them as the “Depressing” theory, the “Neutral” theory and the “Happy” theory. Here’s a quick description of each.

Depressing Theory

The trend is real and will be sustained. Capex is down. The digital revolution is largely complete. Workers aren’t getting significantly more productive, and the people returning to the workforce post-recession are its least productive segment. On this view, we’re not getting richer anytime soon.

Neutral Theory

There’s a lot of imprecision in measuring productivity. With fundamental changes in the economy it may be that the imprecision is increasing – and we’re undercounting true productivity. As measurement professionals, we all know this one needs to be reckoned with.

Happy Theory

We’re in an “investment” period where companies are hiring and investing – resulting in a period of lower productivity before that investment begins to show returns and productivity accelerates. Interestingly, this story played out in the late ‘90s when productivity slowed and then accelerated sharply in the 2000s.

 

Which theory is right? The Times article doesn’t really draw any firm conclusions – and that’s probably reasonable. When it comes to macro-economic trends, the answers are rarely simple and obvious. From my perspective, though, this lack of productivity is troubling. We live in a profession (analytics) that’s supposed to be the next great driver of productivity. Computers, internet, now analytics. We’re on the hook for the next great advance in productivity. From a macro-economic perspective, no one’s thinking about analytics. But out here in the field, analytics is THE thing companies are investing in to drive productivity.

And the bad news? We’re clearly not delivering.

Now I don’t take it as all bad news. There’s a pretty good chance that the Happy theory is dead-on. Analytics is a difficult transformation and one that many companies struggle with. And while they’re struggling with big data systems and advanced analytics, you have a lot of money getting poured into rather unproductive holes. Word processing was almost certainly more immediately productive than analytics (anybody out there remember Wang?) – but every sea change in how we do things is going to take time, effort and money. Analytics takes more than most.

Here’s the flip side, though. It’s easy to see how all that investment in analytics might turn out to be as unproductive as building nuclear missiles and parking them in the ground. If they were ever used, those missiles would produce a pretty big bang for the buck. In the case of ICBMs, we’re all happiest when they don’t get used. That’s not what we hope for from analytics.

Of course, I’ve been doing this extended series on the challenges of digital transformation – most of which revolves around why we aren’t more productive with analytics. Those challenges are not, in my opinion, the exception. They’re the rule. The vast majority of enterprises aren’t doing analytics well and aren’t boosting their productivity with it. That doesn’t mean I don’t believe in the power of analytics to drive real productivity. I do. But before those productivity gains start to appear, we have to do better.

Doing better isn’t about one single thing. Heaven knows it’s not just about having the newest technologies. We have those aplenty. It’s about finding highly repeatable methods in analytics so that we can drive improvement without rock-stars. It’s very much about re-thinking the way the organization is set up so that analytics is embedded and operationalized. It’s even more about finding ways to re-tool our thinking so that agile concepts and controlled experimentation are everywhere.

Most companies still need a blueprint for how to turn analytics into increased productivity. That’s what this series on digital transformation is all about.

If you haven’t yet had the opportunity to spin through my 20min presentation on transforming the organization with analytics – check it out.

After all, productivity is our business.

Space 2.0

The New Frontier of Commercial Satellite Imagery for Business

One of my last speaking gigs of the spring season was, for me, both the least typical and one of the most interesting. Space 2.0 was a brief glimpse into a world that is both exotic and fascinating. It’s a gathering of high-tech, high-science companies driving commercialization of space.

Great stuff, but what the heck did they want with me?

Well, one of the many new frontiers in the space industry is the commercialization of geo-spatial data. For years now, the primary consumer of satellite data has been the government. But the uses for satellite imagery are hardly limited to intel and defense. For the array of Space startups and aggressive tech companies, intel and defense are relatively mature markets – slow moving and difficult to crack if you’re not an established player. You ever tried selling to the government? It’s not easy.

So the big opportunity is finding ways to open up the information potential in geo-spatial data and satellite imagery to the commercial marketplace. Now I may not know HyperSpectral from IR, but I do see a lot of the challenges that companies face in both provisioning and using big data. So I guess I was their doom-and-gloom guy – in my usual role of explaining why everything always turns out to be harder than we expect when it comes to using or selling big data.

For me, though, attending Space 2.0 was more about learning than educating. I’ve never had an opportunity to really delve into this kind of data, and hearing (and seeing) some of what is available was fascinating.

Let’s start with what’s available (and keep in mind you’re not hearing an expert view here – just a fanboy with a day’s exposure). Most commercial capture is visual (other bands are available and used primarily for environmental and weather-related research). Reliance on the visual spectrum has implications that are probably second nature to folks in the industry but take some thought if you’re outside it. One speaker described their industry as “outside” and “daytime” focused. It’s also very weather dependent. Europe, with its abundant cloudiness, is much more challenging than much of the U.S. (though I suppose Portland and Seattle must be no picnic).

Images are either panchromatic (black and white), multi-spectral (like the RGB we’re used to, but with an IR band as well and sometimes additional bands) or hyperspectral (lots of narrow bands on the spectrum). Perhaps even more important than color, though, is resolution. As you’d probably expect, black-and-white images tend to have the highest resolution – down to something like a 30-40cm square. Color and multi-band images might be more in the meter range, but the newest generation takes the resolution down to the 40-50cm range in full color. That’s pretty fine grained.

How fine-grained? Well, with a top-down 40cm square per pixel, it’s not terribly useful for things like people. But here’s an example that one of the speakers gave of how they are using the data. They pick selected restaurant locations (Chipotle was the example) and count cars in the parking lot during the day. They then compare this data to previous periods to create estimates of how the location is doing. They can also compare competitor locations (e.g. Panera) to see if the trends are brand specific or consistent.

Now, if you’re Chipotle, this data isn’t all that interesting. There are easier ways to measure your business than trying to count cars in satellite images. But if you’re a fund manager looking to buy or sell Chipotle stock in advance of earnings reports, this type of intelligence is extremely valuable. You have hard data on how a restaurant or store is performing before everyone else. That’s the type of data that traders live for.

Of course, that’s not the only way to get that information. You may have heard about the recent Foursquare prediction targeted at exactly the same problem. Foursquare was able to predict Chipotle’s sales decline almost to the percentage point. As one of the day’s panelists remarked, there are always other options, and the key to market success is being cheaper, faster, easier, and more accurate than the alternative mechanisms.

You can see how using Foursquare data for this kind of problem might be better than commercial satellite. You don’t have weather limitations, the data is easier to process, it covers walk-in and auto traffic, and it covers a 24hr time band. But you can also see plenty of situations where satellite imagery might have advantages too. After all, it’s easily available, relatively inexpensive, has no sampling bias, has deep historical data and is global in reach.

So how easy is satellite data to use?

I think the answer is a big “it depends”. This is, first of all, big data. Those multi- and hyper-band images at high resolution are really, really big. And while the providers have made it quite easy to find what you want and get it, it didn’t seem to me that they had done much to solve the real big data analytics problem.

I’ve described what I think the real big data problem is before (you can check out this video if you want a big data primer). Big data analytics is hard because it requires finding patterns in the data and our traditional analytics tools aren’t good at that. This need for pattern recognition is true in my particular field (digital analytics), but it’s even more obviously true when it comes to big data applications like facial recognition, image processing, and text analytics.

On the plus side, unlike digital analytics, the need for image (and linguistic) processing is well understood and relatively well developed. There are a lot of tools and libraries you can use to make the job easier. It’s also a space where deep learning has been consistently successful, so libraries from companies like Microsoft and Google are available that provide high-quality deep-learning tools – often tailor-made for processing image data – for free.
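
To give a flavor of how low the barrier to entry is, here’s a minimal sketch (my own, not something shown at the conference) that pulls a free, pretrained image model and classifies a tile. The file name is hypothetical, and ImageNet weights are only a starting point – a real pipeline would fine-tune on labeled overhead imagery (cars vs. empty stalls, say).

```python
# Hedged sketch: run a free pretrained deep-learning model on an image tile.
# Assumes TensorFlow is installed; "parking_lot_tile.png" is hypothetical.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

img = tf.keras.utils.load_img("parking_lot_tile.png", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    np.expand_dims(tf.keras.utils.img_to_array(img), axis=0)
)
preds = model.predict(x)
print(tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3))
```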

It’s still not easy. What’s more, the way you process these images is highly likely to be dependent on your business application. Counting cars is different from understanding crop growth, which is different from understanding storm damage. My guess is that market providers of this data are going to have to develop very industry-specific solutions if they want to make the data reasonably usable.

That doesn’t necessarily mean that they’ll have to provide full-on applications. The critical enabler is providing the ability to extract the business-specific patterns in the data – things like identifying cars. In effect, solving the hard part of the pattern recognition problem so that end-users can focus on solving the business interpretation problem.

Being at Space 2.0 reminded me a lot of going to a big data conference. There are a lot of technologies (some of them amazingly cool) in search of killer business applications. In this industry, particularly, the companies are incredibly sophisticated technically. And it’s not that there aren’t real applications. Intelligence, environment and agriculture are mature and profitable markets with extensive use of commercial satellite imagery. The golden goose, though, is opening up new opportunities in other areas. Do those opportunities exist? I’m sure they do. For most of us, though, we aren’t thinking satellite imagery to solve our problems. And if we do think satellite, we’re likely intimidated by the difficulty of solving the big data problem inherent in getting value from the imagery for almost any new business application.

That’s why, as I described it to the audience there, I suspect that progress with the use and adoption of commercial satellite imagery will seem quite fast to those of us on the outside – but agonizingly slow to the people in the industry.

Gelato was the word I meant

I spent most of the last week on holiday in Italy. But since the holiday was built around a speaking gig in Italy at the Be Wizard Digital Marketing conference, I still spent a couple of days talking analytics and digital. A couple of days I thoroughly enjoyed. The conference closed with a Q&A for a small group of speakers, and while I got a few real analytics questions, it felt more like a meet-and-greet – with plenty of puff-ball questions like “what word would you use to describe the conference?” A question I failed miserably with the very pathetic answer “fun”.

I guess that’s why it’s better to ask me analytics questions.

The word I probably should have chosen is “gelato”.

And not just because I hogged down my usual totally ridiculous amount of fragola, melone, cioccolato, and pesca (strawberry, melon, chocolate, and peach) – scoop by scoop from Rimini to Venice.

Gelato because I had a series of rich conversations with Mat Sweezey from Salesforce (née Pardot), who gave a terrific presentation on authenticity and what it means in this new digital marketing world. It’s easy to forget how dramatically digital has changed marketing and miss some of the really important lessons from those changes. Mat also showed me a presentation on agile that blends beautifully with the digital transformation story I’ve been trying to tell in the last six months. It’s a terrific deck with some slides that explain why test-and-learn and agile methods work so much better than traditional methods. It’s a presentation with the signal virtue of taking very difficult concepts and making them not just clear but compelling. That’s hard to do well.

Gelato because I also talked with and enjoyed a great presentation from Chris Anderson of Cornell. Chris led a two-hour workshop in the revenue management track (which happens to be a kind of side interest of mine). His presentation focused on how social media content on sites like TripAdvisor affects room pricing strategies. He’s done several compelling research projects with OTAs (Online Travel Agents) looking at the influence of social media content on buying decisions. His research has looked at the key variables that drive influence (number of reviews and rating), how sensitive demand is to those factors, and how that sensitivity plays out by hotel class (turns out that the riskier the lodging decision, the more impactful social reviews are). He’s also looked at review response strategies on TripAdvisor and has some compelling research showing how review response can significantly improve ratings outcomes but how it’s also possible to over-respond. Respond to everything, and you actually do worse than if you respond to nothing.

That’s a fascinating finding and very much in keeping with Mat’s arguments around authenticity. If you make responding to every social media post a corporate policy, what you say is necessarily going to sound forced and artificial.

That’s why it doesn’t work.

If you’re in the hospitality industry, you should see this presentation. In fact, there are lessons here for any company interested in the impact of reviews and social content and interested in taking a more strategic view of social outreach and branding. I think Chris’ data suggest significant and largely unexplored opportunities for both better revenue management decisions around OTA pricing and better strategies around the review ask.

Gelato because there was one question I didn’t get to answer that I wanted to (and somehow no matter how much gelato I consume I always want a little more).

Since I had to have translations of the panel questions at the end, I didn’t always get a chance to respond. Sometimes the discussion had moved on by the time I understood the question! And one of the questions – how can companies compete with publishers when it comes to content creation – seemed to me deeply related to both Mat and Chris’ presentations.

Here’s the question as I remember it:

If you’re a manufacturer or a hotel chain or a retailer, all you ever hear in digital marketing is how content is king. But you’re not a content company. So how do you compete?

The old-fashioned way is to hire an agency to write some content for you. That’s not going to work. You won’t have enough content, you’ll have to pay a lot for it, and it won’t be any good. To Mat’s point around authenticity, you’re not going to fool people. You’re not going to convince them that your content isn’t corporate, mass-produced, ad agency hack-work. Because it is and because people aren’t stupid. Building a personalization strategy to make bad content more relevant isn’t going to help much either. That’s why you don’t make it a corporate policy to reply to every review and why you don’t write replies from a central team of ad writers.

Stop trying to play by the old rules.

Make sure your customer relations, desk folks, and managers understand how to build relationships with social media and give them the tools to do it. If you want authentic content, find your evangelists. People who actually make, design, support or use your products. Give them a forum. A real one. And turn them loose. Find ways to encourage them. Find ways to magnify their voice. But turn them loose.

You can’t have it both ways. You can’t be authentic while you try to wrap every message in a Madison Avenue gift wrapping bought from the clever folks at your ad agency. Check out Mat’s presentation (he’s a Slideshare phenom). Think about the implications of unlimited content and the ways we filter. Process the implications. The world has changed and the worst strategy in the world is to keep doing things the old way.

So gelato because the Be Wizard conference, like Italy in general, was rich, sweet, cool and left me wanting to hear (and say) a bit more!

And speaking of conferences, we’re not that far away from my second European holiday with analytics baked in – the Digital Analytics Hub in London (early June). I’ve been to DA Hub several years running now – ever since two old friends of mine started it. It’s an all-conversational conference modeled on X Change, and it’s always one of the highlights of my year. In addition to facilitating a couple of conversations, I’m also going to be leading a very deep-dive workshop on digital forecasting. I plan to walk through forecasting from the simplest sort of forecast (everything will stay the same) to increasingly advanced techniques that rely first on averages and smoothing, and then on models. If you’re thinking about forecasting, I really think this workshop will be worth the whole conference (and the Hub is always great anyway)…

If you’ve got a chance to be in London in early June, don’t miss the Hub.

Big Data Forecasting

Forecasting is a foundational activity in analytics and a fundamental part of everyone’s personal mental calculus. At the simplest level, we live and work constantly using the most basic forecasting assumption – that everything will stay the same. And even though people will throw around aphorisms of the “one constant is change” sort, the assumption that things will stay largely the same is far more often true. The key word in that sentence, though, is “largely”. Because if things mostly do stay the same, they almost never stay exactly the same. Hence the art and science of forecasting lies in figuring out what will change.

Click here for the 15 minute Video Presentation on Forecasting & Big Data

There are two macro approaches to forecasting: trending and modelling. With trending, we forecast future measurements by projecting trends of past measurements. And because so many trends have significant variation and cyclical behaviors (seasonal, time-of-day, business, geological), trending techniques often incorporate smoothing.
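
As a concrete illustration of the trending approach (my sketch, not from the video), simple exponential smoothing damps the noise and cycles in a series; a naive trend forecast then just projects the smoothed level forward.

```python
# Minimal exponential-smoothing sketch; the sales figures are made up.
def exponential_smoothing(series, alpha=0.3):
    """alpha near 1 tracks the data closely; near 0 smooths aggressively."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

sales = [120, 132, 101, 134, 90, 230, 210, 220, 182, 191]
level = exponential_smoothing(sales)[-1]
print(level)  # a naive trend forecast carries this level forward
```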

Though trending can often create very reliable forecasts, particularly when smoothed to reduce variation and cycles, there’s one thing it doesn’t do well – it doesn’t handle significant changes to the system dynamics.

When things change, trends can be broken (or accelerated). When you have significant change (or the likelihood of significant change) in a system, then modelling is often a better and more reliable technique for forecasting. A model is designed to capture an understanding of the true system dynamics.

Suppose our sales have declined for the past 14 months. A trend forecast will expect sales to decline in the 15th month as well. But if we decide to cut our prices or dramatically increase our marketing budget, that trend may not continue. A model could capture the impact of price or marketing on sales and potentially generate a much better prediction when one of the key system drivers is changed.
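
Here’s a minimal sketch of what such a model might look like (all the numbers are invented for illustration): an ordinary least squares fit of sales against price and marketing spend, which can then forecast under a driver change that no trend could anticipate.

```python
# Toy structural model: sales as a function of price and marketing spend.
import numpy as np

# Illustrative history – columns are price and marketing spend.
X = np.array([[10, 5], [10, 5], [9, 6], [9, 7], [8, 8]], dtype=float)
y = np.array([100, 98, 104, 109, 118], dtype=float)  # monthly sales

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
intercept, b_price, b_mkt = np.linalg.lstsq(A, y, rcond=None)[0]

# Forecast next month under a planned price cut to 7 and spend of 9.
print(intercept + b_price * 7 + b_mkt * 9)
```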

This weekend, I added a third video to my series on big data – discussion of the changes to forecasting methodology when using big data.

[I’ve been working this year to build a legitimate YouTube channel on digital analytics. I love doing the videos (webinars, really, since they are just slide shows with a voice-over), but they are a lot of work. I think they add something that’s different from either a blog or a Powerpoint, and I’m definitely hoping to keep knocking them out. So far, I have three video series going: one on measuring the digital world, one on digital transformation in the enterprise, and one on big data.]

The new video is a redux of a couple of recent speaking gigs – one on big data and predictive analytics and one on big data and forecasting. The video focuses more on the forecasting side of things and explains how big data concepts impact forecasting – particularly from a modelling perspective.

Like each of my big data videos, it begins with a discussion of what big data is. If you’ve watched (or watch) either of the first two videos in the series (Big Data Beyond the Hype or Big Data and SQL), you don’t need to watch me reprise my definition of big data in the first half of Big Data and Forecasting. Just skip the first eight minutes. If you haven’t, I’d actually encourage you to check out one of those videos first as they provide a deeper dive into the definition of big data and why getting the right definition matters.

In the second half of the video, I walk through how “real” big data impacts forecasting and predictive problems. The video lays out three common big data forecasting scenarios: integrating textual data into prediction and forecasting systems, building forecasts at the individual level and then aggregating those predictions, and pattern-matching IoT and similar types of data sources as a prelude to analysis.

Each of these is interesting in its own right, though I think only the middle case truly adds anything to the discipline of forecasting. Text and IoT-type analytics are genuine big data problems that involve significant pattern-matching and that challenge traditional IT and statistical paradigms. But neither really generates new forecasting techniques.

However, building forecasts from individual patterns is a fairly fundamental change in the way forecasts get built. Instead of applying smoothing techniques or building models against aggregated data, big data approaches use individual patterns to generate a forecast for each record (customer/account/etc.). These forecasts can then be added up (or treated probabilistically) to generate macro-forecasts or forecasting ranges.
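
In skeleton form (my sketch, with a deliberately trivial per-customer model), the bottom-up approach looks like this: fit something to each customer’s own history, forecast each one, then sum.

```python
# Bottom-up forecasting sketch: one tiny forecast per customer, then aggregate.
# The per-customer model (last value plus average change) is deliberately
# trivial; real systems would exploit much richer individual patterns.
customer_history = {
    "cust_a": [5, 6, 7, 8],
    "cust_b": [20, 18, 17, 15],
    "cust_c": [2, 2, 3, 3],
}

def forecast_one(series):
    deltas = [b - a for a, b in zip(series, series[1:])]
    return series[-1] + sum(deltas) / len(deltas)

per_customer = {c: forecast_one(h) for c, h in customer_history.items()}
print(per_customer)
print(sum(per_customer.values()))  # the macro-forecast
```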

If you’ve got an interest in big data and forecasting problems, give it a listen. The full video is about 16 minutes split into two pretty equal halves (big data definition, big data forecasting).

The Agile Organization

I’ve been meandering through an extended series on digital transformation: why it’s hard, where things go wrong, and what you need to be able to do to be successful. In this post, I intend to summarize some of that thinking and describe how the large enterprise should organize itself to be good at digital.

Throughout this series, I’ve emphasized the importance of being able to make good decisions in the digital realm. That is, of course, the function of analytics, and it’s my own special concern when it comes to digital. But there are people who will point out that decision-making is not the be-all and end-all of digital excellence. They might suggest that being able to execute is important too.

If you’re a football fan, it’s easy to see the dramatic difference between Peyton Manning – possibly the finest on-field decision-maker in the history of the game – with a good arm and without. It’s one thing to know where to throw the ball on any given play, quite another to be able to get it there accurately. If that wasn’t the case, it’s probably true that many of my readers would be making millions in the NFL!

On the other hand, this divide between decision-making and execution tends to break down if you extend your view to the entire organization. If the GM is doing the job properly, then the decision about which quarterbacks to draft or sign will appropriately balance their physical and decision-making skills. That’s part of what’s involved in good GM decisioning. Meanwhile, the coach has an identical responsibility on a day-to-day basis. A foot injury may limit Peyton to the point where his backup becomes a better option. Then it may heal and the pendulum swings back. The organization makes a series of decisions and if it can make all of those decisions well, then it’s hard to see how execution doesn’t follow along.

If, as an organization, I can make good decisions about the strategy for digital, the technology to run it on, the agencies to build it, the people to optimize it, the way to organize it, and the tactics to drive it, then everything is likely to be pretty good.

Unfortunately, it’s simply not the case that the analytics, organization and capabilities necessary to make good decisions across all these areas are remotely similar. To return to my football analogy, it’s clear that very few organizations are set up to make good decisions in every aspect of their operations. Some organizations excel at particular functions (like game-planning) but are very poor at drafting. Indeed, sometimes success in one area breeds disaster in another. When a coach like Chip Kelly becomes very successful in his role, there is a tendency for the organization to expand that role so that the coach has increasing control over personnel. This almost always works badly in practice. Even knowing it will work badly doesn’t prevent the problem. Since the coach is so important, it may be that an organization will cede much control over personnel to a successful coach even when everyone (except the coach) believes it’s a bad idea.

If you don’t think similar situations arise constantly in corporate America, you aren’t paying attention.

In my posts in this series, I’ve mapped out the capabilities necessary to give decision-makers the information and capabilities they need to make good decisions about digital experiences. I haven’t touched on (and don’t really intend to touch on) broader themes like deciding who the right people to hire are or what kind of measurement, analysis or knowledge is necessary to make those sorts of meta-decisions.

There are two respects, however, in which I have tried to address at least some of these meta-concerns about execution. First, I’ve described why it is and how it comes to pass that most enterprises don’t use analytics to support strategic decision-making. This seems like a clear miss and a place where thoughtful implementation of good measurement, particularly voice-of-customer measurement of the type I’ve described, should yield high returns.

Second, I took a stab at describing how organizations can think about and work toward building an analytics culture. In these two posts, I argue that most attempts at culture-building approach the problem backwards. The most common culture-building activities in the enterprise are all about “talk”. We talk about diversity. We talk about ethics. We talk about being data-driven in our decision-making. I don’t think this talk adds up to much. I suggest that culture is formed far more through habit than talk; that if an organization wants to build an analytics culture, it needs to find ways to “do” analytics. The word may precede the deed, but it is only through the force of the deed (good habits) that the word becomes character/culture. This may seem somewhat obvious – no, it is obvious – but people somehow manage to miss the obvious far too often. Those posts don’t just formulate the obvious, they also suggest a set of activities that are particularly efficacious in creating good enterprise habits of decision-making. If you care about enterprise culture and you haven’t already done so, give them a read.

For some folks, however, all these analytics actions miss the key questions. They don’t want to know what the organization should do. They want to know how the organization should work. Who owns digital? Who owns analytics? What lives in a central organization? What lives in a business unit? Is digital a capability or a department?

In the context of the small company, most of these questions aren’t terribly important. In the large enterprise, they mean a lot. But acknowledging that they mean a lot isn’t to suggest that I can answer them – or at least most of them.

I’m skeptical that there is an answer for most of these questions. At least in the abstract, I doubt there is one right organization for digital or one right degree of centralization. I’ve had many conversations with wise folks who recognize that their organizations seem to be in constant motion – swinging like an enormous pendulum between extremes of centralization followed by extremes of decentralization.

Even this peripatetic motion – which can look so irrational from the inside – may make sense. If we assume that centralization and decentralization have distinct advantages, then changing circumstances might drive a change in the optimal configuration, and swinging the organization from one pole to the other might even help capture the benefits of each.

That seems unlikely, but you never know. There is sometimes more logic in the seemingly irrational movements of the crowd than we might first imagine.

Most questions about digital organization are deeply historical. They depend on what type of company you are, in what kind of market, with what culture and what strategic imperatives. All of which is, of course, Management 101. Obvious stuff that hardly needs to be stated.

However, there are some aspects of digital about which I am willing to be more directive. First, that some balance between centralization and decentralization is essential in analytics. The imperative for centralization is driven by these factors: the need for comparative metrics of success around digital, the need for consistent data collection, the imperatives of the latest generation of highly-complex IT systems, and the need/desire to address customers across the full spectrum of their engagement with the enterprise. Of these, the first and the last are primary. If you don’t need those two, then you may not care about consistent data collection or centralized data systems (this last is debatable).

On the other hand, there are powerful reasons for decentralization of which the biggest is simply that analytics is best done as close to the decision-making as possible. Before the advent of Hadoop, I would have suggested that the vast majority of analytics resources in the digital space be decentralized. Hadoop makes that much harder. The skills are much rarer, the demands for control and governance much higher, and the need for cross-domain expertise much greater in this new world.

That will change. As the open-source analytics stack matures and the market over-rewards skilled practitioners (drawing in more folks), it will become much easier to decentralize again. This isn’t the first time we’ve been down the IT path that goes from centralization to gradual diffusion as technologies become cheaper, easier, and better supported.

At an even more fundamental level than the question of centralization lives the location and nature of digital. Is digital treated as a thing? Is it part of Marketing? Or Operations? Or does each thing have a digital component?

I know I should have more of an opinion about this, but I’m afraid that the right answers seem to me, once again, to be local and historical. In a digital pure-play, to even speak of digital as a thing seems absurd. It’s the core of the company. In a gas company, on the other hand, digital might best be viewed as a customer service channel. In a manufacturer, digital might be a sub-function of brand marketing or, depending on the nature of the digital investment and its importance to the company, a unit unto itself.

Obviously, one of the huge disadvantages to thinking of digital as a unit unto itself is figuring out how it can then interact correctly with the non-digital functions that share the same purpose. If you have digital customer servicing and non-digital customer servicing, does it really make sense to have one in a digital department and the other in a customer-service department?

There is a case, however, for incubating digital capabilities within a small, compact, standalone entity that can protect and nourish the digital investment with a distinct culture and resourcing model. I get that. Ultimately, though, it seems to me that unless digital OWNS an entire function, separating that function across digital and non-digital lines is arbitrary and likely to be ineffective in an omni-channel world.

But here’s the flip side. If you have a single digital property and it shares marketing and customer support functions, how do you allocate real estate and who gets to determine key things like site structure? I’ve seen organizations where everything but the homepage is owned by somebody and the home page is like Oliver Twist. “Home page for sale, does anybody want one?”

That’s not optimal.

So the more overlap there needs to be between the functions and your digital properties, the more incentive you have to build a purely digital organization.

No matter what structure you pick, there are some trade-offs you’re going to have to live with. That’s part of why there is no magic answer to the right organization.

But far more important than the precise balance you strike around centralization or even where you put digital is the way you organize the core capabilities that belong to digital. Here, the vast majority of enterprises organize along the same general lines. Digital comprises some rough set of capabilities including:

  • IT
  • Creative
  • Marketing
  • Customer
  • UX
  • Analytics
  • Testing
  • VoC

In almost every company I work with, each of these capabilities is instantiated as a separate team. In most organizations, the IT folks are in a completely different reporting structure all the way up. There is no unification till you hit the C-Suite. Often, Marketing and Creative are unified. In some organizations, all of the research functions are unified (VoC, analytics) – sometimes under Customer, sometimes not. UX and Testing can wind up almost anywhere. They typically live under the Marketing department, but they can also live under a Research or Customer function.

None of this, to me, makes any sense.

To do digital well requires a deep integration of these capabilities. What’s more, it requires that these teams work together on a consistent basis. That’s not the way it’s mostly done.

Almost every enterprise I see not only siloes these capabilities, but puts in place budgetary processes that fund each digital asset as a one-time investment and that require pass-offs between teams.

That’s probably not entirely clear so let me give some concrete examples.

You want to launch a new website. You hire an agency to design the website. Then your internal IT team builds it. Now the agency goes away. The folks who designed the website no longer have anything to do with it. What’s more, the folks who built it get rotated onto the next project. Sometimes, that’s all that happens. The website just sits there – unimproved. Sometimes the measurement team will now pick it up. Keep in mind that the measurement team almost never had anything to do with the design of the site in the first place. They are just there to report on it. Still, they measure it, and if they find some problem, who do they give it to?

Well, maybe they pass it on to the UX team or the testing team. Those teams, neither of which has ever worked with the website or had anything to do with its design, are now responsible for implementing changes on it. And, of course, they will be working with developers who had nothing to do with building it.

Meanwhile, on an entirely separate track, the customer team may be designing a broader experience that involves that website. They enlist the VoC team to survey the site’s users and find out what they don’t like about it. Neither team (of course) had anything to do with designing or building the site.

If they come to some conclusion about what they want the site to do, they work with another(!) team of developers to implement their changes. That these changes may be at cross-purposes to the UX team’s changes or the original design intent is neither here nor there.

Does any of this make sense?

If you take continuous improvement to heart (and you should because it is the key to digital excellence), you need to realize that almost everything about the way your digital organization functions is wrong. You budget wrong and you organize wrong.

[Check out my relatively short (20 min) video on digital transformation and analytics organization – it’s the perfect medium for distributing this message through your enterprise!]

Here’s my simple rule about building digital assets. If it’s worth doing, it’s worth improving. Nothing you build will ever be right the first time. Accept that. Embrace it. That means you budget digital teams to build AND improve something. Those teams don’t go away. They don’t rotate. And they include ALL of the capabilities you need to successfully deliver digital experiences. Your developers don’t rotate off, your designers don’t go away, your VoC folks aren’t living in a parallel universe.

When you do things this way, you embed a commitment to continuous improvement deep in your core organizational processes. It almost forces you to do it right. All those folks in IT and creative will demand analytics and tests to run or they won’t have anything to do.

That’s a good thing.

This type of vertical integration of digital capabilities is far, far more important than the balance around centralization or even the home for digital. Yet it gets far less attention in most enterprise strategic discussions.

The existence or lack of this vertical integration is the single most important factor in driving analytics into digital. Do it right, and you’ll do it well. Do what everyone else does and…well…it won’t be so good.

Measuring the Digital World – The Movie!

I’ve put together a short 20-minute video that’s a companion piece to Measuring the Digital World. It’s a guided tour through the core principles of digital analytics and a really nice introduction to the book and the field:

Measuring the Digital World : Introduction

Measuring the Digital World

An Introduction to Digital Analytics

The video introduces the unique challenges of measuring the digital world. It’s a world where none of our traditional measurement categories and concepts apply. And it doesn’t help that our tools mostly point us in the wrong direction – introducing measurement categories that are unhelpful or misleading. To measure the digital world, we need to understand customer experiences not Websites. That isn’t easy when all you know is what web pages people looked at!

But it’s precisely that leap – from consumption to intent – that underlies all digital measurement. The video borrows an example from the book (Conan the Librarian) to show how this works and why it can be powerful. This leads directly to the concepts of 2-Tiered segmentation that are central to MTDW and are the foundation of good digital measurement.

Of course, it’s not that easy. Not only is making the inference from consumption to intent hard, it’s constantly undermined by the nature of digital properties. Their limited real estate and strong structural elements – designed to force visitors in particular directions – make it risky to assume that people viewed what they were most interested in.

This essential contradiction between the two most fundamental principles of digital analytics is what makes our discipline so hard and (also) so interesting.

Finally, the video introduces the big data story and the ways that digital data – and making the leap from consumption to intent – challenges many of our traditional IT paradigms (not to mention our supposedly purpose-built digital analytics toolkit).

Give it a look. Even if you’re an experienced practitioner, I think you’ll find parts of it illuminating. And if you’re new to the field or a consumer of digital reporting and analytics, I don’t think you could spend a more productive 20 minutes.

Afterward (when you want to order the book), here’s the link to it on Amazon!