Building and Measuring Analytics Culture

Culture – how to measure it and how to build it – has been much on my mind lately.

At least when it comes to the measurement part – something we don’t normally have to do – the reason is…different.

My Counseling Family team is going to be doing another fun project – participating in the 538 Oscar modeling challenge – and our approach is to try to model each nominated movie’s fit to the current Hollywood zeitgeist. The theory behind the approach is simple. It seems fairly reasonable to suggest that while qualitative differences between nominated and non-nominated movies might be fairly large, when it comes to selecting among a small set of relatively high-quality choices the decision is fairly arbitrary. In such situations, political and personal concerns will play a huge role, but so, presumably, will simple preference. Our thought is that preference is likely more a function of worldview than artistry – in much the same way that people watching a political debate almost always believe that the person who most nearly echoed their own opinions won.

But how do you measure the cultural fit of a movie to a community? It’s no easy task. Our challenges include deciding how to capture a cultural zeitgeist in general, how to focus that capture on Hollywood, how to capture the spirit and themes of each movie, and how to score a match. And, of course, there is the challenge that the Hollywood zeitgeist might be more hot air than great wind – altogether too thin to be captured!
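
To make the scoring problem concrete, here is a minimal sketch of one way a match could be computed – representing a zeitgeist corpus and each movie’s descriptive text as TF-IDF vectors and taking cosine similarity against the corpus centroid. Everything in it (the toy documents especially) is a hypothetical stand-in: an illustration of the shape of the problem, not our actual approach.

    # Hedged sketch: score each nominee's thematic fit to a "zeitgeist" corpus
    # via TF-IDF and cosine similarity. The documents are toy stand-ins; a real
    # run would use trade press, award-season coverage, reviews, etc.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    zeitgeist_docs = [
        "industry anxiety about streaming and the future of theatrical release",
        "celebration of craft, auteurs, and stories about the movie business itself",
    ]
    movie_docs = {
        "Nominee A": "a love letter to old Hollywood and the people who make movies",
        "Nominee B": "a quiet family drama set far from any industry concerns",
    }

    vectorizer = TfidfVectorizer(stop_words="english")
    corpus_matrix = vectorizer.fit_transform(zeitgeist_docs)
    centroid = np.asarray(corpus_matrix.mean(axis=0))  # one "average zeitgeist" vector

    for title, text in movie_docs.items():
        fit = cosine_similarity(vectorizer.transform([text]), centroid)[0, 0]
        print(f"{title}: zeitgeist fit = {fit:.3f}")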

Should be interesting.

Equally, though, I have been thinking a lot about how to build culture – specifically when it comes to analytics. A constant theme running through my recent posts on enterprise transformation has been the challenge of doing digital well in the large enterprise. As I’ve repeatedly pointed out, that challenge is less a matter of technology or people than of organization and culture. Enterprises have, and understand, the core capabilities to do digital well. Listen to my recent video series and you’ll hear this refrain:

  • The typical enterprise does analytics, they just don’t use it.
  • They have testing, they just don’t learn.
  • They talk voice-of-customer, they just don’t listen.
  • They do Agile, but they aren’t.

To fix those problems requires changes in the way the organization is structured and the way those capabilities are executed. Even more, it requires changes in the way people think. That’s culture.

So I’ve been pondering how to build a culture of analytics-driven decision-making and, of course, how to measure whether you’re successful. Now while my particular problem – building the proper sort of enterprise for digital transformation – may not be the standard one, the problems and challenges of building and measuring culture are hardly unique. And since this isn’t my specialty at all, I’ve been trying to read up on common approaches.

By and large, it’s pretty disappointing.

From a building-culture perspective, so much of the literature seems to focus on top-down approaches: ways that a senior leader can communicate and encourage cultural change. That’s clearly important, and I’m not going to dispute either the need for top-down change or the challenges around it. But this type of approach often seems to degenerate into leadership self-help advice or cheerleading, neither of which seems likely to be useful. Nor am I much impressed by the idea of carefully crafting a mission statement and promulgating it through the organization. I’ve sat in more than one excruciating mission statement meeting and all I ever got out of it was a sore butt. I’ve said before that if you have to create an innovation capability in your enterprise, you’re already defeated. And if you’re looking to a carefully crafted corporate mission statement to provide a shared vision, you’ve already lost your way.

I wasn’t much more impressed with attempts to measure culture.

It’s hard, obviously.

Most approaches seem to rely on survey instruments and either categorize the organization into pre-defined types (e.g. hierarchical) or score a number of separate variables (e.g. perceived alignment of vision). This seems like a close cousin of personality testing. Lots of people love those tests, but they don’t strike me as particularly rigorous.

With regards to categorization, in particular, I’m skeptical that it means much and very skeptical that it might be a useful trigger to action. I can see value – and perhaps even triggers to action – in learning that there are differing perceptions of organization mission or differing perceptions around how aligned the organization is. It’s easy to fool yourself with a view from the top and this type of cultural survey instrument might help correct attitudes of corporate complacency. It’s much less clear to me, however, that such measurement would be a useful part of a continuous program designed to improve analytics (or any other) culture.

I’d very much like to see measures of culture that are behavioral and amenable to continuous measurement, but at least so far I haven’t come across anything very interesting.
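
To be concrete about what “behavioral and amenable to continuous measurement” might mean, here is a purely hypothetical sketch: the weekly share of recorded decisions (tickets, change requests) that reference at least one analytics artifact. The schema and the linkage of decisions to artifacts are invented for illustration; whether such a proxy actually tracks culture is exactly the open question.

    # Hypothetical behavioral culture proxy, computed continuously from logs
    # rather than from a survey snapshot. The log schema is invented.
    import pandas as pd

    decisions = pd.DataFrame({
        "decision_id": [1, 2, 3, 4],
        "week": ["2016-W01", "2016-W01", "2016-W02", "2016-W02"],
    })
    analytics_refs = pd.DataFrame({       # links from decisions to analytics artifacts
        "decision_id": [1, 3, 3],
        "artifact": ["dashboard", "query", "ab_test"],
    })

    linked = decisions.merge(analytics_refs, on="decision_id", how="left")
    has_evidence = linked.groupby(["week", "decision_id"])["artifact"].apply(
        lambda s: s.notna().any()
    )
    weekly_share = has_evidence.groupby("week").mean()
    print(weekly_share)   # trend this continuously instead of surveying annually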

It may be that culture is one of those things that is so challenging to measure that the subsequent benefits in clarity and decision-making – at least outside the world of academia – aren’t worth the investment. Perhaps the best way to measure culture is by digital success. If it works, it’s working. You could easily take that point of view from this extended series on digital transformation and I don’t think it’s implausible.

Maybe we just haven’t found the right methods.

Or maybe I just haven’t read the right articles. Indeed, if you have thoughts on either of these issues (how to build or how to measure culture) or can point me to interesting research, I’d love to hear about it.

Right now, I have more ideas about how to build analytics culture than about how to measure your success in building it. Some of those ideas are implicit in the recommendations I’ve been making about the integration of analytics, the use of voice-of-customer and the role of experimentation, but they don’t end there.

In my next post, I’ll explain some concrete actions the enterprise can take to build analytics culture and why I think they are both more practical and more impactful than most of what passes for culture building.

At the same time, I’m going to be thinking more about measuring culture and I hope – eventually – to have something interesting to say about that too.


Measuring the Digital World is OFFICIALLY RELEASED. Order here – and just let me know if you’d like an autographed copy!

Measuring the Digital World

After several months in pre-order purgatory, my book, Measuring the Digital World, is now available. If you’re even an occasional reader of this blog, I hope you’ll find the time to read it.

I know that’s no small ask. Reading a professional book is a big investment of time. So is reading Measuring the Digital World worth it?

Well, if you’re invested in digital optimization and analytics, I think it is – and here’s why. We work in a field that is still very immature. It’s grown up, as it were, underneath our feet. And while that kind of organic growth is always the most exciting, it’s also the most unruly. I’m betting that most of us who have spent a few years or more in digital analytics have never really had a chance to reflect on what we do and how we do it. Worse, most of those who are trying to learn the field have to do so almost entirely by mentored trial-and-error. That’s hard. Having a framework for how and why things work makes the inevitable trial-and-error learning far more productive.

My goal in Measuring the Digital World wasn’t so much to create a how-to book as to define a discipline. I believe digital analytics is a unique field. A field defined by a few key problems that we must solve if we are to do it well. In the book, I wanted to lay out those problems and show how they can be tackled – irrespective of the tools you use or the type of digital property you care about.

At the very heart of digital analytics is a problem of description. Measurement is basic to understanding. We are born with, and soon learn to speak and think in terms of, measurement categories that apply to the physical world. Dimensionality, weight, speed, direction and color are some of the core measurement categories that we use over and over and over again in understanding the world we live in. These things don’t exist in the digital world.

What replaces them?

Our digital analytics tools provide the eyes and ears into the digital world. But I think we should be very skeptical of the measurement categories they suggest. Having lived through the period when those tools were designed and took their present shape, I’ve seen how flawed were the measurement conceptions that drove their form and function.

It’s not original, but it’s still true to say that our digital analytics tools mostly live at the wrong level and have the wrong set of measurement categories – that they focus far too much on web assets and far too little on web visitors.

But if this is a mere truism, it nevertheless lays the groundwork for a real discipline. Because it suggests that the great challenge of digital is how to understand who people are and what they are doing using only their viewing behavior. We have to infer identity and intention from action. Probably 9 out of every 10 pages in Measuring the Digital World are concerned with how to do this.

The things that make it hard are precisely the things that define our discipline. First, to make the connection between action and both identity and intention, we have to find ways to generate meaning based on content consumption. This means understanding at a deep level what content is about – it also means making the implicit assumption that people self-select the things that interest them.

For the most part, that’s true.

But it’s also where things get tricky. Because digital properties don’t contain limitless possibilities, and they impose a structure that tries to guide the user to specific actions. This creates a push-pull in every digital world. On the one hand, we’re using what people consume to understand their intention and, at the very same time, we’re constantly forcing their hand and trying to get them to take specific actions! Every digital property – no matter its purpose or design – embodies this push-pull. The result? A complex interplay between self-selection, intention and web design that makes understanding behavior in digital a constant struggle.

That’s the point – and the challenge – of digital analytics. We need to have techniques for moving from behavior to identity and intention. And we need to have techniques that control for the structure of digital properties and the presence or absence of content. These same challenges are played out on Websites, on mobile apps and, now, on omni-channel customer journeys.
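
To give a flavor of that first move – from viewing behavior to a profile of interest – here is a deliberately tiny sketch. The topic tagging and the viewing log are invented for illustration; the self-selection assumption discussed above is what makes the inference work at all.

    # Minimal sketch of inferring interest from consumption: tag each page with
    # a topic, then weight a visitor's profile by time spent. The page_topics
    # map and the viewing log are hypothetical stand-ins for a real taxonomy.
    from collections import defaultdict

    page_topics = {"/pricing": "purchase", "/blog/how-to": "research", "/careers": "jobs"}

    def visitor_profile(page_views):
        """page_views: list of (url, seconds_on_page) for one visitor."""
        weights = defaultdict(float)
        for url, seconds in page_views:
            topic = page_topics.get(url)
            if topic:
                weights[topic] += seconds   # self-selection assumption: time ~ interest
        total = sum(weights.values()) or 1.0
        return {topic: w / total for topic, w in weights.items()}

    print(visitor_profile([("/pricing", 120), ("/blog/how-to", 40)]))
    # -> {'purchase': 0.75, 'research': 0.25}  # intention inferred from action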

This is all ground I’ve walked before, but Measuring the Digital World embodies an orderly and fairly comprehensive approach to describing these challenges and laying out the framework of our discipline. How it works. Why it’s hard. What challenges we still face. It’s all there.

So if you’re an experienced analyst and just want to reflect your intuitions and knowledge against a formal description of digital analytics and how it can be done, this book is for you. I’m pretty sure you’ll find at least a few new ideas and some new clarity around ideas you probably already have.

If you’re relatively new to the field and would like something that is intellectually a little more meaty than the “bag of tips-and-tricks” books that you’ve already read, then this book is for you. You’ll get a deep set of methods and techniques that can be applied to almost any digital property to drive better understanding and optimization. You’ll get a sense, maybe for the first time, of exactly what our discipline is – why it’s hard and why certain kinds of mistakes are ubiquitous and must be carefully guarded against.

And if you’re teaching a bunch of MBA or business students about digital analytics and want something that actually describes a discipline, this book is REALLY for you (well…for your students). Your students will get a true appreciation for a cutting-edge analytics discipline; they’ll also get a sense of where the most interesting new problems in digital analytics are and what approaches might bear fruit. They’ll get a book that illuminates how the structure of a field – in this case digital – demands specific approaches, creates unique problems, and rewards certain types of analysis. That’s knowledge that cuts deeper than just understanding digital analytics – it goes right to the heart of what analytics is about and how it can work in any business discipline. Finally, I hope that the opportunity to tackle the deep and interesting problems illuminated by the book’s framework excites new analysts and inspires the next generation of digital analysts to go far beyond what we’ve been able to do.


Yes, even though I’m an inveterate reader, I know it’s no trivial thing to say “read this book”. After all, despite my copious consumption, I delve much less often into business or technical books. So many seem like fine ten-page articles stretched – I’m tempted to say distorted – into book form. You get their gist in the first five pages and the rest is just filler. That doesn’t make for a great investment of time.

And now that I’ve actually written a book, I can see why that happens. Who really has 250 pages’ worth of stuff to say? I’m not sure I do…actually I’m pretty sure there’s some filler tucked in there in a spot or two. But I think the ratio is pretty good.

With Measuring the Digital World I tried to do something very ambitious – define a discipline. To create the authoritative view of what digital analytics is, how it works, and why it’s different than any other field of analytics. Not to answer every question, lay out every technique or solve every problem. There are huge swaths of our field not even mentioned in the book. That doesn’t bother me. What we do is far too rich to describe in a single book or even a substantial collection. Digital is, as the title of the book suggests, a whole new world. My goal was not to explore every aspect of measuring that world, but only to show how that measurement, at its heart, must proceed. I’m surely not the right person to judge to what extent I succeeded. I hope you’ll do that.

Here’s the link to Measuring the Digital World on Amazon.

[By the way, if you’d like a signed copy of Measuring the Digital World, just let me know. You can buy a copy online and I’ll send you a book-plate. I know it’s a little silly, but I confess to extreme fondness for the few signed books I possess!]

Analytics with a Strategic Edge

The Role of Voice of Customer in Enterprise Analytics

The vast majority of analytics effort is expended on problems that are tactical in nature. That’s not necessarily wrong. Tactics gets a bad rap sometimes, but the truth is that the vast majority of decisions we make in almost any context are tactical. The problem isn’t that too much analytics is weighted toward tactical issues; it’s that strategic decisions don’t use analytics at all. The biggest, most important decisions in the digital enterprise nearly always lack a foundation in data or analysis.

I’ve always disliked the idea behind “HIPPOs” – with its Dilbertian assumption that executives are idiots. That isn’t (mostly) my experience at all. But analytics does suffer from what might be described as “virtue” syndrome – the idea that something (say taxes or abstinence) is good for everyone else but not necessarily for me. Just as creative folks tend to think that what they do can’t be driven by analytics, so too is there a perception that strategic decisions must inevitably be more imaginative and intuitive and less number-driven than many decisions further down in the enterprise.

This isn’t completely wrong, though it probably short-sells those mid-level decisions. Building good creative takes…creativity. It can’t be churned out by machine. Ditto for strategic decisions. There is NEVER enough information to fully determine a complex strategic decision at the enterprise level.

This doesn’t mean that data isn’t useful or shouldn’t be a driver for strategic decisions (and for creative content too). Instinct only works when it’s deeply informed about reality. Nobody has instincts in the abstract. To make a good strategic decision, a decision-maker MUST have certain kinds of data to hand, and without that data there’s nothing on which intuition, knowledge and experience can operate.

What data does a digital decision-maker need for driving strategy?

Key audiences. Customer journey. Drivers of decision. Competitive choices.

You need to know who your audiences are and what makes them distinct. You need (as described in the last post) to understand the different journeys those audiences take and what journeys they like to take. You need to understand why they make the choices they make – what drives them to choose one product or service or another. Things like demand elasticity, brand awareness, and drivers of choice at each journey stage are critical. And, of course, you need to understand when and why those choices might favor the competition.

None of this stuff will make a strategic decision for you. It won’t tell you how much to invest in digital. Whether or not to build a mobile app. Whether personalization will provide high returns.

But without fully understanding audience, journey, drivers of decision and competitive choices, how can ANY digital decision-maker possibly arrive at an informed strategy? They can’t. And, in fact, they don’t. Because for the vast majority of enterprises, none of this information is part-and-parcel of the information environment.

I’ve seen plenty of executive dashboards that are supposed to help people run their business. They don’t have any of this stuff. I’ve seen the “four personas” puffery that’s supposed to help decision-makers understand their audience. I’ve seen how little exposure executives have to journey mapping and how rarely it is deployed on a day-to-day basis. Worst of all, I’ve seen how pathetic the use of voice of customer (online and offline) is when it comes to helping decision-makers understand why customers make the choices they do.

Voice of customer as it exists today is almost exclusively concerned with measuring customer satisfaction. There’s nothing wrong with measuring NPS or satisfaction. But these measures tell you nothing that will help define a strategy. They are, at best (and they are often deeply flawed here too), scoreboard measures – indicators of whether or not you are succeeding in a strategy.

I’m sure that people will object that knowing whether or not a strategy is succeeding is important. It is. It’s even a core part of ongoing strategy development. However, when divorced from particular customer journeys, NPS is essentially meaningless and uninterpretable. And while it truly is critical to measure whether or not a strategy is succeeding, it’s even more important to have data to help shape that strategy in the first place.

Executives just don’t get that context from their analytics teams. At best, they get little pieces of it in dribs and drabs. It is never – as it ought to be – the constant ongoing lifeblood of decision-making.

I subtitled this post “The Role of Voice of Customer in Enterprise Analytics” because of all the different types of information that can help make strategic decisions better, VoC is by far the most important. A good VoC program collects information from every channel: online and offline surveys, call-center, site feedback, social media, etc. It provides a continuing, detailed and sliceable view of audience, journey distribution and (partly) success. It’s by far the best way to help decision-makers understand why customers are making the choices they are, whether those choices are evolving, and how those choices are playing out across the competitive set. In short, it answers the majority of the questions that ought to be on the minds of decision-makers crafting a digital strategy.

This is a very different sort of executive dashboard than we typically see. It’s a true customer insights dashboard. It’s also fundamentally different than almost ANY VoC dashboard we see at any level. The vast majority of VoC reporting doesn’t provide slice-and-dice by audience and use-case – a capability which is absolutely essential to useful VoC reporting. VoC reporting is almost never based on and tied into a journey model so that the customer insights data is immediately reflective of journey stage and actionable arena. And VoC reporting almost never includes a continuous focus on exploring customer decision-making and tying that into the performance of actual initiatives.
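
As an illustration of the difference, here is roughly what slice-and-dice might look like when every response is tagged with an audience segment and a journey stage, so NPS and decision drivers stop being one company-wide number. The schema and data are hypothetical.

    # Hypothetical sketch: VoC responses tagged by segment and journey stage,
    # sliced instead of averaged. The column names are invented for illustration.
    import pandas as pd

    responses = pd.DataFrame({
        "segment":       ["enthusiast", "enthusiast", "bargain", "bargain"],
        "journey_stage": ["research",   "purchase",   "research", "purchase"],
        "score":         [9, 10, 4, 7],
        "driver":        ["features", "brand", "price", "price"],
    })

    def nps(scores):
        # promoters (9-10) minus detractors (0-6), as a percentage
        return 100 * ((scores >= 9).mean() - (scores <= 6).mean())

    by_slice = responses.groupby(["segment", "journey_stage"]).agg(
        nps=("score", nps),
        n=("score", "size"),
        top_driver=("driver", lambda s: s.mode().iat[0]),
    )
    print(by_slice)   # NPS and dominant decision driver per segment x journey stage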

It isn’t just a matter of a dashboard. One of the most powerful and distinctive aspects of digital voice-of-customer is the flexibility it provides to tackle new problems rapidly, efficiently and at very little cost. VoC should be a core part of executive decision-making, with a constant cadence of research, analysis, discussion and reporting driven by specific business questions. This open and continuing dialog, where VoC is a tool for decision-making, is critical to integrating analytics into decisioning. If senior folks aren’t asking for new VoC research on a constant basis, you aren’t doing it right. The single best indicator of a robust VoC program in digital is the speed with which it changes.

Sadly, what decision-makers mostly get right now (if they get anything at all) is a high-level, non-segmented view of audience demographics, an occasional glimpse into high-level decision-factors that is totally divorced from both segment and journey stage, and an overweening focus on a scoreboard metric like NPS.

It’s no wonder, given such thin gruel, that decision-makers aren’t making better use of data in strategic decisions. If our executives mostly aren’t Dilbertian, they aren’t miracle workers either. They can’t make wine out of information water. If we want analytics to support strategy – and I assume we all do – then building a completely different sort of VoC program is the single best place to start. It isn’t everything. There are other types of data (behavioral, benchmark, econometric, etc.) that can be hugely helpful in shaping digital strategies. But a good VoC program is a huge step forward – a step forward that, if well executed, has the power to immediately transform how the digital enterprise thinks and works.


This is probably my last post of the year – so see you in 2016! In the meantime, my book Measuring the Digital World is now available. Could be a great way to spend your holiday downtime (ideally while you’re resting up from time on the slopes)! Have a great holiday…

Is Data Science a Science?

I got a fair amount of feedback through various channels around my argument that data science isn’t a science and that the scientific method isn’t a method (or at least much of one). I wouldn’t consider either of these claims particularly important in the life of a business analyst, and I think I’ve written pieces that are far more significant in terms of actual practice, but I’ve written few pieces about topics which are evidently more fun to argue about. Well, I’m not opposed to a fun argument now and again, so here’s a redux on some of the commentary and my thoughts in response.

There were two claims in that post:

  1. I was somewhat skeptical that data science was correctly described as a science
  2. I was extremely skeptical that the scientific method was a good description of the scientific endeavor

The comment that most engaged me came from Adam Gitzes and really focused on the first claim:

Science is the distillation of evidence into a causal understanding of the world (my definition anyway). In business analytics, we use surveys, data analysis techniques, and experimental design to also understand causal relationships that can be used to drive our business.

On re-reading my initial post, I realized that while I had argued that business analytics wasn’t science (#1 above), I hadn’t really put many reasons on the table for that view – partly because I was too busy demolishing the “Scientific Method” and partly because I think it’s the less important of the two claims and also the more likely to be correct. Mostly, I just said I was skeptical of the idea. So I think Adam’s right to push out a more specific description of science and ask why data science might not be reasonably described as a kind of scientific endeavor.

I’m not going to get into the thicket of trying to define science. Really. I’m not. That’s the work of a different career. If I got nothing else out of my time studying Philosophy, I got an appreciation for how incredibly hard it is to answer seemingly simple questions like “what is science?” For the most part, we know it when we see it. Physics is science. Philosophy isn’t. But knowing it when you see it is precisely what fails when it comes to edge cases like data science or sociology.

When it comes to business analytics and data science, however, there are a couple of things that make me skeptical of applying the term science that I think we might actually agree on and that use our shared, working understanding of the scientific endeavor.

In business analytics, our main purpose isn’t to understand the world. It’s to improve a specific part of it. Science has no such objective.

Does that seem like a small difference? I don’t think it is. Part of what makes the scientific endeavor unique is that there is no axe to grind. Understanding is the goal. This isn’t to say that people don’t get attached to their ideas or that their careers don’t benefit if they are successful advocates for them – it’s done by humans, after all. It would be no more accurate to suggest that the goal of a business is always profit. External forces can and often do set the agenda for researchers. But these are corruptions of the process, not the process itself. Business analytics starts (appropriately) with an axe to grind; true science doesn’t.

To see why this makes a difference, consider my own domain – digital analytics. If our goal was just to understand the digital world, we’d have a very different research program than we do. If knowledge was our only goal, we’d spend as much time analyzing why people create certain kinds of digital worlds as how people consume them. That’s not the way it works. In reality, our research program is entirely focused on why and how people use a digital property and what will get more of them to take specific actions – not why and how it was created.

We are, rightly I believe, skeptical of the idea that research sponsored by tobacco companies into lung cancer is, properly speaking, science. That’s not because those researchers don’t follow the general outline of the scientific endeavor – it’s because they have an axe to grind and their research program is determined by factors outside the community of science. When it comes to business analytics, we are all tobacco scientists.

Perhaps we’re not so biased as to the findings of our experiments – good analytics is neutral as to what will work – but we’re every bit as biased when it comes to the outcomes desired and the shape of the research program.

Here’s another crucial difference. I think it’s fair to suggest that in data science we sometimes have no interest in causality. If I’m building a forecast model and I can find variables that are predictive, I may have little interest in whether those variables are also causal. If I’m building a look-alike targeting model, for example, it doesn’t matter one whit whether the variables are causal. Now it’s true that philosophers of science hotly debate the role and necessity of causality in science, but I tend to agree with Adam that there is something in the scientific endeavor that makes the demand for causality a part of the process. But in business analytics, we may demand causality for some problems but be entirely and correctly unconcerned with it in others. In business analytics, causality is a tool not a requirement.
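
A quick sketch with synthetic data makes the point. The look-alike model below is judged entirely on whether its scores rank prospects well; the question of which features cause conversion never enters the exercise. The data and feature construction are invented for illustration.

    # Sketch of the look-alike case: a purely predictive model where causality
    # is irrelevant. We only care that features separate converters from the
    # rest, not whether any feature *causes* conversion.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 5))                # behavioral features, known customers
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=1000) > 0).astype(int)  # "converted"

    model = LogisticRegression().fit(X, y)
    prospects = rng.normal(size=(10, 5))          # the wider audience to score
    scores = model.predict_proba(prospects)[:, 1]  # rank by resemblance to converters
    print(scores.round(2))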

There is, also, the nature of the analytics problem – at least in my field (digital). Science is typically concerned with studying natural phenomena. The digital world is not a natural world, it’s an engineered world. It’s created and adapted with intention. Perhaps even worse, it responds to and changes with the measurements we make and those measurements influence our intentions in subsequent building (which is the whole point after all).

This is the observer effect with a vengeance! When we measure the digital world, we mean to change it based on the measurement. What’s more, once we change it, we can never go back to the same world. We could restore the HTML, but not the absence of users with an alternative experience. In digital, every test we run changes the world in a fundamental way because it changes the users of that world. There is no possibility of conducting a digital test that doesn’t alter the reality we’re measuring – and while this might be true at the quantum level in physics, at the macro level where the scientific endeavor really lives, it seems like a huge difference.

What’s more, each digital property lives in the context of a larger digital world that is being constantly changed with intention by a host of other people. When new Apps like Uber change our expectations of how things like payment should work or alter the design paradigm on the Web, these exogenous and intentional changes can have a dramatic impact on our internal measurement. There is, then, little or no possibility of a true controlled experiment in digital. In digital analytics, our goal is to optimize one part of a giant machine for a specific purpose while millions of other people are optimizing other, inter-related parts of the same machine for entirely different and often opposed purposes.

This doesn’t seem like science to me.

There are disciplines that seem clearly scientific that cannot do controlled experiments. However, no field where the results of an experiment change the measured reality in a clearly significant fashion and are used to intentionally shape the resulting reality is currently described as scientific.

So why don’t I think data science is a science – at least in the realm of digital analytics? It differs from the scientific endeavor in several aspects that seem to me to be critical. Unlike science, business analytics and data science start with an agenda that isn’t just understanding and this fundamentally shapes the research program. Unlike science, business analytics and data science have no fixed commitment to causal explanations – just a commitment to working explanations. Finally, unlike science, business analytics and data science change the world they measure in a clearly significant fashion and do so intentionally with respect to the measurement.

Given that we have no fixed and entirely adequate definition of science, none of this is proof. I can’t demonstrate to you with the certainty of a logical proof that the definition of science requires X, data science is not X, so data science is not a science.

However, I think I have shown that, at least by many of the core principles we associate with the scientific endeavor, business analytics (which I take to be a proxy in this conversation for data science) is not well described as a science.

This isn’t a huge deal. I’ve done business analytics for many years and never once thought of myself as a scientist. What’s more, once we realize that being scientists doesn’t attach a powerful new methodology to business analytics – which was the rather more important point of my last post – it’s much less clear why anyone would think it makes a difference.



A few other notes on the comments I received. With regards to Nikolaos’ question “why should we care?” I’m obviously largely in agreement. There is intellectual interest in these questions (at least for me), but I won’t pretend that they are likely to matter in actual practice or will determine ‘what works’. I’m also very much in agreement with Ake’s point about qualitative data. The truth is that nothing in the scientific endeavor precludes the use of qualitative data in addition to behavioral data. But even though there’s no determinate tie between the two, I certainly think that advocates for data science as a science are particularly likely to shun qualitative data (which is a shame). As far as Patrick’s comment goes, I think it dodges the essential question. He’s right to suggest that the term data science is contentless because data is not the subject of science; the data is always about something, and that something is the subject of the science. But I take the deeper claim to be the one I have tackled here; namely, that business analytics is a scientific endeavor. That claim isn’t contentless, just wrong. I remain, still, deeply unconvinced of the utility of CRISP-DM.


Now is as good a time as any (how’s that for a powerful call to action?) to pre-order my book, ‘Measuring the Digital World’ on Amazon.