Tag Archives: SPEED

Organizing the Digital Enterprise

At the Digital Analytics Hub in Europe I facilitated a conversation around enterprise digital transformation. We covered a lot of interesting ground, but organizing digital in the enterprise was the most challenging part of that discussion.

It’s a topic you can easily find yourself going around in circles with as people trot out opinions that sound right but sail past each other. That’s especially true since different organizations start (and want to finish) in very different places.

To get around that, I framed the problem in “state-of-nature” terms. If you were starting a digital organization from scratch in an enterprise, how would you organize and staff it?

But before we could answer that question, we had to consider something even more basic.

Should a “digital” organization be separate?

There’s a pretty strong sense these days that walling off digital from the rest of the organization gets things wrong from the outset. Digital should be embedded right into the DNA of the core organization. There was a pretty broad consensus that, in a mature organization, digital isn’t a separate function. On the other hand, what if you’re not mature? Can you embed digital directly and grow it right if it’s inside the huge, complex structures that pervade an existing large enterprise? Even strong proponents of the “digital needs to be organic in the organization” point of view seemed to concede that incubation as a separate organization is often necessary for getting digital done and set up right. Of course, taking the incubation strategy is going to leave you with an organizational debt that at some point will have to be paid. The more successful you are and the larger and faster digital grows, the harder it’s going to be to re-integrate digital back into the organization.

I see both sides of this argument (and I’m sure there are more than two sides to be had). I’m just not a big believer in hard-and-fast right answers when it comes to organizational design.

If you have a strong digitally-experienced leader on your executive team and you have solid relationships between marketing and IT, maybe you try to transform digitally within your existing structures. If you’re not that lucky (and that is pretty lucky), maybe incubation with a strategy for integration is the right answer.

Having gotten to the point where most people conceded that incubation might sometimes be necessary, we returned to the “state-of-nature” question and discussed building out an incubated organization. Most people put product teams at the heart of that organization and agreed that these product teams should be organized very much along the lines that I described in my videos on enterprise transformation: agile-based product teams that include IT, creative and analytics people (behavioral, customer and attitudinal) all working together. In this model, there’s no pass-off from design to implementation to measurement to testing. The same team that builds a product optimizes it – and there’s analytics at every step of the process.

I believe this is an incredibly powerful model for getting digital products right – and it’s a model that resonated across a pretty wide swath of different organizations – from giant retailers to very modest start-ups.

But it’s far from a complete answer to creating a digital organization.

Suppose you have these great integrated teams for each digital product: how do you handle all the ancillary functions that the large enterprise has developed? Things like finance and HR, for example. Do they need to be re-created inside a digital organization?

My reaction – and it was common – was that such functions probably don’t need to be re-created inside digital. Including these functions in digital doesn’t seem fundamental to getting digital right.

This point-of-view, however, was immediately challenged when it came to HR. The difficulties of digital hiring are well known – and it isn’t just finding resources. Traditional HR approaches to finding people, vetting candidates, compensation, and promotion bands are all problematic in digital. And if you get the people element wrong, everything else is doomed.

So once again, if you’ve got HR folks willing to work with and adapt to the needs of your digital leader, maybe you can leave existing structures intact and keep HR centralized. But HR is the wrong place to wimp out and leave your digital team without the power to execute the way they need to.

Bottom line? If my digital leader really wanted to own their own HR, I’d say yes.

Other functions? I don’t really know. Is the function fundamental to digital execution? Does it need to be done differently in digital? Wherever the answer is yes, it’s going to be a debate about whether that function should live inside an incubated digital organization or be an outside service to it.

There’s another challenge that cuts even closer to the bone and lies at the heart of the challenge to the large enterprise. If you have a single digital product (like a pure-play startup might), you don’t have to worry about the relationships between and across teams and functions. But in a larger enterprise – even when it’s incubated – digital is going to require multiple product teams.

How do management lines work across those teams? Are the IT folks across product teams in the Digital IT organization and are they “managed” by Digital IT? Or are they managed by their Product Owner? In one sense, the answer seems obvious. On a day-to-day basis they are managed by their Product Owner. But who owns their career? What’s a career path like? How do Digital IT folks (or analysts) across product teams communicate? Who makes centralized decisions about key technology infrastructure? Who owns the customer?

Every one of these is a deep, important question with real ramifications for how the organization works and how you take a single product model and scale it into something that preserves the magic of the integrated team but adapts to the reality of the large, multi-function enterprise.

It was here, not surprisingly, that one of the participants in our DA Hub conversation trotted out the “dotted line”. Now it happened to be a consultant from a fellow Big 4 firm, and I (too glibly, I’m afraid) responded that “dotted lines are what consultants draw when they don’t have a good answer to a problem”.

I both regret and endorse this answer. I regret it because it was far too glib a response to what is, in one sense, probably the right answer. I endorse it because I think it’s true. God knows I’ve drawn these dotted lines before. When we draw a dotted line we essentially leave it up to the organization to organically figure out how it should work in day-to-day practice. That’s not necessarily a bad thing. It’s probably the right answer in a lot of cases. But we shouldn’t kid ourselves that just because it might be the right answer that makes it a good answer. It’s not. It’s a “we’re not the right people at the right time to answer this question” kind of answer. Knowing enough to know you’re not the right people at the right time is a good thing, but it would be a mistake to confuse that with actually having a good answer to the question.

So here’s my best attempt at a non-dotted line organization that integrates Product Teams into a broader structure. It seems clear to me that you need some centralized capabilities within each function. For Digital IT, as an example, these centralized teams provide shared services including enterprise technology selection, key standards and data governance. In analytics, the centralized team will be responsible for the overall customer journey mapping, analytics technology selection and standardization, a centralized analytics warehouse, and standards around implementation and reporting.

How big and inclusive does the centralized team need to be? Thinking there’s one right answer to this question is a kind of disease akin to thinking there’s some right answer to questions like “how large should government be?” There isn’t. I tend to be in the “as small as practical” school when it comes to centralization – both politically and in the enterprise. The best IT, the best creative, the best analytics is done when it’s closest to the business – that means out there in those Product teams. That also means making sure you don’t incent your best people out of the product teams into centralized roles so that they can “advance” and make more money.

It used to drive me crazy to see good teachers promoted to administrative roles in schools. You can’t blame the teachers. When you’ve got a family to feed, a house to buy, a nice car to own, you’re not going to stay a teacher when your only path to more money and prestige is becoming an assistant principal. But you don’t see the Cleveland Cavaliers promoting LeBron from player to coach. It’s a terrible mistake to confuse rank with value.

I’m a big believer in WIDE salary bands. One great developer is worth an army of offshore programmers and is likely worth more than the person managing them. Don’t force your best people to Peter Principle themselves into jobs they hate or suck at.

So instead of creating progressions from Product teams to central teams, I’d favor aggressive rotational policies. By rotating people into and out of those central teams, you ensure that central teams stay attuned to the needs of the Product teams where work is actually getting done. You also remove the career-path issues that often drive top talent to gravitate toward centralization.

Communication and collaboration pose another tricky problem. Collaboration is one of the most under-invested capabilities in the organization, and my Product team structure is going to make it harder to do well. For areas like analytics, though, it’s critical. Analysts need to communicate practices and learnings across – not just within – product teams. So I’d favor having at least one role (and maybe more) per area in the central team whose sole function is driving cross-team communication and sharing. This is one of those band-aids you slap on an organizational structure because it doesn’t do something important well. Every organizational structure will have at least a few of these.

In an ideal world, that collaboration function would probably always have at least two resource slots – and one of those slots would be rotated across different teams.

My final structure features highly integrated product teams that blend resources across every function needed to deliver a great digital experience. Those teams don’t dissolve and they don’t pass off products. They own not just the creation of a product, but its ongoing improvement. Almost needless to say, analytics (customer, behavioral and attitudinal) is embedded in that team right from the get-go and drives continuous improvement.

Those teams are supported by centralized groups organized by function (IT, Design, Analytics) that handle key support, integration and standardization tasks. These centralized teams are kept as small as is practical. Rotational policies are enforced so that people experience both centralized and product roles. Salary bands are kept very wide and the organization tries hard not to incent people out of roles at which they excel. Included in the centralized teams are roles designed to foster collaboration and communication between functional areas embedded in the product teams.

Finally, support functions like HR and Finance are mostly kept external. However, where compelling reasons exist to incubate them with digital, they are embedded in the central structure.

I won’t pretend this is the one right answer to digital organizational structure. Not only does it leave countless questions unanswered, but I’m sure it has many problems that make it fatally flawed in at least some organizations.

There are no final answers when it comes to organizational design. Every decision is a trade-off and every decision needs to be placed in the context of your organization’s history, culture and specific people. That’s why you can’t get the right answer out of a book or a blog.

But if you’re building an incubated digital organization, I think there’s more right than wrong here. I’ve tried to keep the cop-outs and dotted lines to a minimum and focused on designing a structure that really will enable digital excellence. I don’t say deliver it, because that’s always up to the people. But if your Product Managers can’t deliver good digital experiences with this organization, at least you know it’s their fault.

Digital Transformation in the Enterprise – Creating Continuous Improvement

I’m writing this post as I fly to London for the Digital Analytics Hub. The Hub is in its fourth year now (two in Berlin and two in London) and I’ve managed to make it every time. Of course, doing these Conference/Vacations is a bit of a mixed blessing. I really enjoyed my time in Italy but that was more vacation than Conference. The Hub is more Conference than vacation – it’s filled with Europe’s top analytics practitioners in deep conversation on analytics. In fact, it’s my favorite analytics conference going right now. And here’s the good news: it’s coming to the States in September! So I have one more of these analytics vacations on my calendar and that should be the best one of all. If you’re looking for the ultimate analytics experience – an immersion in deep conversation with some of the best analytics practitioners around – you should check it out.

I’ve got three topics I’m bringing to the Hub. Machine Learning for digital analytics, digital analytics forecasting and, of course, the topic at hand today, enterprise digital transformation.

In my last post, I described five initiatives that lay the foundation for analytics-driven digital transformation. Those projects focus on data collection, journey mapping, behavioral segmentation, enterprise Voice of Customer (VoC) and unified marketing measurement. Together, these five initiatives provide a way to think about digital from a customer perspective. The data piece is focused on making sure that data collection to support personalization and segmentation is in place. Journey mapping and behavioral segmentation provide the customer context for every digital touchpoint – why it exists and what it’s supposed to do. The VoC system provides a window into what customers want and need and how they make decisions at every touchpoint. Finally, the marketing framework ensures that digital spend is optimized on an apples-to-apples basis and is focused on the right customers and actions to drive the business.

In a way, these projects are all designed to help the enterprise think and talk intelligently about the digital business. The data collection piece is designed to get organizations thinking about personalization cues in the digital experience. Journey mapping is designed to expand and frame customer experience and place customer thinking at the center of the digital strategy. Two-tiered segmentation serves to get people talking about digital success in terms of customers and their intent. Instead of asking questions like whether a Website is successful, it gets people thinking about whether the Website is successful for a certain type of customer with a specific journey intent. That’s a much better way to think. Similarly, the VoC system is all about getting people to focus on the customer and to realize that analytics can serve decision-making on an ongoing basis. The marketing framework is all about making sure that campaigns and creative are measured to real business goals – set within the customer journey and the behavioral segmentation.

The foundational elements are also designed to help integrate analytics into different parts of the digital business. The data collection piece is targeted toward direct response optimization. Journey mapping is designed to help weld strategic decisions to line manager responsibilities. Behavioral segmentation is focused on the line and product managers who need tactical experience optimization. VoC is targeted toward strategic thinking and decision-making, and, of course, the marketing framework is designed to support the campaign and creative teams.

If a way to think and talk intelligently about the digital enterprise and its operations is the first step, what comes next?

All five of the initiatives that I’ve slated into the next phase are about one thing – creating a discipline of continuous improvement in the enterprise. That discipline can’t be built on top of thin air – it only works if your foundation (data, metrics, framework) supports optimization. Once it does, however, the focus should be on taking advantage of that to create continuous improvement.

The first step is massive experimentation via an analytics driven testing plan. This is partly about doing lots of experiments, yes. But even more important is that the experimentation be done as part of an overall optimization plan with tests targeted by behavioral and VoC analytics to specific experiences where the opportunity for improvement is highest. If all you’re thinking about is how many experiments you run, you’re not doing it right. Every type of customer and every part of their journey should have tests targeted toward its improvement.
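To make the targeting idea concrete, here’s a minimal sketch in Python (pandas assumed) of ranking candidate test areas by opportunity size rather than by convenience. The segments, journey steps, benchmark rates and traffic numbers are all illustrative assumptions, not real data or a prescribed method.

```python
import pandas as pd

# Hypothetical candidate test areas; segments, rates and volumes are made up
candidates = pd.DataFrame({
    "segment":        ["new_prospect", "new_prospect", "existing_customer"],
    "journey_step":   ["research", "checkout", "support"],
    "monthly_visits": [120_000, 40_000, 60_000],
    "success_rate":   [0.02, 0.35, 0.55],
    "benchmark_rate": [0.04, 0.45, 0.60],
})

# Opportunity = traffic x gap between current and achievable success
candidates["opportunity"] = (
    candidates["monthly_visits"]
    * (candidates["benchmark_rate"] - candidates["success_rate"])
)

# The testing roadmap starts where the opportunity is biggest, per segment/step
print(candidates.sort_values("opportunity", ascending=False))
```

The arithmetic isn’t the point; the point is that the testing roadmap is driven by where analytics says the gap between current and achievable performance is largest, segment by segment and step by step.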

Similarly, on the marketing side, Phase II is about optimizing against the unified measurement framework with both mix and control group testing. Mix modeling is a top-down approach that works against your overall spending – regardless of channel type or individual measurement. Control group testing is nothing more than experimentation in the marketing world. Control groups have been a key part of marketing since the early direct response days. They’re easier to implement and more accurate in establishing true lift and incrementality than mathematical attribution solutions.
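For readers who want the mechanics, here’s a minimal sketch of how a randomized holdout establishes incremental lift. The group labels, conversion column and counts are illustrative assumptions, not results from any real campaign.

```python
import pandas as pd

def incremental_lift(df: pd.DataFrame) -> dict:
    """Compare conversion between randomly assigned treatment and control."""
    rates = df.groupby("group")["converted"].mean()
    treated, control = rates["treatment"], rates["control"]
    return {
        "treatment_rate": treated,
        "control_rate": control,
        "absolute_lift": treated - control,  # incremental conversion rate
        "relative_lift": (treated - control) / control if control else float("nan"),
    }

# Hypothetical example: 10,000 exposed customers, 2,000 randomly held out
data = pd.DataFrame({
    "group": ["treatment"] * 10_000 + ["control"] * 2_000,
    "converted": [1] * 450 + [0] * 9_550 + [1] * 60 + [0] * 1_940,
})
print(incremental_lift(data))
```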

The drive toward continuous improvement doesn’t end there, however. I’m a big fan of tool-based reporting as a key part of the second phase of analytics-driven transformation. The idea behind tool-based reporting is simple but profound. Instead of reports as static, historical tools to describe what happened, the idea is that reports contain embedded predictive models that transform them into tools that can be used to understand the levers of the business and test what might happen under different business strategies. Building tool-based reports for marketing, for product launch, for conversion funnels and for other key digital systems is deeply transformative. I describe this as a shift in the organization from democratizing data to democratizing knowledge. Knowledge is better. But the advantages of tool-based reporting run even deeper. The models embedded in these reports are your best analytic thinking about how the business works. And guess what? They’ll be wrong a lot of the time, and that’s a good thing. It’s a good thing because by making analytic thinking about how the business works explicit, you’ve created feedback mechanisms in the organization. When things don’t work out the way the model predicts, your analysts will hear about it and have to figure out why and how to do better. That drives continuous improvement in analytics.
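Here’s a minimal sketch of the idea, assuming hypothetical weekly spend and visit data and a deliberately simple linear model; a real tool-based report would embed your best analytic model, not this toy.

```python
import numpy as np
import pandas as pd

# Historical weekly marketing data (hypothetical numbers)
history = pd.DataFrame({
    "spend":  [10_000, 12_000, 15_000, 9_000, 14_000, 16_000],
    "visits": [52_000, 60_500, 74_000, 47_500, 69_000, 78_500],
})

# Embed a model in the report: fit visits as a linear function of spend
slope, intercept = np.polyfit(history["spend"], history["visits"], deg=1)

def what_if(planned_spend: float) -> float:
    """Forecast visits for a planned spend level using the embedded model."""
    return intercept + slope * planned_spend

# The "report" shows history plus a forward-looking scenario the user controls
print(history.describe())
print(f"Forecast visits at $18,000 spend: {what_if(18_000):,.0f}")
```

When the what-if forecast misses, that miss is visible to everyone using the report – which is exactly the feedback loop described above.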

A fourth key part of creating the agile enterprise – at least for sites without direct ecommerce – is value-based optimization. One of the great sins in digital measurement is leaving gaps in your ability to measure customers across their journey; fixing those gaps is what I call “closing measurement loops”. If your digital properties are lead-generating or brand-focused or informational or designed to drive off-channel or off-property (to Amazon or to a Call-Center), it’s much harder to measure whether or not they’re successful. You can measure proxies like content consumption or site satisfaction, but unless these proxies actually track to real outcomes, you’re just fooling yourself. This is important. To be good at digital and to use measurement effectively, every important measurement gap needs to be closed. There’s no one tool or method for closing measurement gaps; instead, a whole lot of different techniques and a fair amount of sweat are required. Some of the most common methods for closing measurement gaps include re-survey, panels, device binding and dynamic 800 numbers.

Lastly, a key part of this whole phase is training the organization to think in terms of continuous improvement. That doesn’t happen magically, and while all of the initiatives described here support that transformation, they aren’t, by themselves, enough. In my two posts on building analytics culture, I laid out a fairly straightforward vision of culture. The basic idea is that you build analytics culture by using data and analytics. Not by talking about how important data is or how people should behave. In the beginning was the deed.

Creating a constant cadence of analytics-based briefings and discussions forces the organization to think analytically. It forces analysts to understand the questions that are meaningful to the business. It forces decision-makers to reckon with data and lets them experience the power of being able to ask questions and get real answers. Just the imperative of having to say something interesting is good discipline for driving continuous improvement.

[Figure: Foundational transformation – Step 2]

That’s phase two of enterprise digital transformation. It’s all about baking continuous improvement into the organization and building, on top of each element of the foundation, a never-ending process of getting better.

 

You might think that’s pretty much all there is to the analytics side of the digital transformation equation. Not so. In my next post, I’ll cover the next phase of analytics transformation – driving big analytics wins. So far, most of what I’ve covered is valid for any enterprise in any industry. But in the next phase, initiatives tend to be quite different depending on your industry and business model.

See you after the Hub!

Digital Transformation of the Enterprise (with a side of Big Data)

Since I finished Measuring the Digital World and got back to regular blogging, I’ve been writing an extended series on the challenges of digital in the enterprise. Like many analysts, I’m often frustrated by the way our clients approach decision-making. So often, they lack any real understanding of the customer journey, any effective segmentation scheme, any real method for either doing or incorporating analytics into their decisioning, anything more than a superficial understanding of their customers, and anything more than the empty façade of a testing program. Is it any surprise that they aren’t very good at digital? This would be frustrating but understandable if companies simply didn’t invest in these capabilities. They aren’t magic, and no large enterprise can do these things without making a significant investment. But, in fact, many companies have invested plenty with very disappointing results. That’s maddening. I want to change that – and this series is an extended meditation on what it takes to do better and how large enterprises might truly gain competitive advantage in digital.

I hope that reading these posts is useful to people, but I know, too, that it’s hard to get the time. Heaven knows I struggle to read the stuff I’d like to. So I took advantage of the slow time over the holidays to do something that’s been on my wish list for about 2 years now – take some of the presentations I do and turn them into full online webinars. I started with a whole series that captures the core elements of this series – the challenge of digital transformation.

There are two versions of this video series. The first is a set of fairly short (2-4 minute) stories that walk through how enterprise decision-making gets done, what’s wrong with the way we do it, and how we can do better. It’s a ten(!) part series and meant to be tackled in order. It’s not really all that long…like I said, most of the videos are just 2-4 minutes long. I’ve also packaged up the whole story (except Part 10) in a single video that runs just a little over 20 minutes. It’s shorter than viewing all 10 of the others, but you need a decent chunk of uninterrupted time to get at it. If you’re really pressed and only want to get the key themes without the story, you can just view Parts 8-10.

Here’s the video page that has all of these laid out in order:

Digital Transformation Video Series

Check it out and let me know what you think! To me it seems like a faster, better, and more enjoyable way to get the story about digital transformation and I’m hoping it’s very shareable as well. If you’re struggling to get analytics traction in your organization, these videos might be an easy thing to share with your CMO and digital channel leads to help drive real change.

I have to say I enjoyed doing these a lot and they aren’t really hard to do. They aren’t quite professional quality, but I think they are very listenable and I’ll keep working to make them better. In fact, I enjoyed doing the digital transformation ones so much that I knocked out another this last week – Big Data Explained.

This is one of my favorite presentations of all time – it’s rich in content and intellectually interesting. Big data is a subject that is obscured by hype, self-interest, and just plain ignorance; everyone talks about it but no one has a clear, cogent explanation of what it is and why it’s important. This presentation deconstructs the everyday explanation of big data (the 4Vs) and shows why it misses the mark. But it isn’t designed merely to expose the hype; it actually builds out a clear, straightforward and important explanation of why big data is real, why it challenges common IT and analytics paradigms, and how to understand whether a problem is a big data problem…or not. I’ve written about this before, but you can’t beat a video with supporting visuals for this particular topic. It’s less than fifteen minutes and, like the digital transformation series, it’s intended for a wide audience. If you have decision-makers who don’t get big data or are skeptical of the hype, they’ll appreciate this straightforward, clear, and no-nonsense explication of what it is.

You can get it on my video page or directly on YouTube.

This is also a significant topic toward the end of Measuring the Digital World, where I try to lay out a forward-looking plan for digital analytics as a discipline.

I’m planning to do a steady stream of these videos throughout the year so I’d love thoughts/feedback if you have suggestions!

Next week I hope to have an update on my EY Counseling Family’s work in the 538 Academy Awards challenge. We’ve built our initial Hollywood culture models – it’s pretty cool stuff and I’m excited to share the results. Our model may not be as effective as some of the other challengers (TBD), but I think it’s definitely more fun.

Building Analytics Culture – One Decision at a Time

In my last post, I argued that much of what passes for “building culture” in corporate America is worthless. It’s all about talk. And whether that talk is about diversity, ethics or analytics, it’s equally arid. Because you don’t build culture by talking. You build culture through actions. By doing things right (or wrong if that’s the kind of culture you want). Not only are words not effective in building culture, they can be positively toxic. When words and actions don’t align, the dishonesty casts other – possibly more meaningful – words into disrepute. Think about which is worse – a culture where bribery is simply the accepted and normal way of getting things done (and is cheerfully acknowledged) or one where bribery is ubiquitous but is cloaked behind constant protestations of disinterest and honesty? If you’re not sure about your answer, take it down to a personal level and ask yourself the same question. Do we not like an honest villain better than a hypocrite? If hypocrisy is the compliment vice pays to virtue, it is a particularly nasty form of flattery.

What this means is that you can’t build an analytics culture by telling people to be data driven. You can’t build an analytics culture by touting the virtues of analysis. You can’t even build an analytics culture by hiring analysts. You build an analytics culture by making good (data-driven) decisions.

That’s the only way.

But how do you get an organization to make data-driven decisions? That’s the art of building culture. And in that last post, I laid out seven (a baker’s half-dozen?) tactics for building good decision-making habits: analytic reporting, analytics briefing sessions, hiring a C-Suite analytics advisor, creating measurement standards, building a rich meta-data system for campaigns and content, creating a rapid VoC capability and embracing a continuous improvement methodology like SPEED.

These aren’t just random parts of making analytic decisions. They are tactics that seem to me particularly effective in driving good habits in the organization and building the right kind of culture. But seven tactics don’t come close to exhausting my list. Here’s another set of techniques that are equally important in helping drive good decision-making in the organization (my original list wasn’t in any particular order, so it’s not like the previous list had all the important stuff):

Yearly Agency Performance Measurement and Reviews

What it is: Having an independent annual analysis of your agency’s performance. This should include review of goals and metrics, consideration of the appropriateness of KPIs and analysis of variation in campaign performance along three dimensions (inside the campaign by element, over time, and across campaigns). This must not be done by the agency itself (duh!) or by the owners of the relationship.

Why it builds culture: Most agencies work by building strong personal relationships. There are times and ways that this can work in your favor, but from a cultural perspective it both limits and discourages analytic thinking. I see many enterprises where the agency is so strongly entrenched you literally cannot criticize them. Not only does the resulting marketing nearly always suck, but this drains the life out of an analytics culture. This is one of many ways in which building an analytic culture can conflict with other goals, but here I definitely believe analytics should win. You don’t need an overly cozy relationship with your agency. You do need objective measurement of their performance.

 

Analytics Annotation / Collaboration Tool like Insight Rocket

What it is: A tool that provides a method for rich data annotation and the creation and distribution of analytic stories across the analytics team and into the organization. In Analytic Reporting, I argued for a focus on democratizing knowledge not data. Tools like Insight Rocket are a part of that strategy, since they provide a way to create and rapidly disseminate a layer of meaning on top of powerful data exploration tools like Tableau.

Why it builds culture: There aren’t that many places where technology makes much difference to culture, but there are a few. As some of my other suggestions make clear, you get better analytics culture the more you drive analytics across and into the organization (analytic reporting, C-Suite Advisor, SPEED, etc.). Tools like Insight Rocket have three virtues: they help disseminate analytics thinking, not just data; they boost analytics collaboration, making for better analytic teams; and they provide a repository of analytics which increases long-term leverage in the enterprise. Oh, and here’s a fourth advantage: they force analysts to tell stories – meaning they have to engage with the business. That makes this piece of technology a really nice complement to my suggestion about a regular cadence of analytics briefings and a rare instance of technology deepening culture.

 

In-sourcing

What it is: Building analytics expertise internally instead of hiring it out and, most especially, instead of off-shoring it.

Why it builds culture: I’d be the last person to tell you that consulting shouldn’t have a role in the large enterprise. I’ve been a consultant for most of my working life. But we routinely advise our clients to change the way they think about consulting – to use it not as a replacement for an internal capability but as a bootstrap and supplement to that capability. If analytics is core to digital (and it is) and if digital is core to your business (which it probably is), then you need analytics to be part of your internal capability. Having strong, capable, influential on-shore employees who are analysts is absolutely necessary to analytics culture. I’ll add that while off-shoring, too, has a role, it’s a far more effective culture killer than normal consulting. Off-shoring creates a sharp divide between the analyst and the business that is fatal to good performance and good culture on EITHER side.

 

Learning-based Testing Plan

What it is: Testing plans that include significant focus on developing best design practices and resolving political issues instead of on micro-optimizations of the funnel.

Why it works: Testing is a way to make decisions. But as long as its primary use is to decide whether to show image A or image B or a button in this color or that color, it will never be used properly. To illustrate learning-based testing, I’ve used the example of video integration – testing different methods of on-page video integration, different lengths, different content types and different placements against each key segment and use-case to determine UI parameters for ALL future videos. When you test this way, you resolve hundreds of future questions and save endless future debate about what to do with this or that video. That’s learning-based testing. It’s also about picking key places in the organization where political battles determine design – things like home page real-estate and the amount of advertising load on a page – and resolving them with testing; that’s learning-based testing, too. Learning-based testing builds culture in two ways. First, in and of itself, it drives analytic decision-making. Almost as important, it demonstrates the proper role of experimentation and should help set the table for decision-makers to ask for more interesting tests.

 

Control Groups

What it is: Use of control groups to measure effectiveness whenever new programs (operational or marketing) are implemented. Control groups use small population subsets chosen randomly from a target population who are given either no experience or a neutral (existing) experience instead. Nearly all tests feature a baseline control group as part of the test, but the use of control groups transcends A/B testing tools. Use of control groups is common in traditional direct response marketing and can be applied in a wide variety of online and offline contexts (most especially – as I recently saw Elea Feit of Drexel hammer home at the DAA Symposium – as a much more effective approach to attribution).

Why it works: One of the real barriers to building culture is a classic problem in education. When you first teach students something, they almost invariably use it poorly. That can sour others on the value of the knowledge itself. When people in an organization first start using analytics, they are, quite inevitably, going to fall into the correlation trap. Correlation is not causation. But in many cases, it sure looks like it is and this leads to many, many bad decisions. How to prevent the most common error in analytics? Control groups. Control groups build culture because they get decision-makers thinking the right way about measurement and because they protect the organization from mistakes that will otherwise sour the culture on analytics.

 

Unified Success Framework

What it is: A standardized, pre-determined framework for content and campaign success measurement that includes definition of campaign types, description of key metrics for those types, and methods of comparing like campaigns on an apples-to-apples basis.

Why it works: You may not be able to make the horse drink, but leading it to water is a good start. A unified success framework puts rigor around success measurement – a critical part of building good analytics culture. On the producer side, it forces the analytics team to make real decisions about what matters and, one hopes, pushes them to prove that proxy measures (such as engagement) are real. On the consumer side, it prevents that most insidious destroyer of analytics culture, the post hoc success analysis. If you can pick your success after the game is over, you’ll always win.
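One way to keep the framework pre-determined rather than post hoc is to make it machine-readable. Here’s a minimal sketch in Python; the campaign types, KPIs and comparison bases are illustrative assumptions, not a recommended taxonomy.

```python
# Hypothetical unified success framework: each campaign type has its KPIs and
# apples-to-apples comparison basis defined up front, before launch
SUCCESS_FRAMEWORK = {
    "awareness": {
        "primary_kpi": "incremental_reach",
        "secondary_kpis": ["brand_search_lift"],
        "comparison_basis": "cost_per_incremental_reach",
    },
    "lead_generation": {
        "primary_kpi": "qualified_leads",
        "secondary_kpis": ["lead_to_opportunity_rate"],
        "comparison_basis": "cost_per_qualified_lead",
    },
    "retention": {
        "primary_kpi": "repeat_purchase_rate",
        "secondary_kpis": ["churn_rate"],
        "comparison_basis": "incremental_revenue_per_customer",
    },
}

def kpis_for(campaign_type: str) -> dict:
    """Look up the pre-determined success definition for a campaign type."""
    return SUCCESS_FRAMEWORK[campaign_type]

print(kpis_for("lead_generation"))
```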

 

The Enterprise VoC Dashboard

What it is: An enterprise-wide state-of-the-customer dashboard that provides a snapshot and trended look at how customer attitudes are evolving. It should include built in segmentation so that attitudinal views are ALWAYS shown sliced by key customer types with additional segmentation possible.

Why it works: There are so many good things going on here that it’s hard to enumerate them all. First, this type of dashboard is one of the best ways to instill customer-first thinking in the organization. You can’t think customer-first until you know what the customer thinks. Second, this type of dashboard enforces a segmented view of the world. Segmentation is fundamental to critical thinking about digital problems and this sets the table for better questions and better answers in the organization. Third, opinion data is easier to absorb and use than behavioral data, making this type of dashboard particularly valuable for encouraging decision-makers to use analytics.
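As a minimal sketch of the “always segmented” principle, here’s what the underlying view might look like, assuming a hypothetical survey extract with month, customer type and attitudinal score columns – the names and values are assumptions, not a real schema.

```python
import pandas as pd

# Hypothetical survey extract; columns and values are illustrative only
voc = pd.DataFrame({
    "month":         ["2016-01", "2016-01", "2016-01", "2016-02", "2016-02", "2016-02"],
    "customer_type": ["prospect", "customer", "customer", "prospect", "customer", "prospect"],
    "satisfaction":  [6, 9, 8, 5, 9, 7],
    "nps":           [5, 10, 9, 4, 9, 6],
})

# Attitudinal metrics are never shown blended: always trended and sliced by segment
trended_by_segment = (
    voc.groupby(["month", "customer_type"])[["satisfaction", "nps"]]
       .mean()
       .round(1)
)
print(trended_by_segment)
```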

 

Two-Tiered Segmentation

What it is: A method that creates two-levels of segmentation in the digital channel. The first level is the traditional “who” someone is – whether in terms of persona or business relationship or key demographics. The second level captures “what” they are trying to accomplish. Each customer touch-point can be described in this type of segmentation as the intersection of who a visitor is and what their visit was for.

Why it works: Much like the VoC Dashboard, Two-Tiered Segmentation makes for dramatically better clarity around digital channel decision-making and evaluation of success. Questions like ‘Is our Website successful?’ get morphed into the much more tractable and analyzable question ‘Is our Website successful for this audience trying to do this task?’. That’s a much better question, and a big part of building analytics culture is getting people to ask better questions. This also happens to be the main topic of my book “Measuring the Digital World”, where you can get a full description of both the power and the methods behind Two-Tiered Segmentation.
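To show how the two tiers combine in practice, here’s a minimal sketch: every visit is tagged with a “who” and a “what”, and success is then read per cell. The segment labels and the success flag are illustrative assumptions, not the book’s actual segment definitions.

```python
import pandas as pd

# Hypothetical visit-level data: tier one is "who", tier two is "what"
visits = pd.DataFrame({
    "who":  ["new_prospect", "existing_customer", "existing_customer", "new_prospect"],
    "what": ["research",     "get_support",       "buy_again",         "research"],
    "completed_task": [False, True, True, True],
})

# "Is the website successful?" becomes "successful for whom, trying to do what?"
success_by_cell = visits.groupby(["who", "what"])["completed_task"].mean()
print(success_by_cell)
```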

 

I have more, but I’m going to roll the rest into my next post on building an agile organization since they are all deeply related to the integration of capabilities in the organization. Still, that’s fifteen different tactics for building culture. None of which include mission statements, organizational alignment or C-Level support (okay, Walking the Walk is kind of that but not exactly and I didn’t include it in the fifteen) and none of which will take place in corporate retreats or all-hands conferences. That’s a good thing and makes me believe they might actually work.

Ask yourself this: is it possible to imagine an organization that does even half these things and doesn’t have a great analytics culture? I don’t think it is. Because culture just is the sum of the way your organization works and these are powerful drivers of good analytic thinking. You can imagine an organization that does these things and isn’t friendly, collaborative, responsible, flat, diverse, caring or even innovative. There are all kinds of culture, and good decision-making isn’t the only aspect of culture to care about*. But if you do these things, you will have an organization that makes consistently good decisions.

*Incidentally, if you want to build culture in any of these other ways, you have to think about similar approaches. Astronomers have a clever technique for seeing very faint objects called averted vision. The idea is that you look just to the side of the object if you want to get the most light-gathering power from your eyes. It’s the same with culture. You can’t tackle it head-on by talking about it. You have to build it just a little from the side!

Practical Steps to Building an Analytics Culture

Building an analytics culture in the enterprise is incredibly important. It’s far more important than any single capability, technology or technique. But building culture isn’t easy. You can’t buy it. You can’t proclaim it. You can’t implement it.

There is, of course, a vast literature on building culture in the enterprise. But if the clumsy, heavy-handed, thoroughly useless attempts to “build culture” that I’ve witnessed over the course of my working life are any evidence, that body of literature is nearly useless.

Here’s one thing I know for sure: you don’t build culture by talk. I don’t care whether it’s getting teenagers to practice safe-sex or getting managers to use analytics, preaching virtue doesn’t work, has never worked and will never work. Telling people to be data-driven, proclaiming your commitment to analytics, touting your analytics capabilities: none of this builds analytics culture.

If there’s one thing that every young employee has learned in this era, it’s that fancy talk is cheap and meaningless. People are incredibly sophisticated about language these days. We can sit in front of the TV and recognize in a second whether we’re seeing a commercial or a program. Most of us can tell the difference between a TV show and movie almost at a glance. We can tune out advertising on a Website as effortlessly as we put on our pants. A bunch of glib words aren’t going to fool anyone. You want to know what the reaction is to your carefully crafted, strategic consultancy driven mission statement or that five year “vision” you spent millions on and just rolled out with a cool video at your Sales Conference? Complete indifference.

That’s if you’re lucky…if you didn’t do it really well, you got the eye-roll.

But it isn’t just that people are incredibly sensitive – probably too sensitive – to BS. It’s that even true, sincere, beautifully reasoned words will not build culture. Reading moral philosophy does not create moral students. Not because the words aren’t right or true, but because behaviors are, for the most part, not driven by those types of reasons.

That’s the whole thing about culture.

Culture is lived, not read or spoken. To create it, you have to ingrain it in people’s thinking. If you want a data-driven organization, you have to create good analytic habits. You have to make the organization (and you too) work right.

How do you do that?

You do it by creating certain kinds of process and behaviors that embed analytic thinking. Do enough of that, and you’ll have an analytic culture. I guarantee it. The whole thrust of this recent series of posts is that by changing the way you integrate analytics, voice-of-customer, journey-mapping and experimentation into the enterprise, you can drive better digital decision making. That’s building culture. It’s my big answer to the question of how you build analytics culture.

But I have some small answers as well. Here, in no particular order, are practical ways you can create good analytics habits in the enterprise.

Analytic Reporting

What it is: Changing your enterprise reporting strategy by moving from reports to tools. Analytic models and forecasting allow you to build tools that integrate historical reporting with forecasting and what-if capabilities. Static reporting is replaced by a set of interactive tools that allow users to see how different business strategies actually play-out.

Why it builds analytics culture: With analytic reporting, you democratize knowledge, not data. It makes all the difference in the world. The analytic models capture your best insight into how a key business works and what levers drive performance. Building this into tools not only operationalizes the knowledge, it creates positive feedback loops to analytics. When the forecast isn’t right, everyone knows it and the business is incented to improve its understanding and predictive capabilities. This makes for better culture in both analytics consumers and analytics producers.

 

Cadence of Communications

What it is: Setting up regular briefings between analytics and your senior team and decision-makers. This can include review of dashboards but should primarily focus on answers to previous business questions and discussion of new problems.

Why it builds analytics culture: This is actually one of the most important things you can do. It exposes decision-makers to analytics. It makes it easy for decision-makers to ask for new research and exposes them to the relevant techniques. Perhaps even more important, it lets decision-makers drive the analytics agenda, exposes analysts to real business problems, and forces analysts to develop better communication skills.

 

C-Suite Advisor

What it is: Create an Analytics Minister-without-portfolio whose sole job is to advise senior decision-makers on how to use, understand and evaluate the analytics, the data and the decisions they get.

Why it builds analytics culture: Most senior executives are fairly ignorant of the pitfalls in data interpretation and the ins-and-outs of KPIs and experimentation. You can’t send them back to get a modern MBA, but you can give them a trusted advisor with no axe to grind. This not only raises their analytics intelligence, it forces everyone feeding them information to up their game as well. This tactic is also critical because of the next strategy…

 

Walking the Walk

What it is: Senior Leaders can talk till they are blue in the face about data-driven decision-making. Nobody will care. But let a Senior Leader even once use data or demand data around a decision they are making and the whole organization will take notice.

Why it builds analytics culture: Senior leaders CAN and DO have a profound impact on culture but they do so by their behavior not their words. When the leaders at the top use and demand data for decisions, so will everyone else.

 

Tagging Standards

What it is: A clearly defined set of data collection specifications that ensure that every piece of content on every platform is appropriately tagged to collect a rich set of customer, content, and behavioral data.

Why it builds analytics culture: This ends the debate over whether tags and measurement are optional. They aren’t. This also, interestingly, makes measurement easier. Sometimes, people just need to be told what to do. This is like choosing which side of the road to drive on – it’s far more important that you have a standard than which side of the road you pick. Standards are necessary when an organization needs direction and coordination. Tagging is a perfect example.
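As a minimal sketch of what a tagging standard might enforce, here’s a simple check of a collected hit against a required field list. The field names are illustrative assumptions, not a real tag-management or data-layer schema.

```python
# Hypothetical required fields every page/content view must collect
REQUIRED_TAG_FIELDS = {
    "page_id", "content_type", "site_section",
    "customer_segment", "visit_intent", "platform",
}

def validate_tag(payload: dict) -> list:
    """Return the list of required fields missing from a collected tag."""
    return sorted(REQUIRED_TAG_FIELDS - payload.keys())

example_hit = {
    "page_id": "product-detail-123",
    "content_type": "product_detail",
    "site_section": "catalog",
    "platform": "web",
}
print(validate_tag(example_hit))  # -> ['customer_segment', 'visit_intent']
```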

 

CMS and Campaign Meta-Data

What it is: The definition of and governance around the creation of campaign and content meta-data. Every piece of content and every campaign element should have detailed, rich meta-data around the audience, tone, approach, contents, and every other element that can be tuned and analyzed.

Why it builds analytics culture: Not only is meta-data the key to digital analytics – providing the meaning that makes content consumption understandable – but rich meta-data definition also guides useful thought. These are the categories people will think about when they analyze content and campaign performance. That’s as it should be, and by providing these pre-built, populated categorizations, you’ll greatly facilitate good analytics thinking.
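Governance is easier when the allowed meta-data values are explicit and checkable. Here’s a minimal sketch; the fields and allowed values are illustrative assumptions, not a recommended taxonomy.

```python
# Hypothetical governed meta-data fields and their allowed values
ALLOWED_VALUES = {
    "audience": {"prospect", "existing_customer", "lapsed_customer"},
    "tone": {"informational", "emotional", "promotional"},
    "approach": {"video", "long_form", "interactive"},
}

def validate_metadata(asset: dict) -> list:
    """Return human-readable problems with an asset's meta-data."""
    problems = []
    for field, allowed in ALLOWED_VALUES.items():
        value = asset.get(field)
        if value is None:
            problems.append(f"missing '{field}'")
        elif value not in allowed:
            problems.append(f"'{value}' is not an allowed value for '{field}'")
    return problems

banner = {"audience": "prospect", "tone": "funny", "approach": "video"}
print(validate_metadata(banner))  # -> ["'funny' is not an allowed value for 'tone'"]
```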

 

Rapid VoC

What it is: The technical and organizational capability to rapidly create, deploy and analyze surveys and other voice-of-customer research instruments.

Why it builds analytics culture: This is the best capability I know for training senior decision-makers to use research. It’s so cheap, so easy, so flexible and so understandable that decision-makers will quickly get spoiled. They’ll use it over and over and over. Well – that’s the point. Nothing builds analytics muscle like use and getting this type of capability deeply embedded in the way your senior team thinks and works will truly change the decision-making culture of the enterprise.

 

SPEED and Formal Continuous Improvement Cycles

What it is: The use of a formal methodology for digital improvement. SPEED provides a way to identify the best opportunities for digital improvement, the ways to tackle those opportunities, and the ability to measure the impact of any changes. It’s the equivalent of Six Sigma for digital.

Why it builds analytics culture: Formal methods make it vastly easier for everyone in the organization to understand how to get better. Methods also help define a set of processes that the enterprise can build its organization around. This makes it easier to grow and scale. For large enterprises, in particular, it’s no surprise that formal methodologies like Six Sigma have been so successful. They make key cultural precepts manifest and attach processes to them so that organizational inertia is guided in positive directions.

 

Does this seem like an absurdly long list? In truth I’m only about half-way through. But this post is getting LONG. So I’m going to save the rest of my list for next week. Till then, here’s some final thoughts on creating an analytics culture.

The secret to building culture is this: everything you do builds culture. Some things build the wrong kind of culture. Some things the right kind. But you are never not building culture. So if you want to build the right culture to be good at digital and decision-making, there’s no magic elixir, no secret sauce. There is only the discipline of doing things right. Over and over.

That being said, not every action is equal. Some foods are empty of nutrition but empty, too, of harm. Others positively destroy your teeth or your waistline. Still others provide the right kind of fuel. The things I’ve described above are not just a random list of things done right, they are the small to medium things that, done right, have the biggest impacts I’ve seen on building a great digital and analytics culture. They are also targeted to places and decisions which, done poorly, will deeply damage your culture.

I’ll detail some more super-foods for analytics culture in my next post!

 

[Get your copy of Measuring the Digital World – the definitive guide to the discipline of digital analytics – to learn more].

Analytics with a Strategic Edge

The Role of Voice of Customer in Enterprise Analytics

The vast majority of analytics effort is expended on problems that are tactical in nature. That’s not necessarily wrong. Tactics get a bad rap sometimes, but the truth is that the vast majority of decisions we make in almost any context are tactical. The problem isn’t that too much analytics is weighted toward tactical issues; it’s that strategic decisions don’t use analytics at all. The biggest, most important decisions in the digital enterprise nearly always lack a foundation in data or analysis.

I’ve always disliked the idea behind “HIPPOs” – with its Dilbertian assumption that executives are idiots. That isn’t (mostly) my experience at all. But analytics does suffer from what might be described as “virtue” syndrome – the idea that something (say taxes or abstinence) is good for everyone else but not necessarily for me. Just as creative folks tend to think that what they do can’t be driven by analytics, so too is there a perception that strategic decisions must inevitably be more imaginative and intuitive and less number-driven than many decisions further down in the enterprise.

This isn’t completely wrong though it probably short-sells those mid-level decisions. Building good creative takes…creativity. It can’t be churned out by machine. Ditto for strategic decisions. There is NEVER enough information to fully determine a complex strategic decision at the enterprise level.

This doesn’t mean that data isn’t useful or should not be a driver for strategic decisions (and for creative content too). Instinct only works when it’s deeply informed about reality. Nobody has instincts in the abstract. To make a good strategic decision, a decision-maker MUST have certain kinds of data to hand and without that data, there’s nothing on which intuition, knowledge and experience can operate.

What data does a digital decision-maker need for driving strategy?

Key audiences. Customer Journey. Drivers of decision. Competitive choices.

You need to know who your audiences are and what makes them distinct. You need (as described in the last post) to understand the different journeys those audiences take and what journeys they like to take. You need to understand why they make the choices they make – what drives them to choose one product or service or another. Things like demand elasticity, brand awareness, and drivers of choice at each journey stage are critical. And, of course, you need to understand when and why those choices might favor the competition.

None of this stuff will make a strategic decision for you. It won’t tell you how much to invest in digital. Whether or not to build a mobile app. Whether personalization will provide high returns.

But without fully understanding audience, journey, drivers of decision and competitive choices, how can ANY digital decision-maker possibly arrive at an informed strategy? They can’t. And, in fact, they don’t. Because for the vast majority of enterprises, none of this information is part-and-parcel of the information environment.

I’ve seen plenty of executive dashboards that are supposed to help people run their business. They don’t have any of this stuff. I’ve seen the “four personas” puffery that’s supposed to help decision-makers understand their audience. I’ve seen how limited executives’ exposure to journey mapping is and how little it is deployed on a day-to-day basis. Worst of all, I’ve seen how absolutely pathetic the use of voice of customer (online and offline) is at helping decision-makers understand why customers make the choices they do.

Voice of customer as it exists today is almost exclusively concerned with measuring customer satisfaction. There’s nothing wrong with measuring NPS or satisfaction. But these measures tell you nothing that will help define a strategy. They are at best (and they are often deeply flawed here too) scoreboard measures – indicators of whether or not you are succeeding in a strategy.

I’m sure that people will object that knowing whether or not a strategy is succeeding is important. It is. It’s even a core part of ongoing strategy development. However, when divorced from particular customer journeys, NPS is essentially meaningless and uninterpretable. And while it truly is critical to measure whether or not a strategy is succeeding, it’s even more important to have data to help shape that strategy in the first place.

Executives just don’t get that context from their analytics teams. At best, they get little pieces of it in dribs and drabs. It is never – as it ought to be – the constant ongoing lifeblood of decision-making.

I subtitled this post “The Role of Voice of Customer in Enterprise Analytics” because of all the different types of information that can help make strategic decisions better, VoC is by far the most important. A good VoC program collects information from every channel: online and offline surveys, call-center, site feedback, social media, etc. It provides a continuing, detailed and sliceable view of audience, journey distribution and (partly) success. It’s by far the best way to help decision-makers understand why customers are making the choices they are, whether those choices are evolving, and how those choices are playing out across the competitive set. In short, it answers the majority of the questions that ought to be on the minds of decision-makers crafting a digital strategy.

This is a very different sort of executive dashboard than we typically see. It’s a true customer insights dashboard. It’s also fundamentally different than almost ANY VoC dashboard we see at any level. The vast majority of VoC reporting doesn’t provide slice-and-dice by audience and use-case – a capability which is absolutely essential to useful VoC reporting. VoC reporting is almost never based on and tied into a journey model so that the customer insights data is immediately reflective of journey stage and actionable arena. And VoC reporting almost never includes a continuous focus on exploring customer decision-making and tying that into the performance of actual initiatives.

It isn’t just a matter of a dashboard. One of the most distinctive and powerful aspects of digital voice-of-customer is the flexibility it provides to tackle new problems rapidly, efficiently and at very little cost. VoC should be a core part of executive decision-making with a constant cadence of research, analysis, discussion and reporting driven by specific business questions. This open and continuing dialog, where VoC is a tool for decision-making, is critical to integrating analytics into decisioning. If senior folks aren’t asking for new VoC research on a constant basis, you aren’t doing it right. The single best indicator of a robust VoC program in digital is the speed with which it changes.

Sadly, what decision-makers mostly get right now (if they get anything at all) is a high-level, non-segmented view of audience demographics, an occasional glimpse into high-level decision-factors that is totally divorced from both segment and journey stage, and an overweening focus on a scoreboard metric like NPS.

It’s no wonder, given such thin gruel, that decision-makers aren’t using data to make better strategic decisions. If our executives mostly aren’t Dilbertian, they aren’t miracle workers either. They can’t make wine out of information water. If we want analytics to support strategy – and I assume we all do – then building a completely different sort of VoC program is the single best place to start. It isn’t everything. There are other types of data (behavioral, benchmark, econometric, etc.) that can be hugely helpful in shaping digital strategies. But a good VoC program is a huge step forward – a step forward that, if well executed, has the power to immediately transform how the digital enterprise thinks and works.

 

This is probably my last post of the year – so see you in 2016! In the meantime, my book Measuring the Digital World is now available. Could be a great way to spend your holiday down time (ideally while you’re resting up from time on the slopes)! Have a great holiday…

Is Data Science a Science?

I got a fair amount of feedback through various channels around my argument that data science isn’t a science and that the scientific method isn’t a method (or at least much of one). I wouldn’t consider either of these claims particularly important in the life of a business analyst, and I think I’ve written pieces that are far more significant in terms of actual practice, but evidently few of the topics I’ve written about are as much fun to argue about. Well, I’m not opposed to a fun argument now and again, so here’s a redux on some of the commentary and my thoughts in response.

There were two claims in that post:

  1. I was somewhat skeptical that data science was correctly described as a science
  2. I was extremely skeptical that the scientific method was a good description of the scientific endeavor

The comment that most engaged me came from Adam Gitzes and really focused on the first claim:

Science is the distillation of evidence into a causal understanding of the world (my definition anyway). In business analytics, we use surveys, data analysis techniques, and experimental design to also understand causal relationships that can be used to drive our business.

On re-reading my initial post, I realized that while I had argued that business analytics wasn’t science (#1 above), I hadn’t really put many reasons on the table for that view – partly because I was too busy demolishing the “Scientific Method” and partly because I think it’s the less important of the two claims and also the more likely to be correct. Mostly, I just said I was skeptical of the idea. So I think Adam’s right to push out a more specific description of science and ask why data science might not be reasonably described as a kind of scientific endeavor.

I’m not going to get into the thicket of trying to define science. Really. I’m not. That’s the work of a different career. If I got nothing else out of my time studying Philosophy, I got an appreciation for how incredibly hard it is to answer seemingly simple questions like “what is science?” For the most part, we know it when we see it. Physics is science. Philosophy isn’t. But knowing it when you see it is precisely what fails when it comes to edge cases like data science or sociology.

When it comes to business analytics and data science, however, there are a couple of things that make me skeptical of applying the term science that I think we might actually agree on and that use our shared, working understanding of the scientific endeavor.

In business analytics, our main purpose isn’t to understand the world. It’s to improve a specific part of it. Science has no such objective.

Does that seem like a small difference? I don’t think it is. Part of what makes the scientific endeavor unique is that there is no axe to grind. Understanding is the goal. This isn’t to say that people don’t get attached to their ideas or that their careers don’t benefit if they are successful advocates for them – it’s done by humans, after all. It would be no more accurate to suggest that the goal of a business is always profit. External forces can and often do set the agenda for researchers. But these are corruptions of the process, not the process itself. Business analytics starts (appropriately) with an axe to grind and true science doesn’t.

To see why this makes a difference, consider my own domain – digital analytics. If our goal was just to understand the digital world, we’d have a very different research program than we do. If knowledge was our only goal, we’d spend as much time analyzing why people create certain kinds of digital worlds as how people consume them. That’s not the way it works. In reality, our research program is entirely focused on why and how people use a digital property and what will get more of them to take specific actions – not why and how it was created.

We are, rightly I believe, skeptical of the idea that research sponsored by tobacco companies into lung cancer is, properly speaking, science. That’s not because those researchers don’t follow the general outline of the scientific endeavor – it’s because they have an axe to grind and their research program is determined by factors outside the community of science. When it comes to business analytics, we are all tobacco scientists.

Perhaps we’re not so biased as to the findings of our experiments – good analytics is neutral as to what will work – but we’re every bit as biased when it comes to the outcomes desired and the shape of the research program.

Here’s another crucial difference. I think it’s fair to suggest that in data science we sometimes have no interest in causality. If I’m building a forecast model and I can find variables that are predictive, I may have little interest in whether those variables are also causal. If I’m building a look-alike targeting model, for example, it doesn’t matter one whit whether the variables are causal. Now it’s true that philosophers of science hotly debate the role and necessity of causality in science, but I tend to agree with Adam that there is something in the scientific endeavor that makes the demand for causality a part of the process. But in business analytics, we may demand causality for some problems but be entirely and correctly unconcerned with it in others. In business analytics, causality is a tool not a requirement.
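To make that concrete, here’s a hedged sketch of a look-alike-style scorer: fit a classifier on invented features of existing customers versus everyone else, then score prospects by predicted probability. Every feature, label and number below is made up; the point is simply that nothing in the fitting or scoring step asks whether any feature is causal – only whether it is predictive.

    # Hypothetical look-alike sketch: score prospects by similarity to existing
    # customers. Features and labels are synthetic; causality plays no role.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Invented features (e.g. visit frequency, pages per visit, email opens)
    X = rng.normal(size=(500, 3))
    # Invented labels: 1 = existing customer, 0 = not
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # Higher probability = more "customer-like" - useful for targeting even if
    # none of these features cause anyone to become a customer.
    prospects = rng.normal(size=(5, 3))
    print(model.predict_proba(prospects)[:, 1])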

There is, also, the nature of the analytics problem – at least in my field (digital). Science is typically concerned with studying natural phenomena. The digital world is not a natural world; it’s an engineered world. It’s created and adapted with intention. Perhaps even worse, it responds to and changes with the measurements we make, and those measurements influence our intentions in subsequent building (which is the whole point after all).

This is Heisenberg’s Uncertainty Principle with a vengeance! When we measure the digital world, we mean to change it based on the measurement. What’s more, once we change it, we can never go back to the same world. We could restore the HTML, but not the absence of users with an alternative experience. In digital, every test we run changes the world in a fundamental way because it changes the users of that world. There is no possibility of conducting a digital test that doesn’t alter the reality we’re measuring – and while this might be true at the quantum level in physics, at the macro level where the scientific endeavor really lives, it seems like a huge difference.

What’s more, each digital property lives in the context of a larger digital world that is being constantly changed with intention by a host of other people. When new Apps like Uber change our expectations of how things like payment should work or alter the design paradigm on the Web, these exogenous and intentional changes can have a dramatic impact on our internal measurement. There is, then, little or no possibility of a true controlled experiment in digital. In digital analytics, our goal is to optimize one part of a giant machine for a specific purpose while millions of other people are optimizing other, inter-related parts of the same machine for entirely different and often opposed purposes.

This doesn’t seem like science to me.

There are disciplines that seem clearly scientific that cannot do controlled experiments. However, no field where the results of an experiment change the measured reality in a clearly significant fashion and are used to intentionally shape the resulting reality is currently described as scientific.

So why don’t I think data science is a science – at least in the realm of digital analytics? It differs from the scientific endeavor in several aspects that seem to me to be critical. Unlike science, business analytics and data science start with an agenda that isn’t just understanding and this fundamentally shapes the research program. Unlike science, business analytics and data science have no fixed commitment to causal explanations – just a commitment to working explanations. Finally, unlike science, business analytics and data science change the world they measure in a clearly significant fashion and do so intentionally with respect to the measurement.

Given that we have no fixed and entirely adequate definition of science, none of this is proof. I can’t demonstrate to you with the certainty of a logical proof that the definition of science requires X, data science is not X, so data science is not a science.

However, I think I have shown that, at least by many of the core principles we associate with the scientific endeavor, business analytics (which I take to be a proxy in this conversation for data science) is not well described as a science.

This isn’t a huge deal. I’ve done business analytics for many years and never once thought of myself as a scientist. What’s more, once we realize that being scientists doesn’t attach a powerful new methodology to business analytics – which was the rather more important point of my last post – it’s much less clear why anyone would think it makes a difference.

Agree?

 

A few other notes on the comments I received. With regard to Nikolaos’ question “why should we care?” I’m obviously largely in agreement. There is intellectual interest in these questions (at least for me), but I won’t pretend that they are likely to matter in actual practice or determine ‘what works’. I’m also very much in agreement with Ake’s point about qualitative data. The truth is that nothing in the scientific endeavor precludes the use of qualitative data in addition to behavioral data. But even though there’s no determinate tie between the two, I certainly think that advocates for data science as a science are particularly likely to shun qualitative data (which is a shame). As far as Patrick’s comment goes, I think it dodges the essential question. He’s right to suggest that the term data science is contentless because data is not the subject of science; the data is always about something, and that something is the subject of science. But I take the deeper claim to be what I have tackled here; namely, that business analytics is a scientific endeavor. That claim isn’t contentless, just wrong. I remain, still, deeply unconvinced of the utility of CRISP-DM.

 

Now is as good a time as any (how’s that for a powerful call to action?) to pre-order my book, ‘Measuring the Digital World’ on Amazon.