
Getting Started with Digital Transformation

For most of this year I’ve been writing an extended series on digital transformation in the enterprise. Along the way, I’ve described why organizations (particularly large ones) struggle with digital, the core capabilities necessary to do digital well, and ways in which organizations can build a better, more analytic culture. I’ve even put together a series of videos that describe how enterprises are currently driving digital and how they can do better.

I think both the current-state (what we do wrong) and the end-state (doing digital right) are compelling. In the next few posts, I’m going to wrap this series up with a discussion around how you get from here to there.

I don’t suppose anyone thinks the journey from here to there is trivial. Doing digital the way I’ve described it (see the Agile Organization) involves some pretty fundamental change: change to the way enterprises budget, change to the way they organize, and change to the way they do digital at almost every level. It also involves, and this is totally unsurprising, investments in people and technology and more than a dollop of patience. It would actually be much easier to build a good digital organization from scratch than to adapt the pieces that exist in the typical enterprise.

Change is harder than creation. It has more friction and more fail points. But change is the reality for most enterprises.

So where do you start and how do you go about building a great digital organization?

I’m going to answer that question here from an analytics perspective. That’s the easy part. Once I’ve worked through the steps in building analytics maturity and digital decisioning, I’ll tackle the organizational component, wherein I expect to hazard a series of guesses, speculation and unlikely theory to paper over the fact that almost no one has done this transformation successfully and every organization has fundamentally unique structures and people that make its dynamics deeply specific.

The foundation of any analytics program is, of course, data. One of the most satisfying developments in digital analytics in the past 3-5 years has been the dramatic improvement in the state of data collection. It used to be that EVERY engagement we undertook began with a plodding slog through data auditing and clean-up. These days, that's more the exception than the rule. Still, there are plenty of exceptions. So the first step in just about any analytics effort is to make sure the data foundation is solid.

There's a second aspect to this that's worth pointing out. For a lot of my clients, basic data collection is no longer much of an issue. But even where that's true, there are often significant gaps in digital analytics data collection for personalization. So many Adobe designs are predicated on meeting reporting requirements that it's not at all unusual for key personalization elements like filtering selections, image expansions, sorting behaviors and DHTML exposures to go largely untracked. That's true on both the Web and Mobile sides. Part of auditing your data collection should be a careful look at whether you're capturing all the personalization cues you could – and that's often a critical foundational element for the steps to follow.
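To make that part of the audit concrete, here's a minimal sketch in Python of the kind of coverage check I have in mind – the event names and the flat CSV export format are purely hypothetical stand-ins for whatever your own implementation actually collects:

import csv
from collections import Counter

# Personalization cues we expect to see tracked (hypothetical event names --
# substitute whatever your own implementation uses).
EXPECTED_EVENTS = {
    "filter_applied",
    "sort_changed",
    "image_expanded",
    "accordion_opened",   # a DHTML exposure
    "video_played",
}

def audit_event_coverage(path):
    """Count how often each expected personalization event shows up in a
    flat CSV export of collected events (one row per event)."""
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            counts[row["event_name"]] += 1
    for event in sorted(EXPECTED_EVENTS):
        n = counts.get(event, 0)
        print(f"{event:20s} {n:8d}  {'OK' if n else 'MISSING'}")

if __name__ == "__main__":
    audit_event_coverage("events_export.csv")

Anything that comes back MISSING (or suspiciously rare) is a gap in your personalization data collection – exactly the kind of gap that never shows up when designs are driven purely by reporting requirements.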

Right along with auditing your data collection comes building a comprehensive customer journey framework. I’ve added the word “framework” here not to be all “consulty” but to emphasize that a customer journey isn’t built once as a static map. That’s the old way – and it’s wrong in every respect (so be careful what you buy). It’s wrong because it’s not segmented. It’s wrong because it’s too high-level. And most of all it’s wrong because it’s too static. So while a customer journey framework is more a capability and a process than a “thing”, it’s also true that you have to start somewhere. Getting that initial segmented journey map in place provides the high-level strategic framework for your digital strategy and for your analytics and testing. It’s the key strategic piece welding your operational capabilities to your strategic vision.

My third foundational building block is (Chorus sings refrain) “2-Tiered segmentation”. I’ve written voluminously on digital segmentation and how it works, so I won’t add much more here. But if journey mapping is the piece linking your strategic vision to your operational capabilities, 2-tiered segmentation is the equivalent piece linking at the tactical level. At every touchpoint in a customer journey there is the need to understand who somebody is and where in their journey they are. That’s what 2-tiered segmentation provides.

Auditing your data, creating a journey map and tying that to a digital segmentation are truly foundational. They are all "you can't get there from here without going through these" kinds of activities. Almost every significant report, analysis and decision that you make will rely on these three activities.

That's not really true for my next two foundational activities. I chose building an integrated voice of customer (VoC) capability as my fourth key building block. If you've read my book, you know that one of the main uses for a VoC program is to refine and tune your journey map and segmentation. So in one sense, this capability may come before either of those. But you can do enough VoC to support those two activities without really building a full VoC program. And what I have in mind here is a full program.

What do I mean by a full program? I mean an enterprise feedback management system that makes it easy to deploy surveys at any point in the journey across any device. I mean a set of organizational processes that ideate, design, deploy, interpret and socialize VoC information constantly. I mean an enterprise-wide reporting capability that integrates different VoC sources, classifies them, tracks them, and provides drill-down access to them across the organization (and that's important because VoC data is virtually useless without cross-tabulation). I also mean a culture where one of the natural and immediate parts of making a decision is looking at what customers think and – if that isn't available – launching a survey to figure it out. I put VoC in this foundational set because I think it's one of the easiest ways to deliver real wins to the organization. I also like the idea of driving a combination of tactical (data, segmentation) and strategic (journey, VoC) initiatives in your early phases. As I've pointed out elsewhere, we analytics folks tend to over-focus on the tactical.

Finally, I've included building a campaign measurement framework in the initial set of foundational activities. This might not be the right choice for every organization, but if you spend a significant amount of money on marketing, it's a critical element in evolving your maturity. As with data audits, a lot of my clients are already pretty good at this. For many folks, campaigns are already measured using a pretty rich and well-thought-out framework and the pain point tends to be deeper – around attribution and mix. But I also see organizations jumping right to questions of attribution before they've really done the work necessary to pick the right KPIs to optimize against. That's a prescription for disaster. If you don't put in the intellectual sweat equity to understand how campaigns should be measured (and it's often surprisingly complicated in real-world businesses where conversion rate is rarely the be-all-and-end-all of optimization), then your attribution modeling is doomed to fail.

So here are the first five things to tackle in building out the analytics part of a digital transformation effort:

[Image: Foundational Transformation – Step 1]

These five activities provide a rich foundation for analytics driven transformation along with some core strategic analytic capabilities. I’ll cover what comes after this in my next post.

Building Analytics Culture – One Decision at a Time

In my last post, I argued that much of what passes for "building culture" in corporate America is worthless. It's all about talk. And whether that talk is about diversity, ethics or analytics, it's equally arid. Because you don't build culture by talking. You build culture through actions. By doing things right (or wrong, if that's the kind of culture you want). Not only are words not effective in building culture, they can be positively toxic. When words and actions don't align, the dishonesty casts other – possibly more meaningful – words into disrepute. Think about which is worse – a culture where bribery is simply the accepted and normal way of getting things done (and is cheerfully acknowledged) or one where bribery is ubiquitous but is cloaked behind constant protestations of disinterest and honesty? If you're not sure about your answer, take it down to a personal level and ask yourself the same question. Do we not like an honest villain better than a hypocrite? If hypocrisy is the compliment vice pays to virtue, it is a particularly nasty form of flattery.

What this means is that you can’t build an analytics culture by telling people to be data driven. You can’t build an analytics culture by touting the virtues of analysis. You can’t even build an analytics culture by hiring analysts. You build an analytics culture by making good (data-driven) decisions.

That’s the only way.

But how do you get an organization to make data-driven decisions? That’s the art of building culture. And in that last post, I laid out seven (a baker’s half-dozen?) tactics for building good decision-making habits: analytic reporting, analytics briefing sessions, hiring a C-Suite analytics advisor, creating measurement standards, building a rich meta-data system for campaigns and content, creating a rapid VoC capability and embracing a continuous improvement methodology like SPEED.

These aren't just random parts of making analytic decisions. They are tactics that seem to me particularly effective in driving good habits in the organization and building the right kind of culture. But seven tactics don't nearly exhaust my list. Here's another set of techniques that are equally important in helping drive good decision-making in the organization (my original list wasn't in any particular order, so it's not like the previous list had all the important stuff):

Yearly Agency Performance Measurement and Reviews

What it is: Having an independent annual analysis of your agency’s performance. This should include review of goals and metrics, consideration of the appropriateness of KPIs and analysis of variation in campaign performance along three dimensions (inside the campaign by element, over time, and across campaigns). This must not be done by the agency itself (duh!) or by the owners of the relationship.

Why it builds culture: Most agencies work by building strong personal relationships. There are times and ways that this can work in your favor, but from a cultural perspective it both limits and discourages analytic thinking. I see many enterprises where the agency is so strongly entrenched you literally cannot criticize them. Not only does the resulting marketing nearly always suck, but it drains the life out of an analytics culture. This is one of many ways in which building an analytic culture can conflict with other goals, but here I definitely believe analytics should win. You don't need an overly cozy relationship with your agency. You do need objective measurement of their performance.
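For the "three dimensions of variation" piece, here's a hedged sketch of what that analysis might look like, assuming a flat campaign-performance export with hypothetical column names (campaign, element, week, spend, conversions):

import pandas as pd

# Hypothetical export: one row per campaign element per week.
df = pd.read_csv("campaign_performance.csv")
df["cpa"] = df["spend"] / df["conversions"]

# 1. Variation inside each campaign, by element
by_element = df.groupby(["campaign", "element"])["cpa"].mean()
print(by_element.groupby("campaign").std().rename("cpa_std_across_elements"))

# 2. Variation over time, within each campaign
by_week = df.groupby(["campaign", "week"])["cpa"].mean()
print(by_week.groupby("campaign").std().rename("cpa_std_over_time"))

# 3. Variation across campaigns
print(df.groupby("campaign")["cpa"].mean().std())

Large unexplained variation on any of these dimensions is exactly what an independent review should surface – and exactly what a too-cozy relationship tends to bury.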

 

Analytics Annotation / Collaboration Tool like Insight Rocket

What it is: A tool that provides a method for rich data annotation and the creation and distribution of analytic stories across the analytics team and into the organization. In Analytic Reporting, I argued for a focus on democratizing knowledge, not data. Tools like Insight Rocket are a part of that strategy, since they provide a way to create and rapidly disseminate a layer of meaning on top of powerful data exploration tools like Tableau.

Why it builds culture: There aren't that many places where technology makes much difference to culture, but there are a few. As some of my other suggestions make clear, you get a better analytics culture the more you drive analytics across and into the organization (analytic reporting, C-Suite advisor, SPEED, etc.). Tools like Insight Rocket have three virtues: they help disseminate analytic thinking, not just data; they boost analytics collaboration, making for better analytic teams; and they provide a repository of analyses that increases long-term leverage in the enterprise. Oh, and here's a fourth advantage: they force analysts to tell stories – meaning they have to engage with the business. That makes this piece of technology a really nice complement to my suggestion about a regular cadence of analytics briefings, and a rare instance of technology deepening culture.

 

In-sourcing

What it is: Building analytics expertise internally instead of hiring it out and, most especially, instead of off-shoring it.

Why it builds culture: I’d be the last person to tell you that consulting shouldn’t have a role in the large enterprise. I’ve been a consultant for most of my working life. But we routinely advise our clients to change the way they think about consulting – to use it not as a replacement for an internal capability but as a bootstrap and supplement to that capability. If analytics is core to digital (and it is) and if digital is core to your business (which it probably is), then you need analytics to be part of your internal capability. Having strong, capable, influential on-shore employees who are analysts is absolutely necessary to analytics culture. I’ll add that while off-shoring, too, has a role, it’s a far more effective culture killer than normal consulting. Off-shoring creates a sharp divide between the analyst and the business that is fatal to good performance and good culture on EITHER side.

 

Learning-based Testing Plan

What it is: Testing plans that include significant focus on developing best design practices and resolving political issues instead of on micro-optimizations of the funnel.

Why it works: Testing is a way to make decisions. But as long as its primary use is to decide whether to show image A or image B, or a button in this color or that color, it will never be used properly. To illustrate learning-based testing, I've used the example of video integration – testing different methods of on-page video integration, different lengths, different content types and different placements against each key segment and use-case to determine UI parameters for ALL future videos. When you test this way, you resolve hundreds of future questions and save endless future debate about what to do with this or that video. That's learning-based testing. It's also about picking key places in the organization where political battles determine design – things like home page real-estate and the amount of advertising load on a page – and resolving them with testing; that's learning-based testing, too. Learning-based testing builds culture in two ways. First, in and of itself, it drives analytic decision-making. Almost as important, it demonstrates the proper role of experimentation and should help set the table for decision-makers to ask for more interesting tests.
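As a sketch of what the video-integration example looks like as a design space (all factor names here are hypothetical), the point is that every combination of segment, use-case and treatment becomes a test cell whose result generalizes to future videos rather than settling a single page:

from itertools import product

# Hypothetical factors for a learning-based video-integration test.
segments   = ["new_visitor", "returning_customer", "high_value"]
use_cases  = ["research", "purchase", "support"]
placements = ["above_fold", "below_fold"]
lengths    = ["30s", "2min"]

cells = [
    {"segment": s, "use_case": u, "placement": p, "length": l}
    for s, u, p, l in product(segments, use_cases, placements, lengths)
]

print(f"{len(cells)} test cells")   # 3 * 3 * 2 * 2 = 36
for cell in cells[:3]:
    print(cell)

# In practice you'd trim this to a fractional design if traffic won't support
# the full grid -- the point is that cells are chosen to answer a general
# question, not to pick a winner for one page.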

 

Control Groups

What it is: Use of control groups to measure effectiveness whenever new programs (operational or marketing) are implemented. Control groups use small population subsets, chosen randomly from a target population, who are given either no experience or a neutral (existing) experience instead. Nearly all tests feature a baseline control group as part of the test, but the use of control groups transcends A/B testing tools. Control groups are common in traditional direct response marketing and can be used in a wide variety of on- and offline contexts (most especially, as I recently saw Elea Feit of Drexel hammer home at the DAA Symposium, as a much more effective approach to attribution).

Why it works: One of the real barriers to building culture is a classic problem in education. When you first teach students something, they almost invariably use it poorly. That can sour others on the value of the knowledge itself. When people in an organization first start using analytics, they are, quite inevitably, going to fall into the correlation trap. Correlation is not causation. But in many cases, it sure looks like it is and this leads to many, many bad decisions. How to prevent the most common error in analytics? Control groups. Control groups build culture because they get decision-makers thinking the right way about measurement and because they protect the organization from mistakes that will otherwise sour the culture on analytics.
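Here's a minimal sketch of the mechanic in plain Python (simulated data, made-up numbers): hold out a random control group before the program launches, then read effectiveness as the lift of treated over control rather than as a before/after comparison.

import random

random.seed(42)

# Simulated population: hold out ~10% at random as a control group that
# does NOT receive the new program.
customers = [{"id": i, "treated": random.random() > 0.10} for i in range(100_000)]

# Simulated outcomes: 2% baseline conversion, +0.5 points true program effect.
for c in customers:
    p = 0.020 + (0.005 if c["treated"] else 0.0)
    c["converted"] = random.random() < p

def rate(group):
    return sum(c["converted"] for c in group) / len(group)

treated = [c for c in customers if c["treated"]]
control = [c for c in customers if not c["treated"]]

print(f"treated: {rate(treated):.4f}  control: {rate(control):.4f}  "
      f"lift: {rate(treated) - rate(control):+.4f}")

Because assignment is random, whatever difference you see is attributable to the program itself – which is exactly the protection against the correlation trap described above.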

 

Unified Success Framework

What it is: A standardized, pre-determined framework for content and campaign success measurement that includes definition of campaign types, description of key metrics for those types, and methods of comparing like campaigns on an apples-to-apples basis.

Why it works: You may not be able to make the horse drink, but leading it to water is a good start. A unified success framework puts rigor around success measurement – a critical part of building good analytics culture. On the producer side, it forces the analytics team to make real decisions about what matters and, one hopes, pushes them to prove that proxy measures (such as engagement) are real. On the consumer side, it prevents that most insidious destroyer of analytics culture, the post hoc success analysis. If you can pick your success after the game is over, you’ll always win.
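One way to keep the framework honest is to encode it as data the whole organization can see, decided before any campaign runs. A hedged sketch – the campaign types, KPIs and normalizations here are all made up for illustration:

# Hypothetical unified success framework: campaign types, the KPIs that matter
# for each, and the normalization used for apples-to-apples comparison.
SUCCESS_FRAMEWORK = {
    "acquisition": {
        "primary_kpi": "cost_per_new_customer",
        "secondary_kpis": ["new_customer_rate", "90_day_value"],
        "normalize_by": "spend",
    },
    "retention": {
        "primary_kpi": "incremental_repeat_rate",
        "secondary_kpis": ["churn_delta", "lifetime_value_delta"],
        "normalize_by": "contacted_customers",
    },
    "brand": {
        "primary_kpi": "aided_awareness_lift",
        "secondary_kpis": ["branded_search_volume"],
        "normalize_by": "impressions",
    },
}

def score_campaign(campaign_type, results):
    """Look up the pre-agreed KPI for this campaign type -- no post hoc
    choice of whichever metric happens to look good after the fact."""
    spec = SUCCESS_FRAMEWORK[campaign_type]
    return spec["primary_kpi"], results[spec["primary_kpi"]]

print(score_campaign("acquisition", {"cost_per_new_customer": 42.50}))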

 

The Enterprise VoC Dashboard

What it is: An enterprise-wide state-of-the-customer dashboard that provides a snapshot and trended look at how customer attitudes are evolving. It should include built-in segmentation so that attitudinal views are ALWAYS shown sliced by key customer types, with additional segmentation possible.

Why it works: There are so many good things going on here that it's hard to enumerate them all. First, this type of dashboard is one of the best ways to instill customer-first thinking in the organization. You can't think customer-first until you know what the customer thinks. Second, this type of dashboard enforces a segmented view of the world. Segmentation is fundamental to critical thinking about digital problems, and this sets the table for better questions and better answers in the organization. Third, opinion data is easier to absorb and use than behavioral data, making this type of dashboard particularly valuable for encouraging decision-makers to use analytics.
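In practical terms, "always shown sliced by key customer types" is just a trended cross-tab. A minimal pandas sketch, assuming a hypothetical survey export with segment, visit_purpose, satisfaction and response_date columns:

import pandas as pd

# Hypothetical VoC export: one row per survey response.
voc = pd.read_csv("voc_responses.csv", parse_dates=["response_date"])

# Satisfaction trended by month, always broken out by customer segment --
# never shown as a single blended number.
trended = (
    voc.assign(month=voc["response_date"].dt.to_period("M"))
       .pivot_table(index="month", columns="segment",
                    values="satisfaction", aggfunc="mean")
)
print(trended.round(2))

# Visit purpose by segment: the cross-tab that makes attitudinal data usable.
print(pd.crosstab(voc["segment"], voc["visit_purpose"], normalize="index").round(2))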

 

Two-Tiered Segmentation

What it is: A method that creates two levels of segmentation in the digital channel. The first level is the traditional "who" someone is – whether in terms of persona or business relationship or key demographics. The second level captures "what" they are trying to accomplish. Each customer touchpoint can be described in this type of segmentation as the intersection of who a visitor is and what their visit was for.

Why it works: Much like the VoC Dashboard, Two-Tiered Segmentation makes for dramatically better clarity around digital channel decision-making and evaluation of success. Questions like "Is our Website successful?" get morphed into the much more tractable and analyzable question "Is our Website successful for this audience trying to do this task?". That's a much better question, and a big part of building analytics culture is getting people to ask better questions. This also happens to be the main topic of my book "Measuring the Digital World", and in it you can get a full description of both the power and the methods behind Two-Tiered Segmentation.
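For readers who want the flavor without the book, here's a toy sketch of the mechanics – the rules, page paths and segment names are all hypothetical, and the real method is considerably richer – showing how every visit gets both a "who" and a "what" label, with reporting then cut by the intersection:

def who_segment(visitor):
    """Tier 1: who the visitor is (persona, relationship, key demographics)."""
    if visitor.get("is_customer"):
        return "existing_customer"
    if visitor.get("visits_last_90_days", 0) > 3:
        return "engaged_prospect"
    return "new_prospect"

def visit_intent(pages_viewed):
    """Tier 2: what this visit was trying to accomplish, inferred from the
    content actually consumed (illustrative rules only)."""
    if any(p.startswith("/support") for p in pages_viewed):
        return "get_support"
    if any(p.startswith(("/pricing", "/checkout")) for p in pages_viewed):
        return "evaluate_or_buy"
    return "early_research"

visit = {"visitor": {"is_customer": False, "visits_last_90_days": 5},
         "pages": ["/products/widget", "/pricing"]}

print((who_segment(visit["visitor"]), visit_intent(visit["pages"])))
# ('engaged_prospect', 'evaluate_or_buy')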

 

I have more, but I’m going to roll the rest into my next post on building an agile organization since they are all deeply related to the integration of capabilities in the organization. Still, that’s fifteen different tactics for building culture. None of which include mission statements, organizational alignment or C-Level support (okay, Walking the Walk is kind of that but not exactly and I didn’t include it in the fifteen) and none of which will take place in corporate retreats or all-hands conferences. That’s a good thing and makes me believe they might actually work.

Ask yourself this: is it possible to imagine an organization that does even half these things and doesn’t have a great analytics culture? I don’t think it is. Because culture just is the sum of the way your organization works and these are powerful drivers of good analytic thinking. You can imagine an organization that does these things and isn’t friendly, collaborative, responsible, flat, diverse, caring or even innovative. There are all kinds of culture, and good decision-making isn’t the only aspect of culture to care about*. But if you do these things, you will have an organization that makes consistently good decisions.

*Incidentally, if you want to build culture in any of these other ways, you have to think about similar approaches. Astronomers have a clever technique for seeing very faint objects called averted vision. The idea is that you look just to the side of the object if you want to get the most light-gathering power from your eyes. It’s the same with culture. You can’t tackle it head-on by talking about it. You have to build it just a little from the side!

Measuring the Digital World

After several months in pre-order purgatory, my book, Measuring the Digital World, is now available. If you're even an occasional reader of this blog, I hope you'll find the time to read it.

I know that’s no small ask. Reading a professional book is a big investment of time. So is reading Measuring the Digital World worth it?

Well, if you're invested in digital optimization and analytics, I think it is – and here's why. We work in a field that is still very immature. It's grown up, as it were, underneath our feet. And while that kind of organic growth is always the most exciting, it's also the most unruly. I'm betting that most of us who have spent a few years or more in digital analytics have never really had a chance to reflect on what we do and how we do it. Worse, most of those who are trying to learn the field have to do so almost entirely by mentored trial-and-error. That's hard. Having a framework for how and why things work makes the inevitable trial-and-error learning far more productive.

My goal in Measuring the Digital World wasn’t so much to create a how-to book as to define a discipline. I believe digital analytics is a unique field. A field defined by a few key problems that we must solve if we are to do it well. In the book, I wanted to lay out those problems and show how they can be tackled – irrespective of the tools you use or the type of digital property you care about.

At the very heart of digital analytics is a problem of description. Measurement is basic to understanding. We are born with and soon learn to speak and think in terms of measurement categories that apply to the physical world. Dimensionality, weight, speed, direction and color are some of the core measurement categories that we use over and over and over again in understanding the world we live in. These things don’t exist in the digital world.

What replaces them?

Our digital analytics tools provide the eyes and ears into the digital world. But I think we should be very skeptical of the measurement categories they suggest. Having lived through the period when those tools were designed and took their present shape, I've seen how flawed the measurement conceptions that drove their form and function were.

It’s not original, but it’s still true to say that our digital analytics tools mostly live at the wrong level and have the wrong set of measurement categories – that they are far too focused on web assets and far too little on web visitors.

But if this is a mere truism, it nevertheless lays the groundwork for a real discipline. Because it suggests that the great challenge of digital is how to understand who people are and what they are doing using only their viewing behavior. We have to infer identity and intention from action. Probably 9 out of every 10 pages in Measuring the Digital World are concerned with how to do this.

The things that make it hard are precisely the things that define our discipline. First, to make the connection between action and both identity and intention, we have to find ways to generate meaning based on content consumption. This means understanding at a deep level what content is about – it also means making the implicit assumption that people self-select the things that interest them.

For the most part, that’s true.

But it's also where things get tricky. Because digital properties don't contain limitless possibilities, and they impose a structure that tries to guide the user to specific actions. This creates a push-pull in every digital world. On the one hand, we're using what people consume to understand their intention and, at the very same time, we're constantly forcing their hand and trying to get them to take specific actions! Every digital property – no matter its purpose or design – embodies this push-pull. The result? A complex interplay between self-selection, intention and web design that makes understanding behavior in digital a constant struggle.

That’s the point – and the challenge – of digital analytics. We need to have techniques for moving from behavior to identity and intention. And we need to have techniques that control for the structure of digital properties and the presence or absence of content. These same challenges are played out on Websites, on mobile apps and, now, on omni-channel customer journeys.

This is all ground I’ve walked before, but Measuring the Digital World embodies an orderly and fairly comprehensive approach to describing these challenges and laying out the framework of our discipline. How it works. Why it’s hard. What challenges we still face. It’s all there.

So if you’re an experienced analyst and just want to reflect your intuitions and knowledge against a formal description of digital analytics and how it can be done, this book is for you. I’m pretty sure you’ll find at least a few new ideas and some new clarity around ideas you probably already have.

If you’re relatively new to the field and would like something that is intellectually a little more meaty than the “bag of tips-and-tricks” books that you’ve already read, then this book is for you. You’ll get a deep set of methods and techniques that can be applied to almost any digital property to drive better understanding and optimization. You’ll get a sense, maybe for the first time, of exactly what our discipline is – why it’s hard and why certain kinds of mistakes are ubiquitous and must be carefully guarded against.

And if you're teaching a bunch of MBA or business students about digital analytics and want something that actually describes a discipline, this book is REALLY for you (well…for your students). Your students will get a true appreciation for a cutting-edge analytics discipline; they'll also get a sense of where the most interesting new problems in digital analytics are and what approaches might bear fruit. They'll get a book that illuminates how the structure of a field – in this case digital – demands specific approaches, creates unique problems, and rewards certain types of analysis. That's knowledge that cuts deeper than just understanding digital analytics – it goes right to the heart of what analytics is about and how it can work in any business discipline. Finally, I hope that the opportunity to tackle the deep and interesting problems illuminated by the book's framework excites new analysts and inspires the next generation of digital analysts to go far beyond what we've been able to do.

 

Yes, even though I’m an inveterate reader, I know it’s no trivial thing to say “read this book”. After all, despite my copious consumption, I delve much less often into business or technical books. So many seem like fine ten-page articles stretched – I’m tempted to say distorted – into book form. You get their gist in the first five pages and the rest is just filler. That doesn’t make for a great investment of time.

And now that I’ve actually written a book, I can see why that happens. Who really has 250 pages worth of stuff to say? I’m not sure I do…actually I’m pretty sure there’s some filler tucked in there in a spot or two. But I think the ratio is pretty good.

With Measuring the Digital World I tried to do something very ambitious – define a discipline. To create the authoritative view of what digital analytics is, how it works, and why it’s different than any other field of analytics. Not to answer every question, lay out every technique or solve every problem. There are huge swaths of our field not even mentioned in the book. That doesn’t bother me. What we do is far too rich to describe in a single book or even a substantial collection. Digital is, as the title of the book suggests, a whole new world. My goal was not to explore every aspect of measuring that world, but only to show how that measurement, at its heart, must proceed. I’m surely not the right person to judge to what extent I succeeded. I hope you’ll do that.

Here’s the link to Measuring the Digital World on Amazon.

[By the way, if you'd like a signed copy of Measuring the Digital World, just let me know. You can buy a copy online and I'll send you a book-plate. I know it's a little silly, but I confess to extreme fondness for the few signed books I possess!]

Digital Transformation – How to Get Started, Real KPIs, the Necessary Staff and So Much More!

In the last couple of months, I've been writing an extended series on digital transformation that reflects our current practice focus. At the center of this whole series is a simple thesis: if you want to be good at something, you have to be able to make good decisions around it. Most enterprises can't do that in digital. From the top on down, they are set up in ways that make it difficult or impossible for decision-makers to understand how digital systems work and act on that knowledge. It isn't because people don't understand what's necessary to make good decisions. Enterprises have invested in exactly the capabilities that are necessary: analytics, Voice of Customer, customer journey mapping, agile development, and testing. What they haven't done is change their processes in ways that take advantage of those capabilities.

I’ve put together what I think is a really compelling presentation of how most organizations make decisions in the digital channel, why it’s ineffective, and what they need to do to get better. I’ve put a lot of time into it (because it’s at the core of our value proposition) and really, it’s one of the best presentations I’ve ever done. If you’re a member of the Digital Analytics Association, you can see a chunk of that presentation in the recent webinar I did on this topic. [Webinars are brutal – by far the hardest kind of speaking I do – because you are just sitting there talking into the phone for 50 minutes – but I think this one, especially the back-half, just went well] Seriously, if you’re a DAA member, I think you’ll find it worthwhile to replay the webinar.

If you’re not, and you really want to see it, drop me a line, I’m told we can get guest registrations setup by request.

At the end of that webinar I got quite a few questions. I didn’t get a chance to answer them all and I promised I would – so that’s what this post is. I think most of the questions have inherent interest and are easily understood without watching the webinar so do read on even if you didn’t catch it (but watch the darn webinar).

Q: Are metrics valuable to stakeholders even if they don’t tie in to revenues/cost savings?

Absolutely. In point of fact, revenue isn't even the best metric on the positive side of the balance sheet. For many reasons, lifetime value metrics are generally a better choice than revenue. Regardless, not every useful metric has to, can or should tie back to dollars. There are whole classes of metrics that are important but won't directly tie to dollars: satisfaction metrics, brand awareness metrics and task completion metrics. That being said, the most controversial type of non-revenue metric is the engagement proxy, which is, in turn, a kind of proxy for revenue. These, too, can be useful, but they are far more dangerous. My advice is to never use a proxy metric unless you've done the work to prove it's a valid proxy. That means no metrics plucked from thin air because they seem reasonable. If you can't close the loop on performance with behavioral data, use re-survey methods. It's absolutely critical that the metrics you optimize with be the right ones – and that means spending the extra time to get them right. Finally, I've argued for a while that rather than metrics, our focus should be on delivering models embedded in tools – this allows people to run their business, not just look at history.
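To make "prove it's a valid proxy" concrete, the minimum check is whether the proxy actually moves with the outcome you care about at the user level. Here's a hedged sketch on simulated data (stdlib only; statistics.correlation requires Python 3.10+):

import random
from statistics import correlation  # Python 3.10+

random.seed(7)

# Simulated user-level data: an engagement proxy (say, pages per visit) and
# the downstream outcome we actually care about (12-month value).
users = []
for _ in range(5_000):
    engagement = random.gauss(5, 2)
    # In this simulation the proxy is only weakly related to value --
    # exactly the situation a validation check should catch.
    value = max(0.0, 10 + 0.5 * engagement + random.gauss(0, 20))
    users.append((engagement, value))

r = correlation([u[0] for u in users], [u[1] for u in users])
print(f"engagement proxy vs. 12-month value: r = {r:.2f}")

A weak relationship means the proxy shouldn't be optimized against until you've closed the loop some other way – with re-survey methods or holdout tests, for example.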

Q: What is your favorite social advertising KPI? I have been using $ / Site Visit and $ / Conversion to measure our campaigns but there is some pushback from the social team that we are not capturing social reach.

A very related question – and it's interesting because I actually didn't talk much about KPIs in the webinar! I think the question boils down to this (in addition to everything I just said about metrics): is reach a valid metric? It can be, but reach shouldn't be taken as is. As per my answer above, the value of an impression is quite different on every channel. If you're not doing the work to figure out the value of an impression in a channel, then what's the point of reporting an arbitrary reach number? How can people possibly assess whether any given reach number makes a buy good or bad once they realize that the value of an impression varies dramatically by channel? I also think a strong case can be made that it's a mistake to try and optimize digital campaigns using reported metrics, even direct conversion and dollars. I just saw a tremendous presentation from Drexel's Elea Feit at the Philadelphia DAA Symposium that echoed (and improved on) what I've been saying for years: namely, that non-incremental attribution is garbage and that the best way to get true measures of lift is to use control groups. If your social media team thinks reach is important, then it's worth trying to prove whether they are right – whether that's because those campaigns generate hidden short-term lift or because they generate brand awareness that tracks to long-term lift.

Q: For companies that are operating in the way you typically see, what is the one thing you would recommend to help get them started?

This is a tough one because it’s still somewhat dependent on the exact shape of the organization. Here are two things I commonly recommend. First, think about a much different kind of VoC program. Constant updating and targeting of surveys, regular socialization with key decision-makers where they drive the research, an enterprise-wide VoC dashboard in something like Tableau that focuses on customer decision-making not NPS. This is a great and relatively inexpensive way to bootstrap a true strategic decision support capability. Second, totally re-think your testing program as a controlled experimentation capability for decision-making. Almost every organization I work with should consider fundamental change in the nature, scope, and process around testing.

Q: How much does this change when there are no clear conversions (i.e., Non-Profit, B2B, etc)?

I don't think anything changes. But, of course, everything does change. What I mean is that all of the fundamental precepts are identical. VoC, controlled experiments, customer journey mapping, agile analytics, integration of teams – it's all exactly the same set of lessons regardless of whether or not you have clear conversions on your website. On the other hand, every single measurement is that much harder. I'd say the methods I argue for are even more important when you don't have the relatively straightforward path to optimization that eCommerce provides. In particular, the absolute importance of closing the loop on important measurements simply can't be overstated when you don't have a clear conversion to optimize to.

Q: What is the minimum size of analytics team to be able to successfully implement this at scale?

Another tricky question to answer but I’ll try not to weasel out of it. Think about it this way, to drive real transformation at enterprise scale, you need at least 1 analyst covering every significant function. That means an analyst for core digital reporting, digital analytics, experimentation, VoC, data science, customer journey, and implementation. For most large enterprises, that’s still an unrealistically small team. You might scrape by with a single analyst in VoC and customer journey, but you’re going to need at least small teams in core digital reporting, analytics, implementation and probably data science as well. If you’re at all successful, the number of analytics, experimentation and data science folks is going to grow larger – possibly much larger.  It’s not like a single person in a startup can’t drive real change, but that’s just not the way things work in the large enterprise. Large enterprise environments are complex in every respect and it takes a significant number of people to drive effective processes.

Q: Sometimes it feels like agile is just a subject line for the weekly meeting. Do you have any examples of organizations using agile well when it comes to digital?

Couldn’t agree more. My rule of thumb is this: if your organization is studying how to be innovative, it never will be. If your organization is meeting about agile, it isn’t. In the IT world, Agile has gone from a truly innovative approach to development to a ludicrous over-engineered process managed, often enough, by teams of consulting PMs. I do see some organizations that I think are actually quite agile when it comes to digital and doing it very well. They are almost all gaming companies, pure-play internet companies or startups. I’ll be honest – a lot of the ideas in my presentation and approach to digital transformation come from observing those types of companies. Whether I’m right that similar approaches can work for a large enterprise is, frankly, unclear.

Q: As a third party measurement company, what is the best way to approach or the best questions to ask customers to really get at and understand their strategic goals around their customer journeys?

This really is too big to answer inside a blog – maybe even too big to reasonably answer as a blog. I'll say, too, that I'm increasingly skeptical of our ability to do this. As a consultant, I'm honor-bound to claim that as a group we can come in, ask a series of questions of people who have worked in an industry for 10 or 20 years and, in a few days' time, understand their strategic goals. Okay…put this way, it's obviously absurd. And, in fact, that's really not how consulting companies work. Most of the people leading strategic engagements at top-tier consulting outfits have actually worked in an industry for a long time, and many have worked on the enterprise side and made exactly those strategic decisions. That's a huge advantage. Most good consultants in a strategic engagement know 90% of what they are going to recommend before they ask a single question.

Having said that, I’m often personally in a situation where I’m asked to do exactly what I’ve just said is absurd and chances are if you’re a third party measurement company you have the same problem. You have to get at something that’s very hard and very complex in a very short amount of time and your expertise (like mine) is in analytics or technology not insurance or plumbing or publishing or automotive.

Here are a couple of things I've found helpful. First, take the journeys yourself. It's surprising how many executives have never bought an online policy from their own company, downloaded a whitepaper to generate a lead, or bought advertising on their own site. You may not be able to replicate every journey, but where you can get hands-on, do it. Having a customer's viewpoint on the journey never hurts, and it can give you insight your customers should but often don't have. Second, remember that the internet is your best friend. A little up-front research from analysts is a huge benefit when setting the table for those conversations. And I'm often frantically googling acronyms and keywords when I'm leading those executive conversations. Third, check out the competition. If you go through a lead journey on the client's website, try it on their top three competitors too. What you'll see is often a great table-set for understanding where they are in digital and what their strategy needs to be. Finally, get specific on the journey. In my experience, the biggest failing in senior leaders is their tendency to generality. Big generalities are easy and they sound smart, but they usually don't mean much of anything. The very best leaders don't ever retreat into useless generality, but most of us will fall into it all too easily.

Q: What are some engagement models where an enterprise engages 3rd party consulting? For how long?

The question every consultant loves to hear! There are three main ways we help drive this type of digital transformation. The first is as strategic planners. We do quite a bit of pure digital analytics strategy work, but for this type of work we typically expand the strategic team a bit (beyond our core digital analytics folks) to include subject matter experts in the industry, in customer journey, and in information management. The goal is to create a "deep" analytics strategy that drives toward enterprise transformation. The second model (which can follow the strategic phase) is to supplement enterprise resources with specific expertise to bootstrap capabilities. This can include things like tackling specific highly strategic analytics projects, providing embedded analysts as part of the team to increase capacity and maturity, building out controlled experimentation teams, developing VoC systems, etc. We can also provide – and here's where being part of a big practice really helps – PM and change management experts who can help drive a broader transformation strategy. Finally, we can help build the program soup to nuts. Mind you, that doesn't mean we do everything. I'm a huge believer that a core part of this vision is transformation in the enterprise. Effectively, that means outsourcing to a consultancy is never the right answer. But in a soup-to-nuts model, we keep strategic people on the ground, helping to hire, train, and plan on an ongoing basis.

Obviously, the how-long depends on the model. Strategic planning exercises are typically 10-12 weeks. Specific projects are all over the map, and the soup-to-nuts model is sustained engagement though it usually starts out hot and then gets gradually smaller over time.

Q: Would really like to better understand how you can identify visitor segments in your 2-tier segmentation when we only know they came to the site and left (without any other info on what segment they might represent).  Do you have any examples or other papers that address how/if this can be done?

A couple of years back I was on a panel at a conference in San Diego and one of the panelists started every response with "In my book…". It didn't seem to matter much what the question was. The answer (and not just the first three words) was always the same. I told my daughters about it when I got home, and the gentleman is forever immortalized in my household as the "book guy". Now I'm going to go all book guy on you. The heart of my book, "Measuring the Digital World", is an attempt to answer this exact question. It's by far the most detailed explication I've ever given of the concepts behind 2-tiered segmentation and how to go from behavior to segmentation. That being said, you can only pre-order now. So I'm also going to point out that I have blogged fairly extensively on this topic over the years. Here are a couple of posts I dredged out that provide a good overview:

http://semphonic.blogs.com/semangel/2012/05/digital-segmentation.html

http://semphonic.blogs.com/semangel/2011/06/building-a-two-tiered-segmentation-semphonics-digital-segmentation-techniques.html

and – even more important – here’s the link to pre-order the book!

That’s it…a pretty darn good list of questions. I hope that’s genuinely reflective of the quality of the webinar. Next week I’m going to break out of this series for a week and write about our recent non-profit analytics hackathon – a very cool event that spurred some new thoughts on the analysis process and the tools we use for it.