
The Myth of the Single KPI for Testing

Continuous Improvement through testing is a simple idea. That’s no surprise. The simplest, most obvious ideas are often the most powerful. And testing is a powerful idea. An idea that forms and shapes the way digital is done by the companies that do it best. And those same companies have changed the world we live in.

If testing and continuous improvement is a process, analytics is the driver of that process; and as any good driver knows, the more powerful the vehicle, the more careful you have to be. Testing analytics seems so easy. You run a test, you measure which version worked better. You choose the winner.

It’s like reading the scoreboard at a football game. It doesn’t take a lot of brains to figure out who’s ahead.

Except it’s usually not that easy.

Sporting events are decided by the score. Games have rules and a single goal. Life and business mostly don’t. What makes measuring tests surprisingly tricky is that you rarely have a single unequivocal measure of success.

Suppose you add a merchandising drive to a section of your store or on the product detail page of your website. You test. And you generate more sales of that product.

Success!

Success?

Let’s start with the obvious caveat. You may have generated more sales, but you gave up margin. Was it worth it? Usually, the majority of buyers with a discount would have bought without one. Still, that kind of cannibalization is fairly easy to baseline and measure.

Here’s a trickier problem. What else changed? Because when you add a merchandising drive to a product, you don’t just shift that product’s buying pattern. The customer who buys might have bought something else. Maybe something with a better margin.

To people who don’t run tests, this may come as a bit of a surprise. Shouldn’t tests be designed to limit their impact so that the “winner” is clear?

Part of a good experimental design is, indeed, creating a test that limits external impacts. But this isn’t the lab. Limiting the outside impact of a test isn’t easy and you can never be sure you’ve actually succeeded in doing that unless you carefully measure.

Worse, the most important tests usually have the most macro-impact. Small creative tests can often be isolated to a single win-loss metric. Sadly, that metric usually doesn’t matter or doesn’t move.

If you need proof of that, check out this meta-study by Will Browne & Mike Jones (those names feel like generic test products, right?) that looked at the impact of different types of tests. Their finding? UI changes of the color and call-to-action type had, essentially, zero impact. Sadly, that’s what most folks spend all their time testing. (http://www.qubit.com/sites/default/files/pdf/qubit_meta_analysis.pdf)

If your test actually changes shopper behavior, believe me, there will be macro impacts.

It’s usually straightforward to measure the direct results of a store test. It’s often much harder to determine the macro impact. But it’s something you MUST look at. The macro impact can be as important as the direct impact – or more so. What’s more, it often – I’ll say usually – runs in the opposite direction.

So if you fail to measure the macro impact of a store test and you focus only on the obvious outcome, you’ll often pick the wrong result or grossly overstate the impact. Either way, you’re not using your analytics to drive appropriately.

Of course, one of the very real challenges you’ll face is that many tools don’t measure the macro impact of tests at all. In the digital world, the vast majority of dedicated testing tools require you to focus on a single KPI and provide absolutely no measurement of macro impacts. They simply assume that the test was completely compartmentalized. That works okay for things like email testing, but it’s flat-out wrong when it comes to testing store or website changes.

If your experiment worked well enough to change a shopper’s behavior and got them to buy something, the chances are quite good that it changed more than just that behavior. You may have given up margin. You likely lost some sales elsewhere. You almost certainly changed what else in the store or the site the shopper engaged with. That stuff matters.
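To make the point concrete, here’s a toy sketch of scoring a promotion test on more than one KPI. The products, prices and lift numbers are all invented for illustration; the only point is that the “obvious” KPI (units of the promoted product) and the macro view (total margin, including cannibalized neighbors) can disagree:

```python
# Hypothetical sketch: scoring a test arm on more than one KPI.
# SKUs, prices and volumes are invented, not real test data.

def summarize(transactions):
    """Aggregate units, revenue and margin for one test arm."""
    units = sum(t["units"] for t in transactions)
    revenue = sum(t["units"] * t["price"] for t in transactions)
    margin = sum(t["units"] * (t["price"] - t["cost"]) for t in transactions)
    return {"units": units, "revenue": revenue, "margin": margin}

control = [
    {"sku": "A", "units": 100, "price": 20.0, "cost": 12.0},  # promoted product
    {"sku": "B", "units": 40,  "price": 35.0, "cost": 15.0},  # high-margin neighbor
]
test = [
    {"sku": "A", "units": 130, "price": 17.0, "cost": 12.0},  # discounted: more units
    {"sku": "B", "units": 28,  "price": 35.0, "cost": 15.0},  # cannibalized
]

c, t = summarize(control), summarize(test)
# The single KPI says the test won; the margin view says otherwise.
print("unit lift:", t["units"] - c["units"])
print("margin impact:", round(t["margin"] - c["margin"], 2))
```

In this made-up example the promoted SKU sells 30% more units while total margin falls – exactly the kind of result a single-KPI tool never surfaces.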

In the store world, most tools don’t measure enough to give you even the immediate win-loss results. To heck with the rest of the story. So it can be tempting, when you first have real measurement, to focus on the obvious: which test won. Don’t.

In some of my recent posts, I’ve talked about the ways in which DM1 – our store testing and measurement platform – lets you track the full customer journey, segment, funnel and compare. Those capabilities are key to doing test measurement right. They give you the ability to see the immediate impact of a test AND the ways in which a change affected macro customer behavior.

You can see an example of how this works (and how important that macro behavior is in store layout) in this DM1 video that focuses on the Comparison capabilities of the tool.

https://www.youtube.com/watch?v=lbpaeSmaE74&t=13s

It’s the right way to use all that power a store testing program can provide.

Store Testing & Continuous Improvement

Continuous improvement is what drives the digital world. Whether applied as a specific methodology or simply present as a fundamental part of the background against which we do business, the discipline of change and measure is a fundamental part of the digital environment. A key part of our mission at Digital Mortar is simply this: to take that discipline of continuous improvement via change and measurement and bring it to stores.

Every part of DM1 – from store visualizations to segmentation to funnel analytics – is there to help measure and illuminate the in-store customer journey. You can’t build an effective strategy or process for continuous improvement without having that basic measurement environment. It provides the context that lets decision-makers talk intelligently about what’s working, what isn’t and what change might accomplish.

But as I pointed out in my last post, some analytic techniques are particularly useful for the role they play in shaping strategy and action. Funnel Analysis, I argued, is particularly good at focusing optimization efforts and making them easily measurable. Funnels help shape decisions about what to change. Equally important, they provide clear guidance about what to measure to judge the success of that change. After all, if you made a change to improve the funnel, you’re going to measure the impact of the change using that same funnel.

That’s a good thing.

One of the biggest mistakes in enterprise measurement (and – surprisingly – even in broader scientific contexts) is failing to commit to your measurement of success when you start an experiment. It turns out that you can nearly always find some measure that improved after an experiment. It just may not be the right measure. If folks are looking for a way to prove success, they’ll surely find it.

Since we expect our clients to use DM1 to drive store testing, we’ve tried to make it easy on both ends of the process. Tools like funnel analysis help analysts find and target areas for improvement. At the other end of the process, analysts need to be able to easily see whether changes actually generated improvement.

This isn’t just for experimentation. As an analyst, I find that one of the most common tasks I have to do is compare numbers. By store. By page. By time-period. By customer segment. Comparison provides basic measurement of change and context on that change.

Which makes comparison the core capability for analyzing store tests – and one that’s applicable to many other analytics exercises.

Though comparison is a fundamental part of the analytic process, it’s surprising how often it’s poorly supported in bespoke analytics tools. It took many years for tools like Adobe’s Workspace to evolve – providing comprehensive comparison capabilities. Until quite recently in digital analytics, you had to export reports to Excel if you wanted to lay key digital analytic data points from different reports side-by-side.

DM1’s Comparison tool is simple. It’s not a completely flexible canvas for analysis. It just takes any analytic view DM1 provides and allows you to use it in a side-by-side comparison. Simple. But it turns out to be quite powerful in practice.

Suppose you’re running a test in Store A with Store B as a control. DM1’s comparison view lets you lay those two Stores side-by-side during the testing period and see exactly what’s different. In this view, I’ve compared two similar stores by area looking at which areas drove the most shopper conversions:

Retail Analytics and Store Testing: Store Comparison in DM1
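The underlying idea is simple enough to sketch: compute the same metric, the same way, for both stores, and lay the results side by side. This is only a toy illustration – the area names and counts are invented, and DM1 itself isn’t a Python script:

```python
# Toy sketch of a test-vs-control comparison by store area.
# Area names and visit/conversion counts are invented.

def conversion_by_area(visits, conversions):
    """Share of an area's visitors who went on to convert."""
    return {area: conversions.get(area, 0) / visits[area] for area in visits}

store_a = conversion_by_area(  # the test store
    visits={"Entrance": 1000, "Jackets": 420, "Footwear": 310},
    conversions={"Jackets": 63, "Footwear": 31},
)
store_b = conversion_by_area(  # the control store
    visits={"Entrance": 980, "Jackets": 400, "Footwear": 330},
    conversions={"Jackets": 44, "Footwear": 33},
)

# Lay the two stores side by side, area by area.
for area in store_a:
    print(f"{area:10s} A={store_a[area]:.1%}  B={store_b[area]:.1%}")
```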

You can use ANY DM1 visualization in the Comparison. The funnel, the Store Viz or traditional reports and charts. In this view, I’ve compared the Shopper Funnel around a single merchandising category at two different stores. Not only can I see which store is more effective, I can see exactly where in the funnel the performance differences occur:

Retail Analytics and Store Testing: Time Comparison

Don’t have a control store? If you’re only measuring the customer journeys in a single store or if your store is a concept store, you won’t have another store to use as a control. No problem, DM1’s comparison view lets you compare the same store across two different time periods. You can compare season over season or consecutive time periods. You don’t even have to evenly match time periods. Here I’ve compared the October Funnel to Pre-Holiday November:

Retail Analytics and Funnels: Store Testing

Store and Date/Time are the most common types of comparison. But DM1’s comparison tool lets you compare on Segments and Metrics as well. I often want to understand how a single segment differs from other groups of visitors. By setting up a segmentation visualization, I can quickly page through a set of comparison segments while holding my target group constant. In the first screen, I’ve compared shoppers interested in Backpacks with shoppers focused on Team Gear in terms of how effective interactions with Associates are. With one click, I can do the same comparison between Women’s Jacket shoppers and Team Gear:

Store Analytics Comparison: Store Testing Segments

The ability to do this kind of comparison in the context of the visualizations is unusual AND powerful. The Comparison tool isn’t the only part of DM1 that supports comparison and contextualization. The Dashboard capability is surprisingly flexible and allows the analyst to put all sorts of different views side by side. And, of course, standard reporting tools like Charts and Table provide significant ways to do comparisons. But particularly when you want to use bespoke visualizations like Funnels and DM1’s store visualizations, having the ability to lay them side by side and quickly adjust metrics and view parameters is extraordinarily useful.

If you want to create a process of continuous improvement in the store, having measurement is THE essential component. Measurement that can help you identify and drive potential store testing opportunities. And measurement that can help you understand the real-world impact of change in all its complexity.

DM1 does both.

Click here to sign up for a Demo of DM1.

Retail Analytics: Store Visualization and DM1

Location analytics isn’t really about where the shopper was. After all, a stream of X,Y coordinates doesn’t tell us much about the shopper. The interesting fact is what was there – in the store – where the shopper was. To answer most questions about the shopper’s experience (what they were interested in, what they might have bought but didn’t, whether they had sales help or not, and what they passed but didn’t consider), we have to understand the store. In my last post, I explained why the most common method of mapping behavior to the store – heatmaps – doesn’t work very well. Today, I’m going to tackle how DM1 does it differently and (in my humble opinion) much better.

Here are the seven requirements I listed for Store Visualization and where and why heatmaps come up short:

Store Visualization: Heatmaps and retail analytics

Designing DM1’s store visualization, I started with the idea that its core function is to represent how an area of the store is performing. Not a point. An area. That’s an important distinction. Heatmaps function rather like a camera exposure: there’s an area down there somewhere, of course, but only at the tiny level of the pixel. That’s great for a photograph, where the smaller the pixel the better. Analytically, though, those points are too small to be useful. Besides, store measurement isn’t like taking a picture. Our measurement capture systems aren’t accurate enough to pinpoint a specific location in the store. Instead, they generate a location with a circle of error that, depending on the system being used, can actually be quite large. It doesn’t make a lot of sense to pretend that measurement is happening at a pixel location when the circle of error on the measurement is 5 feet across!

This got me thinking along the lines of the grid system used in classic board games I played as a kid. If you ever played those games, you know what I’m talking about. The board was a map (of the D-Day beaches or Gettysburg or all of Europe) and overlaid on the map was a (usually hexagonal) grid system that looked like this:

BoardGame

Units occupied grid spaces and their movement was controlled by grid spaces. The grid became the key to the game – with the map providing the underlying visual metaphor. This grid overlay is obviously artificial. Today’s first person shooter games don’t need or use anything like it, but strategy games like Civ still do. Why? Because it’s a great way to quantize spatial information about things like how far a unit can move or shoot, the distance to the enemy, the direction of an attack, the density of units in a space and much, much more.

DM1 takes this grid concept and applies it to store visualization. Picture a store:

store journey analytics

Now lay a grid over it:

Visualizing Store Data

And you can take any place the shopper spends time and map it to grid coordinates:

Mapping customer data to the store

And here’s where it really gets powerful. Because not only can you now map every measurement ping to a quantifiable grid space, you can attach store meta-data to the grid space in a deterministic and highly maintainable way. If we have a database that describes GridPoint P14 as being part of Customer Service on a given day, then we know exactly what a shopper saw there. Even better, by mapping actual traffic and store meta-data to grid-points, we can reliably track and trend those metrics over time. No matter how the shape or even location of a store area changes, our trends and metrics will be accurate. So if grid-point P14 is changed from Customer Service to Laptop Displays, we can still trend Customer Service traffic accurately – before, after and across the change.
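The grid-plus-dated-meta-data idea can be sketched in a few lines. This is an illustrative toy, not DM1’s implementation – the cell size, the lettering scheme, and the P14 dates are all assumptions:

```python
# Sketch of the grid idea: pings quantize to cells, and cells resolve to
# named store areas through date-ranged meta-data. All values invented.
from datetime import date

CELL = 5  # grid resolution in feet (assumed)

def grid_point(x, y):
    """Quantize an (x, y) ping to a grid coordinate like 'P14'.
    (Toy lettering -- only works for stores under 26 columns wide.)"""
    col, row = x // CELL, y // CELL
    return f"{chr(ord('A') + col)}{row}"

# The same cell can mean different things on different days.
META = {
    "P14": [
        (date(2017, 1, 1), date(2017, 6, 30), "Customer Service"),
        (date(2017, 7, 1), date(2017, 12, 31), "Laptop Displays"),
    ],
}

def area_of(cell, day):
    """Resolve a cell to whatever area it belonged to on a given day."""
    for start, end, name in META.get(cell, []):
        if start <= day <= end:
            return name
    return "Unassigned"

# Two pings at the same physical spot, months apart, attribute correctly:
for x, y, day in [(76, 71, date(2017, 5, 2)), (76, 71, date(2017, 8, 9))]:
    print(grid_point(x, y), "->", area_of(grid_point(x, y), day))
```

Because each ping is attributed to whatever the cell WAS on the day it happened, trends for an area stay accurate across remodels.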

That’s how DM1 works.

Here’s a look at DM1 displaying a store at the Section level:

Retail Analytics: Store Visualization in DM1

In this case, the metric is visits and each section is color-coded to represent how much foot traffic the section got. These are fully quantified numbers. You can mouse over any area and get the exact counts and metrics for it. Note that you don’t need a separate planogram to match to the store. The understanding of what’s there is captured right alongside the metric visualization. Now obviously, Section isn’t the only grid level for the store. We often need to be much more fine-grained. In DM1, you can drill down to the actual grid level to get a much more detailed view:

Retail Analytics: Store Detail in DM1

How detailed? As detailed as your collection system will support. We set up the grid in DM1 to match the appropriate resolution of your system. You’re not limited to drilling down, though. You can also drill up to levels above a Section. Here’s a DM1 view at the Department level:

Retail Analytics: Store Meta Data and Levels in DM1

In fact, with DM1, you have pretty much complete flexibility in how you describe the store. You can define ANY level of meta-data for each grid-point and then view it on the store. Here, for example, is where promotions were placed in the store:

Retail Analytics: Store Merchandising Data Overlay

DM1 also takes advantage of the Store Visualization to make it easy to compare stores – head to head or the same store over time. The Comparison view shows two stores viewed (in this example) at the Section level and compared by Conversion Efficiency:

Retail Analytics: Store Comparison in DM1

It takes only a glance to instantly see which Sections perform better and which worse at each store. That’s a powerful viz!

In DM1, pretty much ANY metric can be mapped on the store at ANY meta-data level. You can see visits, lingers, linger rate, avg. time, attributed conversions, exits, bounces, Associate interactions, STARs ratio, Interaction Success Rate and so much more (almost fifty metrics) – mapped to any logical level of the store, from macro-levels like Department or Floor all the way down to the smallest unit of measurement your collection system can support. Best of all, you define those levels. They aren’t fixed. They’re entirely custom to the way you want to map, measure and optimize your stores.

And because DM1 keeps an historical database of the layouts and meta-data over time, it provides simple, accurate and easily intelligible trending over time.

I love the store visualization capability in DM1 and I think it’s a huge advance compared to heatmaps. As an analyst, I can tell you there’s just no comparison in terms of how useful these visualizations are. They do so much more and do it so much better that it hardly seems worth comparing them to the old way of doing things. But here it is anyway:

DM1 Retail Analytics Store Visualization Advantages

DM1’s store visualization is one powerful analytic hammer. But as good as it is, this type of store visualization doesn’t solve every problem. In my next post, I’ll show how DM1 uses another powerful visual paradigm for mapping and understanding the in-store funnel!

[BTW – if you want to see how DM1 Store Visualization actually works, check out these live videos of DM1 in Action]

Creating a Measurement Language for the Store

Driving real value with analytics is much harder than people assume. Doing it well requires solving two separate, equally thorny problems. The first – fairly obvious – problem is being able to use data to deepen your understanding of important business questions. That’s what analytics is all about. The second problem is being able to use that understanding to drive business change. Effecting change is a political/operational problem that’s often every bit as difficult as doing the actual analysis. Most people have a hard time understanding what the data means and are reluctant to change without that understanding. So giving analysts tools that help describe and contextualize the data in a way that’s easy to understand pays off twice: it helps the analyst use the data and it helps the analyst EXPLAIN the data to others more effectively. That’s why having a rich, powerful, UNDERSTANDABLE set of store metrics is critical to analytic success with in-store customer tracking.

Some kinds of data are very intuitive for most of us. We all understand basic demographic categories. We understand the difference between young and old. Between men and women. We live those data points on a daily basis. But behavioral data has always been more challenging. When I first started using web analytics data, the big challenge was how to make sense of a bunch of behaviors. What did it mean that someone viewed 7 pages or spent 4.5 minutes on a Website? Well, it turned out that it didn’t mean much at all. The interesting stuff in web analytics wasn’t how many pages a visitor had consumed – it was what those pages were about. It meant something to know that a visitor to a brokerage site clicked on a page about 529 accounts. It meant they had children. It meant they were interested in 529 accounts. And depending on what 529 information they chose to consume, it might indicate they were actively comparing plans or just doing early stage research. And the more content someone consumed, the more we knew about who they were and what they cared about.

Which was what we needed to optimize the experience. To personalize. To surface the right products. With the right messages. At the right time. Knowing more about the customer was the key to making analytics actionable and finding the right way to describe the behavior with data was the key to using analytics effectively.

So when it comes to in-store customer measurement, what kind of data is meaningful? What’s descriptive? What helps analysts understand? What helps drive action?

The answer, it turns out, isn’t all that different from what works in the digital realm. Just as the key to understanding a web visit turns out to be understanding the content a visitor selected and consumed, the key to understanding a store visit turns out to be understanding the store. You have to know what the shopper looked at. What was there when they stopped and lingered. What was along the corridor that they traversed but didn’t shop. You have to know the fitting room from the cash-wrap and an endcap from an aisle and you have to know what products were there. What’s more, you have to place the data in that context.

Here’s what the data from an in-store measurement collection system looks like in its raw form, frame by frame:

Time     X   Y
04:06.0  35  60
06:50.0   9  66
09:10.0  23  74
11:02.0  18  92
11:35.0  33  98
13:15.0  28  74
14:25.0   7  81
16:16.0  41  75
19:09.0  49  62
21:03.0  45  72
23:23.0  55  83
23:58.0  54  90
24:09.0  40  86
25:05.0  15  90
27:24.0   7  79
27:45.0  43  99
28:42.0  37  97
29:25.0  45  80
32:07.0  47  75
33:05.0  16  77
35:31.0  37  65
36:08.0  34  75
36:33.0   9  73
39:16.0  35  76
40:07.0  13  97

That’s a visit to a store. A little challenging to make sense of, right?

It’s our job to translate that into a journey with the necessary context to make the data useful.

That starts by mapping the data onto the store:

store journey analytics

By overlaying the measurement frames, we can distinguish the path the user took through the store:

StoreFrame1

With simple analysis of the frames, we can figure out where and when a customer shifted from navigating the store to actually spending time. And that first place the shopper actually spends time has special significance for understanding who they are.

In DM1, the first shopping point is marked as the DRAW. It’s where the shopper WENT FIRST in the store:

storeFrame2

In this case, Customer Service was the Draw – indicating that this shopping visit is a return or in-store pickup. But the visit didn’t end there.
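The navigating-versus-shopping distinction boils down to dwell time. Here’s a toy sketch of that analysis – the 90-second threshold, the area names and the sample visit are all invented, and this is not DM1’s actual algorithm:

```python
# Hypothetical sketch: separating "navigating" from "shopping" with a
# dwell-time threshold, then taking the first shopping stop as the Draw.

LINGER_SECONDS = 90  # assumed cutoff for "actually spending time"

def lingers(frames):
    """frames: ordered (seconds, area) samples for one visit.
    Returns the areas where the shopper stayed put long enough."""
    out = []
    start_t, cur = frames[0]
    for t, area in frames[1:]:
        if area != cur:
            if t - start_t >= LINGER_SECONDS:
                out.append(cur)
            start_t, cur = t, area
    if frames[-1][0] - start_t >= LINGER_SECONDS:  # close the final run
        out.append(cur)
    return out

visit = [(0, "Entrance"), (40, "Customer Service"), (250, "Jackets"),
         (260, "Jackets"), (420, "Checkout")]

spots = lingers(visit)
draw = spots[0] if spots else None  # first place they actually shopped
print("Draw:", draw)
```

In this sample visit the shopper breezed through the Entrance, so the first real stop – Customer Service – is the Draw, matching the example above.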

Following the journey, we can see what else the customer was exposed to and where else they actually spent time and shopped. In DM1, we capture each place the shopper spent time as a LINGER:

storeFrame3

Lingers tell us about opportunity and interest. These are the things the shopper cared about and might have purchased.

But not every linger is created equal. In some places, the shopper might spend significantly more time – indicating a higher level of engagement. In DM1, these locations are called out on the journey as CONSIDERS:

storeframe4

Having multiple levels of shopper engagement lets DM1 create a more detailed picture of the shopper and a better in-store funnel. Of course, one of the keys to understanding the in-store funnel is knowing when a shopper interacts with an Associate. That’s a huge sales driver (and a huge driver – positive or negative – of customer experience). In DM1, we track the places where a shopper talked with an Associate as INTERACTIONS. They’re a key part of the journey:

storeFrame5

Of course, you also want to know when/if a customer actually purchased. We track check-outs as CONVERSIONS – and have the ability to do that regardless of whether it’s a traditional cash-wrap or a distributed checkout environment:

storeFrame6

Since we have the whole journey, we can also track which areas a customer shopped prior to checkout and we’ve created two measures for that. One is the area shopped directly before checkout (which is called the CONVERSION DRIVER) and the other captures every area the customer lingered prior to checkout – called ATTRIBUTED CONVERSIONS.

StoreFrame8
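The two attribution measures are easy to sketch from an ordered journey. This is an illustrative toy with an invented journey, not DM1’s implementation:

```python
# Hypothetical sketch of the two checkout-attribution measures:
# the last area shopped before checkout (Conversion Driver) and every
# lingered area before checkout (Attributed Conversions).

def attribution(journey):
    """journey: ordered list of lingered areas for one converting visit."""
    shopped = [a for a in journey if a != "Checkout"]
    if "Checkout" not in journey or not shopped:
        return None, []
    driver = shopped[-1]               # CONVERSION DRIVER
    attributed = sorted(set(shopped))  # ATTRIBUTED CONVERSIONS
    return driver, attributed

driver, attributed = attribution(
    ["Customer Service", "Jackets", "Footwear", "Checkout"])
print("Driver:", driver)
print("Attributed:", attributed)
```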

To use measurement effectively, you have to be able to communicate what the numbers mean. For the in-store journey, there simply isn’t a standardized way of talking about what customers did. With DM1, we’ve not only captured that data, we’ve constructed a powerful, working language (much of it borrowed from the digital realm) that describes the entire in-store funnel.

From Visits (shopper entering store), to Lingers (spending time in an area), to Consideration (deeper engagement), to Investment (Fitting Rooms, etc.), to Interactions (Associate conversations) to Conversion (checkout) along with metrics to indicate the success of each stage along the way. We’ve even created the metric language for failure points. DM1 tracks where customers Lingered and then left the store without buying (Exits) and even visits where the shopper only lingered in one location before exiting (Bounces).

Having a rich set of metrics and a powerful language for describing the customer journey may seem like utter table-stakes to folks weaned on digital analytics. But it took years for digital analytics tools to offer a mature and standardized measurement language. In-store tracking hasn’t had anything remotely similar. Most existing solutions offer two basic metrics (Visits and Dwells). That’s not enough for good analytics and it’s not a rich enough vocabulary to even begin to describe the in-store journey.

DM1 goes a huge mile down the road to fixing that problem.

[BTW – if you want to see how DM1 Store Visualization actually works, check out these live videos of DM1 in Action]

Segmentation is the Key to Marketing Analytics

The equation in retail today is simple. Evolve or die. But if analytics is one of the core tools to drive successful  evolution, we have a problem. From an analytics perspective, we’re used to a certain view of the store. We know how many shoppers we get (door counting) and we know what we sold. We know how many Associates we had. We (may) know what they sold. This isn’t dog food. If you had to pick a very small set of metrics to work with to optimize the store, most of these would belong. But we’re missing a lot, too. We’re missing almost any analytic detail around the customer journey in the store. That’s a particularly acute lack (as I noted in my last post) in a world where we’re increasingly focused on delivering (and measuring) better store experiences. In a transaction-focused world, transactions are the key measures. In an experience world? Not so much. So journey measurement is a critical component of today’s store optimization. And there’s the problem. Because the in-store measurement systems we have available are tragically limited. DM1, our new platform, is designed to fix that problem.

People like to talk about analytics as if it just falls out of data. As if analysts can take any data set and any tool and somehow make a tasty concoction. It isn’t true. Analytics is hard work. A really great analyst can work wonders, but some data sets are too poor to use. Some tools lock away the data or munge it beyond recognition.  And remember, the most expensive part of analytics is the human component. Why arm those folks with tools that make their job slow and hard? Believe me, when it comes to getting value out of analytics, it’s hard enough with good tools and good data. You can kid yourself that it’s okay to get by with less. But at some point you’re just flushing your investment and your time away. In two previous posts, I called out a set of problems with the current generation of store customer measurement systems. Sure, every system has problems – no analytics tool is perfect. But some problems are much worse than others. And some problems cripple or severely limit our ability to use journey data to drive real improvement.

When it comes to store measurement tools, here are the killers: lack of segmentation, lack of store context, inappropriate analytics tools, inability to integrate Associate data and interactions, inability to integrate into the broader analytics ecosystem and an unwillingness to provide cleaned, event-level data that might let analysts get around these other issues.

Those are the problems we set out to solve when we built DM1.

Let’s start with Segmentation. Segmentation can sound like a fancy add-on. A nice to have. Important maybe, but not critical.

That isn’t right. Marketing analytics just is segmentation. There is no such thing as an average customer. And when it comes to customer journeys, trying to average them makes them meaningless. One customer walks in the door, turns around and leaves. Another lingers for twenty minutes, shopping intensively in two departments. Averaging the two? It means nothing.

Almost every analysis you’ll do, every question you’ll try to answer about store layout, store merchandising, promotion performance, or experience will require you to segment. To be able to look at just the customers who DID THIS. Just the customers who experienced THAT.

Think about it. When you build a new experience, and want to know how it changed behavior you need to segment. When you change a greeting script or adjust a presentation and want to know if it improved store performance you need segmentation. When you change Associate interaction strategies and want to see how it’s impacting customer behavior you need segmentation. When you add a store event and want to see how it impacted key sections, you need segmentation. When you want to know what other stuff shoppers interested in a category cared about, you need segmentation. When you want to know how successful journeys differed from unsuccessful ones, you need segmentation. When you want to know what happens with people who do store pickup or returns, you need segmentation.

In other words, if you want to use customer journey tracking tools for tracking customer journeys, you need segmentation.

If your tool doesn’t provide segmentation and it doesn’t give the analyst access to the data outside its interface, you’re stuck. It doesn’t matter how brilliant you are. How clever. Or how skilled. You can’t manufacture segmentation.

Why don’t most tools deliver segmentation?

If it’s so important, why isn’t it there? Supporting segmentation is actually kind of hard. Most reporting systems work by aggregating the data. They add it up by various dimensions so that it can be collapsed into easily accessible chunks delivered up into reports. But when you add segmentation into the mix, you have to chunk every metric by every possible combination of segments. It’s messy and it often expands the data so much that reports take forever to run. That’s not good either.

We engineered DM1 differently. In DM1, all the data is stored in memory. What does that mean? You know how on your PC, when you save something to disk or first load it from the hard drive, it takes a decent chunk of time? But once it’s loaded, everything goes along just fine? That’s because memory is much faster than disk. So once your PowerPoint or spreadsheet is loaded into memory, things run much faster. With DM1, your entire data set is stored in-memory. Every record. Every journey. And because it’s in-memory, we can pass over all your data for every query, really fast. But we didn’t stop there. When you run a query on DM1, that query is split up into lots of chunks (called threads), each of which processes its own little range of data – usually a day or two. Then they combine all the answers together and deliver them back to you.

That means not only does DM1 deliver reports almost instantaneously, it also means we can run even pretty complex queries without pre-aggregating anything and without having to worry about performance. Things like…segmentation.
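If you're curious what that fan-out/merge pattern looks like, here's a toy Python sketch. The record layout and the dwell-time query are invented for illustration – this is the general technique, not DM1's actual internals:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical in-memory event store, partitioned by day. Each event is a
# (visitor_id, area, seconds) tuple -- an invented layout for illustration.
events_by_day = {
    "2017-06-01": [("v1", "Jackets", 120), ("v2", "Shoes", 45)],
    "2017-06-02": [("v1", "Shoes", 30), ("v3", "Jackets", 200)],
}

def aggregate_chunk(day_events):
    """Worker: total dwell seconds per store area for one day's events."""
    totals = Counter()
    for _visitor, area, seconds in day_events:
        totals[area] += seconds
    return totals

def run_query(events_by_day):
    """Fan the query out across day-sized chunks, then merge partial answers."""
    with ThreadPoolExecutor() as pool:
        partials = pool.map(aggregate_chunk, events_by_day.values())
    result = Counter()
    for partial in partials:
        result.update(partial)
    return result

print(run_query(events_by_day))  # Counter({'Jackets': 320, 'Shoes': 75})
```

Because every worker scans raw events, nothing has to be pre-aggregated; the segmentation logic can simply be applied inside each chunk before counting.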

Segmentation and DM1

In DM1, you can segment on quite a few different things. You can segment on where in the store the shopper spent time. You can segment on how much time they spent. You can segment on their total time in the store. You can segment on when they shopped (both by day of week and time of day). You can segment on whether they purchased or not. And even whether they interacted with an Associate.

If, for example, you want to understand potential cross-sells, you can apply a segment that selects only visitors who spent a significant amount of time shopping in a section or department. Actually, this undersells the capability because it’s in no way limited to any specific type of store area. You can segment on any store area down to the level of accuracy achieved by the collection architecture.

What's more, DM1 keeps track of historical meta-data for every area of the store, meaning that even if you've changed, moved or re-sized an area of the store, DM1 still tracks and segments on it appropriately.
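A toy sketch of how versioned area meta-data can work in principle. The structure and field names here are invented for illustration, not DM1's actual storage:

```python
from datetime import date

# Hypothetical versioned area metadata: each logical area keeps a history of
# (valid_from, footprint) definitions, so a segment on "Jackets" resolves to
# the right physical zones for any date -- even after a move or resize.
area_history = {
    "Jackets": [
        (date(2017, 1, 1), {"zone_a"}),
        (date(2017, 5, 1), {"zone_a", "zone_b"}),  # section enlarged in May
    ],
}

def zones_for(area, on_date):
    """Return the footprint that was in effect for `area` on `on_date`."""
    versions = [v for v in area_history[area] if v[0] <= on_date]
    return max(versions)[1]  # latest definition at or before the date

print(zones_for("Jackets", date(2017, 3, 15)))  # {'zone_a'}
print(zones_for("Jackets", date(2017, 6, 1)))   # {'zone_a', 'zone_b'}
```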

So if you want to see what else shoppers who looked at, for example, Jackets also considered, you can simply apply the segmentation. It will work correctly no matter how many times the area was re-defined. It will work even in store roll-ups with fundamentally different store types. And with the segment applied, you can view any DM1 visualization, chart or table. So you can look at where else Jacket Shoppers passed through, where they lingered, where they engaged more deeply, what else they were likely to buy, where they exited from, where they went first, where they spent the most time, etc. etc. You can even answer questions such as whether shoppers in Jackets were more or less likely to interact with Sales Associates in that section or another.

Want to see if Jacket shoppers are different on weekdays and weekends? If transactors are different from browsers? If having an Associate interaction significantly increases browse time? Well, DM1 lets you stack segments. So you can choose any other filter type and apply it as well. I think the Day and Time-part segmentations are particularly cool (and unusual). They let you seamlessly focus on morning shoppers or late-afternoon shoppers, weekend shoppers, or even just shoppers who come in over lunchtime. Sure, with door-counting you know your overall store volume. But with day and time-part segmentation you know volume, interest, consideration, and attribution for every measured area of the store and every type of customer for every hour and day of the week.
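Conceptually, stacking segments is just composing filters. A toy Python sketch, with made-up visit fields rather than DM1's schema:

```python
from datetime import datetime

# Hypothetical visit summaries -- illustrative fields, not DM1's schema.
visits = [
    {"areas": {"Jackets", "Shoes"}, "start": datetime(2017, 6, 3, 11), "associate": True},
    {"areas": {"Shoes"},            "start": datetime(2017, 6, 5, 14), "associate": False},
    {"areas": {"Jackets"},          "start": datetime(2017, 6, 4, 19), "associate": False},
]

# Each segment is a predicate; stacking segments just AND-s the predicates.
jacket_shoppers = lambda v: "Jackets" in v["areas"]
weekend         = lambda v: v["start"].weekday() >= 5  # Sat=5, Sun=6
with_associate  = lambda v: v["associate"]

def apply_segments(visits, *segments):
    """Keep only the visits that satisfy every stacked segment."""
    return [v for v in visits if all(seg(v) for seg in segments)]

weekend_jacket = apply_segments(visits, jacket_shoppers, weekend)
print(len(weekend_jacket))  # 2 -- the June 3 (Sat) and June 4 (Sun) visits
```

Stack a third segment (`with_associate`) and you're down to exactly the weekend Jacket shoppers who had an Associate interaction.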

DM1’s segmentation capability makes it easy to see whether merchandise is grouped appropriately. How different types of visitor journeys play out. Where promotional opportunities exist. And how and where the flow of traffic contradicts the overall store layout or associate plan. For identified shoppers, it also means you can create extraordinarily rich behavioral profiles that capture in near real-time what a shopper cares about right now.

It comes down to this. Without segmentation, analytics solutions are just baby toys. Segmentation is what makes them real marketing tools.

The Roadmap

DM1 certainly delivers far more segmentation than any other product in this space. But it’s still quite a bit short of what I’d like to deliver. I mean it when I say that segmentation is the heart and soul of marketing analytics. A segmentation capability can never be too robust.

Not only do we plan to add even more basic segmentation options to DM1, we've also roadmapped a full segmentation builder (of the sort that the more recent generation of digital analytics tools includes). Our current segmentation interface is simple: implied "ors" within a category and implied "ands" across segmentation types. That's by far the most common type of segmentation analysts use. But it's not the only kind that's valuable. Being able to apply more advanced logic and groupings, customized thresholds, and time-based concepts (visited before/after) is valuable for certain types of analysis.
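For the curious, the implied-or/implied-and scheme fits in a few lines of Python. Categories and values here are invented for illustration:

```python
def build_filter(selections):
    """selections maps a segmentation category to the chosen values.
    Values within a category are OR-ed together; categories are AND-ed --
    the implied-or / implied-and scheme described above."""
    def match(visit):
        return all(visit.get(category) in values
                   for category, values in selections.items())
    return match

visits = [
    {"area": "Jackets", "daypart": "morning"},
    {"area": "Shoes",   "daypart": "evening"},
    {"area": "Jackets", "daypart": "evening"},
]

# (area is Jackets OR Shoes) AND (daypart is evening)
f = build_filter({"area": {"Jackets", "Shoes"}, "daypart": {"evening"}})
print([v["area"] for v in visits if f(v)])  # ['Shoes', 'Jackets']
```

A full segment builder replaces that fixed all/any nesting with arbitrary user-defined boolean trees, which is exactly why it needs a real UI.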

I’ve also roadmapped basic machine learning to create data-driven segmentations and a UI that provides a more persona-based approach to understanding visitor types and tracking them as cohorts.

The beauty of our underlying data structures is that none of this is architecturally a challenge. Creating a good UI for building segmentations is hard. But if you can count on high-performance processing of event-level detail in your queries (and by high-performance I mean sub-second – check out my demos if you don't believe me), you can support really robust segmentation without having to worry about the data engine or the basic performance of queries. That's a luxury I plan to take full advantage of in delivering a product that segments. And segments. And segments again.

Evolve or Die: Analytics and Retail

In my last three posts, I assessed the basic technologies (wifi, camera, etc.) for in-store customer measurement and took a good hard look at the state of the analytics platforms using that measurement. My conclusion? The technologies are challenging but, deployed properly, can work at scale for a reasonable cost. The analytics platforms, on the other hand, have huge gaping holes that seriously limit the ability of analysts to use that data. Our DM1 platform is designed to solve most (I hope all) of those problems. But it’s not worth convincing anyone that DM1 is a better solution unless people get why this whole class of solution is so important.

Over about the same span of time as those posts, I've seen multiple stories on the crisis in mall real-estate, the massive disruption driven in physical retail when eCommerce crosses sales thresholds as a percentage of total purchases, and the historic (and historically depressing) pace of store closings in 2017.

It’s bad out there. No…that doesn’t really capture things. For lots of folks, this is potentially an extinction level event. It’s a simple Darwinian equation:

Evolve or die.

And people get that. The pace of innovation and change in retail has never been as high. Is it high enough? Probably not. But retailers and mall operators are exploring a huge number of paths to find competitive advantage. At a high-level, those paths are obvious and easily understood.

Omni-Channel is Key: You can’t out-compete in pure digital with “he who must not be named”…so your stores have to be a competitive advantage not an anchor. How does that happen? Integration of the digital experience – from desktop to mobile – with the store. Delivering convenience, experience, and personalization in ways that can’t be done in the purely digital realm.

Experience is Everything: If people have to WANT to go to stores (in a line I've borrowed from Lee Peterson that I absolutely love), delivering an experience is the bottom-line necessity for success. What that experience should be is, obviously, much less clear and much more unique to each business. Is it in-store digital experiences like the one Oak Labs delivers – something that combines a highly-customized digital shopping experience integrated right into the store operation? Is it bringing more and better human elements to the table with personalized clienteling? Is it a fundamentally different mix of retail and experience providers sharing a common environment? It's all of these and more, of course.

The Store as a Complex Ecosystem: A lot of factors drive the in-store experience. The way the store is laid out. The merchandising. The product itself. Presentations. In-store promotions. Associate placement, density, training and role. The digital environment. Music. Weather. It’s complicated. So changing one factor is never going to be a solution.  Retail professionals have both informed and instinctive knowledge of many of these factors. They have years of anecdotal evidence and real data from one-off studies and point-of-sale. What they don’t have is any way to consistently and comprehensively measure the increasingly complex interactions in the ecosystem. And, of course, the more things change, the less we all know. But part of what’s involved in winning in retail is getting better at what makes the store a store. Better inventory management. Better presentation. Better associates and better clienteling strategies. Part of winning in a massively disrupted environment is just being really good at what you do.

The Store in an Integrated Environment: Physical synergies exist in a way that online synergies don’t. In the friction free world of the internet, there’s precious little reason to embed one web site inside another. But in the physical world, it can be a godsend to have a coffee bar inside the store while my daughters shop! Taking advantage of those synergies may mean blending different levels of retail (craft shows, farmers markets) with traditional retail, integrating experiences (climbing walls, VR movies) or taking advantage of otherwise unusable real-estate to create traffic draws (museums, shared return centers).

In one sense, all of these things are obvious. But none of them are a strategy. They’re just words that point in a general direction to real decisions that people have to make around changes that turn out to be really hard and complex. That’s where analytics comes in and that’s why customer journey measurement is critically important right now.

Because nobody knows A) the right ways to actually solve these problems, and B) how well the things they're trying are actually working.

Think about it. In the past, Point of Sale data was the ultimate “scoreboard” metric in retail and traffic was the equivalent for malls. It’s all that really mattered and it was enough to make most optimization decisions. Now, look at the strategies I just enumerated: omni-channel, delivering experience, optimizing the ecosystem and integrating broader environments…

Do Point-of-Sale and traffic measure any of that?

Not really. And certainly, they don’t measure it well enough to drive optimization and tuning.

So if you’re feverishly building new stores, designing new store experiences, buying into cutting edge digital integrations, or betting the farm on new uses for your real-estate, wouldn’t it be nice to have a way to tell if what you’re trying is actually working? And a way to make it work better since getting these innovative, complex things right the first time isn’t going to happen?

This is the bottom line: these days in retail, nobody needs to invest in customer measurement. After all, there’s a perfectly good alternative that just takes a little bit longer.

It’s called natural selection. And the answers it gives are depressingly final.

In-Store Customer Analytics: Broken Inside and Out

In my last post, I described four huge deficiencies in the current generation of in-store tracking solutions. The inability to track full customer journeys, do real segmentation, or properly contextualize data to the store makes life very hard for a retail analyst trying to do interesting work. And over-reliance on non-analytic heatmaps – a tool that looks nice but is analytically unrewarding – just makes everything worse.

Of course, you don’t need to use one of these solutions. You can build an analytics warehouse and use some combination of extraordinarily powerful general purpose tools like Tableau, Datameer, Watson, and R to solve your problems.

Or can you?

Here are three more problems endemic to the current generation of in-store tracking solutions that limit your ability to integrate them into a broader analytics program.

Too Much or Too Little Associate Data

In retail, the human factor is often a critical part of the customer journey. As such, it needs to be measured. In-store counting solutions have tended toward two bad extremes when it comes to Associate data. Really, really bad solutions have just tracked Associates as customers. That's a disaster. In the online world, we worked to screen out the IP addresses of employees from our actual web site counting even though they were a tiny fraction of the overall measurement total. In the store world, it's not a tiny fraction – especially given the flaws of zone-counting solutions. We've seen cases where a small number of Associates can look like hundreds of customers. So including Associate data in the store customer counts is pretty much a guarantee that your data will be garbage.

On the other hand, tracking Associates just so you can throw their data away isn't the right answer either. Those interactions are important – and they are important at the journey level. Solutions that throw this data away or aggregate it up to hourly or daily counts are missing the point. Your solution needs to be able to identify which visits had interactions, which didn't, and which were successful. If it can't do that, it's not going to solve any real-world problems.
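To make the contrast concrete, here's a toy Python sketch with invented fields (no real vendor's schema) showing the difference between polluting the counts, discarding the data, and keeping the journey-level signal:

```python
# Hypothetical tracks -- invented fields for illustration only.
tracks = [
    {"id": "t1", "role": "customer",  "zones": {"Jackets", "Checkout"}},
    {"id": "t2", "role": "associate", "zones": {"Jackets"}},
    {"id": "t3", "role": "customer",  "zones": {"Shoes"}},
]

# Zones where an Associate was present.
associate_zones = set().union(
    *(t["zones"] for t in tracks if t["role"] == "associate"))

# Keep Associates out of the customer counts entirely...
customers = [t for t in tracks if t["role"] == "customer"]
print(len(customers))  # 2, not 3

# ...but preserve the journey-level signal: flag each customer visit that
# shared a zone with an Associate (a crude stand-in for an interaction).
for t in customers:
    t["interacted"] = bool(t["zones"] & associate_zones)
print([(t["id"], t["interacted"]) for t in customers])
# [('t1', True), ('t3', False)]
```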

Which brings me to…

Lack of Bespoke Analytics

One of the obvious truths about analytics in the modern world is that no bespoke analytics solution is going to deliver everything you need. Even mature, enterprise solutions like Adobe Analytics don’t deliver all of the visualization and analytics you need. What bespoke analytics tools should deliver is analytics uniquely contextualized to the business problem. This business contextualization is hard to get out of general purpose tools; so it’s the real life-blood of industry and application targeted solutions. If a solution doesn’t deliver this, it’s ripe for replacement by general purpose analytic platforms. But by going exclusively to general purpose solutions, the organization will lose the shorter time to value that targeted analytics can provide.

Unfortunately, the vast majority of in-store customer tracking tools seem to deliver the sort of generic reports and charts that you might expect from an offshore outfit doing $10/hour Tableau reports. The whole point of bespoke solutions is to deliver analytics contextualized to the problem. If they are just doing a bad job of replicating general purpose OLAP tools you have to ask why you wouldn’t just pipe the data into an analytic warehouse.

Which brings me to my final point…

Lack of a True Event Level Data Feed

No matter how good your bespoke analytics solution is, it won't solve every problem. It isn't going to visualize data better than Tableau. It won't be as cognitive as Watson. Or as good a platform for integration as Datameer. And its analytics capabilities are not going to equal SAS or R. Part of being a good analytics solution in today's world is recognizing that custom-fit solutions need to integrate into a broader data-science world. For in-store customer journey tracking, this is especially important because the solution and the data collection mechanism are often bound together (much as they are in most digital analytics). So if your solution doesn't open up the data, you CAN'T use that data in other tools.

That should be a deal killer. Any tool that doesn't provide a true, event-level data feed (not aggregated report-level data, which is useless in most of those other solutions) to your analytics warehouse doesn't deserve to be on an enterprise short-list of customer journey tracking tools.

Open integration and enterprise data ownership should be table stakes in today’s world.
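What does "event level" mean in practice? Something like this toy Python sketch. The field names and export format are illustrative, not any vendor's actual feed:

```python
import csv
import io

# Hypothetical event-level feed rows: one row per event, per visitor -- the
# kind of raw export downstream tools can actually use.
feed = """timestamp,visitor_id,zone,event,dwell_seconds
2017-06-03T11:02:10,v1,Jackets,enter,0
2017-06-03T11:04:55,v1,Jackets,exit,165
2017-06-03T11:05:20,v1,Checkout,enter,0
"""

rows = list(csv.DictReader(io.StringIO(feed)))

# Because the feed is event-level, any downstream tool can rebuild the
# journey -- something an aggregated report row can never support.
journey = [(r["zone"], r["event"]) for r in rows if r["visitor_id"] == "v1"]
print(journey)  # [('Jackets', 'enter'), ('Jackets', 'exit'), ('Checkout', 'enter')]
```

Hand an analytics warehouse aggregated hourly counts instead and that reconstruction is impossible, which is exactly the point above.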

Summing it Up

There’s a lot not to like about the current generation of in-store customer journey solutions. For the most part, they haven’t delivered the necessary capabilities to solve real-world problems in retail. They lack adequate journey tracking, real segmentation, proper store contextualization, bespoke analytics, and open data feeds. These are the tools essential to solving real-world problems. Not surprisingly, the widespread perception among those who’ve tried these solutions is that they simply don’t add much value.

For us at Digital Mortar, the challenge isn’t being better than these solutions. That’s not how we’re measuring ourselves, because being better isn’t enough. We have to be good enough to drive real-world improvement.

That’s much harder.

In my next post(s), I’ll show how we’ve engineered our new platform, DM1, to include these capabilities and how that, in turn, can help drive real-world improvement.