
Machine Learning and Optimizing the Store

My previous post covered the first half of my presentation on Machine Learning (ML) and store analytics at the Toronto Symposium. Here, I’m going to work through the case study on using ML to derive an optimal store path. For that analysis, we used our DM1 platform to source, clean, map and aggregate the data and then worked with a data science partner (DXi) on the actual analysis.

Why this problem?

Within DM1 we feel pretty good about the way we’ve built out visualizations of the store data that are easy to use and surprisingly powerful. The Full Path View, Funnel View and Store Layout View all provide really good ways to explore shopper paths in the store.

But for an analyst, exploring data and figuring out a model are utterly different tasks. A typical store presents a nearly infinite number of possible paths – even when the paths are aggregated up to section level. So there’s no way to just explore the paths and find optimal ones.

Even at the most basic level of examining individual shopper paths, deciding what’s good and bad is really hard. Here are two shopper paths in a store:

Which is better? Does either have issues? It’s pretty hard to know.


Why Machine Learning?

Optimal store pathing meets the basic requirements for using supervised ML – we have a lot of data and we have a success criterion (checkout). But ML isn’t worth deploying on every problem that has a lot of data and a success criterion. I think about it this way – if I can get what I want by writing simple algorithmic code, then I don’t need ML. In other words, if I can write (for example) a sort and then some simple If-Then rules that will identify the best path or find problem path points, then that’s what we’ll do. If, for example, I just wanted to identify sections that didn’t convert well, it would be trivial to do that. I have a conversion efficiency metric, I sort by it (ascending) and then I take the worst performers. Or maybe I have a conversion threshold and simply pick any Section that performs worse. Maybe I even calculate a standard deviation and select any Section that is worse than 1 standard deviation below the average Section conversion efficiency. All easy – the kind of thing you can do in a few lines of code, as in the sketch below.
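Here’s a minimal sketch of all three of those selection rules in Python. The section names and conversion-efficiency numbers are invented for illustration:

```python
# Flag poorly converting sections with plain algorithmic code -- a sort,
# a fixed threshold, and a standard-deviation cut. No ML required.
from statistics import mean, stdev

sections = {"Casual Shoes": 0.22, "Team Gear": 0.14, "Clearance": 0.09,
            "Accessories": 0.11, "Outerwear": 0.18}  # conversion efficiency

# 1. Sort ascending and take the worst performers
worst_three = sorted(sections, key=sections.get)[:3]

# 2. Or pick anything under a fixed conversion threshold
below_threshold = [s for s, eff in sections.items() if eff < 0.12]

# 3. Or pick anything more than one standard deviation below the mean
mu, sigma = mean(sections.values()), stdev(sections.values())
laggards = [s for s, eff in sections.items() if eff < mu - sigma]

print(worst_three, below_threshold, laggards)
```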

But none of those things are really very useful when it comes to finding poor path performance in a robust fashion.

So we tried ML.


The Analysis Basics

The analysis was focused on a mid-sized apparel store with around 25 sections. We had more than 25,000 shopper visits. Which may not seem like very much if you’re used to digital analytics, but is a pretty good behavior base for a store. In addition to the basic shopper journey, we also had Associate interaction points (and time of interaction), and whether or not the shopper converted. The goal was to find potential store layout problems and understand which parts of the store contributed to (or subtracted from) overall conversion efficiency.

Preparing the Data

The first step in any analysis (once you know what you want) is usually data preparation.

Our data starts off as a stream of location events. Those location events have X, Y, Z coordinates that are offset from a zero point in the store. In the DM1 platform, we take that data and map it against a digital planogram capability that keeps a full, historical record of the store. That tells us what shoppers actually looked at and where they spent time. This is the single most critical step in turning the raw data into something that’s analytically useful.
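To make that mapping step concrete, here’s a hedged sketch of the general idea: snap each location event to a grid cell and look up what the planogram says is there. The cell size and schema are illustrative, not DM1’s actual internals:

```python
# Map raw (x, y) location events to store sections via a grid lookup.
# Assumes coordinates are already offset from the store's zero point.
CELL_FT = 5  # grid resolution in feet (assumed)

planogram = {  # (col, row) -> what occupies that grid cell today
    (3, 7): "Casual Shoes",
    (3, 8): "Casual Shoes",
    (9, 2): "Team Gear",
}

def locate(x_ft: float, y_ft: float) -> str | None:
    """Snap a location event to its grid cell and look up what's there."""
    cell = (int(x_ft // CELL_FT), int(y_ft // CELL_FT))
    return planogram.get(cell)

print(locate(17.2, 38.9))  # -> "Casual Shoes"
```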

Since we also track Associates, we can track interaction points by overlaying the Associate data stream on top of the shopper stream. This isn’t perfect – it’s easy to miss short interactions or be confused by a crowded store – but particularly when it’s app-to-app tracking it works pretty well. Associate interaction points are hugely important in the store (as the subsequent analysis will prove).
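Conceptually, the overlay works something like this: pair the two streams in time and flag stretches where an Associate stays close to a shopper. The thresholds and record layout below are assumptions for illustration, not our production logic:

```python
# Detect Associate/shopper interactions by overlaying two location streams:
# an interaction is a sustained stretch of physical proximity.
from math import hypot

NEAR_FT, MIN_SECONDS = 6.0, 30  # assumed proximity and duration thresholds

def interactions(shopper_pings, associate_pings):
    """Each ping is (timestamp_s, x, y). Streams are assumed sorted and
    aligned to the same clock, so we pair pings sharing a timestamp."""
    assoc_by_t = {t: (x, y) for t, x, y in associate_pings}
    near_run, found = [], []
    for t, x, y in shopper_pings:
        pos = assoc_by_t.get(t)
        if pos and hypot(x - pos[0], y - pos[1]) <= NEAR_FT:
            near_run.append(t)  # still close together; extend the run
            continue
        if near_run and near_run[-1] - near_run[0] >= MIN_SECONDS:
            found.append((near_run[0], near_run[-1]))  # (start, end) window
        near_run = []
    if near_run and near_run[-1] - near_run[0] >= MIN_SECONDS:
        found.append((near_run[0], near_run[-1]))
    return found
```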

Step 3 is knowing whether and when a shopper purchased. Most of the standard machine learning algorithms require having a way to determine whether a behavior pattern was successful or not – that’s what they are optimizing to. We’re using purchase as our success metric.

The underlying event data gets aggregated into a single row per shopper visit. That row contains a visit identifier, a start and stop time, an interaction count, a first interaction time, a last interaction time, the first section visited, the time spent in each section and, of course, our success metric – a purchase flag.

That’s it.
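For the curious, here’s a toy version of that aggregation step using pandas. The column names mirror the fields listed above, but the data and details are invented (ping counts stand in for time spent per section):

```python
# Collapse the event stream into one feature row per shopper visit.
import pandas as pd

events = pd.DataFrame({  # toy event stream
    "visit_id": [1, 1, 1, 2, 2],
    "ts": pd.to_datetime(["2017-10-01 10:00", "2017-10-01 10:05",
                          "2017-10-01 10:20", "2017-10-01 11:00",
                          "2017-10-01 11:02"]),
    "section": ["Team Gear", "Casual Shoes", "CashWrap",
                "Clearance", "Clearance"],
    "interaction": [0, 1, 0, 0, 0],
    "purchase": [0, 0, 1, 0, 0],
})

visits = events.groupby("visit_id").agg(
    start=("ts", "min"),
    stop=("ts", "max"),
    interaction_count=("interaction", "sum"),
    first_section=("section", "first"),
    purchase_flag=("purchase", "max"),
)
# One column per section: ping counts as a crude proxy for time spent there
per_section = (events.groupby(["visit_id", "section"])["ts"]
               .count().unstack(fill_value=0))
visits = visits.join(per_section)
print(visits)
```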

The actual analytic heavy lifting was done by DXi on their machine learning platform. They use an ensemble approach – throwing the kitchen sink at the problem by using 25+ different algorithms to identify potential winners/losers (if you’d like more info or an introduction to them, drop me a line and I’ll connect you).
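To give a flavor of the ensemble idea – emphatically not DXi’s actual platform, just a toy with three scikit-learn models standing in for their 25+ – you fit a pile of different algorithms and count how many flag each section as important for conversion:

```python
# Ensemble-style feature importance: several different algorithms "vote"
# on which store sections matter for conversion. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.random((1000, 5))  # per-visit dwell times by section (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.3, 1000) > 1.0).astype(int)
sections = ["Casual Shoes", "Team Gear", "Clearance", "Denim", "Accessories"]

models = [RandomForestClassifier(random_state=0),
          GradientBoostingClassifier(random_state=0),
          LogisticRegression(max_iter=1000)]
votes = np.zeros(len(sections))
for m in models:
    m.fit(X, y)
    imp = np.abs(m.coef_[0]) if hasattr(m, "coef_") else m.feature_importances_
    votes += imp > imp.mean()  # this model "picked" the feature as important

for name, v in sorted(zip(sections, votes), key=lambda t: -t[1]):
    print(f"{name}: picked by {int(v)} of {len(models)} models")
```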


Findings

Here’s some of the interesting stuff that surfaced, plucked from the case study I gave at the Symposium:

One of the poorest performing sections – not picked as important by a single DXi ML algorithm – sits right smack dab in the middle of the store. That central position really surprised us. Yes, as you’ll see in a moment, the store has a successful right-rail pattern – but this was a fairly well-trafficked spot with good sightlines and easy flow into high-value areas of the store.

Didn’t work well though. And that’s definitely worth thinking about from a store layout perspective.

One common browsing behavior for shoppers is a race-track pattern – navigating around the perimeter of the store. There’s a good example of that on the right-side image I showed earlier:

The main navigation path through the store is the red rectangle (red because this shopper spent considerable time there) – and you can see that while the shopper frequently deviated from that main path that their overall journey was a circuit around the store.

The ML algorithms don’t know anything about that pattern – but they did pick out the relevant sections in the analyzed store along that starting path as really important for conversion.

We took that to mean that the store is working well for that race-track shopper type. An important learning.

For this particular store, casual shoes was picked as important by every ML algorithm – making it the most important section of the store. It also had the largest optimal time value – and clearly rewarded more time with higher conversion rates. Shoes, of course, is going to be this way. It’s not a grab-and-go item. So there’s an element of the obvious here – something you should expect when you unleash ML on a dataset (and hey – most analytics projects will, if they work at all, vacillate between the interesting and the obvious). But even compared to other types of shoes – this section performed better and rewarded more time spent – so there is an apples-to-apples part of this comparison as well.

The next finding was an interesting one and illustrates a bit of the balance you need to think about between the analyst and the algorithm. The display in question was located fairly close to cash-wrap on a common path to checkout. It didn’t perform horribly in the ML – some of the DXi algorithms did pick it as important for conversion. On the other hand, it was one of the few sections with a negative weighting to time spent – so more time spent means less likely conversion. We interpreted that combination as indicating that the section’s success was driven by geography not efficiency. It’s kind of like comparing Saudi Arabia vs. U.S. Shale drillers. Based purely on the numbers, Saudi Arabia looks super efficient and successful with the lowest cost per barrel of oil extracted in the world. But when you factor in the geographic challenges, the picture changes completely. SA has the easiest path to oil recovery in the world. Shale producers face huge and complex technical challenges and still manage to be price competitive. Geography matters and that’s just a core fact of in-store analytics.

Our take on the numbers when we sifted through the DXi findings was that this section was actually underperforming. It might take a real A/B test to prove that, but regardless I think it’s a good example of how an analyst has to do more than run an algorithm. It’s easy to fool even very sophisticated algorithms with strong correlations and so much of our post-analysis ANALYSIS was about understanding how the store geography and the algorithm results play together.

In addition to navigation findings like these, the analysis also included the impact of Associates on conversion. In general, the answer we got was the more interactions the merrier (at the cash register). Not every store may yield the same finding (and it’s also worth thinking about whether a single conversion optimization metric is appropriate here – in my Why Analytics Fails talk I argue for the value in picking potentially countervailing KPIs like conversion and shopper satisfaction as dual optimization points).

Even after multiple interactions, additional interactions had a positive impact on sales.

This should be obvious but I’ll hearken back to our early digital analytics days to make a point. We sometimes found that viewing more pages on a Website was a driver of conversion success. But that didn’t mean chopping pages in half (as one client did) so that the user had to consume more pages to read the same content was a good strategy.

Just because multiple Associate interactions in a store with a normal interaction strategy created lift, it doesn’t mean that, for example, having your Associates tackle customers (INTERACTIOOOON!!!) as they navigate the floor will boost conversion.

But in this case, too much interaction was a legitimate concern. And the data indicates that – at least as measured by conversion rates – the concern did not manifest itself in shopper turn-off.

If you’re interested in getting the whole deck – just drop me a note. It’s a nice intro into the kind of shopper journey tracking you can do with our DM1 platform and some of the ways that machine learning can be used to drive better practice. And, as I mentioned, if you’d like to check out the DXi stuff – and it’s interesting from a pure digital perspective too – drop me a line and I’ll introduce you.

Machine Learning and Store Analytics

Not too long ago I spoke in Toronto at a Symposium focused on Machine Learning to describe what we’ve done and are trying to do with Machine Learning (ML) in our DM1 platform and with store analytics in general. Machine Learning is, in some respects, a fraught topic these days. When something is riding high on the hype cycle, the tendency is to either believe it’s the answer to every problem or to dismiss the whole thing as an illusion. The first answer is never right. The second sometimes is. But ML isn’t an illusion – it’s a real capability with a fair number of appropriate applications. I want to cover – from our hands-on, practical perspective – where we’ve used ML, why we used it, and show a case study of some of the results.


Just what is Machine Learning?

In its most parochial form, ML is really nothing more than a set of (fairly mature) statistical techniques dressed up in new clothes.

Here’s a wonderful extract from the class notes of a Stanford University expert on ML: (http://statweb.stanford.edu/~tibs/stat315a/glossary.pdf)

Machine Learning vs Statistics

It’s pretty clear why we should all be talking ML not statistics! And seriously, wasn’t data science enough of a salary upgrade for statisticians without throwing ML into the hopper?

Unlike with big data, I have no desire in this case to draw any profound definitional difference between ML and statistics. In my mind, ML is the domain of neural networks, deep learning and Support Vector Machines (SVMs). Statistics is the stuff we all know and love like regression and factor analysis and p-values. That’s a largely ad hoc distinction (and it’s particularly thin on the unsupervised learning front), but I think it mostly captures what people are thinking when they talk about these two disciplines.


What Problems Have We Tried to Solve with ML

At a high-level, we’ve tackled three types of problems with ML (as I’ve casually defined it): improving data quality, shopper type classification, and optimal store path analysis.

Data quality is by far the least sexy of these applications, but it’s also the area where we’ve done the most work and where the everyday application of our platform takes actual advantage of some ML work.

When we set up a client instance on DM1, there are a number of highly specific configurations that control how data gets processed. These configurations help guide the platform in key tasks like distinguishing Associate electronic devices from shopper devices. Why is this so important? Well, if you confuse Associates with shoppers, you’ll grossly over-count shoppers in the store. Equally bad, you’ll miss out on a real treasure trove of Associate data including when Associate/Shopper interactions occur, the ratio of Shoppers to Associates (STARs), and the length and outcome of interactions. That’s all very powerful.

If you identify store devices, it’s easy enough to signature them in software. But we wanted a system that would do the same work without having to formally identify store devices. Not only does this make it a lot easier to set up a store, it fixes a ton of compliance issues. You may tell Associates not to carry their own devices on the floor, but if you think that rule is universally followed, you’re kidding yourself. So even if you BLE badge employees, you’re still likely picking up their personal phones as shopper devices. By adding behavioral identification of Associates, we make the data better and more accurate while minimizing (in most cases removing) operational impact.

We use a combination of rule-based logic and ML to classify Associate behavior on ALL incoming devices. It turns out that Associates behave quite differently in stores than shoppers. They spend more time. Go places shoppers can’t. Show up more often. Enter at different times. Exit at different times. They’re different. Some of those differences are easily captured in simple If-Then programming logic – but often the patterns are fairly complex. They’re different, but not so easily categorized. That’s where the ML kicks in.
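Schematically, that two-stage approach looks something like the sketch below. The features, thresholds and training data are invented stand-ins for the behavioral signatures we actually use:

```python
# Two-stage Associate detection: cheap If-Then rules catch the obvious
# cases; a classifier trained on labeled devices handles the subtle ones.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy training rows: [days_seen_30, avg_visit_min, entry_hour_std, pct_near_cashwrap]
X = np.array([[20, 300, 0.5, 0.30], [22, 280, 0.7, 0.25],   # associates
              [1, 25, 3.0, 0.05], [2, 40, 2.5, 0.08]])      # shoppers
y = np.array([1, 1, 0, 0])
model = GradientBoostingClassifier().fit(X, y)

def classify_device(feats):
    """feats: per-device behavioral features over an observation window."""
    # Rule layer: the easy If-Then cases
    if feats["minutes_in_staff_only_areas"] > 5:
        return "associate"
    if feats["days_seen_30"] > 15 and feats["avg_visit_min"] > 240:
        return "associate"
    # ML layer: the complex patterns the rules miss
    x = [[feats["days_seen_30"], feats["avg_visit_min"],
          feats["entry_hour_std"], feats["pct_near_cashwrap"]]]
    return "associate" if model.predict(x)[0] == 1 else "shopper"
```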

We also work in a lot of electronically dense environments. So we not only need to identify Associates, we need to be able to pick out static devices (like display computers, endless aisle tablets, etc.). That sounds easy, and mostly it is. But it’s not quite as trivial as it sounds; given the vagaries of positioning tech, a static device is never quite static. We don’t get the same location every time – so we have to be able to distinguish between real movement and the type of small, Brownian motion we get from a static device.
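One simple way to picture that test: measure the spread of a device’s pings around its average position, and call the device static if virtually every ping stays inside the jitter radius you expect from your positioning tech. The radius below is an assumption you’d tune to your collection system:

```python
# Distinguish a truly static device from a moving one, allowing for the
# small "Brownian motion" that positioning noise adds to every location.
from statistics import mean

JITTER_FT = 4.0  # assumed positioning noise radius

def is_static(pings):
    """pings: list of (x, y) positions for one device over a window."""
    cx, cy = mean(p[0] for p in pings), mean(p[1] for p in pings)
    inside = sum(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 <= JITTER_FT
                 for x, y in pings)
    return inside / len(pings) > 0.95  # static if ~all pings stay in-radius

print(is_static([(10.1, 20.3), (10.4, 19.8), (9.7, 20.6)]))  # -> True
```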

Fixing data quality is never all that exciting, but in the world of shopper journey measurement it’s essential. Without real work to improve the data – work that ML happens to be appropriate for – the data isn’t good enough.

The second use we’ve found for machine learning is in shopper classification. We’re building a generalized shopper segmentation capability into the next release of DM1. The idea is pretty straightforward. For years, I’ve championed the notion of 2-tiered segmentation in digital analytics. That’s just a fancy name for adding a visit-type segmentation to an existing customer segmentation. And the exact same concept applies to stores.

As consultants, we typically built highly customized segmentation schemes. Since Digital Mortar is a platform company, that’s not a viable approach for us. Instead, what we’ve done is taken a set of fairly common in-store behavioral patterns and generalized their behavioral signatures. These patterns include things like “Clearance Shoppers”, “Right-Rail Shoppers”, “Single Product Focused Shoppers”, “Product Returners”, and “Multi-Product Browsers”. By mapping store elements to key behavior points, any store can then take advantage of this pre-existing ML-driven segmentation.
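Here’s a deliberately oversimplified sketch of what assigning a visit to one of those generalized segments might look like. The real DM1 segmentation is ML-driven; these hard-coded rules and feature names exist only to show the shape of the idea:

```python
# Assign a visit to a generalized behavioral segment. The features are
# assumed to be derived from the journey after mapping store elements
# (clearance racks, right rail, service desk) to key behavior points.
def assign_segment(visit: dict) -> str:
    if visit["pct_time_in_clearance"] > 0.5:
        return "Clearance Shopper"
    if visit["pct_path_on_right_rail"] > 0.7:
        return "Right-Rail Shopper"
    if visit["visited_service_desk_first"]:
        return "Product Returner"
    if visit["sections_lingered"] == 1:
        return "Single Product Focused Shopper"
    if visit["sections_lingered"] >= 4:
        return "Multi-Product Browser"
    return "Unclassified"
```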

Digital Mortar's DM1 Shopper Segmentation

It’s pretty cool stuff and I’m excited to get it into the DM1 platform.

The last problem we’ve tackled with ML is finding optimal store paths. This one’s more complex – more complex than we’ve been comfortable taking on directly. We have a lot of experience in segmentation techniques – from cluster analysis to random forests to SVMs. We’re pretty comfortable with that problem set. But for optimal path analysis, we’ve been working with DXi. They’re an ML company with a digital heritage and a lot of experience working on event-level digital data. We’ve always said that a big part of what drew us to store journey measurement is how similar the data is to digital journey data and this was a chance to put that idea to the test. We’ve given them some of our data and had them work on some optimal path problems – essentially figuring out whether the store layout is as good as possible.

Why use a partner for this? I’ve written before about how I think Digital Mortar and the DM1 platform fit in a broader analytics technology stack for retail. DM1 provides a comprehensive measurement system for shopper tracking and highly bespoke reporting appropriate to store analytics. It’s not meant to be a general purpose analytics platform and it’s never going to have the capabilities of tools like Tableau or R or Watson. Those are super-powerful general-purpose analytics tools that cover a wide range of visualization, data exploration and analytic needs. Instead of trying to duplicate those solutions we’ve made it really easy (and free) to export the event level data you need to drive those tools from our platform data.

I don’t see DM1 becoming an ML platform. As analysts, we’ll continue to find uses for ML where we think it’s appropriate and embed those uses in the application. But trying to replicate dedicated ML tools in DM1 just doesn’t make a lot of sense to me.

In my next post, I’ll take a deeper dive into that DXi work, give a high-level view of the analytics process, and show some of the more interesting results.

An Easy Introduction to In-Store Measurement and Retail Analytics with DM1

My last post made the case that investing in store measurement and location analytics is a good move from a career perspective. The reward? Becoming a leader in a discipline that’s poised to grow dramatically. The risk? Ending up with a skill set that isn’t much in demand. For most people, though, risk/reward is only part of the equation. There are people who will expend the years and the effort to become a lawyer even without liking the law – simply on the basis of its economic return. I’m not a fan of that kind of thinking. To me, it undervalues human time and overvalues the impact of incremental prosperity. So my last and most important argument was simple: in-store measurement and location analytics is fun and interesting.

But there aren’t a ton of ways you can figure out if in-store measurement is your cup of tea, are there?

So I put together another video using our DM1 platform that’s designed to give folks a quick introduction to basic in-store measurement.

It’s a straightforward, short (3 minute) introduction to basic concepts in store-tracking with DM1 – using just the Store Layout tool.

The video walks through three core tasks for in-store measurement: understanding what customers do in-store, evaluating how well the store itself performed, and drilling into at least one aspect of performance drivers with a look at Associate interactions.

The first section walks through a series of basic metrics in store location analytics. Starting with where shoppers went, it shows increasingly sophisticated views that cover what drew shoppers into the store, how much time shoppers spend in different areas, and which parts of the store shoppers engaged with most often:

retail analytics: measuring store efficiency and conversion with DM1

The next section focuses on measures of store efficiency and conversion. It shows how you can track basic conversion metrics, analyze how proximity to the cash-wrap drives impulse conversion, and analyze unsuccessful visits in terms of exit and bounce points.

DM1 Layout Overview Video

Going from what to why is probably the hardest task in behavioral analytics. And in the 3rd section, I do a quick dive into a set of Associate metrics to show how they can help that journey along. Understanding where associates ARE relative to shoppers (this is where the geo-spatial element is critical), when and where Associates create lift, and whether your deployment of Associates is optimized for creating lift can be a powerful part of explaining shopper success.

retail analytics with dm1 - analyzing associate performance, STARs and lift with DM1

The whole video is super-quick (just 3 minutes in total) and unlike most of what I’ve done in the past, it doesn’t require audio. There’s a brief audio introduction (about 15 seconds) but for the rest, the screen annotations should give you a pretty good sense of what’s going on if you prefer to view videos in quiet mode.

I know you’re not going to learn in-store measurement in 3 minutes. And this is just a tiny fraction of the analytic capability in a product like DM1. It’s more of an amuse bouche – a little taste – to see if you find something enjoyable and interesting.

I’m going to be working through a series of videos intended to serve that purpose (and also provide instructional content for new DM1 users). As part of that, I’m working on a broader overview right now that will show-off more of the tools available. Then I’m going to work on building a library of instructional vids for each part of DM1 – from configuring a store to creating and using metadata (like store events) to a deep-dive into funnel-analytics.

I’d love to hear what you think about this initial effort!

Check it out:

Analyzing the In-Store Journey as a Funnel with DM1

Visualizing the customer journey in the context of the store is the foundation for analyzing in-store data. The metrics and the store context provide a framework for translating customer measurement data into something that is immediately understandable as a shopper’s journey. But visualizing information is just the first step in making it actionable. Understanding the data is, of course, essential. But you can understand data quite well and still have no idea what to do with it. In fact, that’s a problem we see all the time with analytics. And while it’s a problem that no technology solution can solve entirely (since there are always business and organizational issues to be tackled),  there are analytic and reporting techniques that can really help. We’ve built a number of them into DM1, starting with in-store funnel analytics.

The idea behind a conversion funnel is simple. The customer journey is chopped up into discrete steps based on increasing likelihood to purchase. If we analyze the journey by those discrete steps, we can work to optimize the flow from one step to the next. Improve the flow between any funnel step and the next, and the chance is excellent that you’ll improve the overall funnel conversion as well. Funnels give you a specific place to start. They let you figure out which parts of the overall customer journey are already working well and which aren’t. They let you focus on specific areas with the confidence that if you can improve performance you’ll make a significant difference. And they make it possible to easily measure success. All you have to measure is the number of people moving from one step to the next.
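The underlying arithmetic is simple enough to show in a few lines. Here’s a hedged sketch of funnel counting over per-visit journey rows – each step is just a predicate, and each step counts only visitors who survived every earlier step (the field names are invented):

```python
# Count survivors at each funnel step over per-visit journey rows.
def funnel(visits, steps):
    """steps: ordered list of (label, predicate) pairs."""
    survivors, report = list(visits), []
    for label, passed in steps:
        survivors = [v for v in survivors if passed(v)]
        report.append((label, len(survivors)))
    return report

visits = [
    {"entered": True, "team_gear": True,  "lingered": True,  "purchased": False},
    {"entered": True, "team_gear": True,  "lingered": False, "purchased": False},
    {"entered": True, "team_gear": False, "lingered": False, "purchased": False},
]
steps = [("Store Entry",       lambda v: v["entered"]),
         ("Visited Team Gear", lambda v: v["team_gear"]),
         ("Shopped (Linger)",  lambda v: v["lingered"]),
         ("Purchase",          lambda v: v["purchased"])]
for label, n in funnel(visits, steps):
    print(f"{label}: {n}")
```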

Funnels are THE paradigm for analytics and optimization in eCommerce. In fact, it was largely on their ability to help merchants understand and improve eCommerce funnels that digital analytics solutions first gained traction. And to this day, eCommerce testing and analytics practitioners almost always work by breaking down the customer journey into funnel steps and then working to optimize each step. While the measurement of funnels is itself interesting, I think the real value in funnel analysis is the process it supports. That ability to target specific aspects of the journey, figure out which ones are the most broken, and then test possible improvements is at the heart of so much of the continuous improvement that makes digital players successful.

One of our big goals with Digital Mortar is to bring the in-store funnel paradigm and the discipline of continuous improvement to the store. DM1 delivers on the technology and analytic part of that program.

With DM1, you can start a funnel at any place in the store and at any stage in the customer journey. But the most natural place to start is with a shopper entering the store. As you can see, DM1 lets you choose any area of the store you’ve defined and lets you pick from a range of engagement metrics.

Retail Analytics - In-Store Shopper Funnel DM1


Nearly 84 thousand shoppers entered the store in October. Since that’s where the measurement starts, this first step of the funnel doesn’t have any fallout. Everyone I measured, by definition, entered the store. It’s worth noting – and I get asked this a lot – that you CAN track pass-by traffic if you set up the measurement system appropriately. Doing so allows you to extend the funnel outside the store!

I could build a store-wide funnel, looking at conversion across the whole store. But it’s usually more interesting and actionable to focus a bit. So my funnel is going to focus on a specific section of the store – Team Gear.

Retail Analytics - In-Store Shopper Funnel Linger and Consideration

Adding “Visits to Team Gear” to the funnel, I can see that around 15 thousand shoppers – about 18% of store visitors – visited Team Gear. It took the average visitor about 2 minutes after entry to reach Team Gear. Which makes sense, because this area is pretty front-of-store.

But one of the real complexities of in-store measurement is that, since shoppers are navigating a physical environment, they often pass through areas without being interested in them. That doesn’t happen much in digital.

I want to know how many people SHOPPED in Team Gear out of the folks who had the opportunity. And I can see that by selecting Lingers as my metric in the next funnel step. These last two steps illustrate a powerful metric in store measurement that’s simply never been available before. Stores have been able to measure conversion (checkouts/door entries) at the macro level, but at the area level this gets reduced to sales per square foot.

That isn’t reflective of the real opportunity a square foot provides. By measuring where shoppers actually WENT and where they SHOPPED, we have a real KPI of how well a section is performing given its opportunity.
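The math behind that opportunity KPI is trivial once the journey data exists. Here’s a tiny sketch with numbers that echo the Team Gear example in this post (the linger count is invented to match the roughly one-in-seven shop rate):

```python
# Opportunity-based section KPIs: draw rate and shop rate, not sales/sq-ft.
store_entries = 84_000
section_visits = 15_000   # shoppers who passed through Team Gear
section_lingers = 2_100   # shoppers who actually stopped and shopped

draw_rate = section_visits / store_entries    # how well the section pulls traffic
shop_rate = section_lingers / section_visits  # conversion of its real opportunity
print(f"Draw: {draw_rate:.0%}, Shop rate: {shop_rate:.0%}")  # Draw: 18%, Shop rate: 14%
```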

Only about 1 in 7 shoppers who passed through Team Gear actually Shopped there. That’s a problem I’d probably want to tackle.

From here, I can add Fitting Room and CashWrap to the funnel. At every step along the way I can see how many shoppers I’m losing from the total opportunity. I can also see how much time is passing and how many stops the shopper made in-between.

In the end, I have a customer funnel for Team Gear that runs from Store Entry to Cash-Wrap that looks like this:

Retail Analytics - In-Store Shopper Funnel and Funnel Analytics

Any start place. Any level of engagement. Any steps in between. DM1 builds the funnels you need to support analytics and testing.

Pretty cool.

There’s no doubt in my mind that the picture of the shopper journey that DM1 provides drives better understanding. But as I said earlier, analytics isn’t improvement. It’s a way to drive improvement.

The funnel paradigm works less because of its analytics potential than because of the process it helps define. In-store funnels focus optimization efforts and make them easily measurable. Whether I tackle the step with the highest abandonment rate, try to build the initial opportunity, or attempt to remove distractions between key steps, funnel analysis helps guide my reasoning about what to test in the store and provides a fully baked way to measure whether store changes drove the desired behavior.

Retail Analytics: Store Visualization and DM1

Location analytics isn’t really about where the shopper was. After all, a stream of X,Y coordinates doesn’t tell us much about the shopper. The interesting fact is what was there – in the store – where the shopper was. To answer most questions about the shopper’s experience (what they were interested in, what they might have bought but didn’t, whether they had sales help or not, and what they passed but didn’t consider), we have to understand the store. In my last post, I explained why the most common method of mapping behavior to the store – heatmaps – doesn’t work very well. Today, I’m going to tackle how DM1 does it differently and (in my humble opinion) much better.

Here are the seven requirements I listed for Store Visualization and where and why heatmaps come up short:

Store Visualization: Heatmaps and retail analytics

Designing DM1’s store visualization, I started with the idea that its core function is to represent how an area of the store is performing. Not a point. An area. That’s an important distinction. Heatmaps function rather like a camera exposure. There’s an area down there somewhere, of course – but it’s only at the tiny level of the pixel. That’s great for a photograph, where the smaller the pixel the more accurate the picture, but analytically those points are too small to be useful. Besides, store measurement isn’t like taking a picture. Our measurement capture systems aren’t accurate enough to pinpoint a specific location in the store. Instead, they generate a location with a circle of error that, depending on the system being used, can actually be quite large. It doesn’t make a lot of sense to pretend that measurement is happening at a pixel location when the circle of error on the measurement is 5 feet across!

This got me thinking along the lines of the grid system used in classic board games I played as a kid. If you ever played those games, you know what I’m talking about. The board was a map (of the D-Day beaches or Gettysburg or all of Europe) and overlaid on the map was a (usually hexagonal) grid system that looked like this:

BoardGame

Units occupied grid spaces and their movement was controlled by grid spaces. The grid became the key to the game – with the map providing the underlying visual metaphor. This grid overlay is obviously artificial. Today’s first person shooter games don’t need or use anything like it, but strategy games like Civ still do. Why? Because it’s a great way to quantize spatial information about things like how far a unit can move or shoot, the distance to the enemy, the direction of an attack, the density of units in a space and much, much more.

DM1 takes this grid concept and applies it to store visualization. Picture a store:

store journey analytics

Now lay a grid over it:

Visualizing Store Data

And you can take any place the shopper spends time and map it to grid coordinates:

Mapping customer data to the store

And here’s where it really gets powerful. Because not only can you now map every measurement ping to a quantifiable grid space, you can attach store meta-data to the grid space in a deterministic and highly maintainable way. If we have a database that describes GridPoint P14 as being part of Customer Service on a given day, then we know exactly what a shopper saw there. Even better, by mapping actual traffic and store meta-data to grid-points, we can reliably track and trend those metrics over time. No matter how the shape or even location of a store area changes, our trends and metrics will be accurate. So if grid-point P14 is changed from Customer Service to Laptop Displays, we can still trend Customer Service traffic accurately – before, after and across the change.
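Conceptually, the date-effective metadata looks something like this sketch – each grid point keeps a history of what occupied it, so any metric can be resolved against the right section for the day it was measured. The schema is illustrative, not DM1’s actual storage:

```python
# Date-effective grid metadata: trend an area accurately even when the
# store layout changes, because each grid point knows its own history.
from datetime import date

grid_history = {  # grid point -> (effective_from, section) records, oldest first
    "P14": [(date(2017, 1, 1), "Customer Service"),
            (date(2017, 9, 15), "Laptop Displays")],
}

def section_at(point: str, on: date):
    """Resolve what occupied a grid point on a given day."""
    current = None
    for effective_from, section in grid_history.get(point, []):
        if effective_from <= on:
            current = section
    return current

print(section_at("P14", date(2017, 6, 1)))   # -> "Customer Service"
print(section_at("P14", date(2017, 10, 1)))  # -> "Laptop Displays"
```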

That’s how DM1 works.

Here’s a look at DM1 displaying a store at the Section level:

Retail Analytics: Store Visualization in DM1

In this case, the metric is visits and each section is color-coded to represent how much foot traffic the section got. These are fully quantified numbers. You can mouse over any area and get the exact counts and metrics for it. Note that you don’t need a separate planogram to match to the store. The understanding of what’s there is captured right alongside the metric visualization. Now obviously, Section isn’t the lowest grid level for the store. We often need to be much more fine-grained. In DM1, you can drill down to the actual grid level to get a much more detailed view:

Retail Analytics: Store Detail in DM1

How detailed? As detailed as your collection system will support. We set up the grid in DM1 to match the appropriate resolution of your system. You’re not limited to drilling down, though. You can also drill up to levels above a Section. Here’s a DM1 view at the Department level:

Retail Analytics: Store Meta Data and Levels in DM1

In fact, with DM1, you have pretty much complete flexibility in how you describe the store. You can define ANY level of meta-data for each grid-point and then view it on the store. Here, for example, is where promotions were placed in the store:

Retail Analytics: Store Merchandising Data Overlay

DM1 also takes advantage of the Store Visualization to make it easy to compare stores – head to head, or the same store over time. The Comparison view shows two stores viewed (in this example) at the Section level and compared by Conversion Efficiency:

Retail Analytics: Store Comparison in DM1

It takes only a glance to instantly see which Sections perform better and which worse at each store. That’s a powerful viz!

In DM1, pretty much ANY metric can be mapped on the store at ANY meta-data level. You can see visits, lingers, linger rate, avg. time, attributed conversions, exits, bounces, Associate interactions, STARs ratio, Interaction Success Rate and so much more (almost fifty metrics) – mapped to any logical level of the store, from macro levels like Department or Floor all the way down to the smallest unit of measurement your collection system can support. Best of all, you define those levels. They aren’t fixed. They’re entirely custom to the way you want to map, measure and optimize your stores.

And because DM1 keeps an historical database of the layouts and meta-data over time, it provides simple, accurate and easily intelligible trending over time.

I love the store visualization capability in DM1 and I think it’s a huge advance compared to heat-maps. As an analyst, I can tell you there’s just no comparison in terms of how useful these visualizations are. They do so much more and do it so much better that it hardly seems worth comparing them to the old way of doing things. But here it is anyway:

DM1 Retail Analytics Store Visualization Advantages

DM1’s store visualization is one powerful analytic hammer. But as good as they are, this type of store visualization doesn’t solve every problem. In my next post, I’ll show how DM1 uses another powerful visual paradigm for mapping and understanding the in-store funnel!

[BTW – if you want to see how DM1 Store Visualization actually works, check out these live videos of DM1 in Action]

Four Fatal Flaws with In-Store Tracking

I didn’t start Digital Mortar because I was impressed with the quality of the reporting and analytics platforms in the in-store customer tracking space. I didn’t look at this industry and say to myself, “Wow – here’s a bunch of great platforms that are meeting the fundamental needs in the space at an enterprise level.” Building good analytics software is hard. And while I’ve seen great examples of SaaS analytics platforms in the digital space, solutions like Adobe and Google Analytics took many years to reach a mature and satisfying form. Ten years ago, GA was a toy and Adobe (Omniture SiteCatalyst at the time) managed to be both confusing and deeply under-powered analytically. In our previous life as consultants, we had the opportunity to use the current generation of in-store customer journey measurement tools. That hands-on experience convinced me that this data is invaluable. But it also revealed deep problems with the way in-store measurement is done.

When we started building a new SaaS in-store measurement solution here at Digital Mortar, these are the problems in the technology that we wanted to solve:

Lack of Journey Measurement

Most of today’s in-store measurement systems are setup as, in essence, fancy door counters. They start by having you draw zones in the store. Then they track how many people enter each zone and how long they spend there (dwell time).

This just sucks.

It’s like the early days of digital analytics when all of our tracking was focused on the page view. We kept counting pages and thinking it meant something. Till we finally realized that it’s customers we need to understand, not pages. With zone counting, you can’t answer the questions that matter. What did customers look at first? What else did customers look at when they shopped for something specific? Did customers interact with associates? Did those interactions drive sales? Did customer engagement in an area actually drive sales? Which parts of the store were most and least efficient? Does that efficiency vary by customer type?

If you’re not asking and answering questions about customers, you’re not doing serious measurement. Measurement that can’t track the customer journey across zones just doesn’t cut it. Which brings me to…

Lack of Segmentation

My book, Measuring the Digital World, is an extended argument for the central role of behavioral segmentation in doing customer analytics. Customer demographics and relationship variables are useful. But behavior – what customers care about right now – will nearly always be more important. If you’re trying to craft better omni-channel experiences, drive integrated marketing, or optimize associate interactions, you must focus on behavioral segmentation. The whole point of in-store customer tracking is to open up a new set of critically important customer behaviors for analysis and use. It’s all about segmentation.

Unfortunately, if you can’t track the customer journey (as per my point above), you can’t segment. It’s just that simple. When a customer is nothing more than a blip in the zone, you have no data for behavioral segmentation. Of course, even if you track the customer journey, segmentation may be deeply limited in analytic tools. You could map the improvement of Adobe or Google Analytics by charting their gradually improving segmentation capabilities. From limited filtering on pre-defined variables to more complex, query-based segmentation to the gradual incorporation of sophisticated segmentation capabilities into the analyst’s workbench.

You can have all the fancy charts and visualizations in the world, but without robust segmentation, customer analytics is crippled.

Lack of Store Context

When I introduce audiences to in-store customer tracking, I often use a slide like this:

In-store Customer Analytics

The key point is that the basic location data about the customer journey is only meaningful when it’s mapped to the actual store. If you don’t know WHAT’S THERE, you don’t have interesting data. The failure to incorporate “what’s there” into their reporting isn’t entirely the fault of in-store tracking software. Far too many retailers still rely on poor, paper-based planograms to track store setups. But “what’s there” needs to be a fundamental part of the collection and the reporting. If data isn’t stored, aggregated, trended and reported based on “what’s there”, it just won’t be usable. Which brings me to…

Use of Heatmaps

Heatmaps sure look cool. And, let’s face it, they are specifically designed to tackle the problem of “Store Context” I just talked about. Unfortunately, they don’t work. If you’ve ever tried to describe (or just figure out) how two heat-maps differ, you can understand the problem. Dialog like: “You can see there’s a little more yellow here and this area is a little less red after our test” isn’t going to cut it in a Board presentation. Because heat-maps are continuous, not discrete, you can’t trend them meaningfully. You can’t use them to document specific amounts of change. And you can’t use them to compare customer segments or changed journeys. In fact, as an analyst who’s tried first hand to use them, I can pretty much attest that you can’t actually use heat-maps for much of anything. They are the prettiest and most useless part of in-store customer measurement systems. If heat-maps are the tool you have to solve the problem of store context, you’re doomed.

These four problems cripple most in-store customer journey solutions. It’s incredibly difficult to do good retail analytics when you can’t measure journeys, segment customers, or map your data effectively onto the store. And the ubiquity of heat-maps just makes these problems worse.

But the problems with in-store tracking solutions don’t end here. In my next post, I’ll detail several more critical shortcomings in the way most in-store tracking solutions are designed. Shortcomings that ensure that not only can’t the analyst effectively solve real-world business problems with the tool, but that they can’t get AT THE DATA with any tools that might be able to do better!

Want to know more about how Digital Mortar can drive better store analytics? Drop me a line.

Taking In-Store Measurement…Out of the Store

In my last few posts, I explained what in-store journey analytics is, described the basics of the technology and the data collection used, and went into some detail about its potential business uses. Throughout, and especially in that last part around business uses, I wrote on the assumption that this type of measurement is all about retail stores. After all, brick & mortar stores are the primary focus of Digital Mortar AND of nearly every company in the space. But here’s the thing: this type of measurement is broadly applicable to a wide variety of applications where customer movement through a physical environment is a part of the experience. Stadiums, malls, resorts, cruise ships, casinos, events, hospitals, retail banks, airports, train stations and even government buildings and public spaces can all benefit from understanding how physical spaces can be optimized to drive better customer or user experiences.

In these next few posts, I’m going to step outside the realm of stores and talk about the opportunities in the broader world for customer journey tracking. I’ll start by tackling some of the differences between the tracking technologies and measurement that might be appropriate in some of these areas versus retail, and then I’m going to describe specific application areas and delve a little deeper into how the technology might be used differently than in traditional retail. While the underlying measurement technology can be very similar, the type of reporting and analytics that’s useful to a stadium or resort is different than what makes sense for a mall store.

Since I’m not going to cover every application of customer journey tracking outside retail in great detail, I’ll start with some general principles of location measurement based upon industry neutral things like the size of the space and the extent to which the visitors will opt-in to wifi or use an app.

Measuring BIG Spaces versus little ones

With in-store journey tracking, you have three or four alternatives when choosing the underlying measurement collection technology. Cameras, passive wifi, opt-in wifi and bluetooth, and dedicated sniffers are all plausible solutions. With large spaces like stadiums and airports, it’s often too expensive to provide comprehensive camera coverage. It can even be too expensive to deploy custom measurement devices (like sniffers). That’s especially true in environments where the downtime and wiring costs can greatly exceed the cost of the hardware itself.

So for large spaces, wifi tracking often becomes the only realistic technology for deploying a measurement system. That’s not all bad. While out-of-the-box wifi is the least accurate measurement technology, most large spaces don’t demand fine-grained resolution. In a store, a 3 meter circle of error might place a customer in a completely different section of the store. In an airport, it’s hard to imagine it would make much difference.

Key Considerations Driven by Size of Location:

  • How much measurement accuracy do you need?
  • How expensive will measurement-specific equipment and installation be, and is it worth the cost?
  • Are there special privacy considerations for your space or audience?

Opt-in vs. Anonymous Tracking

Cameras, passive wifi and sniffers can all deliver anonymous tracking. Wifi, Bluetooth and mobile apps all provide the potential for opt-in tracking. There are significant advantages to opt-in based tracking. First, it’s more accurate. Particularly in out-of-the-box passive wifi, the changes in iOS to randomize MAC addresses have crippled straightforward measurement and made reasonably accurate customer measurement a challenge. When a user connects to your wifi or opens an app, you can locate them more frequently and more precisely and their phone identity is STABLE so you can track them over time. If your primary interest is in understanding specific customers better for your CRM, tracking over-time populations, or you have significant issues with the privacy implications of anonymized passive tracking, then opt-in tracking is your best bet. However, this choice is dependent on one further fact: the extent to which your customers will opt-in. For stadiums and resorts, log-in rates are quite high. Not so much at retail banks. Which brings us to…

Key Considerations for Opt-In Based Tracking

  • Will a significant segment of your audience opt-in?
  • Are you primarily interested in CRM (where opt-in is critical) or in journey analytics (which can be anonymous)?

How good is the sample?

Some technologies (like camera) provide comprehensive coverage by default. Most other measurement technologies inherently take some sample. Any form of signal detection will start with a sample that includes only people with phones. That isn’t much of a sample limitation though it will exclude most smaller children. Passive methods further restrict the population to people with wifi turned on. Most estimates place the wifi-activated rate at around 80%. That’s a fairly high number and it seems unlikely that this factor introduces significant sample bias. However, when you start factoring in things like Android user or App downloader or wifi user, you’re often introducing significant reductions in sample size AND adding sample biases that may or may not be difficult to control for. App users probably aren’t a representative sample of, for example, the likelihood of a shopper to convert in a store. But even if they are a small percentage of your total users, they are likely perfectly representative of how long people spend queuing in lines at a resort. One of the poorly understood aspects of measurement science is that the same sample can be horribly biased for some purposes but perfectly useful for others!

Key Considerations for Sampling

  • Does your measurement collection system bias your measurement in important ways?
  • Are people who opt-in a representative sample for your measurement purposes?

The broad characteristics that define what type of measurement system is right for your needs are, of course, determined by what questions you need to answer. I’ll take a close look at some of the business questions for specific applications like sports stadiums next time. In general, though, large facilities by their very nature need less fine-grained measurement than smaller ones. For most applications outside of retail, being able to locate a person within a 3 meter circle is perfectly adequate. And while the specific questions being answered are often quite specific to an application area, there is a broad and important divide between measurement that’s primarily focused on understanding patterns of movement and analysis that’s focused on understanding specific customers. When you’re most interested in traffic patterns, then samples work very well. Even highly biased samples will often serve. If, on the other hand, you’re looking to use customer journey tracking to understand specific customers or customer segments (like season-ticket holders) better, you should focus on opt-in based techniques. In those situations, identification trumps accuracy.

If you have questions about the right location-based measurement technology solution for your business, drop us a line at info@digitalmortar.com

Next up, I’ll tackle the surprisingly interesting world of stadium/arena measurement.