Machine Learning and Optimal Store Path

My previous post covered the first half of my presentation on Machine Learning (ML) and store analytics at the Toronto Symposium. Here, I’m going to work through the case study on using ML to derive an optimal store path. For that analysis, we used our DM1 platform to source, clean, map and aggregate the data and then worked with a data science partner (DXi) on the actual analysis.

 

Why this problem?

Within DM1 we feel pretty good about the way we’ve built out visualizations of the store data that are easy to use and surprisingly powerful. The Full Path View, Funnel View and Store Layout View all provide really good ways to explore shopper paths in the store.

But for an analyst, exploring data and figuring out a model are utterly different tasks. A typical store presents a nearly infinite number of possible paths – even when the paths are aggregated up to section level. So there’s no way to just explore the paths and find optimal ones.

 

Even at the most basic level of examining individual shopper paths, deciding what’s good and bad is really hard. Here are two shopper paths in a store:

Which is better? Does either have issues? It’s pretty hard to know.

 

Why Machine Learning?

Optimal store pathing meets the basic requirements for using supervised ML – we have a lot of data and we have a success criterion (checkout). But ML isn’t worth deploying on every problem that has a lot of data and a success criterion. I think about it this way – if I can get what I want by writing simple algorithmic code, then I don’t need ML. In other words, if I can write (for example) a sort and then some simple if-then rules that will identify the best path or find problem path points, then that’s what we’ll do. If, for example, I just wanted to identify sections that didn’t convert well, it would be trivial to do that. I have a conversion efficiency metric, I sort by it (ascending) and then I take the worst performers. Or maybe I have a conversion threshold and simply pick any section that performs worse. Maybe I even calculate a standard deviation and select any section that is more than 1 standard deviation below the average section conversion efficiency. All easy.
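To make the contrast concrete, here’s roughly what that non-ML version looks like – a minimal sketch in pandas, assuming a simple section-level table with a conversion efficiency column (the names and numbers are illustrative, not DM1’s actual schema):

```python
import pandas as pd

# Illustrative section-level data: one row per store section.
sections = pd.DataFrame({
    "section": ["Casual Shoes", "Clearance", "Denim", "Accessories"],
    "conversion_efficiency": [0.18, 0.04, 0.09, 0.07],
})

# Approach 1: sort ascending and take the worst performers.
worst_three = sections.sort_values("conversion_efficiency").head(3)

# Approach 2: flag anything below a fixed threshold.
below_threshold = sections[sections["conversion_efficiency"] < 0.05]

# Approach 3: flag anything more than 1 standard deviation below the mean.
mean, std = sections["conversion_efficiency"].agg(["mean", "std"])
laggards = sections[sections["conversion_efficiency"] < mean - std]
```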

But none of those things are really very useful when it comes to finding poor path performance in a robust fashion.

So we tried ML.

 

The Analysis Basics

The analysis focused on a mid-sized apparel store with around 25 sections. We had more than 25,000 shopper visits – which may not seem like much if you’re used to digital analytics, but is a pretty good behavior base for a store. In addition to the basic shopper journey, we also had Associate interaction points (and time of interaction) and whether or not the shopper converted. The goal was to find potential store layout problems and understand which parts of the store added to (or subtracted from) overall conversion efficiency.

 

Preparing the Data

The first step in any analysis (once you know what you want) is usually data preparation.

Our data starts off as a stream of location events. Those location events have X, Y, Z coordinates that are offset from a zero point in the store. In the DM1 platform, we take that data and map it against a digital planogram capability that keeps a full, historical record of the store. That tells us what shoppers actually looked at and where they spent time. This is the single most critical step in turning the raw data into something that’s analytically useful.
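Conceptually, that mapping step looks something like this – a toy sketch that treats each section as a rectangle in the store’s coordinate system (DM1’s planograms are richer than this, and the layout below is invented):

```python
from dataclasses import dataclass

@dataclass
class Section:
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x < self.x_max and self.y_min <= y < self.y_max

# Hypothetical planogram: rectangles offset from the store's zero point,
# in the same coordinate system as the raw location events.
planogram = [
    Section("Entrance", 0, 5, 0, 10),
    Section("Casual Shoes", 5, 15, 0, 10),
    Section("Denim", 5, 15, 10, 20),
]

def map_event(x: float, y: float) -> str:
    """Resolve one raw location event to a named store section."""
    for section in planogram:
        if section.contains(x, y):
            return section.name
    return "unmapped"

print(map_event(7.2, 3.4))  # -> "Casual Shoes"
```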

Since we also track Associates, we can identify interaction points by overlaying the Associate data stream on top of the shopper stream. This isn’t perfect – it’s easy to miss short interactions or be confused by a crowded store – but particularly when it’s app-to-app tracking it works pretty well. Associate interaction points are hugely important in the store (as the subsequent analysis will prove).
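The overlay logic is conceptually simple: flag moments when an Associate ping and a shopper ping coincide in time and space. Here’s a toy version with invented thresholds:

```python
from math import hypot

def find_interaction(shopper, associate, radius=2.0, min_secs=30, max_skew=5):
    """shopper/associate: lists of (timestamp_secs, x, y) pings for a visit.
    An 'interaction' here (illustrative definition) means pings within
    `radius` meters of each other, no more than `max_skew` seconds apart,
    spanning at least `min_secs`. Returns (start, end) or None."""
    near = [ts for ts, sx, sy in shopper
            for ta, ax, ay in associate
            if abs(ts - ta) <= max_skew and hypot(sx - ax, sy - ay) <= radius]
    if near and max(near) - min(near) >= min_secs:
        return min(near), max(near)
    return None
```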

Step 3 is knowing whether and when a shopper purchased. Most standard machine learning algorithms require a way to determine whether a behavior pattern was successful or not – that’s what they’re optimizing to. We’re using purchase as our success metric.

The underlying event data gets aggregated into a single row per shopper visit. That row contains a visit identifier, a start and stop time, an interaction count, a first interaction time, a last interaction time, the first section visited, the time spent in each section and, of course, our success metric – a purchase flag.

That’s it.
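In pandas terms, that aggregation looks roughly like this – assuming an event table of (visit_id, timestamp, section, is_interaction), which is an illustrative shape rather than the literal DM1 feed:

```python
import pandas as pd

# Illustrative event stream: one row per mapped location event.
events = pd.DataFrame({
    "visit_id": [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime([
        "2018-05-01 10:00", "2018-05-01 10:05", "2018-05-01 10:20",
        "2018-05-01 11:00", "2018-05-01 11:02"]),
    "section": ["Entrance", "Casual Shoes", "Checkout", "Entrance", "Denim"],
    "is_interaction": [0, 1, 0, 0, 0],
})

visits = events.groupby("visit_id").agg(
    start_time=("timestamp", "min"),
    stop_time=("timestamp", "max"),
    interaction_count=("is_interaction", "sum"),
    first_section=("section", "first"),
    purchased=("section", lambda s: int("Checkout" in set(s))),  # success flag
)

# Ping counts per section as a simple proxy for time spent in each section.
per_section = events.pivot_table(index="visit_id", columns="section",
                                 values="timestamp", aggfunc="count",
                                 fill_value=0)
visit_rows = visits.join(per_section)
print(visit_rows)
```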

The actual analytic heavy lifting was done by DXi on their machine learning platform. They use an ensemble approach – throwing the kitchen sink at the problem by using 25+ different algorithms to identify potential winners/losers (if you’d like more info or an introduction to them, drop me a line and I’ll connect you).

 

Findings

Here’s some of the interesting stuff that surfaced, plucked from the case study I presented at the Symposium:

One of the poorest performing sections – not picked as important by a single one of DXi’s ML algorithms – sits right smack dab in the middle of the store. That central position really surprised us. Yes, as you’ll see in a moment, the store has a successful right-rail pattern – but this was a fairly well-trafficked spot with good sightlines and easy flow into high-value areas of the store.

Didn’t work well though. And that’s definitely worth thinking about from a store layout perspective.

One common browsing behavior for shoppers is a race-track pattern – navigating around the perimeter of the store. There’s a good example of that on the right-side image I showed earlier:

The main navigation path through the store is the red rectangle (red because this shopper spent considerable time there) – and you can see that while the shopper frequently deviated from that main path, their overall journey was a circuit around the store.

The ML algorithms don’t know anything about that pattern – but they did pick out the relevant sections along that race-track path as really important for conversion.

We took that to mean that the store is working well for that race-track shopper type. An important learning.

For this particular store, casual shoes was picked as important by every ML algorithm – making it the most important section of the store. It also had the largest optimal time value – and clearly rewarded more time with higher conversion rates. Shoes, of course, is going to be this way. It’s not a grab-and-go item. So there’s an element of the obvious here – something you should expect when you unleash ML on a dataset (and hey – most analytics projects will, if they work at all, vacillate between the interesting and the obvious). But even compared to other shoe sections, this one performed better and rewarded more time spent – so there is an apples-to-apples part of this comparison as well.

The next finding was an interesting one and illustrates a bit of the balance you need to strike between the analyst and the algorithm. The display in question was located fairly close to cash-wrap on a common path to checkout. It didn’t perform horribly in the ML – some of the DXi algorithms did pick it as important for conversion. On the other hand, it was one of the few sections with a negative weighting on time spent – more time spent meant less likely conversion. We interpreted that combination as indicating that the section’s success was driven by geography, not efficiency. It’s kind of like comparing Saudi Arabia to U.S. shale drillers. Based purely on the numbers, Saudi Arabia looks super efficient and successful, with the lowest cost per barrel of oil extracted in the world. But when you factor in the geographic challenges, the picture changes completely. Saudi Arabia has the easiest path to oil recovery in the world. Shale producers face huge and complex technical challenges and still manage to be price competitive. Geography matters, and that’s just a core fact of in-store analytics.

Our take on the numbers when we sifted through the DXi findings was that this section was actually underperforming. It might take a real A/B test to prove that, but regardless I think it’s a good example of how an analyst has to do more than run an algorithm. It’s easy to fool even very sophisticated algorithms with strong correlations, and much of our post-model analysis was about understanding how the store geography and the algorithm results play together.

In addition to navigation findings like these, the analysis also included the impact of Associates on conversion. In general, the answer we got was the more interactions the merrier (at the cash register). Not every store may yield the same finding (and it’s also worth thinking about whether a single conversion optimization metric is appropriate here – in my Why Analytics Fails talk I argue for the value in picking potentially countervailing KPIs like conversion and shopper satisfaction as dual optimization points).

Even after multiple interactions, additional interactions had a positive impact on sales.

This should be obvious but I’ll hearken back to our early digital analytics days to make a point. We sometimes found that viewing more pages on a website was a driver of conversion success. But that didn’t mean chopping pages in half (as one client did) so that the user had to consume more pages to read the same content was a good strategy.

Just because multiple Associate interactions in a store with a normal interaction strategy created lift, it doesn’t mean that, for example, having your Associates tackle customers (INTERACTIOOOON!!!) as they navigate the floor will boost conversion.

But in this case, too much interaction was a legitimate concern. And the data indicates that – at least as measured by conversion rates – the concern did not manifest itself in shopper turn-off.

If you’re interested in getting the whole deck – just drop me a note. It’s a nice intro into the kind of shopper journey tracking you can do with our DM1 platform and some of the ways that machine learning can be used to drive better practice. And, as I mentioned, if you’d like to check out the DXi stuff – and it’s interesting from a pure digital perspective too – drop me a line and I’ll introduce you.

The Really Short Introduction to DM1 and In-Store Measurement

Take a minute (okay – a minute and a half) to check out this video overview of our DM1 store measurement platform. It’s the shortest and crispest introduction we’ve produced so far.

As more than one famous writer is said to have remarked, “If I had more time, I would have made it shorter.” Brevity, like wit, takes work. And practice. We haven’t achieved wit, but we’re getting close to brevity:

I also really like the video’s flow. It starts with a very short intro into the basic concept of store measurement and then introduces the platform with the Digital Planogram tool – the Configurator. When you get right down to it, this capability is the single most important part of the platform. Digital representations of the store are critical to every report and analysis DM1 delivers. And the ability to rapidly create, adjust and maintain those digital maps is essential to making the tool work.

When we first released DM1, the Configurator lagged behind some of the reporting tools – not very friendly and a little prone to bugginess. It’s grown into quite a good tool – a pleasure to use and capable of handling even very complex store layouts pretty easily.

From the configurator, the video flows into the Layout tool – which just maps metrics right onto those digital planograms. Not only does this show how effortlessly you move from a map of the store to a metric, but I really like the way the video works through a small set of metrics to show how easy the visual interpretation is.

Once you’ve got a feel for basic metrics in the Store Layout, the next logical step is to tackle journey. And the next two sections highlight funnel and path analysis. Both of these tools help transition thinking from a static view of store performance to a focus on shopper journey. Funnels tell you how effective the store is in moving shoppers down an engagement path. Path helps you understand which in-store paths are popular and which drive conversion. After this, it’s a quick look at the data exploration capabilities of the platform – and the ability to build reports around whatever problem you choose to tackle. Finally, it wraps up with a sample of the dashboards.

Truth to tell, I’ve sometimes done this same presentation in almost the reverse order – starting with dashboards and ending with configuration. It’s plausible that way too, but I think this order works better for analysts. While dashboards are the first view for end-users of DM1, an analyst’s task really starts with store mapping, proceeds through various levels of analysis, and ends with wrapping a nice, neat bow around the data for others. That’s the way this video proceeds, and if that’s the way you tend to think, the structure will feel compelling and natural.

Check it out.

 

Hey, unless you’re a very fast reader, you’ve already spent more time on this post than you will on the video!

 

 


The Measurement Minute

If I’m known for anything, it’s mind-numbingly long blog posts. Brevity? Not my style. But I’ve been challenging myself to go shorter and the Measurement Minute is the ultimate test. These are one-minute podcasts covering just about anything measurement and analytics related. I’ll try to keep them coming. Though as many famous writers have remarked (or been said to remark) – making things shorter takes time.

Check it out on iTunes:

https://t.co/t5CK2NkZRz

Machine Learning and Store Analytics

Not too long ago I spoke in Toronto at a Symposium focused on Machine Learning to describe what we’ve done and are trying to do with Machine Learning (ML) in our DM1 platform and with store analytics in general. Machine Learning is, in some respects, a fraught topic these days. When something is high on the hype cycle, the tendency is to either believe it’s the answer to every problem or to dismiss the whole thing as an illusion. The first answer is never right. The second sometimes is. But ML isn’t an illusion – it’s a real capability with a fair number of appropriate applications. I want to cover – from our hands-on, practical perspective – where we’ve used ML, why we used it, and show a case study of some of the results.

 

Just what is Machine Learning?

In its most parochial form, ML is really nothing more than a set of (fairly mature) statistical techniques dressed up in new clothes.

Here’s a wonderful extract from the class notes of a Stanford University expert on ML: (http://statweb.stanford.edu/~tibs/stat315a/glossary.pdf)

Machine Learning vs Statistics

It’s pretty clear why we should all be talking ML not statistics! And seriously, wasn’t data science enough of a salary upgrade for statisticians without throwing ML into the hopper?

Unlike big data, I have no desire in this case to draw any profound definitional difference between ML and statistics. In my mind, I think of ML as being the domain of neural networks, deep learning and Support Vector Machines (SVMs). Statistics is the stuff we all know and love like regression and factor analysis and p values. That’s a largely ad hoc distinction (and it’s particularly thin on the unsupervised learning front), but I think it mostly captures what people are thinking when they talk about these two disciplines.

 

What Problems Have We Tried to Solve with ML

At a high-level, we’ve tackled three types of problems with ML (as I’ve casually defined it): improving data quality, shopper type classification, and optimal store path analysis.

Data quality is by far the least sexy of these applications, but it’s also the area where we’ve done the most work and where the everyday operation of our platform actually takes advantage of some ML work.

When we set up a client instance on DM1, there are a number of highly specific configurations that control how data gets processed. These configurations help guide the platform in key tasks like distinguishing Associate electronic devices from shopper devices. Why is this so important? Well, if you confuse Associates with shoppers, you’ll grossly over-count shoppers in the store. Equally bad, you’ll miss out on a real treasure trove of Associate data including when Associate/shopper interactions occur, the ratio of shoppers to Associates (STARs), and the length and outcome of interactions. That’s all very powerful.

If you identify store devices, it’s easy enough to signature them in software. But we wanted a system that would do the same work without having to formally identify store devices. Not only does this make it a lot easier to set up a store, it fixes a ton of compliance issues. You may tell Associates not to carry their own devices on the floor, but if you think that rule is universally followed, you’re kidding yourself. So even if you BLE-badge employees, you’re still likely picking up their personal phones as shopper devices. By adding behavioral identification of Associates, we make the data better and more accurate while minimizing (in most cases removing) operational impact.

We use a combination of rule-based logic and ML to classify Associate behavior on ALL incoming devices. It turns out that Associates behave quite differently in stores than shoppers. They spend more time. Go places shoppers can’t. Show up more often. Enter at different times. Exit at different times. They’re different. Some of those differences are easily captured in simple if-then programming logic – but often the patterns are fairly complex. They’re different, but not so easily categorized. That’s where the ML kicks in.
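Here’s a toy version of that two-stage idea – cheap if-then rules for the obvious cases, a learned model for the ambiguous middle. The features, thresholds and training rows are all invented for illustration; the production logic is considerably richer:

```python
from sklearn.ensemble import RandomForestClassifier

# Illustrative per-device features: (minutes_in_store, visits_per_week,
# pct_time_in_staff_only_areas). Labels: 1 = associate, 0 = shopper.
X_train = [
    [480, 5, 0.30], [510, 6, 0.25], [440, 4, 0.40],   # associates
    [22, 1, 0.00], [35, 1, 0.01], [75, 2, 0.00],      # shoppers
]
y_train = [1, 1, 1, 0, 0, 0]

def classify_device(features, model):
    minutes, visits_per_week, pct_staff_area = features
    # Stage 1: simple if-then rules catch the obvious cases.
    if pct_staff_area > 0.2 and visits_per_week >= 4:
        return 1  # clearly an associate
    if minutes < 10:
        return 0  # clearly a shopper
    # Stage 2: let the trained model handle the ambiguous middle.
    return int(model.predict([features])[0])

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
print(classify_device([300, 3, 0.05], model))
```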

We also work in a lot of electronically dense environments. So we not only need to identify Associates, we need to be able to pick out static devices (like display computers, endless-aisle tablets, etc.). That sounds easy, and mostly it is. But it’s not quite as trivial as it sounds: given the vagaries of positioning tech, a static device is never quite static. We don’t get the same location every time – so we have to be able to distinguish between real movement and the type of small, Brownian motion we get from a static device.
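One simple way to operationalize that distinction, assuming a list of observed (x, y) positions per device (the jitter radius is an invented constant – in practice it depends on the positioning tech):

```python
from math import hypot
from statistics import fmean

def is_static(pings, jitter_radius=1.5):
    """A device is 'static' if every observed position stays within a
    small radius of its mean position – positional jitter rather than
    genuine movement. pings: list of (x, y) observations for one device."""
    cx = fmean(x for x, _ in pings)
    cy = fmean(y for _, y in pings)
    return all(hypot(x - cx, y - cy) <= jitter_radius for x, y in pings)

print(is_static([(3.0, 4.1), (3.2, 3.9), (2.9, 4.0)]))  # True
```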

Fixing data quality is never all that exciting, but in the world of shopper journey measurement it’s essential. Without real work to improve the data – work that ML happens to be appropriate for – the data isn’t good enough.

The second use we’ve found for machine learning is in shopper classification. We’re building a generalized shopper segmentation capability into the next release of DM1. The idea is pretty straightforward. For years, I’ve championed the notion of 2-tiered segmentation in digital analytics. That’s just a fancy name for adding a visit-type segmentation to an existing customer segmentation. And the exact same concept applies to stores.

As consultants, we typically built highly customized segmentation schemes. Since Digital Mortar is a platform company, that’s not a viable approach for us. Instead, what we’ve done is taken a set of fairly common in-store behavioral patterns and generalized their behavioral signatures. These patterns include things like “Clearance Shoppers”, “Right-Rail Shoppers”, “Single Product Focused Shoppers”, “Product Returners”, and “Multi-Product Browsers”. By mapping store elements to key behavior points, any store can then take advantage of this pre-existing ML-driven segmentation.

Digital Mortar's DM1 Shopper Segmentation
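To give a flavor of what those generalized behavioral signatures look like, here’s a deliberately simplified, rule-flavored sketch. The real segmentation is ML-driven; every predicate and field name below is invented:

```python
# Hypothetical behavioral signatures: each segment is a predicate over an
# aggregated visit row (section minutes plus a couple of summary counts).
SEGMENTS = {
    "Clearance Shopper":      lambda v: v["clearance_min"] >= 5,
    "Right-Rail Shopper":     lambda v: v["right_rail_min"] / v["total_min"] > 0.6,
    "Single Product Focused": lambda v: v["sections_visited"] == 1,
    "Multi-Product Browser":  lambda v: v["sections_visited"] >= 4,
}

def classify_visit(visit):
    """Return every segment whose signature the visit matches."""
    return [name for name, rule in SEGMENTS.items() if rule(visit)] or ["Other"]

print(classify_visit({"clearance_min": 8, "right_rail_min": 2,
                      "total_min": 12, "sections_visited": 5}))
# -> ['Clearance Shopper', 'Multi-Product Browser']
```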

It’s pretty cool stuff and I’m excited to get it into the DM1 platform.

The last problem we’ve tackled with ML is finding optimal store paths. This one’s more complex – more complex than we’ve been comfortable taking on directly. We have a lot of experience in segmentation techniques – from cluster analysis to random forests to SVMs. We’re pretty comfortable with that problem set. But for optimal path analysis, we’ve been working with DXi. They’re an ML company with a digital heritage and a lot of experience working on event-level digital data. We’ve always said that a big part of what drew us to store journey measurement is how similar the data is to digital journey data and this was a chance to put that idea to the test. We’ve given them some of our data and had them work on some optimal path problems – essentially figuring out whether the store layout is as good as possible.

Why use a partner for this? I’ve written before about how I think Digital Mortar and the DM1 platform fit in a broader analytics technology stack for retail. DM1 provides a comprehensive measurement system for shopper tracking and highly bespoke reporting appropriate to store analytics. It’s not meant to be a general purpose analytics platform and it’s never going to have the capabilities of tools like Tableau or R or Watson. Those are super-powerful general-purpose analytics tools that cover a wide range of visualization, data exploration and analytic needs. Instead of trying to duplicate those solutions we’ve made it really easy (and free) to export the event level data you need to drive those tools from our platform data.

I don’t see DM1 becoming an ML platform. As analysts, we’ll continue to find uses for ML where we think it’s appropriate and embed those uses in the application. But trying to replicate dedicated ML tools in DM1 just doesn’t make a lot of sense to me.

In my next post, I’ll take a deeper dive into that DXi work, give a high-level view of the analytics process, and show some of the more interesting results.

Mobile Apps, Geo-Location and Shopper Analytics

The hardest part about doing enterprise shopper journey measurement and analytics is data collection. Putting new hardware in the store is no joke – and yet it’s often necessary to get the measurement you want. Still, often isn’t the same as always. Last week I talked about how you can get surprisingly powerful store measurement by taking data from your existing store WiFi and flowing it into our DM1 platform. Store Wifi gives you broad population coverage (no, shoppers don’t have to connect) but it isn’t very accurate positionally. On the other end of the measurement spectrum is geo-locating your mobile app users. It’s another way – and a good one – to get fascinating measurement about how shoppers navigate your store.

 

Geo-locating your mobile app users is easy and quite inexpensive. It can be done with no additional hardware in the store. It’s very accurate and, by feeding the data to DM1, you can get powerful and detailed analytics on what your mobile app users are doing in-store. When you add geo-location to your Mobile App (it just takes a few lines of code), it sends you a stream of positional data that tells you exactly where a shopper was throughout their in-store journey. Our DM1 platform ingests that stream, aggregates it, and provides you the store analytics to understand paths, funnels, usage, interactions, and much more.

That’s why, when I speak on geo-location analytics, I steal the line from Lenox Financial and describe mobile app geo-location as the biggest no brainer in the history of earth.

 

There’s only one real drawback to shopper measurement via mobile app and it’s the obvious one – it’s limited to the population of your mobile app users. For most retailers, that’s a small and totally non-random segment of their population.

 

Before I discuss the implications of that, here’s what you need to know about getting this kind of app-tracking to work and integrating it with Digital Mortar’s platform.

 

We’re all mobile phone users and we all know that our phones position us. Most of us could barely navigate our home city without Google or Waze or Apple Maps. I remember being in Venice and wondering how ANYONE ever got around there before GPS. It’s like the old Adventure game – a maze of twisty passages, all alike. I imagine people just got lost a lot and that was probably part of the fun.

 

We also know that the built-in outdoor GPS positioning on the phone is pretty accurate but not super-precise. When you use it for walking you can often see just how dislocated that little blue dot is from where you’re actually standing. And it can take some real mental work to figure out exactly where you are and when to turn if – as in places like Venice – you’re not navigating long straight blocks.

 

Indoor wayfinding has its own set of challenges. Indoor spaces by their very nature are more tightly packed, so there’s a higher premium on positional accuracy. But indoor spaces are also more challenging from a measurement standpoint because signals are routinely blocked, distorted or mirrored. And, of course, indoor spaces are often importantly three-dimensional. Outdoor mapping doesn’t have to worry about floors – but in buildings, knowing what floor you’re on is fundamental.

 

Fortunately, your typical smart phone these days has a whole grab bag of sensors that can be used for better indoor wayfinding. Good indoor wayfinding systems take advantage of the whole array of phone sensors – starting with GPS positioning but adding WiFi, BlueTooth signals, radio signals, magnetic fields, the inertial sensor platform and even barometric pressure.

 

This works pretty well since most environments these days are signal rich. It’s also very easy to improve the performance of indoor way-finding if you find that there are inside areas where positional accuracy isn’t great. In most cases, dropping a beacon or two will solve the problem.

 

Typically, indoor wayfinding systems work as code libraries. You put their code into your mobile app and make a few simple function calls. From a developer perspective, this type of integration is simple and straightforward. What’s more, unlike say digital analytics tagging where you need to tie measurement messaging tightly to the functionality, the geo-location libraries (at least when used for measurement) function almost as a stand-alone element of your App. So it’s trivial for developers to integrate the code – and it requires minimal design cycles. Compared to adding good digital analytics tagging to your App, it’s a breeze.
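The shape of that integration, sketched in Python with an entirely made-up SDK surface – real vendor libraries are native iOS/Android code and their APIs differ, so treat this purely as an illustration of how little the measurement piece touches the rest of the app:

```python
# Hypothetical wayfinding SDK – names invented to show the shape of the
# integration, not any vendor's real API.
class WayfindingSDK:
    def __init__(self, api_key):
        self.api_key = api_key

    def start_positioning(self, on_fix):
        # A real SDK streams positioning fixes from the phone's sensors to
        # this callback (and to the vendor cloud). One fake fix for demo:
        on_fix({"device_id": "abc123", "ts": 1525168805, "x": 12.4, "y": 3.1})

def handle_fix(fix):
    # Measurement-only use: nothing else in the app needs to know this
    # exists. The vendor cloud feed is what DM1 ultimately ingests.
    print("fix:", fix)

sdk = WayfindingSDK(api_key="YOUR_KEY")  # hypothetical key
sdk.start_positioning(handle_fix)
```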

 

With a 3rd-party library in your app, there are only two other things you need to do. The first is to fingerprint your location – essentially a calibration and mapping step where you translate the signals into store position. It’s not hard, but if you really want a turnkey setup, Digital Mortar can do this for you – it takes less than a day and involves no disruption of the site. It doesn’t even have to be done after hours.

 

The last step is to provision a feed from the 3rd-party cloud instance (or your own cloud instance if you’re using a non-turnkey library that just sources the data to your servers) to our DM1 platform. Most providers include a good, event-level feed as part of their core service, so all you have to do is turn it on. It’s not that much harder in the DIY world.

 

Keep in mind that most geo-location service providers are thinking about messaging, indoor wayfinding and other interactive uses for their service – not analytics. So the analytics you’ll get out of the box is mostly non-existent, or even less compelling than what you’d get from a WiFi vendor (and, as I mentioned last week, that ain’t great).

 

That’s what DM1 is for. Because there is no better source of data for our platform. The beauty of fully-configured mobile app services is that the positional accuracy is terrific. The event stream can be generated at a pre-determined frequency – so we’re not dependent on the somewhat random ping rates that come with other forms of electronic tracking. That means we can capture a full, accurate, and very detailed customer journey.

 

Even better, the nature of mobile apps is that they can provide a true omni-channel join. So you can take DM1’s CRM-based feed and integrate it with your customers’ digital behavior to create a full-journey customer database. Our CRM feed includes the customer id you pass us (usually a hashed identifier), basic visit information (visit time, length, and flags for purchase and interaction), and the time spent in each area of the store. Adding that to your customer record is powerful. And yes, it’s just for your mobile app users. But often, those are your very best customers.
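A sketch of what that omni-channel join looks like, with an invented column layout standing in for the CRM feed described above:

```python
import pandas as pd

# Illustrative shape of the CRM-style feed: one row per store visit,
# keyed by a hashed customer id (all column names are made up).
store_visits = pd.DataFrame({
    "customer_hash": ["a1f3", "9bd2"],
    "visit_start": pd.to_datetime(["2018-05-01 10:00", "2018-05-01 11:00"]),
    "visit_minutes": [24, 11],
    "purchased": [1, 0],
    "min_casual_shoes": [9, 0],
    "min_denim": [4, 6],
})

# Illustrative digital-side customer profile keyed on the same hash.
digital_profile = pd.DataFrame({
    "customer_hash": ["a1f3", "9bd2"],
    "web_sessions_30d": [7, 2],
    "app_user": [True, True],
})

# The omni-channel join: store journey + digital behavior on one key.
full_journey = digital_profile.merge(store_visits, on="customer_hash")
print(full_journey)
```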

 

Plus, there are important applications where the biases inherent in a mobile app sample aren’t particularly damaging. If, for example, you want to know how long customers are queuing at cash-wrap it’s perfectly possible to use mobile app data. When they are standing in line, they are there for the same amount of time as everyone else. And how mobile app users shop the store and take advantage of omni-channel experiences is, let’s just say, quite interesting and valuable.

That being said, it’s like any other case where you’re working with a non-random sample. You can’t assume that all your shoppers behave the way your mobile population does – and if you try to make those kinds of extrapolations, you’re going to get it wrong.

 

That’s why, though a mobile app feed might be the primary customer source you feed into DM1, it’s more likely that you’ll combine a mobile app feed with a full customer feed from iViu, WiFi or camera.

In-store shopper measurement technologies compared

In DM1, we keep each feed as a separate segment. With a little bit of a code tweak to your mobile app, we can also integrate your mobile app data directly with the iViu feed so there’s no double counting. But most times, you’ll work with them as separate populations.

 

Either way, you get the full power of DM1’s analytics on the mobile app shopper data. Pathing, funnels, store layout, segmentation, etc. etc.:

Digital Mortar’s DM1 – shopper measurement and geo-location analytics: path and funnel analysis

Finally, this is also one of the best ways to collect and integrate Associate tracking. DM1 provides full Associate measurement functionality allowing you to understand when and where you’re under or over staffed in the store. Adding geo-location to your associate devices is just as easy as it is on the shopper side – and this is something you can do even if you’re not heavily invested in customer-facing mobile apps.

 

 

So if you’re suitably excited, the next question ought to be – where do you get this and how much does it cost?

 

There are tons of options for adding geo-location measurement to your app. The easiest and most fully-baked come from providers like IndoorAtlas and Radar. Hey, even my old digital analytics friends at Adobe and Google do this. The most full-service systems include the code libraries, platforms for fingerprinting, and robust cloud feeds. They make going from App setup to DM1 analytics a walk in the park. There are plenty of DIY alternatives as well – many open-sourced and free.

 

The full-service platform vendors typically charge you per location based on broad square footage ranges. It’s quite inexpensive – though the out-of-the-box pricing models tend to work better for single, very large locations than for large numbers of mid-sized stores. Most of these companies seem to engage in enterprise pricing – meaning that the price you pay is largely a function of whatever you can negotiate. And if you’d prefer, we can provide developer support integrating an open-source solution into your App. It probably won’t be quite as robust, but if your primary goal is measurement it will more than get the job done.

 

From the standpoint of integrating with DM1, it’s pretty much out of the box. If we don’t support the feed already, we’ll create the integration as part of getting you set up – no charge. It’s not too hard because the data streams are pretty much identical – identifier, timestamp, x, y coordinates. There really isn’t much else to it.
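That common denominator is about as simple as data feeds get. A minimal parsing sketch (the CSV layout is invented for illustration):

```python
import csv
from io import StringIO

# Identifier, timestamp, x, y – the shared shape of these feeds.
raw = StringIO(
    "device_id,ts,x,y\n"
    "abc123,2018-05-01T10:00:05,12.4,3.1\n"
    "abc123,2018-05-01T10:00:35,13.1,5.6\n")

events = [{"device_id": r["device_id"], "ts": r["ts"],
           "x": float(r["x"]), "y": float(r["y"])}
          for r in csv.DictReader(raw)]
print(events[0])
```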

 

The measurement costs are trivial compared to what you spend on App development and small compared to what you spend on digital analytics app measurement and analysis. The data is extremely robust and – in a field plagued by bad data – quite accurate. The omni-channel join possibilities are like adding hot fudge sauce to an already delicious sundae. Paired with DM1, you can measure and optimize exactly how this critical and growing customer segment uses the store. You can study how digital and store behaviors interact. And you have an excellent data source for overall store navigation and store usage that you can pair with other data sources or use as is.

 

Okay…it may not be the biggest no-brainer in the history of earth. But adding geo-location and DM1 analytics to your mobile app is definitely the biggest no-brainer in shopper measurement.

Using Your Existing Store WiFi for Shopper Measurement

The most daunting part of doing shopper measurement isn’t the analytics, it’s the data collection piece. Nobody likes to put new technology in the store; it’s expensive and it’s a hassle. And most stores feel like they have plenty of crap dangling from their ceilings already.

 

If you’re in that camp, but would love to have real in-store shopper measurement, there are three technologies you should consider. The first, and the one I’m going to discuss today, is your existing WiFi access points.

 

Most modern WiFi access points can geo-locate the signals they receive. Now you may be thinking to yourself that the overwhelming majority of shoppers don’t connect to your WiFi. But that’s okay. Phones with their WiFi enabled regularly send probe requests to your access points even when they don’t connect. And, yes, it’s both possible and acceptable to use that for anonymous measurement.

 

What that means is that you can use your store’s WiFi to measure the journeys of a significant percentage of your shoppers. Access point tracking is incredibly convenient. Since it’s based on your existing customer WiFi system, you already have the necessary hardware. If your equipment is modern, it’s usually just a matter of flipping a software switch to get geo-location data in the cloud.

 

Providers like Meraki have been gradually improving the positional accuracy of the data and they make it super-easy to enable this and get a full data feed. And if your equipment is older or from a vendor that doesn’t do that? It’s not a lost cause. Every reasonably modern WiFi access point generates a log file that includes the basic data necessary for positional triangulation. It’s not as convenient as the cloud-based feeds that come with the best systems, but if you don’t mind doing a little bit of traditional IT file wrangling, it can work almost as well. We’ll do the heavy lifting on the positioning.
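For the curious, here’s a toy version of that positioning step – RSSI readings converted to distances with a log-distance path-loss model, then a brute-force search for the best-fitting point. Every constant here is illustrative; real systems calibrate per site:

```python
from math import hypot
from itertools import product

# Known access point positions (meters from the store's zero point).
APS = {"ap1": (0.0, 0.0), "ap2": (20.0, 0.0), "ap3": (10.0, 15.0)}

def rssi_to_distance(rssi_dbm, tx_power=-40, path_loss_exp=2.5):
    """Log-distance path-loss model; constants would be calibrated per
    store in practice."""
    return 10 ** ((tx_power - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(readings, step=0.5):
    """Grid search for the point whose distances to each AP best match
    the RSSI-derived distances. readings: {ap_name: rssi_dbm}."""
    dists = {ap: rssi_to_distance(r) for ap, r in readings.items()}
    def error(x, y):
        return sum((hypot(x - APS[ap][0], y - APS[ap][1]) - d) ** 2
                   for ap, d in dists.items())
    xs = [i * step for i in range(int(20 / step) + 1)]
    ys = [i * step for i in range(int(15 / step) + 1)]
    return min(product(xs, ys), key=lambda p: error(*p))

print(trilaterate({"ap1": -58, "ap2": -63, "ap3": -60}))
```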

 

The biggest downside to traditional WiFi measurement has been the lack of useful analytics. Working from the raw feed is very challenging for an enterprise (harder than just installing new devices) and the reporting and analytics you get out of the box from WiFi vendors is…well…about what you’d expect from WiFi vendors. Let’s just say their business isn’t analytics.

 

That’s where our DM1 platform really makes a huge difference. DM1 is an open, shopper analytics platform. It’s built to ingest ANY detailed, geo-located data stream. It can take data from your mobile app users. It can take data from dedicated measurement video cameras. It can take data from iViu passive network sniffers. Really, any measurement system that creates timestamped shopper/device and x,y coordinates can be easily ingested.

 

Your existing WiFi Access Point data fits that bill.

 

Imagine being able to take your WiFi geolocation data and with the flip of switch and no hardware install be able to do full-store pathing:

DM1 full-store shopper path analytics

 

Full in-store funnels:

DM1 in-store funnel analysis

 

Even cooler, because DM1 uses statistical methods to identify Associate devices, we’ll automatically parse that WiFi data to identify shoppers and associates. That lets you track associate presence and intraday STARs for any section of the store. No changes to store operations. No compliance issues. You can even do a path analysis on the shopper journey by salesperson or sales team:

DM1 path analytics by salesperson with Associate interactions

 

How cool is that!

 

And remember what I said about other data sources? DM1 can simultaneously ingest your mobile app user data and your WiFi data and let you track each as separate segments. You get the extra detail and positional accuracy for all your mobile shoppers along with the ability to rapidly swap views and see how the broader population of smartphone users is navigating your store.

 

Coupling DM1 to WiFi geo-location data really is the easiest, cheapest way to give serious, enterprise-class in-store shopper measurement a try.

 

And the Fine Print

If you’re wondering if there are drawbacks to WiFi measurement, the answer is yes. We see it as a great, no-pain way to get started with shopper analytics. But there are strong reasons why, to get really good measurement, you’ll need to migrate at least some stores to dedicated measurement collection. WiFi’s positional accuracy suffers in comparison to dedicated measurement devices like iViu’s or camera-based solutions. And it also measures fewer shoppers. Even compared to other means of electronic detection, you’ll lose a significant number of phones – especially iOS devices.

 

If you were reading closely, you’ll remember that I said there were three technologies to consider if you want to do shopper journey measurement without adding in-store hardware. WiFi is the easiest and the most widespread of these. But there are slam-dunk solutions for mobile app measurement that I’ll cover in my next post. And if you have relatively modern security cameras, there’s even a software-based solution that can help you turn that data into grist for the DM1 mill. That’s a solution we’ve been hoping for since day 1 – and it’s finally starting to become a reality.