
Seeing into the Soul of the Shopper

The integration of behavioral research and Voice of Customer is remarkably fruitful. I saw that firsthand in a project we recently completed with our partner Insight Safari for a top 10 retailer in the United States. For the project, Insight Safari’s research teams fanned out across the country and did shopper interviews at stores in markets like Pittsburgh, Dallas, and Los Angeles. This is deep qualitative research – designed to get to the emotional core of shopping decisions and see how attitudes and intent shape in-store behavior. It’s what Insight Safari does. But this time there was a behavioral twist. Shoppers were given a mobile device with Digital Mortar’s tracking built in. And at the end of the journey, the survey giver was able to tailor the survey based on a detailed map of exactly what the shopper had done.

In-Store VoC Shopper Journey Analytics with Digital Mortar

Integrating a behavioral view into an attitudinal project enriches the work immeasurably. But it’s not trivial to do, and there’s a reason why Insight Safari is uniquely well-positioned to do it. To understand both the challenge and the opportunity, a little background on voice-of-customer and behavioral analytics is necessary.

 

VoC and Behavioral Analytics

Voice of Customer research usually tries to capture four key elements about a shopper: who the shopper is (demographics), what the shopper did (behavior), why the shopper did it (drivers), and how the shopper felt about it (satisfaction). One of the things that makes opinion research an art is finding the right balance between each of these. And there’s always an opportunity cost – any time you spend in one category inevitably reduces the amount of time you spend in another. Beyond the opportunity cost, though, it’s particularly challenging to disentangle a shopper’s description of behavior from their description of drivers. Ask a shopper why they came to the store and then ask them what they did, and the answer they give to the first is highly likely to influence the answer they give to the second. What’s more, the shopper edits the shopping session down by their internal measures of what mattered – they forget the distractions, the things they looked at but didn’t buy, and the places they went that turned out not to be important. But if you’re the store, understanding those failure points is what you’re really after!

Many of the shopping sessions that we tracked with Insight Safari lasted 30 minutes to an hour. Think about that. How likely are you to be able to say what you looked at and explain how you navigated a store over that amount of time?

Insight Safari sometimes hires a videographer to (literally) stalk the shoppers and record sample journeys in the store. But that strategy falls victim to its own kind of quantum uncertainty – the act of measurement alters the system dynamics too much. We’re all used to having a phone in our pocket – but unless you’re a Kardashian, having a videographer following you around just doesn’t feel natural.

It turns out that of the four primary elements VoC tries to capture, understanding what the shopper did is actually the hardest to get right with self-reporting. There’s an amusing anecdote we like to tell from our days in digital analytics. One of our clients had a very negative satisfaction score for internal search sessions (super common for various reasons, ranging from the type of sessions that use internal search to most internal search engines being really crappy). Interestingly, though, when we actually integrated their online survey data with their behavioral data, we found that almost a third of the visitors who complained most about search on the site hadn’t actually “searched”. We were asking about internal search – typing keywords into a little box – but they were thinking about “searching the site and not finding what they were looking for”.

What’s more, we found that for a significant percentage of visitors, their self-reported visit reason just didn’t square with their actual behavior. A shopper might report that they were in the store to pick up clothes for the kids, but spend nearly all their time in the beauty aisle. It’s not that shoppers are lying about their behavior. Mostly they just aren’t objective or reflective about it. But getting through those layers of thoughtlessness is hard – sometimes flat out impossible. And getting even a remotely accurate approximation of the shopper’s behavior takes deep, detailed questioning that inevitably chews up a lot of time (opportunity cost) and leaves the analyst wondering how accurate the behavioral account really is.

So imagine if instead of having to interrogate the shopper about behavior – Did you do this? What about this? Did you look at this on the way? Did you stop here? Did you go down this aisle? – you could just SEE their behavior. Every twist, every turn, every linger point!

Suddenly you’re not guessing about shopper behavior or the accuracy of self-reporting. You can focus the interview entirely on the why and the satisfaction of the experience. And you can use details of the behavior to call back elements of the journey to the shopper’s mind. What were you looking at when you stopped here? Why did you go back to the electronics department 3 times? What made you turn down this aisle?

It’s powerful.

But it’s not as easy as it looks, either. And understanding why it’s a little harder than it seems illuminates what makes Insight Safari particularly able to take advantage of the Digital Mortar behavioral data.

 

The Biggest Challenge Integrating Behavioral Data into the Survey Process

Voice of Customer data runs the gamut from highly quantitative (large sample sizes, standardized surveys) to fully anecdotal (guided focus groups). There’s interesting work to be done at any place along this spectrum and Insight Safari customizes the research approach to fit the business questions in play. But their specialty and primary focus is on going deep into shopper motivations and psyche – and that’s best done in the more personal, anecdotal end of the spectrum. At the same time, they like to have enough data to infer how common core shopper motivations are and how likely those are to play out in a given store. So Insight Safari usually works in the range of hundreds of surveys – not tens of thousands like we did in digital analytics and not 5-10 like a focus group company.

Most companies who take hundreds of surveys rely on quite a bit of standardization in the survey design. Each shopper essentially takes the same survey, with minor deviations for branching.

This sucks for a variety of reasons. Unless you know specifically what you’re looking for, it’s likely to miss the interesting parts of most shoppers’ journeys. And if you’ve ever worked with this kind of data, you know that it’s almost certain to raise issues that leave you wishing you’d been able to ask one more question to really understand what the shopper was thinking! It can be frustrating!

But a rigid survey design also means that the behavioral data isn’t mapped into the questioning process. It can’t be – because you don’t know the behaviors in advance. So while it’s possible to compare, for example, stated visit intent with actual shopping behavior, you aren’t using the data to drive the questions.

Insight Safari doesn’t work that way. Their survey givers aren’t part-timers hired the day before to hang out in the store. They use research professionals – the equivalent of full-on focus group leaders – who are deeply knowledgeable about survey research. So their survey isn’t a rigid script but a loose framework that ensures they collect like kinds of data from each shopper while leaving the giver free to delve into interesting answers in great depth.

That turns out to be perfect for integrating behavioral data.

When shoppers finished their journey, the survey giver would enter the survey respondent id on their iPad and then get the detailed breakdown of what the shopper did. Right then. While they were talking with the shopper.

And Insight Safari’s pros seamlessly integrated that into the flow of questions – even using the path visualization to directly explore decisions with the shopper. Most companies just don’t use survey givers skilled enough to do that. That’s no big knock. I’m not skilled enough to do that. Being able to drive intelligent field research takes an unusual combination of people skills, empathy, and objective analytic prowess. You have to be able to think fast, be nice, and listen closely. It’s the equivalent of having no prep time and still being able to do a great interview. Not easy.

There are ways to take the behavioral data and create survey integrations that are more mechanistic but still capture much of the uniqueness of the shopper journey. But there aren’t many companies who could take this kind of in-store behavioral data and integrate it as deeply and seamlessly into their process as Insight Safari.

 

A Little About the Software

We customized our system pretty extensively for Insight Safari. We built a small mobile app (Android-based) with a really, really simple user interface. The survey giver just had to press a button to start a survey and, when the phone was returned, press the stop button to end recording. The App pinged out every 2 seconds with the shopper’s geo-location and included the survey id. We stored that information in our real-time database. The shopper never had to do anything with the phone or app. They could carry it or attach it to their cart.

The App also created a local store of the information in case there were connectivity problems (we had a few but not many). This allowed the App to send the survey data whenever connectivity was restored.

When the survey giver got the phone back and pressed Stop, the phone sent a message to the server and the session was closed. Once closed, the session immediately surfaced in a custom report in the DM1 platform showing the most recent surveys completed.
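For the technically curious, here’s a minimal sketch of that ping-and-buffer loop. It’s in Python for readability (the real app was Android), and the endpoint, payload fields and helper names are all illustrative rather than our production code:

```python
import queue
import time

import requests  # assumed HTTP client; the real app was a small Android build

PING_URL = "https://collect.example.com/ingest"  # hypothetical endpoint

def flush(buffer):
    """Try to send everything buffered; on failure, requeue the unsent
    ping and report no progress so the caller can retry later."""
    ping = None
    try:
        while not buffer.empty():
            ping = buffer.get_nowait()
            requests.post(PING_URL, json=ping, timeout=2)
            ping = None
        return True
    except requests.RequestException:
        if ping is not None:
            buffer.put(ping)  # keep the failed ping for the next attempt
        return False

def run_session(survey_id, get_position, stop_flag, interval=2.0):
    """Ping out every `interval` seconds with the shopper's geo-location
    and the survey id, buffering locally across connectivity drops."""
    buffer = queue.Queue()
    while not stop_flag.is_set():  # stop_flag is a threading.Event
        x, y, z = get_position()   # device position, however it's sourced
        buffer.put({"survey_id": survey_id, "ts": time.time(),
                    "x": x, "y": y, "z": z})
        flush(buffer)              # best effort; failures stay queued
        time.sleep(interval)
    buffer.put({"survey_id": survey_id, "ts": time.time(), "event": "stop"})
    while not flush(buffer):       # drain the backlog once we're done
        time.sleep(5)
```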

The survey giver can search for any previous respondent, but almost never has to do that. They just click on the most recent survey closed and get the detailed behavioral report.

That report includes two elements: a tabular breakdown of the visit by time spent and a graphical animation of the shopper visit laid over the digital planogram of the store. The tabular view is sorted by time spent and shows all the sections the shopper visited, how much time they spent, and whether they returned to the section (went to it more than once). The animation is built on top of the store layout view (a core part of DM1) and replays the journey in fifteen seconds, with time spent proportional to replay time.
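If you’re curious what the computation behind that tabular view amounts to, here’s a rough sketch. The input format and field choices are assumptions for illustration, not DM1’s actual schema:

```python
from itertools import groupby

def section_breakdown(pings):
    """Build the tabular view from time-ordered (timestamp, section) pings:
    total seconds per section, plus a flag for sections entered more than
    once."""
    # Collapse consecutive pings in the same section into one entry.
    entries = []
    for section, run in groupby(pings, key=lambda p: p[1]):
        run = list(run)
        entries.append((section, run[-1][0] - run[0][0]))
    totals, times_entered = {}, {}
    for section, seconds in entries:
        totals[section] = totals.get(section, 0) + seconds
        times_entered[section] = times_entered.get(section, 0) + 1
    rows = [(s, totals[s], times_entered[s] > 1) for s in totals]
    return sorted(rows, key=lambda r: r[1], reverse=True)  # by time spent
```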

In-Store Shopper Measurement and VoC Store Surveys and Digital Mortar

This custom report view is what the survey giver uses to drive the survey.

But it’s not the only report available. Since all the data is collected, it can also be analyzed quantitatively in the core DM1 Workbench and it can even be segmented by survey response variables uploaded through the meta-data interface.

It’s a compelling combination – helping drive the survey itself, providing a rich quantification of the data afterward, and making it easy for Insight Safari to show how specific individual patterns translate into significant population segments.

 

And a Little bit About the Results

Obviously the findings are both totally proprietary and highly particularized to the client. This isn’t the sort of research that leads to some standardized best-practice recommendation. But there are some high-level aspects of the project that I found striking.

First, while there are some very straightforward shopping visits where the behavior is crisp and matches closely to intent, the number of those visits is dramatically lower than what we see when we look at Websites. Most visits are amazingly complex squiggly patterns that bear only a passing resemblance to any kind of coordinated exploration of the store.

Sure, there are visits where, for example, a race-track pattern is dominant. But in almost all those visits there are at least a few strange loops, diversions, and short-cuts. Further, the degree to which shopper intent doesn’t capture the intricacy (or even the primary focus) of the visit is much more visible in store visits than in comparable Website visits. Stores are just better distractors than Websites – and the physical realities of navigating a space create many more opportunities for divergence.

Second, the ability to see how experiential elements like coffee bars shaped both the behavior and the emotional tenor of the shopper journey was fascinating. It’s really hard to measure how these elements drive PoS, but when you hear how people talk about them and how much they change their sense and description of the shopping experience, it really comes alive. Making shoppers want to come to the store is part and parcel of today’s retail mission. And hearing how a smile from a barista can transform a chore into a reprieve from the daily grind is just one of the ways that VoC can make behavioral data sing.

And lastly, these behavior patterns are often most telling for what shoppers didn’t do. In case after case, we could see shoppers lop off parts of the journey that seemed like the logical extensions of their basic path. Some of those turning points were highly individual and probably hard to act on – but others showed up with a consistency that made it clear that for some journeys, the store layout just wasn’t optimal.

 

Get a Piece of the Action

I don’t think there’s a store in the world that wouldn’t benefit from this kind of thoughtful research. Intelligent Voice of Customer is always provocative and useful. And the integration of Digital Mortar’s behavioral journey mapping into the Insight Safari process lets them do what they do at a level that simply can’t be matched with any other technique. It truly is the best of both worlds.

To learn more, give either of us a shout!

Machine Learning and Optimizing the Store

My previous post covered the first half of my presentation on Machine Learning (ML) and store analytics at the Toronto Symposium. Here, I’m going to work through the case study on using ML to derive an optimal store path. For that analysis, we used our DM1 platform to source, clean, map and aggregate the data and then worked with a data science partner (DXi) on the actual analysis.

Why this problem?

Within DM1 we feel pretty good about the way we’ve built out visualizations of the store data that are easy to use and surprisingly powerful. The Full Path View, Funnel View and Store Layout View all provide really good ways to explore shopper paths in the store.

But for an analyst, exploring data and figuring out a model are utterly different tasks. A typical store presents a nearly infinite number of possible paths – even when the paths are aggregated up to section level. So there’s no way to just explore the paths and find optimal ones.

Even at the most basic level of examining individual shopper paths, deciding what’s good and bad is really hard. Here are two shopper paths in a store:

Which is better? Does either have issues? It’s pretty hard to know.

 

Why Machine Learning?

Optimal store pathing meets the basic requirements for using supervised ML – we have a lot of data and we have a success criterion (checkout). But ML isn’t worth deploying on every problem that has a lot of data and a success criterion. I think about it this way – if I can get what I want by writing simple algorithmic code, then I don’t need ML. In other words, if I can write (for example) a sort and then some simple if-then rules that will identify the best path or find problem path points, then that’s what we’ll do. If, for example, I just wanted to identify sections that didn’t convert well, it would be trivial to do that. I have a conversion efficiency metric, I sort by it (ascending) and then I take the worst performers. Or maybe I have a conversion threshold and simply pick any Section that performs worse. Maybe I even calculate a standard deviation and select any Section that falls more than 1 standard deviation below the average Section conversion efficiency. All easy.
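To make that concrete, the entire non-ML version fits in a few lines. This is just the simple logic I described, sketched in Python:

```python
from statistics import mean, stdev

def weak_sections(conversion_efficiency, n_worst=5):
    """The simple non-ML logic: sort sections ascending by conversion
    efficiency, and separately flag anything more than one standard
    deviation below the mean."""
    values = list(conversion_efficiency.values())
    avg, sd = mean(values), stdev(values)
    worst = sorted(conversion_efficiency, key=conversion_efficiency.get)[:n_worst]
    below_1sd = [s for s, v in conversion_efficiency.items() if v < avg - sd]
    return worst, below_1sd

# e.g. weak_sections({"Shoes": 0.31, "Denim": 0.12, "Clearance": 0.09, ...})
```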

But none of those things are really very useful when it comes to finding poor path performance in a robust fashion.

So we tried ML.

 

The Analysis Basics

The analysis was focused on a mid-sized apparel store with around 25 sections. We had more than 25,000 shopper visits. Which may not seem like very much if you’re used to digital analytics, but is a pretty good behavior base for a store. In addition to the basic shopper journey, we also had Associate interaction points (and time of interaction), and whether or not the shopper converted. The goal was to find potential store layout problems and understand which parts of the store contributed to (or subtracted from) overall conversion efficiency.

Preparing the Data

The first step in any analysis (once you know what you want) is usually data preparation.

Our data starts off as a stream of location events. Those location events have X, Y, Z coordinates that are offset from a zero point in the store. In the DM1 platform, we take that data and map it against a digital planogram capability that keeps a full, historical record of the store. That tells us what shoppers actually looked at and where they spent time. This is the single most critical step in turning the raw data into something that’s analytically useful.

Since we also track Associates, we can track interaction points by overlaying the Associate data stream on top of the shopper stream. This isn’t perfect – it’s easy to miss short interactions or be confused by a crowded store – but particularly when it’s app-to-app tracking, it works pretty well. Associate interaction points are hugely important in the store (as the subsequent analysis will prove).

Step 3 is knowing whether and when a shopper purchased. Most of the standard machine learning algorithms require a way to determine whether a behavior pattern was successful or not – that’s what they are optimizing to. We’re using purchase as our success metric.

The underlying event data gets aggregated into a single row per shopper visit. That row contains a visit identifier, a start and stop time, an interaction count, a first interaction time, a last interaction time, the first section visited, the time spent in each section and, of course, our success metric – a purchase flag.

That’s it.
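For readers who like to see data prep as code, here’s roughly what that roll-up looks like in pandas. The column names are illustrative, not DM1’s actual schema, and events are assumed to be time-ordered:

```python
import pandas as pd

def visit_rows(events: pd.DataFrame) -> pd.DataFrame:
    """Roll event-level data (visit_id, ts, section, seconds,
    is_interaction, purchased) up to one row per shopper visit."""
    interactions = events[events["is_interaction"]]
    visits = events.groupby("visit_id").agg(
        start=("ts", "min"),
        stop=("ts", "max"),
        first_section=("section", "first"),  # assumes time-ordered events
        purchased=("purchased", "max"),
    )
    visits["interaction_count"] = interactions.groupby("visit_id").size()
    visits["first_interaction"] = interactions.groupby("visit_id")["ts"].min()
    visits["last_interaction"] = interactions.groupby("visit_id")["ts"].max()
    visits["interaction_count"] = visits["interaction_count"].fillna(0)
    # Wide columns: time spent in each store section.
    per_section = events.pivot_table(index="visit_id", columns="section",
                                     values="seconds", aggfunc="sum",
                                     fill_value=0)
    return visits.join(per_section)
```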

The actual analytic heavy lifting was done by DXi on their machine learning platform. They use an ensemble approach – throwing the kitchen sink at the problem by using 25+ different algorithms to identify potential winners/losers (if you’d like more info or an introduction to them, drop me a line and I’ll connect you).
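To give a flavor of the ensemble idea – and only a flavor, this is a toy stand-in, not DXi’s platform – here’s what a feature-importance “vote” across a few scikit-learn models might look like:

```python
from collections import Counter

from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

def importance_votes(X, y, top_k=5):
    """Each model nominates its top-k features (store sections) and
    nominations are tallied. DXi runs 25+ algorithms; three stand in
    for the idea here. X is a DataFrame of visit rows, y the purchase flag."""
    models = [RandomForestClassifier(n_estimators=200, random_state=0),
              GradientBoostingClassifier(random_state=0),
              LogisticRegression(max_iter=1000)]
    votes = Counter()
    for model in models:
        model.fit(X, y)
        scores = getattr(model, "feature_importances_", None)
        if scores is None:          # linear models expose coefficients instead
            scores = abs(model.coef_[0])
        votes.update(X.columns[scores.argsort()[::-1][:top_k]])
    return votes.most_common()      # sections picked by the most models
```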

 

Findings

Here’s some of the interesting stuff that surfaced, plucked from the Case-Study I gave at the Symposium:

One of the poorest performing sections – not picked as important by a single DXi ML algorithm – sits right smack dab in the middle of the store. That central position really surprised us. Yes, as you’ll see in a moment, the store has a successful right-rail pattern – but this was a fairly trafficked spot with good sightlines and easy flow into high-value areas of the store.

Didn’t work well though. And that’s definitely worth thinking about from a store layout perspective.

One common browsing behavior for shoppers is a race-track pattern – navigating around the perimeter of the store. There’s a good example of that on the right-side image I showed earlier:

The main navigation path through the store is the red rectangle (red because this shopper spent considerable time there) – and you can see that while the shopper frequently deviated from that main path, their overall journey was a circuit around the store.

The ML algos don’t know anything about that – but they did pick out the relevant sections in the analyzed store along that race-track path as really important for conversion.

We took that to mean that the store is working well for that race-track shopper type. An important learning.

For this particular store, casual shoes was picked as important by every ML algorithm – making it the most important section of the store. It also had the largest optimal time value – and clearly rewarded more time with higher conversion rates. Shoes, of course, is going to be this way. It’s not a grab-and-go item. So there’s an element of the obvious here – something you should expect when you unleash ML on a dataset (and hey – most analytics projects will, if they work at all, vacillate between the interesting and the obvious). But even compared to other types of shoe, this section performed better and rewarded more time spent – so there is an apples-to-apples part of this comparison as well.

The next finding was an interesting one and illustrates a bit of the balance you need to strike between the analyst and the algorithm. The display in question was located fairly close to cash-wrap on a common path to checkout. It didn’t perform horribly in the ML – some of the DXi algorithms did pick it as important for conversion. On the other hand, it was one of the few sections with a negative weighting on time spent – more time spent meant a less likely conversion. We interpreted that combination as indicating that the section’s success was driven by geography, not efficiency. It’s kind of like comparing Saudi Arabia to U.S. shale drillers. Based purely on the numbers, Saudi Arabia looks super efficient and successful, with the lowest cost per barrel of oil extracted in the world. But when you factor in the geographic challenges, the picture changes completely. Saudi Arabia has the easiest path to oil recovery in the world. Shale producers face huge and complex technical challenges and still manage to be price competitive. Geography matters – and that’s just a core fact of in-store analytics.

Our take on the numbers when we sifted through the DXi findings was that this section was actually underperforming. It might take a real A/B test to prove that, but regardless I think it’s a good example of how an analyst has to do more than run an algorithm. It’s easy to fool even very sophisticated algorithms with strong correlations and so much of our post-analysis ANALYSIS was about understanding how the store geography and the algorithm results play together.

In addition to navigation findings like these, the analysis also included the impact of Associates on conversion. In general, the answer we got was the more interactions the merrier (at the cash register). Not every store may yield the same finding (and it’s also worth thinking about whether a single conversion optimization metric is appropriate here – in my Why Analytics Fails talk I argue for the value in picking potentially countervailing KPIs like conversion and shopper satisfaction as dual optimization points).

Even after multiple interactions, additional interactions had a positive impact on sales.

This should be obvious, but I’ll hearken back to our early digital analytics days to make a point. We sometimes found that viewing more pages on a Website was a driver of conversion success. But that didn’t mean chopping pages in half (as one client did) so that the user had to consume more pages to read the same content was a good strategy.

Just because multiple Associate interactions in a store with a normal interaction strategy created lift, it doesn’t mean that, for example, having your Associates tackle customers (INTERACTIOOOON!!!) as they navigate the floor will boost conversion.

But in this case, too much interaction was a legitimate concern. And the data indicates that – at least as measured by conversion rates – the concern did not manifest itself in shopper turn-off.

If you’re interested in getting the whole deck – just drop me a note. It’s a nice intro into the kind of shopper journey tracking you can do with our DM1 platform and some of the ways that machine learning can be used to drive better practice. And, as I mentioned, if you’d like to check out the DXi stuff – and it’s interesting from a pure digital perspective too – drop me a line and I’ll introduce you.

The Measurement Minute

If I’m known for anything, it’s mind-numbingly long blog posts. Brevity? Not my style. But I’ve been challenging myself to go shorter and the Measurement Minute is the ultimate test. These are one-minute podcasts covering just about anything measurement and analytics related. I’ll try to keep them coming. Though as many famous writers have remarked (or been said to remark) – making things shorter takes time.

Check it out on iTunes:

https://t.co/t5CK2NkZRz

Machine Learning and Store Analytics

Not too long ago I spoke in Toronto at a Symposium focused on Machine Learning to describe what we’ve done and are trying to do with Machine Learning (ML) in our DM1 platform and with store analytics in general. Machine Learning is, in some respects, a fraught topic these days. When something is high on the hype cycle, the tendency is to either believe it’s the answer to every problem or to dismiss the whole thing as an illusion. The first answer is never right. The second sometimes is. But ML isn’t an illusion – it’s a real capability with a fair number of appropriate applications. I want to cover – from our hands-on, practical perspective – where we’ve used ML and why, and show a case study of some of the results.

 

Just what is Machine Learning?

In its most parochial form, ML is really nothing more than a set of (fairly mature) statistical techniques dressed up in new clothes.

Here’s a wonderful extract from the class notes of a Stanford University expert on ML: (http://statweb.stanford.edu/~tibs/stat315a/glossary.pdf)

Machine Learning vs Statistics

It’s pretty clear why we should all be talking ML not statistics! And seriously, wasn’t data science enough of a salary upgrade for statisticians without throwing ML into the hopper?

Unlike big data, I have no desire in this case to draw any profound definitional difference between ML and statistics. In my mind, I think of ML as being the domain of neural networks, deep learning and Support Vector Machines (SVMs). Statistics is the stuff we all know and love like regression and factor analysis and p values. That’s a largely ad hoc distinction (and it’s particularly thin on the unsupervised learning front), but I think it mostly captures what people are thinking when they talk about these two disciplines.

 

What Problems Have We Tried to Solve with ML

At a high-level, we’ve tackled three types of problems with ML (as I’ve casually defined it): improving data quality, shopper type classification, and optimal store path analysis.

Data quality is by far the least sexy of these applications, but it’s also the area where we’ve done the most work and where the everyday application of our platform takes actual advantage of some ML work.

When we set up a client instance on DM1, there are a number of highly specific configurations that control how data gets processed. These configurations help guide the platform in key tasks like distinguishing Associate electronic devices from shopper devices. Why is this so important? Well, if you confuse Associates with shoppers, you’ll grossly over-count shoppers in the store. Equally bad, you’ll miss out on a real treasure trove of Associate data including when Associate/Shopper interactions occur, the ratio of Shoppers to Associates (STARs), and the length and outcome of interactions. That’s all very powerful.

If you identify store devices, it’s easy enough to signature them in software. But we wanted a system that would do the same work without having to formally identify store devices. Not only does this make it a lot easier to set up a store, it fixes a ton of compliance issues. You may tell Associates not to carry their own devices on the floor, but if you think that rule is universally followed, you’re kidding yourself. So even if you BLE badge employees, you’re still likely picking up their personal phones as shopper devices. By adding behavioral identification of Associates, we make the data better and more accurate while minimizing (in most cases removing) operational impact.

We use a combination of rule-based logic and ML to classify Associate behavior on ALL incoming devices. It turns out that Associates behave quite differently in stores than shoppers. They spend more time. Go places shoppers can’t. Show up more often. Enter at different times. Exit at different times. They’re different. Some of those differences are easily captured in simple IF-then programming logic – but often the patterns are fairly complex. They’re different, but not so easily categorized. That’s where the ML kicks in.
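As a sketch of how rules and ML can split the work, imagine something like this. The feature names and thresholds are invented for illustration – the real classification uses richer behavioral signatures:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def flag_associates(features: pd.DataFrame, known_labels: pd.Series):
    """Hybrid rule-plus-ML classification. `features` holds per-device
    behavioral summaries (column names and thresholds are hypothetical);
    `known_labels` holds devices whose status is already known."""
    # Rules catch the obvious cases: time in employee-only zones, or a
    # device seen on many distinct days.
    obvious = (features["backroom_minutes"] > 5) | (features["days_seen"] > 10)
    is_associate = obvious.copy()
    # A model trained on labeled devices handles the ambiguous remainder.
    clf = GradientBoostingClassifier(random_state=0)
    clf.fit(features.loc[known_labels.index], known_labels)
    ambiguous = ~obvious
    is_associate.loc[ambiguous] = clf.predict(features.loc[ambiguous]).astype(bool)
    return is_associate
```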

We also work in a lot of electronically dense environments. So we not only need to identify Associates, we need to be able to pick out static devices (like display computers, endless aisle tablets, etc.). That sounds easy, and in fact it mostly is. But it’s not quite as trivial as it sounds; given the vagaries of positioning tech, a static device is never quite static. We don’t get the same location every time – so we have to be able to distinguish between real movement and the type of small, Brownian motion we get from a static device.
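A minimal version of that static-device test might look like the following, assuming positions have been collected over a reasonably long window. The radius threshold is illustrative:

```python
import numpy as np

def looks_static(xs, ys, radius_m=1.5):
    """Treat a device as static when nearly all of its reported positions
    fall within a small radius of their centroid. The 1.5 m radius is an
    illustrative threshold; real jitter depends on the positioning tech."""
    xs, ys = np.asarray(xs), np.asarray(ys)
    dists = np.hypot(xs - xs.mean(), ys - ys.mean())
    return np.percentile(dists, 95) < radius_m  # tolerate a few outliers
```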

Fixing data quality is never all that exciting, but in the world of shopper journey measurement it’s essential. Without real work to improve the data – work that ML happens to be appropriate for – the data isn’t good enough.

The second use we’ve found for machine learning is in shopper classification. We’re building a generalized shopper segmentation capability into the next release of DM1. The idea is pretty straightforward. For years, I’ve championed the notion of 2-tiered segmentation in digital analytics. That’s just a fancy name for adding a visit-type segmentation to an existing customer segmentation. And the exact same concept applies to stores.

As consultants, we typically built highly customized segmentation schemes. Since Digital Mortar is a platform company, that’s not a viable approach for us. Instead, what we’ve done is taken a set of fairly common in-store behavioral patterns and generalized their behavioral signatures. These patterns include things like “Clearance Shoppers”, “Right-Rail Shoppers”, “Single Product Focused Shoppers”, “Product Returners”, and “Multi-Product Browsers”. By mapping store elements to key behavior points, any store can then take advantage of this pre-existing ML-driven segmentation.

Digital Mortar's DM1 Shopper Segmentation

It’s pretty cool stuff and I’m excited to get it into the DM1 platform.

The last problem we’ve tackled with ML is finding optimal store paths. This one’s more complex – more complex than we’ve been comfortable taking on directly. We have a lot of experience in segmentation techniques – from cluster analysis to random forests to SVMs. We’re pretty comfortable with that problem set. But for optimal path analysis, we’ve been working with DXi. They’re an ML company with a digital heritage and a lot of experience working on event-level digital data. We’ve always said that a big part of what drew us to store journey measurement is how similar the data is to digital journey data and this was a chance to put that idea to the test. We’ve given them some of our data and had them work on some optimal path problems – essentially figuring out whether the store layout is as good as possible.

Why use a partner for this? I’ve written before about how I think Digital Mortar and the DM1 platform fit in a broader analytics technology stack for retail. DM1 provides a comprehensive measurement system for shopper tracking and highly bespoke reporting appropriate to store analytics. It’s not meant to be a general purpose analytics platform and it’s never going to have the capabilities of tools like Tableau or R or Watson. Those are super-powerful general-purpose analytics tools that cover a wide range of visualization, data exploration and analytic needs. Instead of trying to duplicate those solutions we’ve made it really easy (and free) to export the event level data you need to drive those tools from our platform data.

I don’t see DM1 becoming an ML platform. As analysts, we’ll continue to find uses for ML where we think it’s appropriate and embed those uses in the application. But trying to replicate dedicated ML tools in DM1 just doesn’t make a lot of sense to me.

In my next post, I’ll take a deeper dive into that DXi work, give a high-level view of the analytics process, and show some of the more interesting results.

Using Your Existing Store WiFi for Shopper Measurement

The most daunting part of doing shopper measurement isn’t the analytics, it’s the data collection piece. Nobody likes to put new technology in the store; it’s expensive and it’s a hassle. And most stores feel like they have plenty of crap dangling from their ceilings already.

 

If you’re in that camp, but would love to have real in-store shopper measurement, there are three technologies you should consider. The first, and the one I’m going to discuss today, is your existing WiFi access points.

 

Most modern WiFi access points can geo-locate the signals they receive. Now you may be thinking to yourself that the overwhelming majority of shoppers don’t connect to your WiFi. But that’s okay. Phones with their WiFi enabled ping out to your access points on a regular basis even when they don’t connect to your WiFi. And, yes, it’s both possible and acceptable to use that for anonymous measurement.

 

What that means is that you can use your store’s WiFi to measure the journeys of a significant percentage of your shoppers. Access point tracking is incredibly convenient. Since it’s based on your existing customer WiFi system, you already have the necessary hardware. If your equipment is modern, it’s usually just a matter of flipping a software switch to get geo-location data in the cloud.

 

Providers like Meraki have been gradually improving the positional accuracy of the data and they make it super easy to enable this and get a full data feed. And if your equipment is older or from a vendor that doesn’t do that? It’s not a lost cause. Every reasonably modern WiFi access point generates a log file that includes the basic data necessary for positional triangulation. It’s not as convenient as the cloud-based feeds that come with the best systems, but if you don’t mind doing a little bit of traditional IT file wrangling, it can work almost as well. We’ll do the heavy lifting on the positioning.
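If you’re wondering what “positional triangulation” from those logs involves, here’s the shape of the math: a rough RSSI trilateration sketch. The path-loss constants are illustrative and would need per-store calibration – this is the technique, not our production positioning:

```python
import numpy as np
from scipy.optimize import least_squares

def locate_probe(ap_positions, rssi, tx_power=-40.0, path_loss_n=2.5):
    """Rough trilateration of one probe burst seen by several access
    points. `ap_positions` is a (k, 2) array of AP x,y coordinates and
    `rssi` the signal strength each AP reported for the burst."""
    ap_positions = np.asarray(ap_positions, dtype=float)
    # Log-distance path loss model: RSSI -> estimated range in metres.
    ranges = 10 ** ((tx_power - np.asarray(rssi)) / (10 * path_loss_n))

    def residuals(p):  # distance from candidate point to each AP vs range
        return np.hypot(*(ap_positions - p).T) - ranges

    return least_squares(residuals, x0=ap_positions.mean(axis=0)).x
```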

 

The biggest downside to traditional WiFi measurement has been the lack of useful analytics. Working from the raw feed is very challenging for an enterprise (harder than just installing new devices) and the reporting and analytics you get out of the box from WiFi vendors is…well…about what you’d expect from WiFi vendors. Let’s just say their business isn’t analytics.

 

That’s where our DM1 platform really makes a huge difference. DM1 is an open shopper analytics platform. It’s built to ingest ANY detailed, geo-located data stream. It can take data from your mobile app users. It can take data from dedicated measurement video cameras. It can take data from iViu passive network sniffers. Really, any measurement system that creates timestamped shopper/device ids and x,y coordinates can be easily ingested.

 

Your existing WiFi Access Point data fits that bill.

 

Imagine being able to take your WiFi geolocation data and, with the flip of a switch and no hardware install, be able to do full-store pathing:

DM1 Digital Mortar store analytics full shopper path analytics

 

Full in-store funnels:

Digital Mortar Store Analytics DM1 Funnel Analysis for retail analytics and shopper tracking

 

Even cooler, because DM1 uses statistical methods to identify Associate devices, we’ll automatically parse that WiFi data to identify shoppers and associates. That lets you track associate presence and intraday STARs for any section of the store. No changes to store operations. No compliance issues. You can even do a path analysis on the shopper journey by salesperson or sales team:

DM1 Retail Analytics digital mortar full store path analytics and associate interactions

 

How cool is that!

 

And remember what I said about other data sources? DM1 can simultaneously ingest your mobile app user data and your WiFi data and let you track each as separate segments. You get the extra detail and positional accuracy for all your mobile shoppers along with the ability to rapidly swap views and see how the broader population of smartphone users is navigating your store.

 

Coupling DM1 to WiFi geo-location data really is the easiest, cheapest way to give serious, enterprise-class in-store shopper measurement a try.

 

And the Fine Print

If you’re wondering if there are drawbacks to WiFi measurement, the answer is yes. We see it as a great, no-pain way to get started with shopper analytics. But there are strong reasons why, to get really good measurement, you’ll need to migrate at least some stores to dedicated measurement collection. WiFi’s positional accuracy suffers in comparison to dedicated measurement devices like iViu’s or camera-based solutions. And it also measures fewer shoppers. Even compared to other means of electronic detection, you’ll lose a significant number of phones – especially iOS devices.

 

If you were reading closely, you’ll remember that I said there were three technologies to consider if you want to do shopper journey measurement without adding in-store hardware. WiFi is the easiest and the most widespread of these. But there are slam-dunk solutions for mobile app measurement that I’ll cover in my next post. And if you have relatively modern security cameras, there’s even a software-based solution that can help you turn that data into grist for the DM1 mill. That’s a solution we’ve been hoping for since day 1 – and it’s finally starting to become a reality.

The Role of General Purpose BI & Data Viz Tools for In-Store Location Analytics and Shopper Measurement

One of the most important questions in analytics today is the role for bespoke measurement and analytics versus BI and data visualization tools. Bespoke measurement tools provide end-to-end measurement and analytics around a particular type of problem. Google Analytics, Adobe Analytics, our own DM1 platform are all examples of bespoke measurement solutions. Virtually every industry vertical has them. In health care, there are products like GSI Health and EQ Health that are focused on specific health-care problems. In hospitality, there are solutions like IDeaS and Kriya that focus on revenue management. At the same time, there are a range of powerful, general purpose tools like Tableau, Spotfire, Domo, and Qlik that can do a very broad range of dashboarding, reporting and analytic tasks (and do them very well indeed). It’s always fair game to ask when you’d use one or the other and whether or not a general purpose tool is all you need.

 

It’s a particularly important question when it comes to in-store location analytics. Digital analytics tools grew up in a market where data collection was largely closed and at a time when traditional BI and Data Viz tools had almost no ability to manage event-level data. So almost every enterprise adopted a digital analytics solution and then, as they found applications for more general-purpose tools, added them to the mix. With in-store tracking, many of the data collection platforms are open (thank god). So it’s possible to take data directly from them.

 

Particularly for sophisticated analytics teams that have been using tools like Tableau and Qlik for digital and consumer analytics, there is a sense that the combination of a general purpose data viz tool and a powerful statistical analysis tool like R is all they really need for almost any data set. And for the most part, the bespoke analytics solutions that have been available are shockingly limited – making the move to tools like Tableau an easy decision.

 

But our DM1 platform changes that equation. It doesn’t make it wrong. But I think it makes it only half-right. For any sophisticated analytics shop, using a general purpose data visualization tool and a powerful stats package is still de rigueur. For a variety of reasons, though, adding a bespoke analytics tool like DM1 also makes sense. Here’s why:

 

Why Users’ Level of Sophistication Matters

The main issue at stake is whether or not a problem set benefits from bespoke analytics (and, equally germane, whether bespoke tools actually deliver on that potential benefit). Most bespoke analytics tools deliver some combination of table reports and charting. In general, neither is delivered as well as general purpose tools do the job. Even outstanding tools like Google Analytics don’t stack up to tools like Tableau when it comes to these basic data reporting and visualization tasks. On the other hand, bespoke tools sometimes make it easier to get that basic information – which is why they can be quite a bit better than general purpose tools for less sophisticated users. If you want simple reports that are pre-built and capture important business-specific metrics in ways that make sense right off the bat, then a bespoke tool will likely be better for you. For a reasonably sophisticated analytics team, though, that just doesn’t matter. They don’t need someone else to tell them what’s important. And they certainly don’t have a hard time building reports in tools like Tableau.

 

So if the only value-add from a bespoke tool is pre-built reports, it’s easy to make the decision. If you need that extra help figuring out what matters, go bespoke. If you don’t, go general purpose.

 

But that’s not always the only value in bespoke tools.

 

 

Why Some Problems Benefit from Bespoke

Every problem set has some unique aspects. But many, many data problems fit within a fairly straightforward set of techniques. Probably the most common are cube-based tabular reporting, time-trended data visualization, and geo-mapping. If your measurement problem is centered on either of the first two, then a general purpose tool is going to be hard to beat. They’ve optimized the heck out of this type of reporting and visualization. Geo-mapping is a little more complicated. General purpose tools do a very good job of basic and even moderately sophisticated geo-mapping problems. They are great for putting together basic geo-maps that show overlay data (things like displaying census or purchase data on top of DMAs or zip codes). They can handle, but work less well on, tasks that involve more complicated geo-mapping functions like route or area-size optimization. For those kinds of tasks, you’d likely benefit from a dedicated geo-mapping solution.

 

When it comes to in-store tracking, there are 4 problems that I think derive considerable benefit from bespoke analytics. They are: data quality control, store layout visualization and associated digital planogram maintenance, path analysis, and funnel analysis. I’ll cover each to show what’s at stake and why a bespoke tool can add value.

 

 

Data Clean-up and Associate Identification

Raw data streams off store measurement feeds are messy! Well, that’s no surprise. Nearly all raw data feeds have significant clean-up challenges. I’m going to deal with electronic data here, but camera data has similar if slightly different challenges too. Data directly off an electronic feed typically has at least three significant challenges:

 

  • Bad Frame Data
  • Static Device Identification
  • Associate Device Identification

 

There are two types of bad frame data: cases where the location is flawed and cases where you get a single measurement. In the first case, you have to decide whether to fix the frame or throw it away. In the second, you have to decide whether a single frame measurement is correct or not. Neither decision is trivial.
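One common heuristic for the first case – a sketch, not necessarily what any given pipeline does – is to reject frames that imply impossible movement speeds:

```python
def drop_bad_frames(frames, max_speed_mps=3.0):
    """Reject any frame implying the device moved faster than a
    walking-speed ceiling (3 m/s here, an illustrative threshold) since
    the last kept frame. `frames` is a time-ordered list of (t, x, y)."""
    kept = [frames[0]]
    for t, x, y in frames[1:]:
        t0, x0, y0 = kept[-1]
        dt = max(t - t0, 1e-9)
        speed = ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 / dt
        if speed <= max_speed_mps:
            kept.append((t, x, y))
    return kept
```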

 

Static device identification presents its own challenge. It seems like it ought to be trivial. If you get a bunch of pings from the same location, you throw it away. Sadly, static devices are never quite static. Blockage and measurement error tend to produce some movement in the specific X/Y coordinates reported – so a static device never looks entirely still. This is a case where our grid system helps tremendously. And we’ve developed algorithms that help us pick out, label and discard static devices.

 

Associate identification is the most fraught problem. Even if you issue employee devices and provide a table to track them, you’ll almost certainly find that many Associates carry additional devices (yes, even if it’s against policy). If you don’t think that’s true, you’re just not paying attention to the data! You need algorithms to identify devices as Associates and tag that device signature appropriately.

 

Now all of these problems can be handled in traditional ETL tools. But they are a pain in the ass to get right. And they aren’t problems that you’ll want to try to solve in the data viz solution. So you’re looking at real IT jobs based around some fairly heavy duty ETL. It’s a lot of work. Work that you have to custom pay for. Work that can easily go wrong. Work that you have to stay on top of or risk having garbage data drive bad analysis. In short, it’s one of those problems it’s better to have a vendor tackle.

 

 

Store Layout Visualization

The underlying data stream when it comes to in-store tracking is very basic. Each data record contains a timestamp, a device id, and X, Y, Z coordinates. That’s about it. To make this data interesting, you need to map the X, Y, Z coordinates to the store. Doing that involves creating (or using) a digital planogram. If you have that, it’s not terribly difficult to load the data into a data viz tool and use it as the basis for aggregation. But it’s not a very flexible or adaptable solution. If you want to break out data differently than in those digital planograms, you’ll have to edit the database by hand. You’ll have to create time-based queries that use the right digital layouts (this is no picnic and will kill the performance of most data viz tools), and you’ll have to build meta-data tables by hand. This is not the kind of stuff that data visualization tools are good at, and trying to use them this way is going to be much harder – especially for a team where a reasonable, shareable workflow is critical.
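To see why this gets painful by hand, here’s a bare-bones sketch of just the date-effective lookup – and note that it ignores planogram creation, meta-data, sharing and performance entirely. The grid structure and cell size are assumptions for illustration:

```python
from bisect import bisect_right

class PlanogramHistory:
    """Date-effective mapping from (x, y) to a store section: each
    planogram is a grid of cell -> section assignments effective from a
    given date, so every event maps through the planogram in force when
    it happened."""

    def __init__(self, cell_m=0.5):
        self.cell_m = cell_m
        self.effective_dates = []   # kept sorted; call add() in date order
        self.grids = []             # each grid: {(col, row): section}

    def add(self, effective_date, grid):
        self.effective_dates.append(effective_date)
        self.grids.append(grid)

    def section_for(self, when, x, y):
        # Newest planogram that is not newer than the event timestamp.
        i = max(bisect_right(self.effective_dates, when) - 1, 0)
        cell = (int(x // self.cell_m), int(y // self.cell_m))
        return self.grids[i].get(cell, "unmapped")
```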

 

Contrast that to doing the same tasks in DM1.

 

Digital Mortar's DM1 retail analytics and shopper tracking - digital planogram capability

DM1 provides a full digital store planogram builder. It allows you to create (or modify) digital planograms with a point-and-click interface. It tracks planograms historically and automatically uses the right one for any given date. It maintains all the meta-data around a digital planogram, letting you easily map to multiple hierarchies or across multiple physical dimensions. And it allows you to seamlessly share everything you build.

 

Digital Mortar's DM1 retail analytics and shopper tracking - store layout and heatmapping visualization

Once you’ve got those digital planograms, DM1’s reporting is tightly integrated. It’s seamless to display metrics across every level of metadata right on the digital planogram. What’s more, our grid model makes the translation of individual measurement points into defined areas repeatable at even fine-grained levels of the store. If you’re relying on pre-built planograms, that’s just not available. And keep in mind that the underlying data is event-based. So if you want to know how many people spent more than a certain amount of time at a particular area of the store, you’ll have to pre-aggregate a bunch of data to use it effectively in a tool like Tableau. Not so in DM1, where every query runs against the event data and the mapping to the digital planogram and subsequent calculation of time spent is done on the fly, in-memory. It’s profoundly more flexible and much, much faster.

 

 

Path Analysis

Pathing is one of those tasks that’s very challenging for traditional BI tools. Digital analytics tools often distinguished themselves by their ability to do comprehensive pathing: both in terms of performance (you have to run a lot of detailed data) and visualization (it’s no picnic to visualize the myriad paths that represent real visitor behavior). Adobe Analytics, for example, sports a terrific pathing tool that makes it easy to visualize paths, filter and prune them, and even segment across them. Still, as nice as digital pathing is, a lot of advanced BI teams have found that it’s less useful than you might think. Websites tend to have very high cardinality (lots of pages). That makes for very complex pathing – with tens of thousands or even hundreds of thousands of slightly variant paths adding up to important behaviors. Based on that experience, when we first built DM1, we left pathing on the drawing board. But it turns out that pathing is more limited in a physical space and, because of that, actually more interesting. So our latest DM1 release includes a robust pathing tool based on the types of tools we were used to in digital.

Digital Mortars DM1 retail analytics and shopper tracking - Full Path Analysis

With the path analysis, you start from any place in the store and see how people got there and where they went next. Even better, you can keep extending that view by drilling down into subsequent nodes. You can measure simple foot paths, or you can look at paths in terms of engagement spots (DM1 has two different metrics that represent increasing levels of engagement), and you can path at any level of the store: section, department, display…whatever.

And, just like the digital analytics tools, you can segment the paths as well. We even show which paths had the highest conversion percentages.
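The core computation behind that drill-down is simple to state, even if making it fast and explorable isn’t. A toy version, with invented inputs:

```python
from collections import Counter

def next_steps(paths, start, depth=2):
    """Aggregate 'where did shoppers go next' from a chosen starting node,
    the basic computation behind a drill-down path view. `paths` is a list
    of section sequences, one per shopper visit."""
    counts = Counter()
    for path in paths:
        for i, node in enumerate(path):
            if node == start and i + 1 < len(path):
                counts[tuple(path[i + 1:i + 1 + depth])] += 1
    return counts.most_common()

# e.g. next_steps(all_paths, "Denim") might yield
# [(("Shoes", "Checkout"), 211), (("Clearance",), 134), ...]
```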

 

Sure, you could work some SQL wizardry and get at something like this in a general purpose viz tool. But A) it would be hard. B) it would be slow. And C) it wouldn’t look as good or work nearly as well for data exploration.

 

 

Funnel Analysis

Digital Mortars DM1 funnel analytics for retail and shopper tracking

When I demo DM1, I always wrap up by showing the funnel visualization. It shows off the platform’s ability to do point-to-point-to-point analysis on a store and fill in key information along the way. Funnel analysis wraps up a bunch of stuff that’s hard in traditional BI. The visualization is non-standard, the metrics are challenging to calculate, the data is event-driven and can’t be aggregated into easy reporting structures, and effective usage requires the ability to map things like engagement time to any level of meta-data.

Digital Mortar's DM1 retail analytics shopper tracking funnel analytics

In the funnels here, you can see how we can effectively mix levels of engagement: how long people spent at a given meta-data defined area of the store, whether or not they had an interaction, whether they visited (for any amount of time) a totally different area of the store, and then what they purchased. The first funnel describes Section conversion efficiency. The second looks at the cross-over between Mens/Womens areas of the store.

And the third traces the path of shoppers who interacted with Digital Signage. No coding necessary and only minutes to setup.

 

That’s powerful!
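Under the hood, a funnel like this reduces to filtering visits stage by stage. Here’s a toy sketch – the visit-record fields are assumptions for illustration:

```python
def funnel(visits, stages):
    """Count visits surviving each stage in order. A stage is any predicate
    over a visit record, so dwell thresholds, interactions and purchase can
    all be mixed in one funnel."""
    counts, remaining = [], list(visits)
    for label, passes in stages:
        remaining = [v for v in remaining if passes(v)]
        counts.append((label, len(remaining)))
    return counts

# A Section-efficiency style funnel might look like:
# funnel(all_visits, [
#     ("Visited Mens",  lambda v: v["seconds"].get("Mens", 0) > 0),
#     ("Engaged 60s+",  lambda v: v["seconds"].get("Mens", 0) >= 60),
#     ("Interacted",    lambda v: v["interactions"] > 0),
#     ("Purchased",     lambda v: v["purchased"]),
# ])
```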

 

As with path analysis, an analyst can replicate this kind of data with some very complicated SQL or programmatic logic. But it’s damn hard and likely non-performant. It’s also error-prone and difficult to replicate. And, of course, you lose the easy maintainability that DM1’s digital planograms and meta-data provide. What might take days working in low-level tools takes just a few minutes with the Funnel tool in DM1.

 

 

Finally, Don’t Forget to Consider the Basic Economics

It usually costs more to get more. But there are times and situations where that’s not necessarily the case. I know of large-scale retailers who purchase in-store tracking data feeds. And the data feed is all they care about, since they’re focused on using BI and stats tools. Oddly, though, they often end up paying more than if they purchased DM1 and took our data feed. Odd, because it’s not unusual for that data feed to be sourced by the exact same collection technology but re-sold by a company that’s tacking on a huge markup for the privilege of giving you unprocessed raw data. So the data is identical.

Except even that’s not quite right. Because we’ve done a lot of work to clean up that same data source, and when we process it and generate our data feed, the data is cleaner. We throw out bad data points, analyze static and Associate devices and separate them, map Associate interactions, and map the data to digital planograms. Essentially all for free. And because DM1 doesn’t charge extra for the feed, it’s often cheaper to get DM1 AND the feed than just somebody else’s feed. I know. It makes no sense. But it’s true. So even if you bought DM1 and never opened the platform, you’d be saving money and have better data. It would be a shame not to use the software, but…it’s really stupid to pay more for demonstrably less of the same thing.

 

Bottom Line

I have a huge amount of respect for the quality and power of today’s general purpose data visualization tools. You can do almost anything with those tools. And no good analytics team should live without them. But as I once observed to a friend of mine who used Excel for word processing, just because you can do anything in Excel doesn’t mean you should do everything in Excel! In store analytics, there are real reasons why a bespoke analytics package will add value to your analytics toolkit. Will any bespoke solution replace those data viz tools? Nope. Frankly, we don’t want to do that.

 

I know that DM1’s charting and tabular reporting are no match for what you can do easily in those tools. That’s why DM1 comes complete with a baked-in, no-extra-charge data feed of the cleaned event-level data and a corresponding visitor-level CRM feed. We want you to use those tools. But as deep analytics practitioners who are fairly expert in those tools, we know there are some things they don’t make as easy as we’d like. That’s what DM1 is designed to do. It’s been built with a strong eye on what an enterprise analyst (and team) needs that wouldn’t be delivered by an off-the-shelf BI or data viz tool.

 

We think that’s the right approach for anyone designing a bespoke analytics or reporting package these days. Knowing that we don’t need to replace a tool like Tableau makes it easier for us to concentrate on delivering features and functionality that make a difference.

A Deeper Dive in How To Use Digital Mortar’s DM1

Over the last year, we’ve released a string of videos showing DM1 in action. These are marketing videos, meant to show off the capabilities of the platform and give people a sense of how it can be used. Last week, though, we pushed a set of product How-To videos out to our YouTube channel. These videos are designed to walk new users through aspects of the product and also to support users of our Sandbox. For quite a while we’ve had a cloud-based Sandbox that partners can use to learn the product. In the next month or so, we’re going to take that Sandbox to the next level and make it available on the Google Cloud as part of a test drive. That means ANYONE will be able to roll their own DM1 instance for 24 hours – complete with store data from our test areas.

The videos are designed to help users go into the Sandbox and experiment with the product productively.

There are four videos in the initial set and here’s a quick look at each:

Dashboards: When I demo the product, I don’t actually spend much time showing the DM1 Dashboard. Sometimes I don’t show it at all since I tend to focus on the more interesting analytic stuff. But the Dashboard is the first thing you see when you open the product – and it’s also the (built-in) reporting view that most non-analysts are going to take advantage of. The Dashboard How-to walks through the (very simple) process of creating Panels (reports) and Alerts in the Dashboard and shows each type of viz and alert. Alerts, in particular, are interesting. Using Alerts, you can choose to always show a KPI, or have it pop only when a metric exceeds some change or threshold. From my marketing videos, you probably wouldn’t even realize DM1 has this capability, but it’s actually pretty cool.

https://t.co/LIHTgMCpeQ

Workbench: This is a quick tour of the entire Analytics Workbench. Most of this is stuff you do see in my other videos since this is where I tend to spend time. But the How-To video walks through the Left-Navigation options in the Workbench more carefully than I usually do in Marketing Videos and also shows Viz types like the DayMap that I often give short shrift.

https://t.co/lM553x5XNw

Store Configuration: Digital Planograms are at the heart of DM1 and they underlie ALL the reporting in the Analytics Workbench (and are flat out the Viz in the Layout view). We’ve built a very robust point-and-click Configuration tool for building those Planograms. It’s a huge part of the product and a major differentiator. There’s nothing else like it out there. But because it’s more plumbing than countertop, I usually don’t show it at all in marketing videos. The How To vid shows how you can open, edit and save an existing digital planogram and how easy it is to create a new one.

https://t.co/I5O66H6g5K

Metadata: The store configurator maps the store and allows you to assign any part of the store to….well, that’s where metadata comes in. DM1’s Admin interface includes a meta-data builder where you describe the Sections, Departments, Displays, Functions, Team Areas, etc. that matter to you. Meta-data is what makes basic locational data come alive. And DM1’s very robust capability lets you define unlimited hierarchies, unlimited levels per hierarchy, and unlimited categories per level. What’s the word of the day around metadata? Unlimited. It’s pretty powerful, but it’s really pretty easy to do as well, and the How To vid gives you a nice little taste. And holy frig – I forgot to mention that not everyone on my team thought I should say “holy frig” in this video – but I left it in anyway.

https://t.co/YENzD6TMqC

It’s really capabilities like the Metadata builder and the Store Configurator that make DM1 true enterprise analytics. They provide the foundational elements that let you manage complex store setups and generate consistently interesting analytic reporting. Even if you’re not a user yet, check em out. If nothing else, you’ll be ready for a Test-Drive!