Four Fatal Flaws with In-Store Tracking

I didn’t start Digital Mortar because I was impressed with the quality of the reporting and analytics platforms in the in-store customer tracking space. I didn’t look at this industry and say to myself, “Wow – here’s a bunch of great platforms that are meeting the fundamental needs in the space at an enterprise level.” Building good analytics software is hard. And while I’ve seen great examples of SaaS analytics platforms in the digital space, solutions like Adobe and Google Analytics took many years to reach a mature and satisfying form. Ten years ago, GA was a toy and Adobe (Omniture SiteCatalyst at the time) managed to be both confusing and deeply under-powered analytically. In our previous life as consultants, we had the opportunity to use the current generation of in-store customer journey measurement tools. That hands-on experience convinced me that this data is invaluable. But it also revealed deep problems with the way in-store measurement is done.

When we started building a new SaaS in-store measurement solution here at Digital Mortar, these are the problems in the technology that we wanted to solve:

Lack of Journey Measurement

Most of today’s in-store measurement systems are set up as, in essence, fancy door counters. They start by having you draw zones in the store. Then they track how many people enter each zone and how long they spend there (dwell time).

This just sucks.

It’s like the early days of digital analytics when all of our tracking was focused on the page view. We kept counting pages and thinking it meant something. Till we finally realized that it’s customers we need to understand, not pages. With zone counting, you can’t answer the questions that matter. What did customers look at first? What else did customers look at when they shopped for something specific? Did customers interact with associates? Did those interactions drive sales? Did customer engagement in an area actually drive sales? Which parts of the store were most and least efficient? Does that efficiency vary by customer type?

If you’re not asking and answering questions about customers, you’re not doing serious measurement. Measurement that can’t track the customer journey across zones just doesn’t cut it. Which brings me to…
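To make the contrast concrete, here’s a minimal sketch (the zone names and visits are entirely invented for illustration) of the difference between zone counts and journey records. The journey question “what did customers look at first?” is unanswerable from zone counts, and trivial once each visitor’s path is preserved:

```python
from collections import Counter

# Zone-counter view: all you have is entries per zone (hypothetical numbers).
zone_counts = {"entrance": 412, "denim": 188, "shoes": 145, "checkout": 97}

# Journey view: an ordered list of zone visits per (anonymous) visitor.
journeys = {
    "visitor_001": ["entrance", "denim", "shoes", "checkout"],
    "visitor_002": ["entrance", "shoes", "denim"],
    "visitor_003": ["entrance", "denim", "checkout"],
}

# "What did customers look at first?" -- here, the first zone after the
# entrance. Impossible to compute from zone_counts alone.
first_stops = Counter(path[1] for path in journeys.values() if len(path) > 1)
print(first_stops.most_common(1))  # denim is the most common first stop here
```

The same per-visitor structure is what unlocks every other journey question in the list above: cross-zone paths, associate interactions, and engagement-to-sale attribution are all just further reductions over the path.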

Lack of Segmentation

My book, Measuring the Digital World, is an extended argument for the central role of behavioral segmentation in doing customer analytics. Customer demographics and relationship variables are useful. But behavior – what customers care about right now – will nearly always be more important. If you’re trying to craft better omni-channel experiences, drive integrated marketing, or optimize associate interactions, you must focus on behavioral segmentation. The whole point of in-store customer tracking is to open up a new set of critically important customer behaviors for analysis and use. It’s all about segmentation.

Unfortunately, if you can’t track the customer journey (as per my point above), you can’t segment. It’s just that simple. When a customer is nothing more than a blip in the zone, you have no data for behavioral segmentation. Of course, even if you track the customer journey, segmentation may be deeply limited in analytic tools. You could map the improvement of Adobe or Google Analytics by charting their gradually improving segmentation capabilities: from limited filtering on pre-defined variables, to more complex query-based segmentation, to the gradual incorporation of sophisticated segmentation capabilities into the analyst’s workbench.
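As a hypothetical sketch of why the journey is the prerequisite: a behavioral segment is just a predicate over the path, which a zone count can never give you. The segment names, zones and dwell figures below are invented for illustration:

```python
# Each journey is (ordered zones visited, total dwell seconds) -- invented data.
journeys = {
    "v1": (["entrance", "denim", "shoes", "fitting_room", "checkout"], 1400),
    "v2": (["entrance", "checkout"], 180),
    "v3": (["entrance", "shoes", "denim", "shoes"], 900),
}

def segment(path, dwell):
    """Toy behavioral segmentation: classify by what the visitor did,
    not who they are."""
    if "fitting_room" in path and "checkout" in path:
        return "engaged buyer"
    if dwell < 300 and "checkout" in path:
        return "mission shopper"
    return "browser"

segments = {vid: segment(*j) for vid, j in journeys.items()}
# {'v1': 'engaged buyer', 'v2': 'mission shopper', 'v3': 'browser'}
```

Real segmentation schemes are richer than this, of course, but every one of them needs the per-visitor path as raw material.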

You can have all the fancy charts and visualizations in the world, but without robust segmentation, customer analytics is crippled.

Lack of Store Context

When I introduce audiences to in-store customer tracking, I often use a slide like this:

In-store Customer Analytics

The key point is that the basic location data about the customer journey is only meaningful when it’s mapped to the actual store. If you don’t know WHAT’S THERE, you don’t have interesting data. The failure to incorporate “what’s there” into their reporting isn’t entirely the fault of in-store tracking software. Far too many retailers still rely on poor, paper-based planograms to track store setups. But “what’s there” needs to be a fundamental part of the collection and the reporting. If data isn’t stored, aggregated, trended and reported based on “what’s there”, it just won’t be usable. Which brings me to…

Use of Heatmaps

Heatmaps sure look cool. And, let’s face it, they are specifically designed to tackle the problem of “Store Context” I just talked about. Unfortunately, they don’t work. If you’ve ever tried to describe (or just figure out) how two heatmaps differ, you can understand the problem. Dialog like “You can see there’s a little more yellow here and this area is a little less red after our test” isn’t going to cut it in a Board presentation. Because heatmaps are continuous, not discrete, you can’t trend them meaningfully. You can’t use them to document specific amounts of change. And you can’t use them to compare customer segments or changed journeys. In fact, as an analyst who’s tried firsthand to use them, I can pretty much attest that you can’t actually use heatmaps for much of anything. They are the prettiest and most useless part of in-store customer measurement systems. If heatmaps are the tool you have to solve the problem of store context, you’re doomed.

These four problems cripple most in-store customer journey solutions. It’s incredibly difficult to do good retail analytics when you can’t measure journeys, segment customers, or map your data effectively onto the store. And the ubiquity of heatmaps just makes these problems worse.

But the problems with in-store tracking solutions don’t end here. In my next post, I’ll detail several more critical shortcomings in the way most in-store tracking solutions are designed. Shortcomings that ensure not only that the analyst can’t effectively solve real-world business problems with the tool, but that they can’t get AT THE DATA with any other tools that might do better!

Want to know more about how Digital Mortar can drive better store analytics? Drop me a line.

Taking In-Store Measurement…Out of the Store

In my last few posts, I explained what in-store journey analytics is, described the basics of the technology and the data collection used, and went into some detail about its potential business uses. Throughout, and especially in that last part around business uses, I wrote on the assumption that this type of measurement is all about retail stores. After all, brick & mortar stores are the primary focus of Digital Mortar AND of nearly every company in the space. But here’s the thing: this type of measurement is broadly applicable to a wide variety of applications where customer movement through a physical environment is a part of the experience. Stadiums, malls, resorts, cruise ships, casinos, events, hospitals, retail banks, airports, train stations and even government buildings and public spaces can all benefit from understanding how physical spaces can be optimized to drive better customer or user experiences.

In these next few posts, I’m going to step outside the realm of stores and talk about the opportunities in the broader world for customer journey tracking. I’ll start by tackling some of the differences between the tracking technologies and measurement that might be appropriate in some of these areas versus retail, and then I’m going to describe specific application areas and delve a little deeper into how the technology might be used differently than in traditional retail. While the underlying measurement technology can be very similar, the type of reporting and analytics that’s useful to a stadium or resort is different than what makes sense for a mall store.

Since I’m not going to cover every application of customer journey tracking outside retail in great detail, I’ll start with some general principles of location measurement based on industry-neutral factors like the size of the space and the extent to which visitors will opt in to wifi or use an app.

Measuring BIG Spaces versus little ones

With in-store journey tracking, you have three or four alternatives when choosing the underlying measurement collection technology. Cameras, passive wifi, opt-in wifi and bluetooth, and dedicated sniffers are all plausible solutions. With large spaces like stadiums and airports, it’s often too expensive to provide comprehensive camera coverage. It can even be too expensive to deploy custom measurement devices (like sniffers). That’s especially true in environments where the downtime and wiring costs can greatly exceed the cost of the hardware itself.

So for large spaces, wifi tracking often becomes the only realistic technology for deploying a measurement system. That’s not all bad. While out-of-the-box wifi is the least accurate measurement technology, most large spaces don’t demand fine-grained resolution. In a store, a 3 meter circle of error might place a customer in a completely different section of the store. In an airport, it’s hard to imagine it would make much difference.

Key Considerations Driven by Size of Location:

  • How much measurement accuracy do you need?
  • How expensive will measurement-specific equipment and installation be, and is it worth the cost?
  • Are there special privacy considerations for your space or audience?

Opt-in vs. Anonymous Tracking

Cameras, passive wifi and sniffers can all deliver anonymous tracking. Wifi, Bluetooth and mobile apps all provide the potential for opt-in tracking. There are significant advantages to opt-in based tracking. First, it’s more accurate. Particularly for out-of-the-box passive wifi, the changes in iOS that randomize MAC addresses have crippled straightforward measurement and made reasonably accurate customer measurement a challenge. When a user connects to your wifi or opens an app, you can locate them more frequently and more precisely, and their phone identity is STABLE, so you can track them over time. If your primary interest is in understanding specific customers better for your CRM, in tracking populations over time, or if you have significant concerns about the privacy implications of anonymized passive tracking, then opt-in tracking is your best bet. However, this choice depends on one further factor: the extent to which your customers will opt in. For stadiums and resorts, log-in rates are quite high. Not so much at retail banks. Which brings us to…

Key Considerations for Opt-In Based Tracking

  • Will a significant segment of your audience opt-in?
  • Are you primarily interested in CRM (where opt-in is critical) or in journey analytics (which can be anonymous)?

How good is the sample?

Some technologies (like cameras) provide comprehensive coverage by default. Most other measurement technologies inherently take some sample. Any form of signal detection will start with a sample that includes only people with phones. That isn’t much of a sample limitation, though it will exclude most smaller children. Passive methods further restrict the population to people with wifi turned on. Most estimates place the wifi-activated rate at around 80%. That’s a fairly high number, and it seems unlikely that this factor introduces significant sample bias. However, when you start factoring in things like Android users, app downloaders or wifi log-ins, you’re often introducing significant reductions in sample size AND adding sample biases that may or may not be difficult to control for. App users probably aren’t a representative sample of, for example, the likelihood of a shopper to convert in a store. But even if they are a small percentage of your total users, they are likely perfectly representative of how long people spend queuing in lines at a resort. One of the poorly understood aspects of measurement science is that the same sample can be horribly biased for some purposes but perfectly useful for others!
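A small simulation makes the point. All of the numbers below are synthetic, chosen only to illustrate the mechanism: app users self-select toward higher conversion, but queue time has nothing to do with app use, so the same opt-in sample is badly biased for one metric and nearly unbiased for the other.

```python
import random

random.seed(7)

# Synthetic population: app users convert more often (self-selection),
# but queue time is unrelated to app use. All parameters are invented.
population = []
for _ in range(10_000):
    has_app = random.random() < 0.15
    converts = random.random() < (0.40 if has_app else 0.10)
    queue_minutes = random.gauss(12, 3)
    population.append((has_app, converts, queue_minutes))

app_users = [p for p in population if p[0]]

pop_conv = sum(p[1] for p in population) / len(population)
app_conv = sum(p[1] for p in app_users) / len(app_users)

pop_queue = sum(p[2] for p in population) / len(population)
app_queue = sum(p[2] for p in app_users) / len(app_users)

print(f"conversion: population {pop_conv:.2f} vs app sample {app_conv:.2f}")
print(f"queue time: population {pop_queue:.1f} vs app sample {app_queue:.1f}")
```

Running this, the app sample wildly overstates conversion but matches the population on queue time almost exactly, which is the whole argument in two lines of output.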

Key Considerations for Sampling

  • Does your measurement collection system bias your measurement in important ways?
  • Are people who opt-in a representative sample for your measurement purposes?

The broad characteristics that define what type of measurement system is right for your needs are, of course, determined by what questions you need to answer. I’ll take a close look at some of the business questions for specific applications like sports stadiums next time. In general, though, large facilities by their very nature need less fine-grained measurement than smaller ones. For most applications outside of retail, being able to locate a person within a 3 meter circle is perfectly adequate. And while the questions being answered are often quite specific to an application area, there is a broad and important divide between measurement that’s primarily focused on understanding patterns of movement and analysis that’s focused on understanding specific customers. When you’re most interested in traffic patterns, samples work very well. Even highly biased samples will often serve. If, on the other hand, you’re looking to use customer journey tracking to understand specific customers or customer segments (like season-ticket holders) better, you should focus on opt-in based techniques. In those situations, identification trumps accuracy.

If you have questions about the right location-based measurement technology solution for your business, drop us a line at info@digitalmortar.com

Next up, I’ll tackle the surprisingly interesting world of stadium/arena measurement.

Why do we need to track customers when we know what they buy?

Digital Mortar is committed to bringing a whole new generation of measurement and analytics to the in-store customer journey. What I mean by that “new generation” is that our approach embodies more complete and far more accurate data collection. I mean that it provides far more interesting and directive reports. And I mean that our analytics will make a store (or other physical space) work better. But how does that happen, and why do we need to track customers inside the store when we know what they buy? After all, it’s not as if traditional stores are unmeasured. Stores have, at minimum, PoS data and store merchandising and operations data. In other words, we know what we had to sell, we know how many people we used to sell it, and we know how much (and what, and at what profit) we actually sold.

That stuff is vital and deeply explanatory.

It constitutes the data necessary to optimize assortment, manage (to some extent) staffing needs, allocate staff to areas, and understand which categories are pulling their weight. It can even, with market basket analysis, help us understand which products are associated in customers’ shopping behaviors and can form the basis for layout optimization.

We come from a digital analytics background – analyzing customer experience on eCommerce sites, we often faced a similar situation. The back-office systems told us which products were purchased, which were bought together, which categories were most successful. You didn’t need a digital analytics solution to tell you any of that. So if you bought, implemented and tried to use a digital analytics solution and those were your questions…well, you were going to be disappointed. Not because a digital analytics solution couldn’t provide answers; it just couldn’t provide better answers than you already had.

It’s the same with in-store tracking systems, which is why, when we’re building our system, evaluating reports or doing analysis for clients at Digital Mortar, I find myself using the PoS test. The PoS test is just this pretty simple question: does using the customer’s in-store journey to answer the question provide better, more useful information than simply knowing what customers bought?

When the answer is yes, we build it. But sometimes the answer is no – and we just leave well enough alone.

Let me give you some real-life examples to show why the PoS test can help clarify what in-store tracking is for. Here are three different reports based on understanding the in-store customer journey:

#1: There are regular in-store events hosted by each location. With in-store tracking, we can measure the browsing impact of these events and see if they encourage people to shop products.

#2: There are sometimes significant category performance differences between locations. With in-store tracking, we can measure whether the performance differences are driven by layout, by traffic type, by weather or by area shopper preferences.

#3: Matching staffing levels to store traffic can be tricky. Are there times when a store is understaffed, leaving sales, literally, on the table? With in-store tracking we can measure associate/customer ratios, interactions and performance, and we can identify whether and how often lowered interaction rates lost sales.

I think all three of these reports are potentially interesting – they’re perfectly reasonable to ask for and to produce.

With #1, however, I have to wonder how much value in-store tracking will add beyond PoS data. I can just as easily correlate PoS data to event times to see if events drive additional sales. What I don’t know is whether event attendees browse but don’t buy. If I do this analysis with in-store tracking data, the first question I’ll get is “But did they buy anything?” If, on the other hand, I do the analysis with PoS data, I’m much less likely to hear “But did they browse the store?” So while in-store tracking adds a little bit of information to the problem, it’s probably not the best or the easiest way to understand the impact of store events. We chose not to include this type of report in our base report set, even though we do let people integrate and view this type of data.

Question #2 is quite different. The question starts with sales data. We see differences in category sales by store. So more PoS data isn’t going to help. When you want to know why sales are different (by day, by store, by region, etc.), then you’ll need other types of data. Obviously, you’ll need square footage to understand efficiency, but the type of store layout data you can bring to bear is probably even more critical than measures of efficiency. With in-store tracking you can see how often a category functions as a draw (where customers go first), how it gets traffic from associated areas, how much opportunity it had, and how well it actually performed. Along with weather and associate interaction data, you have almost every factor you’re likely to need to really understand the drivers of performance. We made sure this kind of analytics is easy in our tool. Not just by integrating PoS data, but by making sure that it’s possible to understand and compare how store layouts shape category browsing and buying.

Question #3 is somewhere in between. By matching staffing data to PoS data, I can see if there are times when I look understaffed. But I’m missing significant pieces of information if I try to optimize staff using only PoS data. Door-counting data can take this one step further and help me understand when interaction opportunities were highest (and most underserved). With full in-store journey tracking, I can refine my answers to individual categories/departments and make sure I’m evaluating real opportunities, not, for example, mall pass-throughs. So in-store journey tracking deepens and sharpens the answer to staffing gaps well beyond what can be achieved with only PoS data or even PoS and door-counting data. This time, we chose to include staff optimization reports (actually a whole bunch of them) in the base product. Even though you can do interesting analysis with just PoS data, there’s too much missing to make decision-makers informed and confident enough to make changes. And making changes is what it’s all about.
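The core of that staffing-gap logic is conceptually simple once journey-based traffic by department exists. Here’s a hedged sketch with invented hourly numbers and an invented service threshold; a real report would obviously work from measured data:

```python
# Hourly counts for one department -- all numbers invented for illustration.
traffic = {"10:00": 45, "11:00": 80, "12:00": 140, "13:00": 130, "14:00": 60}
staff   = {"10:00": 3,  "11:00": 4,  "12:00": 4,   "13:00": 5,   "14:00": 3}

MAX_SHOPPERS_PER_ASSOCIATE = 25  # hypothetical service threshold

def staffing_gaps(traffic, staff, threshold=MAX_SHOPPERS_PER_ASSOCIATE):
    """Flag hours where the shopper-to-associate ratio exceeds the threshold."""
    return {
        hour: round(traffic[hour] / staff[hour], 1)
        for hour in traffic
        if traffic[hour] / staff[hour] > threshold
    }

print(staffing_gaps(traffic, staff))  # {'12:00': 35.0, '13:00': 26.0}
```

The point of the journey data is what feeds `traffic`: department-level counts of real shopping opportunities rather than door counts or pass-through traffic.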


We all know the old saying about everything looking like a nail when your only tool is a hammer. But the truth is that we often fixate on a particular tool even when many others are near to hand. You can answer all sorts of questions with in-store journey tracking data, but some of those questions can be answered as well or better using your existing PoS or door-counting data. This sort of analytics duplication isn’t unique to in-store tracking. It’s ubiquitous in data analytics in general. Before you start buying systems, using reports or delving into a tool, it’s almost always worth asking if it’s the right/easiest/best data for the job. It just so happens that with in-store tracking data, asking how and whether it extends PoS data is almost always a good place to start.

In creating the DM tool, we’ve tried to do a lot of that work for you. And by applying the PoS test, we think we’ve created a report set that helps guide you to the best uses of in-store tracking data. The uses that take full advantage of what makes this data unique and that don’t waste your time with stuff you already (should) know.