
An Easy Introduction to In-Store Measurement and Retail Analytics with DM1

My last post made the case that investing in store measurement and location analytics is a good move from a career perspective. The reward? Becoming a leader in a discipline that’s poised to grow dramatically. The risk? Ending up with a skill set that isn’t much in demand. For most people, though, risk/reward is only part of the equation. There are people who will expend the years and the effort to become a lawyer even without liking the law – simply on the basis of its economic return. I’m not a fan of that kind of thinking. To me, it undervalues human time and overvalues the impact of incremental prosperity. So my last and most important argument was simple: in-store measurement and location analytics is fun and interesting.

But there aren’t many ways to figure out whether in-store measurement is your cup of tea, are there?

So I put together another video using our DM1 platform that’s designed to give folks a quick introduction to basic in-store measurement.

It’s a straightforward, short (3 minute) introduction to basic concepts in store-tracking with DM1 – using just the Store Layout tool.

The video walks through three core tasks for in-store measurement: understanding what customers do in-store, evaluating how well the store itself performed, and drilling into at least one aspect of performance drivers with a look at Associate interactions.

The first section walks through a series of basic metrics in store location analytics. Starting with where shoppers went, it shows increasingly sophisticated views that cover what drew shoppers into the store, how much time shoppers spend in different areas, and which parts of the store shoppers engaged with most often:

retail analytics: measuring store efficiency and conversion with DM1

The next section focuses on measures of store efficiency and conversion. It shows how you can track basic conversion metrics, analyze how proximity to the cash-wrap drives impulse conversion, and analyze unsuccessful visits in terms of exit and bounce points.

DM1 Layout Overview Video

Going from what to why is probably the hardest task in behavioral analytics. In the third section, I take a quick dive into a set of associate metrics to show how they can help that journey along. Understanding where associates ARE relative to shoppers (this is where the geo-spatial element is critical), when and where associates create lift, and whether your deployment of associates is optimized for creating lift can be a powerful part of explaining shopper success.

retail analytics with DM1 - analyzing associate performance, STARs and lift with DM1

The whole video is super-quick (just 3 minutes in total) and unlike most of what I’ve done in the past, it doesn’t require audio. There’s a brief audio introduction (about 15 seconds) but for the rest, the screen annotations should give you a pretty good sense of what’s going on if you prefer to view videos in quiet mode.

I know you’re not going to learn in-store measurement in 3 minutes. And this is just a tiny fraction of the analytic capability in a product like DM1. It’s more of an amuse bouche – a little taste –  to see if you find something enjoyable and interesting.

I’m going to be working through a series of videos intended to serve that purpose (and also provide instructional content for new DM1 users). As part of that, I’m working on a broader overview right now that will show off more of the tools available. Then I’m going to work on building a library of instructional vids for each part of DM1 – from configuring a store to creating and using metadata (like store events) to a deep dive into funnel analytics.

I’d love to hear what you think about this initial effort!

Check it out:

Analyzing the In-Store Journey as a Funnel with DM1

Visualizing the customer journey in the context of the store is the foundation for analyzing in-store data. The metrics and the store context provide a framework for translating customer measurement data into something that is immediately understandable as a shopper’s journey. But visualizing information is just the first step in making it actionable. Understanding the data is, of course, essential. But you can understand data quite well and still have no idea what to do with it. In fact, that’s a problem we see all the time with analytics. And while it’s a problem that no technology solution can solve entirely (since there are always business and organizational issues to be tackled),  there are analytic and reporting techniques that can really help. We’ve built a number of them into DM1, starting with in-store funnel analytics.

The idea behind a conversion funnel is simple. The customer journey is chopped up into discrete steps based on increasing likelihood to purchase. If we analyze the journey by those discrete steps, we can work to optimize the flow from one step to the next. Improve the flow between any funnel step and the next, and the chance is excellent that you’ll improve the overall funnel conversion as well. Funnels give you a specific place to start. They let you figure out which parts of the overall customer journey are already working well and which aren’t. They let you focus on specific areas with the confidence that if you can improve performance you’ll make a significant difference. And they make it possible to easily measure success. All you have to measure is the number of people moving from one step to the next.
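
To make the funnel arithmetic concrete, here’s a minimal Python sketch (my own illustration with made-up step names and counts, not DM1’s API or actual output) of how step-to-step and overall conversion fall out of the step counts:

```python
# Minimal funnel-conversion sketch. Step names and counts are hypothetical.
funnel = [
    ("Entered store", 84_000),
    ("Visited section", 15_000),
    ("Lingered in section", 2_100),
    ("Reached cash-wrap", 600),
]

previous_count = None
for step, count in funnel:
    if previous_count is None:
        print(f"{step}: {count:,}")
    else:
        step_rate = count / previous_count    # conversion from the prior step
        overall_rate = count / funnel[0][1]   # conversion from the top of the funnel
        print(f"{step}: {count:,} ({step_rate:.1%} of prior step, {overall_rate:.1%} overall)")
    previous_count = count
```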

Funnels are THE paradigm for analytics and optimization in eCommerce. In fact, it was largely on their ability to help merchants understand and improve eCommerce funnels that digital analytics solutions first gained traction. And to this day, eCommerce testing and analytics practitioners almost always work by breaking down the customer journey into funnel steps and then working to optimize each step. While the measurement of funnels is itself interesting, I think the real value in funnel analysis is the process it supports. That ability to target specific aspects of the journey, figure out which ones are the most broken, and then test possible improvements is at the heart of so much of the continuous improvement that makes digital players successful.

One of our big goals with Digital Mortar is to bring the in-store funnel paradigm and the discipline of continuous improvement to the store. DM1 delivers on the technology and analytic part of that program.

With DM1, you can start a funnel at any place in the store and at any stage in the customer journey. But the most natural place to start is with a shopper entering the store. As you can see, DM1 lets you choose any area of the store you’ve defined and lets you pick from a range of engagement metrics.

Retail Analytics - In-Store Shopper Funnel DM1

 

Nearly 84 thousand shoppers entered the store in October. Since that’s where the measurement starts, this first step of the funnel doesn’t have any fallout. Everyone I measured, by definition, entered the store. It’s worth noting – and I get asked this a lot – that you CAN track pass-by traffic if you set up the measurement system appropriately. Doing so allows you to extend the funnel outside the store!

I could build a store-wide funnel, looking at conversion across the whole store. But it’s usually more interesting and actionable to focus a bit. So my funnel is going to focus on a specific section of the store – Team Gear.

Retail Analytics - In-Store Shopper Funnel Linger and Consideration

Adding “Visits to Team Gear” to the funnel, I can see that around 15 thousand shoppers – about 18% of store visitors – visited Team Gear. It took the average visitor about two minutes from entry to reach Team Gear, which makes sense because this area is near the front of the store.

But one of the real complexities of in-store measurement is that, since shoppers are navigating a physical environment, they often pass through areas without being interested in them. That doesn’t happen much in digital.

I want to know how many people SHOPPED in Team Gear out of the folks who had the opportunity. And I can see that by selecting Lingers as my metric in the next funnel step. These last two steps illustrate a powerful metric in store measurement that’s simply never been available before. Stores have been able to measure conversion (checkouts/door entries) at the macro level, but at the area level this gets reduced to sales per square foot.

Retail Analytics - In-Store Shopper Funnel fallout

That isn’t reflective of the real opportunity a square foot provides. By measuring where shoppers actually WENT and where they SHOPPED, we have a real KPI of how well a section is performing given its opportunity.

Only about 1 in 7 shoppers who passed through Team Gear actually shopped there. That’s a problem I’d probably want to tackle.
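
As a quick worked example (using the rounded figures from this walkthrough, so treat them as approximate), the opportunity-based KPI is just lingers divided by pass-through visits:

```python
# Opportunity-based section KPI, with rounded figures from the walkthrough.
team_gear_visits = 15_000   # shoppers who passed through the section
team_gear_lingers = 2_100   # shoppers who actually stopped and shopped (~1 in 7)

shop_rate = team_gear_lingers / team_gear_visits
print(f"Team Gear shop rate: {shop_rate:.1%}")  # ~14% of the section's real opportunity
```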

From here, I can add Fitting Room and CashWrap to the funnel. At every step along the way I can see how many shoppers I’m losing from the total opportunity. I can also see how much time is passing and how many stops the shopper made in-between.

In the end, I have a customer funnel for Team Gear that runs from Store Entry to Cash-Wrap that looks like this:

Retail Analytics - In-Store Shopper Funnel and Funnel Analytics

Any start place. Any level of engagement. Any steps in between. DM1 builds the funnels you need to support analytics and testing.

Pretty cool.

There’s no doubt in my mind that the picture of the shopper journey that DM1 provides drives better understanding. But as I said earlier, analytics isn’t improvement. It’s a way to drive improvement.

The funnel paradigm works less because of its analytic potential than because of the process it helps define. In-store funnels focus optimization efforts and make them easily measurable. Whether I tackle the step with the highest abandonment rate, try to build the initial opportunity, or attempt to remove distractions between key steps, funnel analysis helps guide my reasoning about what to test in the store and provides a fully baked way to measure whether store changes drove the desired behavior.

Retail Analytics: Store Visualization and DM1

Location analytics isn’t really about where the shopper was. After all, a stream of X,Y coordinates doesn’t tell us much about the shopper. The interesting fact is what was there – in the store – where the shopper was. To answer most questions about the shopper’s experience (what they were interested in, what they might have bought but didn’t, whether they had sales help or not, and what they passed but didn’t consider), we have to understand the store. In my last post, I explained why the most common method of mapping behavior to the store – heatmaps – doesn’t work very well. Today, I’m going to tackle how DM1 does it differently and (in my humble opinion) much better.

Here are the seven requirements I listed for Store Visualization and where and why heatmaps come up short:

Store Visualization: Heatmaps and retail analytics

Designing DM1’s store visualization, I started with the idea that its core function is to represent how an area of the store is performing. Not a point. An area. That’s an important distinction. Heatmaps function rather like a camera exposure. There’s an area down there somewhere, of course – but only at the tiny level of the pixel. That’s great for a photograph, where the smaller the pixel the better the picture, but analytically those points are too small to be useful. Besides, store measurement isn’t like taking a picture. Our measurement capture systems aren’t accurate enough to pinpoint a specific location in the store. Instead, they generate a location with a circle of error that, depending on the system being used, can be quite large. It doesn’t make a lot of sense to pretend that measurement is happening at a pixel location when the circle of error on the measurement is 5 feet across!

This got me thinking along the lines of the grid system used in classic board games I played as a kid. If you ever played those games, you know what I’m talking about. The board was a map (of the D-Day beaches or Gettysburg or all of Europe) and overlaid on the map was a (usually hexagonal) grid system that looked like this:

BoardGame

Units occupied grid spaces and their movement was controlled by grid spaces. The grid became the key to the game – with the map providing the underlying visual metaphor. This grid overlay is obviously artificial. Today’s first person shooter games don’t need or use anything like it, but strategy games like Civ still do. Why? Because it’s a great way to quantize spatial information about things like how far a unit can move or shoot, the distance to the enemy, the direction of an attack, the density of units in a space and much, much more.

DM1 takes this grid concept and applies it to store visualization. Picture a store:

store journey analytics

Now lay a grid over it:

Visualizing Store Data

And you can take any place the shopper spends time and map it to grid coordinates:

Mapping customer data to the store

And here’s where it really gets powerful. Because not only can you now map every measurement ping to a quantifiable grid space, you can attach store meta-data to the grid space in a deterministic and highly maintainable way. If we have a database that describes GridPoint P14 as being part of Customer Service on a given day, then we know exactly what a shopper saw there. Even better, by mapping actual traffic and store meta-data to grid-points, we can reliably track and trend those metrics over time. No matter how the shape or even location of a store area changes, our trends and metrics will be accurate. So if grid-point P14 is changed from Customer Service to Laptop Displays, we can still trend Customer Service traffic accurately – before, after and across the change.
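
A minimal sketch of that grid idea (my own illustration, not DM1’s internal data model) shows why it works: quantize each ping to a cell, look up what that cell was on that date, and aggregate by the looked-up area so trends survive layout changes.

```python
from collections import Counter
from datetime import date

GRID_SIZE_FT = 5.0  # cell size chosen to match the collection system's circle of error

def to_cell(x_ft, y_ft):
    """Quantize an (x, y) position in feet to a grid cell."""
    return int(x_ft // GRID_SIZE_FT), int(y_ft // GRID_SIZE_FT)

# Hypothetical store metadata: what each grid cell was, and when.
# (cell, start_date, end_date_exclusive, area_name)
cell_history = [
    ((15, 3), date(2017, 1, 1), date(2017, 6, 1), "Customer Service"),
    ((15, 3), date(2017, 6, 1), date(2018, 1, 1), "Laptop Displays"),
]

def area_for(cell, on_date):
    for c, start, end, name in cell_history:
        if c == cell and start <= on_date < end:
            return name
    return "Unassigned"

# Made-up pings: (x, y, date). In practice these come from the collection system.
pings = [(76.0, 17.5, date(2017, 5, 20)), (77.3, 16.1, date(2017, 7, 2))]

traffic = Counter(area_for(to_cell(x, y), d) for x, y, d in pings)
print(traffic)  # traffic attributed to what was actually there on each date
```

Because the metadata is dated, the same cell counts toward Customer Service before a re-merchandising and Laptop Displays after it, and both trends stay accurate.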

That’s how DM1 works.

Here’s a look at DM1 displaying a store at the Section level:

Retail Analytics: Store Visualization in DM1

In this case, the metric is visits and each section is color-coded to represent how much foot traffic the section got. These are fully quantified numbers. You can mouse over any area and get the exact counts and metrics for it. Note that you don’t need a separate planogram to match to the store – the understanding of what’s there is captured right alongside the metric visualization. Now obviously, Section isn’t the finest grid level for the store. We often need to be much more fine-grained. In DM1, you can drill down to the actual grid level to get a much more detailed view:

Retail Analytics: Store Detail in DM1

How detailed? As detailed as your collection system will support. We set up the grid in DM1 to match the appropriate resolution of your system. You’re not limited to drilling down, though. You can also drill up to levels above a Section. Here’s a DM1 view at the Department level:

Retail Analytics: Store Meta Data and Levels in DM1

In fact, with DM1, you have pretty much complete flexibility in how you describe the store. You can define ANY level of meta-data for each grid-point and then view it on the store. Here, for example, is where promotions were placed in the store:

Retail Analytics: Store Merchandising Data Overlay

DM1 also takes advantage of the Store Visualization to make it easy to compare stores – head to head or the same store over time. The Comparison view shows two stores viewed (in this example) at the Section level and compared by Conversion Efficiency:

Retail Analytics: Store Comparison in DM1

It takes only a glance to see which Sections perform better and which worse at each store. That’s a powerful viz!
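
To see why discrete, quantified sections compare so cleanly, here’s a small sketch (invented section names and metric values, not real DM1 output) of the arithmetic behind a head-to-head view like this:

```python
# Illustrative comparison of two stores by a per-section efficiency metric.
# Store names, sections and numbers are invented for the example.
store_a = {"Team Gear": 0.14, "Footwear": 0.22, "Accessories": 0.09}
store_b = {"Team Gear": 0.19, "Footwear": 0.18, "Accessories": 0.11}

for section in sorted(set(store_a) | set(store_b)):
    a, b = store_a.get(section, 0.0), store_b.get(section, 0.0)
    delta = b - a
    print(f"{section:12s}  Store A {a:.0%}  Store B {b:.0%}  diff {delta:+.0%}")
```

Because every section is a discrete number rather than a color gradient, the differences are exact figures you can trend and report.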

In DM1, pretty much ANY metric can be mapped on the store at ANY meta-data level. You can see visits, lingers, linger rate, avg. time, attributed conversions, exits, bounces, Associate interactions, STARs ratio, Interaction Success Rate and so much more (almost fifty metrics) – mapped to any logical level of the store, from macro levels like Department or Floor all the way down to the smallest unit of measurement your collection system can support. Best of all, you define those levels. They aren’t fixed. They’re entirely custom to the way you want to map, measure and optimize your stores.

And because DM1 keeps a historical database of the layouts and meta-data, it provides simple, accurate and easily intelligible trending over time.

I love the store visualization capability in DM1 and I think it’s a huge advance compared to heat-maps. As an analyst, I can tell you there’s just no comparison in terms of how useful these visualizations are. They do so much more and do it so much better that it hardly seems worth comparing them to the old way of doing things. But here it is anyway:

DM1 Retail Analytics Store Visualization Advantages

DM1’s store visualization is one powerful analytic hammer. But as good as it is, this type of store visualization doesn’t solve every problem. In my next post, I’ll show how DM1 uses another powerful visual paradigm for mapping and understanding the in-store funnel!

[BTW – if you want to see how DM1 Store Visualization actually works, check out these live videos of DM1 in Action]

Four Fatal Flaws with In-Store Tracking

I didn’t start Digital Mortar because I was impressed with the quality of the reporting and analytics platforms in the in-store customer tracking space. I didn’t look at this industry and say to myself, “Wow – here’s a bunch of great platforms that are meeting the fundamental needs in the space at an enterprise level.” Building good analytics software is hard. And while I’ve seen great examples of SaaS analytics platforms in the digital space, solutions like Adobe and Google Analytics took many years to reach a mature and satisfying form. Ten years ago, GA was a toy and Adobe (Omniture SiteCatalyst at the time) managed to be both confusing and deeply under-powered analytically. In our previous life as consultants, we had the opportunity to use the current generation of in-store customer journey measurement tools. That hands-on experience convinced me that this data is invaluable. But it also revealed deep problems with the way in-store measurement is done.

When we started building a new SaaS in-store measurement solution here at Digital Mortar, these are the problems in the technology that we wanted to solve:

Lack of Journey Measurement

Most of today’s in-store measurement systems are set up as, in essence, fancy door counters. They start by having you draw zones in the store. Then they track how many people enter each zone and how long they spend there (dwell time).

This just sucks.

It’s like the early days of digital analytics when all of our tracking was focused on the page view. We kept counting pages and thinking it meant something. Till we finally realized that it’s customers we need to understand, not pages. With zone counting, you can’t answer the questions that matter. What did customers look at first? What else did customers look at when they shopped for something specific? Did customers interact with associates? Did those interactions drive sales? Did customer engagement in an area actually drive sales? Which parts of the store were most and least efficient? Does that efficiency vary by customer type?
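
To see the difference in data terms, here’s a tiny sketch (invented shopper IDs, timestamps and zones) of how per-shopper journeys, rather than zone tallies, let you answer a question like “what did customers look at first?”:

```python
# Why journeys beat zone counts: with per-shopper ordered pings you can answer
# "what did customers look at first?"; aggregate zone counts can't.
# Data below is invented for illustration: (shopper_id, timestamp, zone).
from collections import Counter

pings = [
    ("s1", "10:01", "Entrance"), ("s1", "10:03", "Team Gear"), ("s1", "10:09", "Footwear"),
    ("s2", "10:02", "Entrance"), ("s2", "10:04", "Footwear"),
    ("s3", "10:05", "Entrance"), ("s3", "10:06", "Team Gear"),
]

journeys = {}
for shopper, ts, zone in sorted(pings, key=lambda p: (p[0], p[1])):
    journeys.setdefault(shopper, [])
    if not journeys[shopper] or journeys[shopper][-1] != zone:
        journeys[shopper].append(zone)

first_stop = Counter(j[1] for j in journeys.values() if len(j) > 1)
print(first_stop)  # which zone shoppers went to first after the entrance
```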

If you’re not asking and answering questions about customers, you’re not doing serious measurement. Measurement that can’t track the customer journey across zones just doesn’t cut it. Which brings me to…

Lack of Segmentation

My book, Measuring the Digital World, is an extended argument for the central role of behavioral segmentation in doing customer analytics. Customer demographics and relationship variables are useful. But behavior – what customers care about right now – will nearly always be more important. If you’re trying to craft better omni-channel experiences, drive integrated marketing, or optimize associate interactions, you must focus on behavioral segmentation. The whole point of in-store customer tracking is to open up a new set of critically important customer behaviors for analysis and use. It’s all about segmentation.

Unfortunately, if you can’t track the customer journey (as per my point above), you can’t segment. It’s just that simple. When a customer is nothing more than a blip in the zone, you have no data for behavioral segmentation. Of course, even if you track the customer journey, segmentation may be deeply limited in analytic tools. You could map the improvement of Adobe or Google Analytics by charting their gradually improving segmentation capabilities: from limited filtering on pre-defined variables, to more complex query-based segmentation, to the gradual incorporation of sophisticated segmentation capabilities into the analyst’s workbench.
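
To make the connection between journey tracking and segmentation concrete, here’s a small sketch (the visit fields and segment rules are my own illustration, not DM1’s schema) of how journey metrics turn into behavioral segments:

```python
from dataclasses import dataclass

@dataclass
class Visit:
    visit_id: str
    sections_visited: int
    lingers: int
    associate_interactions: int
    purchased: bool

def segment(visit):
    """Assign a simple behavioral segment from journey metrics (illustrative rules)."""
    if visit.lingers == 0:
        return "Pass-through"
    if visit.associate_interactions > 0:
        return "Assisted shopper"
    if visit.sections_visited >= 4:
        return "Browser"
    return "Destination shopper"

visits = [
    Visit("v1", 2, 0, 0, False),
    Visit("v2", 5, 3, 1, True),
    Visit("v3", 1, 1, 0, True),
]

for v in visits:
    print(v.visit_id, segment(v))

# Once visits carry a segment, any metric (conversion, dwell, funnel fallout)
# can be cut by it -- which is exactly what zone counting can't support.
```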

You can have all the fancy charts and visualizations in the world, but without robust segmentation, customer analytics is crippled.

Lack of Store Context

When I introduce audiences to in-store customer tracking, I often use a slide like this:

In-store Customer Analytics

The key point is that the basic location data about the customer journey is only meaningful when it’s mapped to the actual store. If you don’t know WHAT’S THERE, you don’t have interesting data. The failure to incorporate “what’s there” into their reporting isn’t entirely the fault of in-store tracking software. Far too many retailers still rely on poor, paper-based planograms to track store setups. But “what’s there” needs to be a fundamental part of the collection and the reporting. If data isn’t stored, aggregated, trended and reported based on “what’s there”, it just won’t be usable. Which brings me to…

Use of Heatmaps

Heatmaps sure look cool. And, let’s face it, they are specifically designed to tackle the problem of “Store Context” I just talked about. Unfortunately, they don’t work. If you’ve ever tried to describe (or just figure out) how two heat-maps differ, you can understand the problem. Dialog like: “You can see there’s a little more yellow here and this area is a little less red after our test” isn’t going to cut it in a Board presentation. Because heat-maps are continuous, not discrete, you can’t trend them meaningfully. You can’t use them to document specific amounts of change. And you can’t use them to compare customer segments or changed journeys. In fact, as an analyst who’s tried first hand to use them, I can pretty much attest that you can’t actually use heat-maps for much of anything. They are the prettiest and most useless part of in-store customer measurement systems. If heat-maps are the tool you have to solve the problem of store context, you’re doomed.

These four problems cripple most in-store customer journey solutions. It’s incredibly difficult to do good retail analytics when you can’t measure journeys, segment customers, or map your data effectively onto the store. And the ubiquity of heat-maps just makes these problems worse.

But the problems with in-store tracking solutions don’t end here. In my next post, I’ll detail several more critical shortcomings in the way most in-store tracking solutions are designed. Shortcomings that ensure that not only can’t the analyst effectively solve real-world business problems with the tool, but that they can’t get AT THE DATA with any tools that might be able to do better!

Want to know more about how Digital Mortar can drive better store analytics? Drop me a line.

Taking In-Store Measurement…Out of the Store

In my last few posts, I explained what in-store journey analytics is, described the basics of the technology and the data collection used, and went into some detail about its potential business uses. Throughout, and especially in that last part around business uses, I wrote on the assumption that this type of measurement is all about retail stores. After all, brick & mortar stores are the primary focus of Digital Mortar AND of nearly every company in the space. But here’s the thing: this type of measurement is broadly applicable to a wide variety of applications where customer movement through a physical environment is part of the experience. Stadiums, malls, resorts, cruise ships, casinos, events, hospitals, retail banks, airports, train stations and even government buildings and public spaces can all benefit from understanding how physical spaces can be optimized to drive better customer or user experiences.

In these next few posts, I’m going to step outside the realm of stores and talk about the opportunities in the broader world for customer journey tracking. I’ll start by tackling some of the differences between the tracking technologies and measurement that might be appropriate in some of these areas versus retail, and then I’m going to describe specific application areas and delve a little deeper into how the technology might be used differently than in traditional retail. While the underlying measurement technology can be very similar, the type of reporting and analytics that’s useful to a stadium or resort is different than what makes sense for a mall store.

Since I’m not going to cover every application of customer journey tracking outside retail in great detail, I’ll start with some general principles of location measurement based on industry-neutral factors like the size of the space and the extent to which visitors will opt in to wifi or use an app.

Measuring BIG Spaces versus little ones

With in-store journey tracking, you have three or four alternatives when choosing the underlying measurement collection technology. Cameras, passive wifi, opt-in wifi and bluetooth, and dedicated sniffers are all plausible solutions. With large spaces like stadiums and airports, it’s often too expensive to provide comprehensive camera coverage. It can even be too expensive to deploy custom measurement devices (like sniffers). That’s especially true in environments where the downtime and wiring costs can greatly exceed the cost of the hardware itself.

So for large spaces, wifi tracking often becomes the only realistic technology for deploying a measurement system. That’s not all bad. While out-of-the-box wifi is the least accurate measurement technology, most large spaces don’t demand fine-grained resolution. In a store, a 3 meter circle of error might place a customer in a completely different section of the store. In an airport, it’s hard to imagine it would make much difference.

Key Considerations Driven by Size of Location:

  • How much measurement accuracy do you need?
  • How expensive will measurement-specific equipment and installation be, and is it worth the cost?
  • Are there special privacy considerations for your space or audience?

Opt-in vs. Anonymous Tracking

Cameras, passive wifi and sniffers can all deliver anonymous tracking. Wifi, Bluetooth and mobile apps all provide the potential for opt-in tracking. There are significant advantages to opt-in tracking. First, it’s more accurate. Particularly with out-of-the-box passive wifi, the changes in iOS to randomize MAC addresses have crippled straightforward measurement and made reasonably accurate customer measurement a challenge. When a user connects to your wifi or opens an app, you can locate them more frequently and more precisely, and their phone identity is STABLE so you can track them over time. If your primary interest is understanding specific customers better for your CRM, tracking populations over time, or addressing significant concerns about the privacy implications of anonymized passive tracking, then opt-in tracking is your best bet. However, this choice depends on one further fact: the extent to which your customers will opt in. For stadiums and resorts, log-in rates are quite high. Not so much at retail banks. Which brings us to…

Key Considerations for Opt-In Based Tracking

  • Will a significant segment of your audience opt in?
  • Are you primarily interested in CRM (where opt-in is critical) or in journey analytics (which can be anonymous)?

How good is the sample?

Some technologies (like cameras) provide comprehensive coverage by default. Most other measurement technologies inherently take some sample. Any form of signal detection starts with a sample that includes only people with phones. That isn’t much of a sample limitation, though it will exclude most smaller children. Passive methods further restrict the population to people with wifi turned on. Most estimates place the wifi-activated rate at around 80%. That’s a fairly high number, and it seems unlikely that this factor introduces significant sample bias. However, when you start factoring in things like Android users, app downloaders or wifi log-ins, you’re often introducing significant reductions in sample size AND adding sample biases that may or may not be easy to control for. App users probably aren’t a representative sample for estimating, say, how likely shoppers are to convert in a store. But even if they are a small percentage of your total users, they are likely perfectly representative of how long people spend queuing in lines at a resort. One of the poorly understood aspects of measurement science is that the same sample can be horribly biased for some purposes but perfectly useful for others!
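
Here’s the back-of-envelope version of that sampling arithmetic (all rates except the roughly 80% wifi-on estimate quoted above are assumptions I’ve made up for illustration):

```python
# Rough effective-sample arithmetic for different collection methods.
# Rates below are illustrative assumptions; only the ~80% wifi-on figure
# comes from the estimate quoted in the text.
shoppers = 10_000
has_phone = 0.95          # assumption: nearly everyone carries a phone
wifi_on = 0.80            # passive wifi only sees phones with wifi enabled
app_installed = 0.05      # assumption: a small minority have the venue's app

passive_wifi_sample = shoppers * has_phone * wifi_on
app_sample = shoppers * has_phone * app_installed

print(f"Passive wifi sample: {passive_wifi_sample:,.0f} (~{passive_wifi_sample / shoppers:.0%})")
print(f"App-based sample:    {app_sample:,.0f} (~{app_sample / shoppers:.0%})")
# Size isn't the whole story: the app sample may be badly biased for questions
# like purchase conversion, yet perfectly adequate for queue-time measurement.
```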

Key Considerations for Sampling

  • Does your measurement collection system bias your measurement in important ways?
  • Are people who opt-in a representative sample for your measurement purposes?

The broad characteristics that define what type of measurement system is right for your needs are, of course, determined by what questions you need to answer. I’ll take a close look at some of the business questions for specific applications like sports stadiums next time. In general, though, large facilities by their very nature need less fine-grained measurement than smaller ones. For most applications outside of retail, being able to locate a person within a 3 meter circle is perfectly adequate. And while the questions being answered are often quite specific to an application area, there is a broad and important divide between measurement that’s primarily focused on understanding patterns of movement and analysis that’s focused on understanding specific customers. When you’re most interested in traffic patterns, samples work very well. Even highly biased samples will often serve. If, on the other hand, you’re looking to use customer journey tracking to understand specific customers or customer segments (like season-ticket holders) better, you should focus on opt-in based techniques. In those situations, identification trumps accuracy.

If you have questions about the right location-based measurement technology solution for your business, drop us a line at info@digitalmortar.com

Next up, I’ll tackle the surprisingly interesting world of stadium/arena measurement.

Why do we need to track customers when we know what they buy?

Digital Mortar is committed to bringing a whole new generation of measurement and analytics to the in-store customer journey. What I mean by that “new generation” is that our approach embodies more complete and far more accurate data collection. I mean that it provides far more interesting and directive reports. And I mean that our analytics will make a store (or other physical space) work better. But how does that happen and why do we need to track customers inside the store when we know what they buy? After all, it’s not as if traditional stores are unmeasured. Stores have, at minimum, PoS data and store merchandising and operations data. In other words, we know what we had to sell, we know how many people we used to sell it, and we know how much (and what and what profit) we actually sold.

That stuff is vital and deeply explanatory.

It constitutes the data necessary to optimize assortment, manage (to some extent) staffing needs, allocate staff to areas, and understand which categories are pulling their weight. It can even, with market basket analysis, help us understand which products are associated in customers’ shopping behaviors and can form the basis for layout optimization.

We come from a digital analytics background – analyzing customer experience on eCommerce sites, we often had a similar situation. The back-office systems told us which products were purchased, which were bought together, which categories were most successful. You didn’t need a digital analytics solution to tell you any of that. So if you bought, implemented and tried to use a digital analytics solution and those were your questions…well, you were going to be disappointed. Not because a digital analytics solution couldn’t provide answers – it just couldn’t provide better answers than you already had.

It’s the same with in-store tracking systems, which is why, when we’re building our system, evaluating reports or doing analysis for clients at Digital Mortar, I find myself using the PoS test. The PoS test is just this pretty simple question: does using the customer in-store journey to answer the question provide better, more useful information than simply knowing what customers bought?

When the answer is yes, we build it. But sometimes the answer is no – and we just leave well enough alone.

Let me give you some examples from real life to show why the PoS test can help clarify what in-store tracking is for. Here are three different reports based on understanding the in-store customer journey:

#1: There are regular in-store events hosted by each location. With in-store tracking, we can measure the browsing impact of these events and see if they encourage people to shop products.

#2: There are sometimes significant category performance differences between locations. With in-store tracking, we can measure whether the performance differences are driven by layout, by traffic type, by weather or by area shopper preferences.

#3: Matching staffing levels to store traffic can be tricky. Are there times when a store is understaffed, leaving sales, literally, on the table? With in-store tracking we can measure associate/customer ratios, interactions and performance, and we can identify whether and how often lowered interaction rates lost sales.

I think all three of these reports are potentially interesting – they’re perfectly reasonable to ask for and to produce.

With #1, however, I have to wonder how much value in-store tracking will add beyond PoS data. I can just as easily correlate PoS data to event times to see if events drive additional sales. What I don’t know is whether event attendees browse but don’t buy. If I do this analysis with in-store tracking data, the first question I’ll get is “But did they buy anything?” If, on the other hand, I do the analysis with PoS data, I’m much less likely to hear “But did they browse the store?” So while in-store tracking adds a little bit of information to the problem, it’s probably not the best or the easiest way to understand the impact of store events. We chose not to include this type of report in our base report set, even though we do let people integrate and view this type of data.

Question #2 is quite different. The question starts with sales data. We see differences in category sales by store. So more PoS data isn’t going to help. When you want to know why sales are different (by day, by store, by region, etc.), then you’ll need other types of data. Obviously, you’ll need square footage to understand efficiency, but the type of store layout data you can bring to bear is probably even more critical than measures of efficiency. With in-store tracking you can see how often a category functions as a draw (where customers go first), how it gets traffic from associated areas, how much opportunity it had, and how well it actually performed. Along with weather and associate interaction data, you have almost every factor you’re likely to need to really understand the drivers of performance. We made sure this kind of analytics is easy in our tool. Not just by integrating PoS data, but by making sure that it’s possible to understand and compare how store layouts shape category browsing and buying.

Question #3 is somewhere in between. By matching staffing data to PoS data, I can see if there are times when I look understaffed.  But I’m missing significant pieces of information if I try to optimize staff using only PoS data. Door-counting data can take this one step further and help me understand when interaction opportunities were highest (and most underserved). With full in-store journey tracking, I can refine my answers to individual categories / departments and make sure I’m evaluating real opportunities not, for example, mall pass throughs. So in-store journey tracking deepens and sharpens the answer to Staffing Gaps well beyond what can be achieved with only PoS data or even PoS and door-counting data. Once again, we chose to include staff optimization reports (actually a whole bunch of them) in the base product. Even though you can do interesting analysis with just PoS data, there’s too much missing to make decision-makers informed and confident enough to make changes. And making changes is what it’s all about.
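
As a rough illustration of the staffing-gap analysis described here (the hourly numbers and the target ratio are invented for the example, not benchmarks), the core calculation looks something like this:

```python
# Hourly staffing-gap check: shopper traffic vs. associates on the floor and
# interaction rate. Numbers are invented for illustration.
hours = [
    # (hour, shoppers_in_area, associates_in_area, interactions)
    ("10:00", 120, 3, 40),
    ("13:00", 340, 3, 55),
    ("18:00", 410, 4, 60),
]

TARGET_RATIO = 60  # assumption: flag hours with more than ~60 shoppers per associate

for hour, shoppers, associates, interactions in hours:
    ratio = shoppers / associates
    interaction_rate = interactions / shoppers
    flag = "UNDERSTAFFED" if ratio > TARGET_RATIO else "ok"
    print(f"{hour}: {ratio:.0f} shoppers/associate, "
          f"{interaction_rate:.0%} interaction rate -> {flag}")
```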

 

We all know the old saying about everything looking like a nail when your only tool is a hammer. But the truth is that we often fixate on a particular tool even when many others are near to hand. You can answer all sorts of questions with in-store journey tracking data, but some of those questions can be answered as well or better using your existing PoS or door-counting data. This sort of analytics duplication isn’t unique to in-store tracking. It’s ubiquitous in data analytics in general. Before you start buying systems, using reports or delving into a tool, it’s almost always worth asking if it’s the right/easiest/best data for the job. It just so happens that with in-store tracking data, asking how and whether it extends PoS data is almost always a good place to start.

In creating the DM tool, we’ve tried to do a lot of that work for you. And by applying the PoS test, we think we’ve created a report set that helps guide you to the best uses of in-store tracking data. The uses that take full advantage of what makes this data unique and that don’t waste your time with stuff you already (should) know.