Tag Archives: DM1

The Really Short Introduction to DM1 and In-Store Measurement

Take a minute (okay – a minute and a half) to check out this video overview of our DM1 store measurement platform. It’s the shortest and crispest introduction we’ve produced so far.

As more than one famous writer/philosopher has remarked, “If I had more time, it would have been shorter.” Brevity, like wit, takes work. And practice. We haven’t achieved wit, but we’re getting close to brevity:

I also really like the video’s flow. It starts with a very short intro into the basic concept of store measurement and then introduces the platform with the Digital Planogram tool – the Configurator. When you get right down to it, this capability is the single most important part of the platform. Digital representations of the store are critical to every report and analysis DM1 delivers. And the ability to rapidly create, adjust and maintain those digital maps is essential to making the tool work.

When we first released DM1, the configurator lagged behind some of the reporting tools – not very friendly and a little prone to bugginess. It's grown into quite a good tool – a pleasure to use and capable of handling even very complex store layouts pretty easily.

From the configurator, the video flows into the Layout tool – which just maps metrics right onto those digital planograms. Not only does this show how effortlessly you move from a map of the store to a metric, but I really like the way the video works through a small set of metrics to show how easy the visual interpretation is.

Once you’ve got a feel for basic metrics in the Store Layout, the next logical step is to tackle journey. And the next two sections highlight funnel and path analysis. Both of these tools help transition thinking from a static view of store performance to a focus on shopper journey. Funnels tell you how effective the store is in moving shoppers down an engagement path. Path helps you understand which in-store paths are popular and which drive conversion. After this, it’s a quick look at the data exploration capabilities of the platform – and the ability to build reports around whatever problem you choose to tackle. Finally, it wraps up with a sample of the dashboards.

Truth to tell, I've sometimes done this same presentation in almost the reverse order – starting with Dashboards and ending with configuration. It's plausible that way too, but I think this works better for analysts. Because while dashboards are the first view for end-users of DM1, an analyst's task really starts with store mapping, proceeds through various levels of analysis, and ends with wrapping a nice, neat bow around the data for others. That's the way this video proceeds, and that makes the structure more compelling and natural if that's the way you tend to think.

Check it out.

 

Hey, unless you're a very fast reader, you've already spent more time on this post than you will on the video!

 

 

Machine Learning and Optimizing the Store

My previous post covered the first half of my presentation on Machine Learning (ML) and store analytics at the Toronto Symposium. Here, I’m going to work through the case study on using ML to derive an optimal store path. For that analysis, we used our DM1 platform to source, clean, map and aggregate the data and then worked with a data science partner (DXi) on the actual analysis.

Why this problem?

Within DM1 we feel pretty good about the way we've built out visualizations of the store data that are easy to use and surprisingly powerful. The Full Path View, Funnel View and Store Layout View all provide really good ways to explore shopper paths in the store.

But for an analyst, exploring data and figuring out a model are utterly different tasks. A typical store presents a nearly infinite number of possible paths – even when the paths are aggregated up to section level. So there’s no way to just explore the paths and find optimal ones.
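
To put a number on "nearly infinite": with 25 sections (the size of the store in the case study below), the count of possible section-level paths explodes after just a few stops. A quick back-of-the-envelope:

    # Back-of-the-envelope: distinct section-level paths of length k in a
    # 25-section store, disallowing immediate repeats of the same section.
    sections = 25
    for k in range(2, 7):
        paths = sections * (sections - 1) ** (k - 1)
        print(k, f"{paths:,}")   # k=6 -> 25 * 24**5 = 199,065,600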

Even at the most basic level of examining individual shopper paths, deciding what's good and bad is really hard. Here are two shopper paths in a store:

Which is better? Does either have issues? It’s pretty hard to know.

 

Why Machine Learning?

Optimal store pathing meets the basic requirements for using supervised ML – we have a lot of data and we have a success criterion (checkout). But ML isn't worth deploying on every problem that has a lot of data and a success criterion. I think about it this way – if I can get what I want by writing simple algorithmic code, then I don't need ML. In other words, if I can write (for example) a sort and then some simple If-Then rules that will identify the best path or find problem path points, then that's what we'll do. If, for example, I just wanted to identify sections that didn't convert well, it would be trivial to do that. I have a conversion efficiency metric, I sort by it (ascending) and then I take the worst performers. Or maybe I have a conversion threshold and simply pick any Section that performs worse. Maybe I even calculate a standard deviation and select any Section that falls more than 1 standard deviation below the average Section conversion efficiency. All easy.
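
To make that concrete, here's a minimal sketch of that kind of rule-based screening – the section names and conversion numbers are invented:

    # Minimal sketch of rule-based (non-ML) section screening.
    # Section names and conversion numbers are invented.
    from statistics import mean, stdev

    conversion_efficiency = {
        "casual_shoes": 0.21, "denim": 0.14, "accessories": 0.09,
        "outerwear": 0.12, "clearance": 0.05,
    }

    # Option 1: sort ascending and take the worst performers.
    worst_three = sorted(conversion_efficiency, key=conversion_efficiency.get)[:3]

    # Option 2: flag anything more than 1 standard deviation below the mean.
    avg = mean(conversion_efficiency.values())
    sd = stdev(conversion_efficiency.values())
    laggards = [s for s, eff in conversion_efficiency.items() if eff < avg - sd]

    print(worst_three, laggards)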

But none of those things are really very useful when it comes to finding poor path performance in a robust fashion.

So we tried ML.

 

The Analysis Basics

The analysis was focused on a mid-sized apparel store with around 25 sections. We had more than 25,000 shopper visits. That may not seem like much if you're used to digital analytics, but it's a pretty good behavior base for a store. In addition to the basic shopper journey, we also had Associate interaction points (and time of interaction), and whether or not the shopper converted. The goal was to find potential store layout problems and understand which parts of the store added to (or subtracted from) overall conversion efficiency.

Preparing the Data

The first step in any analysis (once you know what you want) is usually data preparation.

Our data starts off as a stream of location events. Each location event has X, Y, Z coordinates offset from a zero point in the store. In the DM1 platform, we take that data and map it against a digital planogram capability that keeps a full, historical record of the store. That tells us what shoppers actually looked at and where they spent time. This is the single most critical step in turning the raw data into something that's analytically useful.
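
For a feel of what that mapping involves, here's a toy sketch that snaps a location event to a grid cell and looks up a section. The grid size and section table are invented – DM1's actual planogram model is much richer than this:

    # Toy sketch: map raw (x, y) location events onto a store grid and look
    # up the section for each cell. Grid size and section table are invented.
    GRID_FT = 5  # assume each grid cell is 5ft x 5ft

    # (col, row) -> section name, standing in for a digital planogram
    PLANOGRAM = {(0, 0): "entrance", (1, 0): "accessories", (1, 1): "denim"}

    def to_cell(x_ft: float, y_ft: float) -> tuple[int, int]:
        """Translate a coordinate offset from the store zero point to a grid cell."""
        return (int(x_ft // GRID_FT), int(y_ft // GRID_FT))

    event = {"device": "abc123", "ts": 1530000000, "x": 7.2, "y": 6.8}
    section = PLANOGRAM.get(to_cell(event["x"], event["y"]), "unmapped")
    print(section)  # -> "denim"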

Since we also track Associates, we can identify interaction points by overlaying the Associate data stream on top of the shopper stream. This isn't perfect – it's easy to miss short interactions or be confused by a crowded store – but it works pretty well, particularly when it's app-to-app tracking. Associate interaction points are hugely important in the store (as the subsequent analysis will prove).

The third step is knowing whether and when a shopper purchased. Most of the standard machine learning algorithms require a way to determine whether a behavior pattern was successful or not – that's what they're optimizing to. We're using purchase as our success metric.

The underlying event data gets aggregated into a single row per shopper visit. That row contains a visit identifier, a start and stop time, an interaction count, a first interaction time, a last interaction time, the first section visited, the time spent in each section and, of course, our success metric – a purchase flag.
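
As a rough picture of that per-visit shape (the field names are mine, not DM1's actual schema):

    # Rough picture of the one-row-per-visit shape described above.
    # Field names are illustrative, not DM1's actual schema.
    from dataclasses import dataclass, field

    @dataclass
    class VisitRow:
        visit_id: str
        start_ts: int
        stop_ts: int
        interaction_count: int
        first_interaction_ts: int | None   # None if no Associate interaction
        last_interaction_ts: int | None
        first_section: str
        seconds_by_section: dict[str, float] = field(default_factory=dict)
        purchased: bool = False            # the success metric the ML optimizes to

    row = VisitRow("v-001", 1530000000, 1530001800, 2, 1530000300,
                   1530001500, "entrance",
                   {"entrance": 40, "denim": 380, "casual_shoes": 610}, True)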

That’s it.

The actual analytic heavy lifting was done by DXi on their machine learning platform. They use an ensemble approach – throwing the kitchen sink at the problem by using 25+ different algorithms to identify potential winners/losers (if you’d like more info or an introduction to them, drop me a line and I’ll connect you).

 

Findings

Here's some of the interesting stuff that surfaced, plucked from the case study I gave at the Symposium:

One of the poorest performing sections – not picked as important by a single DXi ML algorithm – sits right smack dab in the middle of the store. That central position really surprised us. Yes, as you'll see in a moment, the store has a successful right-rail pattern – but this was a fairly trafficked spot with good sightlines and easy flow into high-value areas of the store.

Didn’t work well though. And that’s definitely worth thinking about from a store layout perspective.

One common browsing behavior for shoppers is a race-track pattern – navigating around the perimeter of the store. There’s a good example of that on the right-side image I showed earlier:

The main navigation path through the store is the red rectangle (red because this shopper spent considerable time there) – and you can see that while the shopper frequently deviated from that main path, their overall journey was a circuit around the store.

The ML algorithms don't know anything about that – but they did pick out the relevant sections in the analyzed store along that path as really important for conversion.

We took that to mean that the store is working well for that race-track shopper type. An important learning.

For this particular store, casual shoes was picked as important by every ML algorithm – making it the most important section of the store. It also had the largest optimal time value – and clearly rewarded more time with higher conversion rates. Shoes, of course, is going to be this way. It's not a grab-and-go item. So there's an element of the obvious here – something you should expect when you unleash ML on a dataset (and hey – most analytics projects will, if they work at all, vacillate between the interesting and the obvious). But even compared to other types of shoe section, this one performed better and rewarded more time spent – so there's an apples-to-apples part of this comparison as well.

The next finding was an interesting one and illustrates a bit of the balance you need to strike between the analyst and the algorithm. The display in question was located fairly close to cash-wrap on a common path to checkout. It didn't perform horribly in the ML – some of the DXi algorithms did pick it as important for conversion. On the other hand, it was one of the few sections with a negative weighting on time spent – so more time spent meant less likely conversion. We interpreted that combination as indicating that the section's success was driven by geography, not efficiency. It's kind of like comparing Saudi Arabia to U.S. shale drillers. Based purely on the numbers, Saudi Arabia looks super efficient and successful, with the lowest cost per barrel of oil extracted in the world. But when you factor in the geographic challenges, the picture changes completely. Saudi Arabia has the easiest path to oil recovery in the world; shale producers face huge and complex technical challenges and still manage to be price competitive. Geography matters, and that's just a core fact of in-store analytics.

Our take on the numbers when we sifted through the DXi findings was that this section was actually underperforming. It might take a real A/B test to prove that, but regardless, I think it's a good example of how an analyst has to do more than run an algorithm. It's easy to fool even very sophisticated algorithms with strong correlations, and so much of our post-analysis ANALYSIS was about understanding how the store geography and the algorithm results play together.

In addition to navigation findings like these, the analysis also included the impact of Associates on conversion. In general, the answer we got was the more interactions the merrier (at the cash register). Not every store may yield the same finding (and it’s also worth thinking about whether a single conversion optimization metric is appropriate here – in my Why Analytics Fails talk I argue for the value in picking potentially countervailing KPIs like conversion and shopper satisfaction as dual optimization points).

Even after multiple interactions, additional interactions had a positive impact on sales.

This should be obvious, but I'll hearken back to our early digital analytics days to make a point. We sometimes found that viewing more pages on a Website was a driver of conversion success. But that didn't mean chopping pages in half (as one client did) so that the user had to consume more pages to read the same content was a good strategy.

Just because multiple Associate interactions in a store with a normal interaction strategy created lift, it doesn’t mean that, for example, having your Associates tackle customers (INTERACTIOOOON!!!) as they navigate the floor will boost conversion.

But in this case, too much interaction was a legitimate concern. And the data indicates that – at least as measured by conversion rates – the concern did not manifest itself in shopper turn-off.

If you’re interested in getting the whole deck – just drop me a note. It’s a nice intro into the kind of shopper journey tracking you can do with our DM1 platform and some of the ways that machine learning can be used to drive better practice. And, as I mentioned, if you’d like to check out the DXi stuff – and it’s interesting from a pure digital perspective too – drop me a line and I’ll introduce you.

Mobile Apps, Geo-Location and Shopper Analytics

The hardest part about doing enterprise shopper journey measurement and analytics is data collection. Putting new hardware in the store is no joke – and yet it's often necessary to get the measurement you want. Still, often isn't the same as always. Last week I talked about how you can get surprisingly powerful store measurement by taking data from your existing store WiFi and flowing it into our DM1 platform. Store WiFi gives you broad population coverage (no, shoppers don't have to connect) but it isn't very accurate positionally. On the other end of the measurement spectrum is geo-locating your mobile app users. It's another way – and a good one – to get fascinating measurement of how shoppers navigate your store.

 

Geo-locating your mobile app users is easy and quite inexpensive. It can be done with no additional hardware in the store. It’s very accurate and, by feeding the data to DM1, you can get powerful and detailed analytics on what your mobile app users are doing in-store. When you add geo-location to your Mobile App (it just takes a few lines of code), it sends you a stream of positional data that tells you exactly where a shopper was throughout their in-store journey. Our DM1 platform ingests that stream, aggregates it, and provides you the store analytics to understand paths, funnels, usage, interactions, and much more.

That's why, when I speak on geo-location analytics, I steal the line from Lenox Financial and describe mobile app geo-location as the biggest no-brainer in the history of earth.

 

There’s only one real drawback to shopper measurement via mobile app and it’s the obvious one – it’s limited to the population of your mobile app users. For most retailers, that’s a small and totally non-random segment of their population.

 

Before I discuss the implications of that, here’s what you need to know about getting this kind of app-tracking to work and integrating it with Digital Mortar’s platform.

 

We're all mobile phone users and we all know that our phones position us. Most of us could barely navigate our home city without Google or Waze or Apple Maps. I remember being in Venice and wondering how ANYONE ever got around there before GPS. It's like the old text adventure game – a maze of twisty passages, all alike. I imagine people just got lost a lot and that was probably part of the fun.

 

We also know that the built-in outdoor GPS positioning on the phone is pretty accurate but not super-precise. When you use it for walking, you can often see just how dislocated that little blue dot is from where you're actually standing. And it can take some real mental work to figure out exactly where you are and when to turn if – as in places like Venice – you're not navigating long, straight blocks.

 

Indoor wayfinding has its own set of challenges. Indoor spaces by their very nature are more tightly packed, so there's a higher premium on positional accuracy. But indoor spaces are also more challenging from a measurement standpoint because signals are routinely blocked, distorted or mirrored. And, of course, indoor spaces are often importantly three-dimensional. Outdoor mapping doesn't have to worry about floors – but in buildings, knowing what floor you're on is fundamental.

 

Fortunately, your typical smart phone these days has a whole grab bag of sensors that can be used for better indoor wayfinding. Good indoor wayfinding systems take advantage of the whole array of phone sensors – starting with GPS positioning but adding WiFi, BlueTooth signals, radio signals, magnetic fields, the inertial sensor platform and even barometric pressure.

 

This works pretty well since most environments these days are signal rich. It’s also very easy to improve the performance of indoor way-finding if you find that there are inside areas where positional accuracy isn’t great. In most cases, dropping a beacon or two will solve the problem.

 

Typically, indoor wayfinding systems work as code libraries. You put their code into your mobile app and make a few simple function calls. From a developer perspective, this type of integration is simple and straightforward. What's more, unlike, say, digital analytics tagging – where you need to tie measurement messaging tightly to the functionality – the geo-location libraries (at least when used for measurement) function almost as a stand-alone element of your App. So it's trivial for developers to integrate the code – and it requires minimal design cycles. Compared to adding good digital analytics tagging to your App, it's a breeze.

 

With a 3rd party library in your App, there are only two other things you need to do. The first is to fingerprint your location – essentially a calibration and mapping step where you translate the signals into site location. It's not hard, but if you really want a turnkey setup, Digital Mortar can do this for you – it takes less than a day and involves no disruption of the site. It doesn't even have to be done after hours.

 

The last step is to provision a feed from the 3rd party cloud instance (or your own cloud instance if you're using a non-turnkey library that just sources the data to your servers) to our DM1 platform. Most providers offer a good, event-level feed as part of their core service, so all you have to do is turn it on. It's not that much harder in the DIY world.

 

Keep in mind that most geo-location service providers are thinking about messaging, indoor wayfinding and other interactive uses for their service – not analytics. So the analytics you'll get out of the box is mostly non-existent, or even less compelling than what you'd get from a WiFi vendor (and, as I mentioned last week, that ain't great).

 

That’s what DM1 is for. Because there is no better source of data for our platform. The beauty of fully-configured mobile app services is that the positional accuracy is terrific. The event stream can be generated at a pre-determined frequency – so we’re not dependent on the somewhat random ping rates that come with other forms of electronic tracking. That means we can capture a full, accurate, and very detailed customer journey.

 

Even better, the nature of mobile apps is that they can provide a true omni-channel join. So you can take DM1's CRM-based feed and integrate it with your customers' digital behavior to create a full-journey customer database. Our CRM feed includes the customer id you pass us (usually a hashed identifier), basic visit information (visit time, length, and flags for purchase and interaction), and the time spent in each area of the store. Adding that to your customer record is powerful. And yes, it's just for your mobile app users. But often, those are your very best customers.

 

Plus, there are important applications where the biases inherent in a mobile app sample aren't particularly damaging. If, for example, you want to know how long customers are queuing at cash-wrap, it's perfectly possible to use mobile app data. When they're standing in line, they're there for the same amount of time as everyone else. And how mobile app users shop the store and take advantage of omni-channel experiences is, let's just say, quite interesting and valuable.

That being said, it’s like any other case where you’re working with a non-random sample. You can’t assume that all your shoppers behave the way your mobile population does – and if you try to make those kinds of extrapolations, you’re going to get it wrong.

 

That’s why, though a mobile app feed might be the primary customer source you feed into DM1, it’s more likely that you’ll combine a mobile app feed with a full customer feed from iViu, WiFi or camera.

In-store shopper measurement technologies compared

In DM1, we keep each feed as a separate segment. With a little bit of a code tweak to your mobile app, we can also integrate your mobile app data directly with the iViu feed so there’s no double counting. But most times, you’ll work with them as separate populations.

 

Either way, you get the full power of DM1’s analytics on the mobile app shopper data. Pathing, funnels, store layout, segmentation, etc. etc.:

Digital Mortar's DM1 shopper measurement and geo-location analytics: path analytics and funnel analysis

Finally, this is also one of the best ways to collect and integrate Associate tracking. DM1 provides full Associate measurement functionality, allowing you to understand when and where you're under- or over-staffed in the store. Adding geo-location to your Associate devices is just as easy as it is on the shopper side – and this is something you can do even if you're not heavily invested in customer-facing mobile apps.

 

 

So if you’re suitably excited, the next question ought to be – where do you get this and how much does it cost?

 

There are tons of options for adding geo-location measurement to your app. The easiest and most fully-baked come from providers like IndoorAtlas and Radar. Hey, even my old digital analytics friends at Adobe and Google do this. The most full-service systems include the code libraries, platforms for fingerprinting, and robust cloud feeds. They make going from App setup to DM1 analytics a walk in the park. There are plenty of DIY alternatives as well – many open-sourced and free.

 

The full-service platform vendors typically charge you per location based on broad square footage ranges. It’s quite inexpensive – though the out-of-the-box pricing models tend to work better for single, very large locations than for large numbers of mid-sized stores. Most of these companies seem to engage in enterprise pricing – meaning that the price you pay is largely a function of whatever you can negotiate. And if you’d prefer, we can provide developer support integrating an open-source solution into your App. It probably won’t be quite as robust, but if your primary goal is measurement it will more than get the job done.

 

From the standpoint of integrating with DM1, it's pretty much out of the box. If we don't support the feed already, we'll create the integration as part of getting you set up – no charge. It's not too hard because the data streams are pretty much identical – identifier, timestamp, x, y coordinates. There really isn't much else to it.
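
To give a sense of how simple those streams are, here's a hypothetical feed fragment and the few lines needed to parse it – the column order is an assumption, not any particular vendor's spec:

    # Hypothetical geo-location feed: one event per line, columns assumed to be
    # identifier, timestamp (epoch seconds), x, y (feet from the store zero point).
    import csv, io

    feed = io.StringIO(
        "7f3a9c,1530000000,12.4,31.0\n"
        "7f3a9c,1530000005,12.9,30.2\n"
    )

    events = [
        {"id": ident, "ts": int(ts), "x": float(x), "y": float(y)}
        for ident, ts, x, y in csv.reader(feed)
    ]
    print(events[0])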

 

The measurement costs are trivial compared to what you spend on App development and small compared to what you spend on digital analytics app measurement and analysis. The data is extremely robust and – in a field plagued by bad data – quite accurate. The omni-channel join possibilities are like adding hot fudge sauce to an already delicious sundae. Paired with DM1, you can measure and optimize exactly how this critical and growing customer segment uses the store. You can study how digital and store behaviors interact. And you have an excellent data source for overall store navigation and store usage that you can pair with other data sources or use as is.

 

Okay…it may not be the biggest no-brainer in the history of earth. But adding geo-location and DM1 analytics to your mobile app is definitely the biggest no-brainer in shopper measurement.

The Role of General Purpose BI & Data Viz Tools for In-Store Location Analytics and Shopper Measurement

One of the most important questions in analytics today is the role for bespoke measurement and analytics versus BI and data visualization tools. Bespoke measurement tools provide end-to-end measurement and analytics around a particular type of problem. Google Analytics, Adobe Analytics, and our own DM1 platform are all examples of bespoke measurement solutions. Virtually every industry vertical has them. In health care, there are products like GSI Health and EQ Health that are focused on specific health-care problems. In hospitality, there are solutions like IDeaS and Kriya that focus on revenue management. At the same time, there are a range of powerful, general purpose tools like Tableau, Spotfire, Domo, and Qlik that can do a very broad range of dashboarding, reporting and analytic tasks (and do them very well indeed). It's always fair game to ask when you'd use one or the other and whether or not a general purpose tool is all you need.

 

It's a particularly important question when it comes to in-store location analytics. Digital analytics tools grew up in a market where data collection was largely closed and at a time when traditional BI and data viz tools had almost no ability to manage event-level data. So almost every enterprise adopted a digital analytics solution and then, as they found applications for more general-purpose tools, added them to the mix. With in-store tracking, many of the data collection platforms are open (thank god). So it's possible to take data directly from them.

 

Particularly for sophisticated analytics teams that have been using tools like Tableau and Qlik for digital and consumer analytics, there is a sense that the combination of a general purpose data viz tool and a powerful statistical analysis tool like R is all they really need for almost any data set. And for the most part, the bespoke analytics solutions that have been available are shockingly limited – making the move to tools like Tableau an easy decision.

 

But our DM1 platform changes that equation. It doesn’t make it wrong. But I think it makes it only half-right. For any sophisticated analytics shop, using a general purpose data visualization tool and a powerful stats package is still de rigueur. For a variety of reasons, though, adding a bespoke analytics tool like DM1 also makes sense. Here’s why:

 

Why Users' Level of Sophistication Matters

The main issue at stake is whether or not a problem set benefits from bespoke analytics (and, equally germane, whether bespoke tools actually deliver on that potential benefit). Most bespoke analytics tools deliver some combination of table reports and charting. In general, neither of these capabilities is delivered as well as a general purpose tool does the job. Even outstanding tools like Google Analytics don't stack up to tools like Tableau when it comes to these basic data reporting and visualization tasks. On the other hand, bespoke tools sometimes make it easier to get that basic information – which is why they can be quite a bit better than general purpose tools for less sophisticated users. If you want simple reports that are pre-built and capture important business-specific metrics in ways that make sense right off the bat, then a bespoke tool will likely be better for you. For a reasonably sophisticated analytics team, though, that just doesn't matter. They don't need someone else to tell them what's important. And they certainly don't have a hard time building reports in tools like Tableau.

 

So if the only value-add from a bespoke tool is pre-built reports, it’s easy to make the decision. If you need that extra help figuring out what matters, go bespoke. If you don’t, go general purpose.

 

But that’s not always the only value in bespoke tools.

 

 

Why Some Problems Benefit from Bespoke

Every problem set has some unique aspects. But many, many data problems fit within a fairly straightforward set of techniques. Probably the most common are cube-based tabular reporting, time-trended data visualization, and geo-mapping. If your measurement problem is centered around either of the first two, then a general purpose tool is going to be hard to beat. They've optimized the heck out of this type of reporting and visualization. Geo-mapping is a little more complicated. General purpose tools do a very good job on basic and even moderately sophisticated geo-mapping problems. They are great for putting together basic geo-maps that show overlay data (things like displaying census or purchase data on top of DMAs or zip-codes). They work less well for tasks that involve more complicated geo-mapping functions like route or area-size optimization. For those kinds of tasks, you'd likely benefit from a dedicated geo-mapping solution.

 

When it comes to in-store tracking, there are four problems that I think derive considerable benefit from bespoke analytics: data quality control, store layout visualization and the associated digital planogram maintenance, path analysis, and funnel analysis. I'll cover each to show what's at stake and why a bespoke tool can add value.

 

 

Data Clean-up and Associate Identification

Raw data streams off store measurement feeds are messy! Well, that’s no surprise. Nearly all raw data feeds have significant clean-up challenges. I’m going to deal with electronic data here, but camera data has similar if slightly different challenges too. Data directly off an electronic feed typically has at least three significant challenges:

 

  • Bad Frame Data
  • Static Device Identification
  • Associate Device Identification

 

There are two types of bad frame data: cases where the location is flawed and cases where you get only a single measurement. In the first case, you have to decide whether to fix the frame or throw it away. In the second, you have to decide whether a single-frame measurement is correct or not. Neither decision is trivial.

 

Static device identification presents its own challenge. It seems like it ought to be trivial: if you get a bunch of pings from the same location, you throw the device away. Sadly, static devices are never quite static. Blockage and measurement error tend to produce some movement in the specific X/Y coordinates reported – so a static device rarely reads as perfectly still. This is a case where our grid system helps tremendously. And we've developed algorithms that help us pick out, label and discard static devices.
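
As an illustration (and only that – a simplified stand-in, not our production logic), here's one way a grid makes the static-device call tractable:

    # Illustrative (simplified) static-device check: snap each ping to a grid
    # cell and flag devices whose pings almost never leave one cell. The grid
    # size and threshold are assumptions, not DM1's actual rules.
    from collections import Counter

    GRID_FT = 5
    STATIC_SHARE = 0.95  # assumed: 95%+ of pings in one cell => static

    def is_static(pings: list[tuple[float, float]]) -> bool:
        cells = Counter((int(x // GRID_FT), int(y // GRID_FT)) for x, y in pings)
        _, top_count = cells.most_common(1)[0]
        return top_count / len(pings) >= STATIC_SHARE

    # A jittery-but-stationary device: every ping lands in grid cell (2, 2).
    print(is_static([(12.1, 11.9), (12.4, 12.2), (11.8, 12.0), (12.2, 11.7)]))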

 

Associate identification is the most fraught problem. Even if you issue employee devices and keep a table to track them, you'll almost certainly find that many Associates carry additional devices (yes, even if it's against policy). If you don't think that's true, you're just not paying attention to the data! You need algorithms to identify devices as Associates and tag those device signatures appropriately.

 

Now all of these problems can be handled in traditional ETL tools. But they are a pain in the ass to get right. And they aren't problems that you'll want to try to solve in the data viz solution. So you're looking at real IT jobs built around some fairly heavy-duty ETL. It's a lot of work. Work that you have to pay for as custom development. Work that can easily go wrong. Work that you have to stay on top of or risk having garbage data drive bad analysis. In short, it's one of those problems it's better to have a vendor tackle.

 

 

Store Layout Visualization

The underlying data stream when it comes to in-store tracking is very basic. Each data record contains a timestamp, a device id, and X, Y, Z coordinates. That's about it. To make this data interesting, you need to map the X, Y, Z coordinates to the store. Doing that involves creating (or using) a digital planogram. If you have one, it's not terribly difficult to load that data into a data viz tool and use it as the basis for aggregation. But it's not a very flexible or adaptable solution. If you want to break out data differently than in those digital planograms, you'll have to edit the database by hand. You'll have to create time-based queries that use the right digital layouts (this is no picnic and will kill the performance of most data viz tools), and you'll have to build meta-data tables by hand. This is not the kind of stuff that data visualization tools are good at, and trying to use them this way is going to be much harder – especially for a team where a reasonable, shareable workflow is critical.

 

Contrast that to doing the same tasks in DM1.

 

DM1 provides a full digital store planogram builder. It allows you to create (or modify) digital planograms with a point-and-click interface. It tracks planograms historically and automatically uses the right one for any given date. It maintains all the meta-data around a digital planogram, letting you easily map to multiple hierarchies or across multiple physical dimensions. And it allows you to seamlessly share everything you build.

 

Once you've got those digital planograms, DM1's reporting is tightly integrated. It's seamless to display metrics across every level of metadata right on the digital planogram. What's more, our grid model makes the translation of individual measurement points into defined areas repeatable at even fine-grained levels of the store. If you're relying on pre-built planograms, that's just not available. And keep in mind that the underlying data is event-based. So if you want to know how many people spent more than a certain amount of time in a particular area of the store, you'll have to pre-aggregate a bunch of data to use it effectively in a tool like Tableau. Not so in DM1, where every query runs against the event data, and the mapping to the digital planogram and subsequent calculation of time spent is done on the fly, in-memory. It's profoundly more flexible and much, much faster.
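
A bare-bones sketch of that on-the-fly, time-spent calculation – with the planogram mapping already applied and session handling omitted:

    # Bare-bones sketch: compute time spent per section straight from event
    # data by differencing successive timestamps and crediting the elapsed
    # time to the section of the earlier event. Planogram lookup, session
    # breaks and edge cases are omitted; the events are invented.
    from collections import defaultdict
    from itertools import pairwise

    # (timestamp, section) events for one visit, already mapped via planogram
    events = [(0, "entrance"), (40, "denim"), (420, "casual_shoes"), (1030, "checkout")]

    seconds_by_section: dict[str, float] = defaultdict(float)
    for (t0, section), (t1, _) in pairwise(events):
        seconds_by_section[section] += t1 - t0

    print(dict(seconds_by_section))
    # {'entrance': 40.0, 'denim': 380.0, 'casual_shoes': 610.0}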

 

 

Path Analysis

Pathing is one of those tasks that’s very challenging for traditional BI tools. Digital analytics tools often distinguished themselves by their ability to do comprehensive pathing: both in terms of performance (you have to run a lot of detailed data) and visualization (it’s no picnic to visualize the myriad paths that represent real visitor behavior). Adobe Analytics, for example, sports a terrific pathing tool that makes it easy to visualize paths, filter and prune them, and even segment across them. Still, as nice as digital pathing is, a lot of advanced BI teams have found that it’s less useful than you might think. Websites tend to have very high cardinality (lots of pages). That makes for very complex pathing – with tens of thousands or even hundreds of thousands of slightly variant paths adding up to important behaviors. Based on that experience, when we first built DM1, we left pathing on the drawing board. But it turns out that pathing is more limited in a physical space and, because of that, actually more interesting. So our latest DM1 release includes a robust pathing tool based on the types of tools we were used to in digital.

Digital Mortar's DM1 retail analytics and shopper tracking: full path analysis

With the path analysis, you start from any place in the store and see how people got there and where they went next. Even better, you can keep extending that view by drilling down into subsequent nodes. You can measure simple foot paths, or you can look at paths in terms of engagement spots (DM1 has two different metrics that represent increasing levels of engagement), and you can path at any level of the store: section, department, display…whatever.

And, just like the digital analytics tools, you can segment the paths as well. We even show which paths had the highest conversion percentages.

 

Sure, you could work some SQL wizardry and get at something like this in a general purpose viz tool. But A) it would be hard, B) it would be slow, and C) it wouldn't look as good or work nearly as well for data exploration.

 

 

Funnel Analysis

Digital Mortar's DM1 funnel analytics for retail and shopper tracking

When I demo DM1, I always wrap up by showing the funnel visualization. It shows off the platform's ability to do point-to-point-to-point analysis on a store and fill in key information along the way. Funnel analysis wraps up a bunch of stuff that's hard in traditional BI. The visualization is non-standard, the metrics are challenging to calculate, the data is event-driven and can't be aggregated into easy reporting structures, and effective usage requires the ability to map things like engagement time to any level of meta-data.

Digital Mortar's DM1 retail analytics and shopper tracking: funnel analytics

In the funnels here, you can see how we can effectively mix levels of engagement: how long people spent in a given meta-data-defined area of the store, whether or not they had an interaction, whether they visited (for any amount of time) a totally different area of the store, and then what they purchased. The first funnel describes Section conversion efficiency. The second looks at the cross-over between the Men's and Women's areas of the store.

And the third traces the path of shoppers who interacted with Digital Signage. No coding necessary and only minutes to setup.

 

That’s powerful!

 

As with path analysis, an analyst can replicate this kind of analysis with some very complicated SQL or programmatic logic. But it's damn hard and likely non-performant. It's also error-prone and difficult to replicate. And, of course, you lose the easy maintainability that DM1's digital planograms and meta-data provide. What might take days working in low-level tools takes just a few minutes with the Funnel tool in DM1.

 

 

Finally, Don’t Forget to Consider the Basic Economics

It usually costs more to get more. But there are times and situations where that's not necessarily the case. I know of large-scale retailers who purchase in-store tracking data feeds. And the data feed is all they care about, since they're focused on using BI and stats tools. Oddly, though, they often end up paying more than if they purchased DM1 and took our data feed. Odd, because it's not unusual for that data feed to be sourced by the exact same collection technology but re-sold by a company that's tacking on a huge markup for the privilege of giving you unprocessed raw data. So the data is identical. Except even that's not quite right. Because we've done a lot of work to clean up that same data source, and when we process it and generate our data feed, the data is cleaner. We throw out bad data points, analyze static and Associate devices and separate them, map Associate interactions, and map the data to digital planograms. Essentially all for free. And because DM1 doesn't charge extra for the feed, it's often cheaper to get DM1 AND the feed than just somebody else's feed. I know. It makes no sense. But it's true. So even if you bought DM1 and never opened the platform, you'd be saving money and have better data. It would be a shame not to use the software but…it's really stupid to pay more for demonstrably less of the same thing.

 

Bottom Line

I have a huge amount of respect for the quality and power of today’s general purpose data visualization tools. You can do almost anything with those tools. And no good analytics team should live without them. But as I once observed to a friend of mine who used Excel for word processing, just because you can do anything in Excel doesn’t mean you should do everything in Excel! In store analytics, there are real reasons why a bespoke analytics package will add value to your analytics toolkit. Will any bespoke solution replace those data viz tools? Nope. Frankly, we don’t want to do that.

 

I know that DM1's charting and tabular reporting are no match for what you can do easily in those tools. That's why DM1 comes complete with a baked-in, no-extra-charge data feed of the cleaned event-level data and a corresponding visitor-level CRM feed. We want you to use those tools. But as deep analytics practitioners who are fairly expert in those tools ourselves, we know there are some things they don't make as easy as we'd like. That's what DM1 is designed to do. It's been built with a strong eye on what an enterprise analyst (and team) needs that wouldn't be delivered by an off-the-shelf BI or data viz tool.

 

We think that’s the right approach for anyone designing a bespoke analytics or reporting package these days. Knowing that we don’t need to replace a tool like Tableau makes it easier for us to concentrate on delivering features and functionality that make a difference.

A Deeper Dive into How To Use Digital Mortar's DM1

Over the last year, we've released a string of videos showing DM1 in action. Those are marketing videos, meant to show off the capabilities of the platform and give people a sense of how it can be used. Last week, though, we pushed a set of product How-To videos out to our YouTube channel. These videos are designed to walk new users through aspects of the product and to support users of our Sandbox. For quite a while we've had a cloud-based Sandbox that partners can use to learn the product. In the next month or so, we're going to take that Sandbox to the next level and make it available on the Google Cloud as part of a test drive. That means ANYONE will be able to roll their own DM1 instance for 24 hours – complete with store data from our test areas.

The videos are designed to help users go into the Sandbox and experiment with the product productively.

There are four videos in the initial set and here’s a quick look at each:

Dashboards: When I demo the product, I don’t actually spend much time showing the DM1 Dashboard. Sometimes I don’t show it at all since I tend to focus on the more interesting analytic stuff. But the Dashboard is the first thing you see when you open the product – and it’s also the (built-in) reporting view that most non-analysts are going to take advantage of. The Dashboard How-to walks through the (very simple) process of creating Panels (reports) and Alerts in the Dashboard and shows each type of viz and alert. Alerts, in particular, are interesting. Using Alerts, you can choose to always show a KPI, or have it pop only when a metric exceeds some change or threshold. From my marketing videos, you probably wouldn’t even realize DM1 has this capability, but it’s actually pretty cool.

https://t.co/LIHTgMCpeQ

Workbench: This is a quick tour of the entire Analytics Workbench. Most of this is stuff you do see in my other videos since this is where I tend to spend time. But the How-To video walks through the Left-Navigation options in the Workbench more carefully than I usually do in Marketing Videos and also shows Viz types like the DayMap that I often give short shrift.

https://t.co/lM553x5XNw

Store Configuration: Digital Planograms are at the heart of DM1. They underlie ALL the reporting in the Analytics Workbench (and are, flat out, the viz in the Layout view). We've built a very robust point-and-click Configuration tool for building those Planograms. It's a huge part of the product and a major differentiator – there's nothing else like it out there. But because it's more plumbing than countertop, I usually don't show it at all in marketing videos. The How-To vid shows how you can open, edit and save an existing digital planogram and how easy it is to create a new one.

https://t.co/I5O66H6g5K

Metadata: The store configurator maps the store and allows you to assign any part of the store to….well, that's where metadata comes in. DM1's Admin interface includes a meta-data builder where you describe the Sections, Departments, Displays, Functions, Team Areas, etc. that matter to you. Meta-data is what makes basic locational data come alive. And DM1's very robust capability lets you define unlimited hierarchies, unlimited levels per hierarchy, and unlimited categories per level. What's the word of the day around metadata? Unlimited. It's pretty powerful, but it's really pretty easy to use as well, and the How-To vid gives you a nice little taste. And holy frig – I forgot to mention that not everyone on my team thought I should say "holy frig" in this video – but I left it in anyway.

https://t.co/YENzD6TMqC

It's really capabilities like the Metadata builder and the Store Configurator that make DM1 true enterprise analytics. They provide the foundational elements that let you manage complex store setups and generate consistently interesting analytic reporting. Even if you're not a user yet, check 'em out. If nothing else, you'll be ready for a Test-Drive!

A Year in Store Analytics

It’s been a little more than a year now for me in store analytics and with the time right after Christmas and the chance to see the industry’s latest at NRF 2018, it seems like a good time to reflect on what I’ve learned and where I think things are headed.

Let’s start with the big broad view…

The Current State of Stores

Given the retail apocalypse meme, it’s obvious that 2017 was a very tough year. But the sheer number of store closings masked other statistics – including fairly robust in-store spending growth – that tell a different story. There’s no doubt that stores saddled with a lot of bad real-estate and muddied brands got pounded in 2017. I’ve written before that one of the unique economic aspects of online from a marketplace standpoint is the absence of friction. That lack of friction makes it possible for one player (you know who) to dominate in a way that could never have happened in physical retail. At the same time, digital has greatly reduced overall retail friction. And that reduction means that shoppers are not inclined to shop at bad stores just to achieve geographic convenience. So the unsatisfying end of the store market is getting absolutely crushed – and frankly – nothing is going to save it. Digital has created a world that is very unforgiving to bad experience.

On the other hand, if you can exceed that threshold, it seems pretty clear that there is a legitimate and very significant role for physical stores. And then the key question becomes: can you use analytics to make stores an asset?

So let’s talk about…

The Current State of In-Store Customer Analytics

It's pretty rough out there. A lot of companies have experimented with in-store shopper measurement using a variety of technologies. Mostly, those efforts haven't been successful, and I think there are two reasons for that. First, this type of store analytics is new, and most of the stores trying it don't have dedicated analytics teams who can use the data. IT-led projects are great for getting the infrastructure in the store, but without dedicated analysts the business value isn't going to materialize. I saw that same pattern for years in web analytics before the digital analytics function was standardized and (nearly always) located on the business side. Second, the products most stores are using just suck. I really do feel for any analyst trying to use the deeply flawed, highly aggregated data that gets produced and presented by most of the "solutions" out there. They don't give analysts enough access to the data to be able to clean it, and they don't do a very good job cleaning it themselves. And even when the data is acceptable, the depth of reporting and analytics isn't.

So when I talk to companies that have invested in existing (non-Digital Mortar) store analytics solutions, what I mostly hear is a litany of complaints and failure. We tried it, but it was too expensive. We didn't see the value. It didn't work very well.

I get it. The bottom line is that for analytics to be useful, the data has to be reasonably accurate, the analytics platform has to provide reasonable access to the data and you must have resources who can use it. Oh – and you have to be willing to make changes and actually use the data.

There’s a lot of maturing to do across all of these dimensions. It’s really just this simple. If you are serious about analytics, you have to invest in it. Dollars and organizational capital. Dollars to put the right technology in place and get the people to run it. Organizational capital to push people into actually using data to drive decisions and aggressively test.

Which brings me to….

What to invest in

Our DM1 platform, obviously. But that's just one part of a bigger set of analytics decisions. I wrote pretty deeply before the holidays on the various data collection technologies in play. Based on what I saw at NRF, not that much has changed. I did see some improvement on the camera side of the house. Time-of-flight cameras are interesting, and there are at least a couple of camera systems now that are beginning to do the all-important work of shopper stitching across zones. For small-footprint stores there are some viable options in the camera world worth considering. I even saw a couple of face recognition systems that might make point-to-point implementations for analytics practical. Those systems are mostly focused on security, though – and integration with analytics is going to be work.

I haven’t written much about mobile measurement, but geo-location within mobile apps is – to quote the Lenox mortgage guy – the biggest no-brainer in the history of earth. It’s not a complete sample. It’s not even a good sample. But it’s ridiculously easy to drop code into your mobile app to geo-locate within the store. And we can take that tracking data and run it into DM1 – giving you detailed, powerful analytics on one of the most important shopper segments you have. It costs very little. There’s no store side infrastructure or physical implementation – and the data is accurate, omni-joinable and super powerful. Small segment nirvana.

The overall data collection technology decision isn’t simple or straightforward for anyone. We’ve actually been working with Capgemini to integrate multiple technologies into their Innovation Center so that we can run workshops to help companies get a hands-on feel for each and – I hope – help folks make the right decision for their stores.

People is the biggest thing. People is the most expensive thing. People is the most important thing. It doesn’t matter how much analytic technology you bring to the table – people are the key to making it work. The vast majority of stores just don’t have store-side teams that understand behavioral data. You can try to create that or you can expand the brief of your digital or omni-channel teams and re-christen them behavioral analytics teams. I like option number two. Why not take advantage of the analytics smarts you actually have? The data, as I’ve said many times before, is eerily similar. We’ve been working hard to beef up partnerships and our own professional services to help too. But while you can use consultants to get a serious analytic effort off the ground, over time you need to own it. And that means deciding where it lives in your organization and how it fits in.

Which I know sounds a lot like…

Everything old is new again

I make no bones about the fact that I dived into store measurement because I thought the lessons of digital analytics mostly applied. In the year since, I've found that to be truer than I knew and maybe even truer than I'd like. Many of the challenges I see in store analytics are the ones we spent more than a decade in digital analytics gradually solving: bad data quality and insufficient attention to making it right; IT organizations focused on collection, not use; a focus on site/store measurement instead of shopper measurement.

Some of the problems are common to any analytic effort of any sort. An over-willingness to invest in technology, not people (yeah – I know – I'm a technology vendor now, I shouldn't be saying this!). A lack of willingness to change operational patterns to be driven by analytics and measurement, and a corresponding difficulty actually using analytics. Far too many people willing to talk the talk but unable or unwilling to walk the walk necessary to do analytics and to use it. These are hard problems, and only select companies will ever solve them.

Through it all I see no reason to change the core beliefs that drove me to start Digital Mortar. Shopper analytics is critical to doing retail well. In a time of disruption and innovation, it can drive massive competitive advantage if an organization is willing to embrace it seriously. But that’s not easy. It takes organizational commitment, some guts, good tools and real smarts.

Digital Mortar can provide a genuinely good tool. We can help with the smarts. Guts and commitment? That’s up to you!

The State of Store Tracking Technology

The perfect store tracking data collection system would be costless, lossless and highly accurate, would require no effort to deploy, would track every customer journey with high precision, would differentiate Associates from shoppers, and would provide shopper demographics along with easy opt-out and a minimal creep factor.

We’re not in a perfect world.

In my last post, I summarized in-store data collection systems across the dimensions that I think matter when it comes to choosing a technology: population coverage, positional accuracy, journey tracking, demographics, privacy, associate data collection and separation, ease of implementation and cost. At the top of this post, I summarized how each technology fared by dimension.

In-store tracking technologies rated

As you can see, no technology wins every category, so you have to think about what matters most for your business and measurement needs.

Here’s our thinking about when to use each technology for store tracking:

Camera: Video systems provide accurate tracking for the entire population along with shopper demographics. On the con side, they are hard to deploy, very expensive, provide sub-standard journey measurement and offer no opt-out mechanism. From our perspective, camera makes the most sense in very small-footprint stores or integrated into a broader store measurement system where camera is used exclusively for total counting and demographics.

WiFi: If only WiFi tracking worked better, what a wonderful world it would be. It's nearly costless and there's almost no effort to deploy. It can differentiate shoppers and Associates and it provides an opt-out mechanism. Unfortunately, it doesn't provide the accuracy necessary for useful measurement in most retail situations. If you're an airport or an arena or a resort, you should seriously consider WiFi tracking. But for most stores, the problems are too severe to work around. With store WiFi, you lose tracking on your iPhone shoppers and you get less coverage on all devices. Worse, the location accuracy isn't good enough to place shoppers in a reasonable store location. It's easy to fool yourself about this. It's free. It's easy. What could go wrong? But keep two things in mind. First, bad data is worse than no data. Making decisions on bad data is a surefire way to screw up. Second, most of the cost of analytics is people, not technology. When you give your people bad tools and bad data, they spend most of their time trying to compensate. It just isn't worth it.

Passive Sniffer (iViu): There's a lot to like with this system, and that's why it's – by far – our most common go-to solution in traditional store settings. iViu devices provide full journey measurement with good-enough accuracy. They cover most of the population, and what they miss doesn't feel significantly biased. The devices are inexpensive and easy to install, so full-fleet measurement is possible and PoCs can be done very inexpensively. They do a great job letting us differentiate and measure Associates, and they provide a reasonable opt-out mechanism for shoppers. Even if this technology doesn't win in most categories, it provides good-enough performance in almost every category.

Combining Solutions

This isn't necessarily an all-or-nothing proposition. You can integrate these technologies in ways that (sorta) give you the best of both worlds. We often recommend camera-on-entry, for example, even when we're deploying an iViu solution. Why? Well, camera-on-entry is cheap enough to deploy, it provides demographics, and it provides a pretty accurate total count. We can use that total to understand how much of the population we're missing with electronic detection and, if the situation warrants it, true up the numbers based on the measured difference.
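
The true-up itself is simple arithmetic. With invented numbers:

    # True-up sketch (numbers invented): scale sensor-based counts up to the
    # more complete camera-on-entry count.
    camera_entries = 1200      # camera-on-entry total for the day
    sensor_visits = 950        # devices detected electronically

    scale = camera_entries / sensor_visits   # ~1.26
    sensor_section_visits = 400              # e.g., detected visits to one section
    print(round(sensor_section_visits * scale))  # ~505 trued-up visits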

In addition, we see real value in camera-based display tracking. Without a very fine-grained RFID mesh, electronic systems simply can’t do display interaction tracking. Where that’s critical, camera is the right point solution. In fact, that’s part of what we demoed at the Capgemini Applied Innovation Exchange last week. We used iViu devices for the overall journey measurement and Intel cameras for display interaction measurement.

Similarly, in large public spaces we sometimes recommend a mix of WiFi and iViu or camera. WiFi provides the in-place full journey measurement that would be too expensive to get at any other way. But by deploying camera at choke-points or iViu in places where we need more accurate positional data, we can significantly improve overall collection and measurement without incurring unreasonable costs.

Summing Up

In a very real sense, we have no dog in this hunt. Or perhaps it's more accurate to say we back every dog in this hunt. We don't make hardware. We don't make more money on one system than another. We just want the easiest, best path to getting the data we need to drive advanced analytics. Both camera systems and WiFi have the potential to be better store tracking solutions with improvements in accuracy and cost. We follow technology developments closely and we're always hoping for better, cheaper, faster solutions. And there are times right now when using existing WiFi or deploying cameras is the right way to go. But in most retail situations, we think the iViu solution is the right choice.

And the fact that their data flows seamlessly into DM1 in both batch and – with Version 2 – real-time modes? From your perspective, that should be a big plus.

Open data systems are a huge advantage when it comes to planning out your data collection strategy. And finding the right measurement software to drive your analytics is – when you get right down to it – the decision that really matters.

And the good news? That’s the easiest decision you’ll ever have to make. Because there’s really nothing else out there that’s even remotely competitive to DM1.