08.06.2023

If you use mobile location data, it’s time for a mid-course correction

A loaf of bread cost somewhere around 12 cents back in 1950. No, this isn’t a lesson in economics — not exactly — but bear with me. When we compare that 12 cents to today’s average cost of around $2.50, we don’t make the mistake of thinking that bread was worth any less back then.


Rather, we consider price fluctuations, recognizing that consumer behavior, inflation, the impact of new technological efficiencies and other factors have affected the price of our sandwich loaf. We know this because economists regularly update and revise their forecasts, essentially recalibrating the baseline used to measure the dollar value of goods and services.
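To see what that baseline adjustment looks like in practice, here is a minimal sketch that deflates the 1950 price into today’s dollars. The CPI figures are rough annual averages supplied here for illustration, not numbers from this article.

```python
# A minimal sketch of baseline recalibration via CPI deflation.
# CPI values are approximate annual averages (assumptions, not from the article).
CPI_1950 = 24.1   # approximate CPI-U, 1950
CPI_2023 = 305.0  # approximate CPI-U, 2023

price_1950 = 0.12  # nominal price of a loaf in 1950 dollars

# Restate the 1950 price in 2023 dollars by scaling with the CPI ratio.
price_in_2023_dollars = price_1950 * (CPI_2023 / CPI_1950)
print(f"A 12-cent 1950 loaf is about ${price_in_2023_dollars:.2f} in 2023 dollars")
# Roughly $1.52, so the gap between that and today's ~$2.50 loaf is what
# the other factors (behavior, technology, supply) have to explain.
```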

In the business of mobile location data, we might be better off if we had a few economists around. Surely they’d recognize the need to step back every once in a while and revise the baselines we use to measure the true value of consumer foot traffic. In the absence of economists, marketers should demand such assessments and adjustments from their data vendors to ensure they’re making decisions based on valid data.

Recalibration, pausing to evaluate where we are before we plow ahead, is a key step that’s often overlooked. Price fluctuations and location data are, of course, very different things. But the analogy illuminates the importance of revising data baselines, so that the numbers behind data-centric business decisions remain valid and account for factors such as artificial growth and shifts in historical patterns.

The mobile location data tsunami is growing


Why? We’re in the midst of a massive mobile data tsunami. More devices, cheaper data plans, more robust device capabilities and an onslaught of data-heavy websites and apps mean the amount of information generated by our interactions with mobile devices is growing exponentially.

According to a November 2017 report from Ericsson, mobile data traffic in North America was expected to hit 7.1 gigabytes per month per smartphone by the end of 2017. The telco predicted that figure would soar to 48GB per month by the end of 2023.
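Those two figures imply a compound annual growth rate of roughly 37% per year, which is worth making explicit; a quick check using only the numbers cited above:

```python
# Implied compound annual growth rate (CAGR) from the Ericsson figures:
# 7.1 GB/month per smartphone (end of 2017) to 48 GB/month (end of 2023).
start_gb, end_gb, years = 7.1, 48.0, 6

cagr = (end_gb / start_gb) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # about 37.5% per year
```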

With this expanding influx of mobile traffic come shifting usage patterns. However, despite this massive wave of additional data and changing patterns, most location data companies fail to step back and assess where they are and whether their models for analyzing the data remain relevant.

These increases in data generation, and the abundance of pattern variations that come along with them, can have a huge effect on the models used to analyze consumer foot traffic, making regular recalibration of those models imperative. Performing this assessment and making adjustments should be common practice, to make certain the location data we use to gauge foot traffic at store and restaurant locations reflects today’s reality.
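What does such a recalibration look like? One simple form is to normalize raw visit counts by the size of the observable device panel in each period, so that growth in the panel doesn’t masquerade as growth in foot traffic. A minimal sketch, with invented numbers and a deliberately simplified normalization:

```python
# Sketch: normalize observed visits by panel size per period so that
# panel growth isn't mistaken for foot-traffic growth.
# All numbers are invented for illustration.
periods = {
    # period: (observed device visits to a store, devices in the panel)
    "2023-H1": (12_000, 400_000),
    "2023-H2": (19_000, 700_000),
}

for period, (visits, panel_size) in periods.items():
    visits_per_10k = visits / panel_size * 10_000
    print(f"{period}: {visits:,} raw visits -> {visits_per_10k:.0f} per 10k panel devices")

# Raw visits grew ~58%, but the per-device rate actually fell (300 -> 271):
# the apparent growth came from the panel, not from shoppers.
```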

This is particularly important for marketers, consultancies and even hedge fund managers who use location data to measure historical changes in consumer traffic at commercial locations.

Let’s say Under Armour wants to evaluate the health of the retailers it partners with to sell its gear. The company might compare location data showing visits to Dick’s Sporting Goods over the past six months with information showing consumer visits to Foot Locker locations. Yet, if Under Armour’s location data provider is using stale models, the brand risks basing key business decisions on misleading information.

Preventing faulty historical comparisons

Here’s why: A great deal of the location data employed to measure foot traffic at retailers such as Foot Locker or Dick’s is gathered via mobile apps. However, there are several variables we must consider. For one thing, while the same number of people may be frequenting these locations, a larger percentage of them may have smartphones today than in the past.
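Under the simple assumption that every smartphone-carrying visitor is equally likely to be observed, dividing observed visits by the smartphone penetration rate puts the two periods on a comparable footing. A sketch with hypothetical penetration rates:

```python
# Sketch: correct observed device visits for rising smartphone penetration.
# Penetration rates are hypothetical, purely for illustration.
def estimated_visitors(observed_visits: int, penetration: float) -> float:
    """Scale observed visits up by the share of visitors carrying a
    trackable smartphone, assuming observation is otherwise uniform."""
    return observed_visits / penetration

six_months_ago = estimated_visitors(8_000, penetration=0.64)
today = estimated_visitors(9_000, penetration=0.72)

print(f"then: {six_months_ago:,.0f} visitors, now: {today:,.0f} visitors")
# Observed visits rose 12.5%, yet estimated visitors are flat at 12,500:
# the extra signal came from penetration, not from more shoppers.
```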

But that’s not all. There is more and more location data reflecting consumer foot traffic as device usage grows, yet we don’t always know how data providers’ systems have changed.

Are they ingesting more data simply because there is more usage among the same number of devices represented by the same number of apps as six months ago, or is there more data because they have partnered with more app providers?

If a provider has expanded its roster of app publisher partners, it may seem as though there’s been an increase in the volume of consumer visits when, in actuality, those numbers reflect artificial growth.
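One way to separate real growth from supply growth is to compute the trend twice: once over all incoming data, and once restricted to the cohort of source apps present in both periods. If the all-data trend outruns the stable-cohort trend, the gap is likely artificial. A sketch in which the app names and counts are invented:

```python
# Sketch: growth across all source apps vs. a stable cohort of apps
# present in both periods. App names and counts are invented.
visits_before = {"weather_app": 5_000, "nav_app": 7_000}
visits_after = {"weather_app": 5_200, "nav_app": 7_100,
                "new_partner_app": 6_000}  # newly onboarded supply

stable = visits_before.keys() & visits_after.keys()

all_growth = sum(visits_after.values()) / sum(visits_before.values()) - 1
cohort_growth = (sum(visits_after[a] for a in stable)
                 / sum(visits_before[a] for a in stable)) - 1

print(f"all-data growth:      {all_growth:+.1%}")     # +52.5%
print(f"stable-cohort growth: {cohort_growth:+.1%}")  # +2.5%
# The difference between the two is artificial growth from new supply,
# not a genuine surge in consumer visits.
```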

The recalibration process also ensures that the data used to build models reflects evolving app usage patterns. As app publishers change features or introduce new apps, as older apps fall from favor, and as more users adopt app categories such as podcast apps, the usage pattern landscape morphs. When’s the last time you played Angry Birds or Candy Crush Saga?

Usage patterns vary widely across the country, too. For instance, while people on the East Coast — where spring weather can fluctuate throughout the day — might open their weather apps multiple times in a 24-hour stretch, Southern Californians expecting sun may not open their weather apps for a week. That doesn’t mean, of course, that Californians aren’t visiting store locations. Interactions with traffic apps also vary regionally. User patterns in traffic-obsessed Los Angeles look a lot different from, say, patterns among folks in Auburn, Alabama.
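Those engagement differences translate directly into observability differences: a visitor who rarely opens the app is rarely seen. A crude correction is to weight each region’s observed visits by the inverse of its app-open rate; the rates below are hypothetical placeholders, where a real model would estimate them from data:

```python
# Sketch: reweight observed visits by regional app-engagement rates.
# Open rates are hypothetical placeholders.
daily_open_rate = {"east_coast": 0.90, "socal": 0.20}  # P(app opened on a given day)
observed_visits = {"east_coast": 4_500, "socal": 1_000}

for region, visits in observed_visits.items():
    # Inverse-probability weighting: each observed visit stands in for
    # 1 / open_rate visits that low-engagement regions never report.
    adjusted = visits / daily_open_rate[region]
    print(f"{region}: {visits:,} observed -> {adjusted:,.0f} adjusted")

# Both regions come out to ~5,000 adjusted visits: socal's raw count
# undercounts store visits because its residents rarely open the app,
# not because they rarely visit stores.
```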

By recalibrating baselines to account for app usage pattern variations, we have a better handle on the consumer traffic data used for location and ad attribution reports, ensuring they reflect what’s happening in real life. Regular mid-course corrections may never be standard for the location data industry, but I’d guess some economists would agree they ought to be.
