White House report on Big Data and Differential Pricing
A White House report examines how companies use big data and analytics to charge different prices to different customers (price discrimination), weighs both the benefits and the risks, and concludes that many concerns can be addressed by existing anti-discrimination and consumer protection laws.
In February, the White House released a report discussing how companies are using big data to charge different prices to different customers, a practice known as price discrimination or differential pricing.
In marketing, collecting and understanding customer behavior information is a core activity, and companies have developed a wide variety of tools to do so. The motivation is straightforward: if sellers can predict what a customer is willing to pay, or which products she prefers, they can set prices accordingly and expand the size of the market.
Big data has lowered the costs of collecting customer-level information, making it easier for sellers to identify new customer segments and to target those populations with customized marketing and pricing plans. Given sufficient data, sellers can try to predict how buyers will respond to different prices and pricing schemes. For example, a 2014 study by Benjamin Shiller estimates the increase in profits if Netflix were to use behavioral data for personalized pricing. He finds that differential pricing based on demographics (whereby Netflix would adjust prices based on a customer’s race, age, income, geographic location, and family size) could increase profit by 0.8 percent, while using 5,000 web browsing variables (such as the amount of time a user typically spends online or whether she has recently visited Wikipedia or IMDB) could increase profits by as much as 12.2 percent.
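To see why predicting willingness to pay (WTP) can raise profits, consider a minimal sketch comparing the best single uniform price against fully personalized prices. The customer WTP values and zero marginal cost are hypothetical illustrations, not figures from Shiller's study:

```python
def profit(price, wtp, cost=0.0):
    """Profit from one uniform price: a customer buys iff her WTP >= price."""
    return sum(price - cost for w in wtp if w >= price)

def best_uniform_profit(wtp, cost=0.0):
    """Best profit achievable when everyone is quoted the same price.

    With discrete buyers, the optimal uniform price is always equal to
    some customer's WTP, so it suffices to try each WTP as a candidate.
    """
    return max(profit(p, wtp, cost) for p in wtp)

def personalized_profit(wtp, cost=0.0):
    """Upper bound: the seller charges each customer exactly her WTP."""
    return sum(w - cost for w in wtp if w >= cost)

# Hypothetical willingness-to-pay for four customers
wtp = [8.0, 10.0, 12.0, 20.0]
print(best_uniform_profit(wtp))   # best single price is 8.0, profit 32.0
print(personalized_profit(wtp))   # perfect personalization yields 50.0
```

The gap between the two figures is the profit a seller could capture by moving from uniform to personalized pricing, which is why behavioral data that sharpens WTP predictions is commercially valuable.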
Discrimination and Anti-discrimination
In the economics literature, third-degree price discrimination occurs when sellers charge different prices to different demographic groups, as with discounts for senior citizens.
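In code, third-degree price discrimination is simply a lookup from an observable group to a price. The segments and prices below are hypothetical:

```python
# Hypothetical per-segment prices for the same good
SEGMENT_PRICES = {"senior": 8.0, "student": 9.0, "standard": 12.0}

def quote(segment):
    """Quote a price based on the customer's demographic segment.

    Unknown segments fall back to the standard price.
    """
    return SEGMENT_PRICES.get(segment, SEGMENT_PRICES["standard"])

print(quote("senior"))    # 8.0 (the senior discount)
print(quote("standard"))  # 12.0
```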
Big data naturally raises concerns among groups that have historically been victims of discrimination. Given hundreds of variables to choose from, it is easy to imagine that statistical models could be used to hide more explicit forms of discrimination by generating customer segments that are closely correlated with race, gender, ethnicity, or religion. Moreover, the term “price discrimination” may lead to concerns about economic injustice, even if the profit motive is different from, and in many cases fundamentally inconsistent with, the sort of prejudice that our anti-discrimination laws seek to prohibit.
Disparate impact occurs when a practice has an adverse effect on a protected group, even if the practice was not intended to be discriminatory. Big data can help distinguish between disparate treatment and disparate impact, and thus clarify how data-driven pricing affects historically disadvantaged groups. The premise is that when marketers have a wide variety of behavioral data to choose from, they will find crude proxies such as race or religion less useful. In other words, big data may reduce the rate of “false positive” cases that make disparate treatment a problem.
Big data also provides new tools for detecting problems, both before a pricing algorithm is deployed and after it has been used on real consumers. For example, it is often straightforward to conduct statistical tests for disparate impact by asking whether the prices generated by a particular algorithm are correlated with variables such as race, gender, or ethnicity. Put differently, in markets where it is important to prevent disparate impact, big data can be used to enforce existing anti-discrimination laws more effectively, thereby obviating the need for broader restrictions on its use.
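The correlation test described above can be sketched in a few lines. This is an illustrative audit check, not a legal standard: the prices, the 0/1 protected-group indicator, and the flagging threshold are all hypothetical:

```python
from statistics import mean, pstdev

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def disparate_impact_check(prices, group, threshold=0.1):
    """Flag a pricing rule whose output correlates with a protected attribute.

    prices    -- prices the algorithm quoted to each customer
    group     -- 0/1 indicator of membership in a protected group
    threshold -- hypothetical audit cutoff on |correlation|
    Returns the correlation and whether it exceeds the cutoff.
    """
    r = pearson(prices, group)
    return r, abs(r) > threshold

# Hypothetical audit: group members are systematically quoted higher prices
prices = [10.0, 12.0, 10.5, 13.0, 11.0, 12.5]
group = [0, 1, 0, 1, 0, 1]
r, flagged = disparate_impact_check(prices, group)
print(round(r, 2), flagged)  # strong positive correlation, so flagged is True
```

A real audit would control for legitimate pricing factors (e.g. with a regression rather than a raw correlation), but the basic idea, comparing algorithmic outputs against protected attributes, is just this simple.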