
Users are more concerned about their data privacy, but many companies are still hesitant to invest in data protection

Users are becoming more aware of data collection and are picking companies based on their privacy policies. Isabel Pavia/Getty Images

  • Customers are growing more concerned about the privacy of their data online. 
  • Companies are often unmotivated to protect data because leaks hurt consumers more than companies. 
  • But businesses should know that not all privacy measures are costly, and some will not impact revenue. 

As consumers in today's digital world, we're used to giving away huge amounts of personal data. We enter our age and credit card number when we register for an online service; we allow companies to track what we click on and buy; we often broadcast our geographical location.

In theory, much of this data is intended to help firms provide better, more personalized service. But as customers become increasingly aware of the risks of their information being stolen by hackers, misused, or sold to third parties, they're looking for stronger privacy protections, said Ruslan Momot, a visiting assistant professor of operations at Kellogg.

So how should regulators and firms approach preserving privacy?

In two papers, Momot and colleagues examine the issue. They find that it's not enough for policymakers to choose between requiring safeguards against breaches or restricting the amount of data that companies can gather. "You need to regulate two sides of the company's data strategy, both data protection and data collection," said Momot, who is on leave from his position as assistant professor of operations management at HEC Paris.

The good news: Not all privacy measures are costly to the bottom line. One protective measure, which involves adding "noise" to the output of personalization algorithms, is unlikely to substantially cut company revenue in many circumstances.

"Preserving privacy is not that costly," he said.

Momot argues that firms should not resist pressure to better protect their customers' data. In fact, beefing up privacy measures could not only help companies comply with new privacy laws and regulations but could also bring in more business.

"Companies should actually embrace this because this might become a competitive edge," he said.

Growing concerns 

Amazon's product recommendations, based on the data it's collected about us, are often useful. And, of course, Uber wouldn't work if we didn't share our location.

But a stream of news about hackers breaching companies' databases, as well as growing uncertainty around how information is used and whether it is being shared with third parties, has put customers on edge.

Momot points to a couple of indicators that customers are becoming more guarded. A Pew Research Center survey conducted in 2019 found that 81% of participants felt that the risks of companies' data collection outweighed the benefits, and 52% had recently decided to avoid a product or service due to concerns about giving away personal information.

This year, Apple started requiring apps to ask users for permission to track their activity and share that data with other apps. Among US users who made a choice during the first three weeks after this new feature became available, 94% elected not to allow tracking, according to one industry analysis.

The downside of network effects 

So what can regulators do to respond to this growing anxiety?

In one study, Momot and his collaborators, Itay Fainmesser at Johns Hopkins Carey Business School and Andrea Galeotti at London Business School, began by investigating how businesses might, or might not, be incentivized to invest in data privacy.

The team developed a mathematical model of the parties involved in the data market. This included a company, its users, and so-called "adversaries" who wanted to access consumers' data for harmful purposes. These adversaries could include hackers and criminals, as well as entities like governments — basically anyone whose possession of data could make users uncomfortable.

The model predicted that as the firm began gathering data, user activity increased: customers were benefiting from more personalized service. At that stage, the size of the company's database was small, so it didn't hold much allure for adversaries. But as the company amassed more and more information, the data trove became more attractive to hackers and other third parties. Privacy risks started to outweigh benefits, and user activity dropped.

One key point, Momot said, is that privacy risk, at its core, turns out to be driven by negative network effects.

"Network effects" refers to the idea that a user's decision to participate in an activity on a platform depends partly on how many other people are using it. Companies such as Facebook have relied heavily on this phenomenon. If a person is the only one in their social circle on the site, it's not particularly useful; but as more people sign up, the service becomes more beneficial to each person.

Privacy risks, however, are driven by negative network effects. The larger the number of users, the bigger the company's database, and the more lucrative a target it becomes for adversaries to attack.
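
To make the intuition concrete, here is a minimal sketch of how these two forces can interact. It is not the researchers' actual model; the functional forms and numbers below are illustrative assumptions only.

```python
import numpy as np

# Illustrative sketch only: the functional forms are assumptions,
# not the model from the Fainmesser, Galeotti, and Momot paper.

def net_utility(n_users):
    """Per-user utility on a platform with n_users participants."""
    # Positive network effect: benefit grows with participation,
    # with diminishing returns.
    personalization_benefit = np.log1p(n_users)
    # Negative network effect: a bigger database is a more lucrative
    # target, so expected privacy harm grows with platform size.
    privacy_risk = 0.002 * n_users
    return personalization_benefit - privacy_risk

sizes = np.arange(0, 5001, 100)
utilities = net_utility(sizes)
print(f"Per-user utility peaks near {sizes[np.argmax(utilities)]} users, then declines.")
```

Under these assumptions, utility rises while the marginal benefit of personalization still outweighs the growing privacy risk, then falls, mirroring the rise and drop in user activity that the model predicts.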

Network effects "brought these companies a big chunk of business," Momot said. But in the realm of data privacy, "they are working in the opposite way."

Internalizing risks 

The team then used the model to explore the types of regulations that would effectively protect consumers.

First, they examined a hypothetical scenario where policymakers set requirements for data protection but didn't limit data collection. As one might expect, companies gathered more personal information than they needed. Conversely, if regulators restricted data collection but ignored data protection, firms didn't guard customers' data strongly enough.

The problem: a data leak simply wouldn't affect a company as much as it affected customers. "Companies don't internalize the privacy risks that the consumers are facing," Momot said. While a firm might lose some users after a data breach, many large companies are monopolies that enjoy positive network effects. If most of a customer's friends remain on Facebook, that person can't get the same benefits by moving to another social-media site.

The team concluded that policymakers must regulate both data protection and collection. Protection might be required in the form of certain encryption techniques or antivirus software.

Collection could be restricted in a couple of ways. Regulators could impose liability fines on companies whose data were leaked, with the amount reflecting how much users were harmed. Or policymakers could tax data collection, thus discouraging firms from gathering personal information indiscriminately.

Personalized services 

In the second study, Momot collaborated with Yanzhe (Murray) Lei at Queen's University and Sentao Miao at McGill University to explore how a particular privacy measure would hit a company's bottom line.

The researchers focused on firms that provide personalized service to users, based on how other users have behaved in the past. For instance, the company might store customers' demographic information and purchasing behavior in a database and run algorithms to predict the products that similar people would want or the prices that people with similar backgrounds are likely to pay.

The problem is that this strategy puts users' personal information at risk — even if hackers don't directly breach the database.

For example, hackers might register thousands of fake users, entering slightly varying demographic details for each one. They could then monitor how the offered product choices or prices change if one piece of a user's profile, such as gender, is altered — essentially giving them a window into how the algorithm works. If hackers then get access to the algorithms' output for real users, they can reverse-engineer that information to figure out each person's characteristics.
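
A toy sketch of that probing step might look like the following. Everything here is hypothetical: the quote_premium function stands in for an unknown black-box pricing algorithm, and the attribute names and dollar amounts are invented for illustration.

```python
# Hypothetical illustration of the probing attack described above.
# The pricing logic is invented; a real attacker would only observe
# the outputs, not the code.

def quote_premium(profile):
    """Stand-in for a company's personalization algorithm."""
    base = 300
    if profile["gender"] == "F":
        base += 26
    base += 2 * (profile["age"] - 30)
    return base

# The attacker registers fake profiles that differ in exactly one field...
probe_a = {"age": 35, "gender": "M"}
probe_b = {"age": 35, "gender": "F"}

# ...and observes how the output shifts when that one field changes.
print(f"Changing gender shifts the quote by ${quote_premium(probe_b) - quote_premium(probe_a)}")

# With these sensitivities mapped out, an attacker who later sees a
# real user's quote can work backward toward that user's attributes.
```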

So the team explored how firms could keep providing personalized services without compromising users' data privacy. They chose to use a common privacy standard called differential privacy, which means that a system's output, such as a product recommendation, does not depend on the data of any individual customer. (Companies such as Apple, Google, and Microsoft use this standard, Momot explained.)

The researchers' strategy involved adding some "noise" to data that hackers might obtain.

In one variation of this technique, companies would add noise to the output of personalization algorithms. Let's say that a health insurance company determined that the ideal monthly premium for a user's policy was $326. The firm would then perform the digital equivalent of flipping a coin; if it landed heads, the software would add a small pre-calculated amount, such as $1, to the price. Similarly, a shopping website might present a slightly different product or assortment of products to a customer than the optimal one — for instance, brown instead of black shoes.
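
A minimal sketch of that coin-flip perturbation is below. The $326 premium and $1 step come from the example above; the fair coin and the subtract-on-tails branch are assumptions for illustration, and the researchers' algorithms calibrate the noise precisely to meet the differential-privacy guarantee.

```python
import random

def noisy_premium(optimal_premium, step=1.0):
    """Return the optimal price nudged up or down by a coin flip.

    The symmetric +/- step is an assumption; the article only
    specifies that heads adds a small pre-calculated amount.
    """
    if random.random() < 0.5:  # heads
        return optimal_premium + step
    return optimal_premium - step  # tails

print(noisy_premium(326))  # 327.0 or 325.0, never the exact optimum
```

Because the nudge is random, observing a quoted premium no longer pins down the algorithm's exact output for that customer.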

The downside, of course, is that companies are deliberately deviating from their optimal decisions, such as the price to offer or the product to recommend, which may reduce the chances of customers buying a product and, consequently, reduce company revenue.

But when the researchers implemented this privacy-protection approach in a mathematical model, they found that such deviations, if done right (based on the algorithms developed by the team), did not cut firms' profits much, as long as the company had a large database of past user behavior. "This reduction in revenue is not that large," Momot said. Having extensive records allowed the personalization algorithm to make reasonably accurate predictions and, consequently, decisions. So fudging the results a bit didn't have a dramatic effect on profits.

To test the idea on real-world data, the team calibrated their model on a dataset of about 208,000 auto-loan applications and about 45,000 resulting loans from 2002 to 2004. They found that if the company had 1,000 data points about past users, it reached 80% of its maximum possible profit when no privacy protections were in place. (Reaching 100% profit would require an algorithm that could perfectly predict consumer behavior.) When noise was added to its algorithms' output, that figure was 76%. And the difference shrank as the database grew. If the firm had 6,000 data points, the profit gap was 2 percentage points.

Prioritizing privacy 

Understanding privacy issues is complex. While some existing companies, such as Skyflow, help firms with data privacy and compliance, Momot envisions that more will soon be created to give companies a set of tools to better handle their users' data.

Overall, user-privacy protection is not an issue that companies should avoid. Even if regulations don't require firms to step up consumer protections, doing so may give companies an advantage over competitors with more lax protocols.

"As users become more and more aware, they start to choose companies based on whether the companies are preserving privacy," Momot said.

Some companies may put data privacy on the back burner and hope it doesn't become a major issue. But that focus needs to shift, he said.

"Along with maximizing revenues and profits, this should be one of the first priorities," Momot said.

Read the original article on Kellogg Insight. Copyright 2021.

Previously published in Kellogg Insight. Reprinted with permission of the Kellogg School of Management.
