Marketing automation: Utopia or dystopia?

Firms must take consumer psychology into account and resist the temptation to maximise short-term profits at the expense of consumers

Published: Oct 6, 2021 10:41:41 AM IST
Updated: Oct 6, 2021 11:08:16 AM IST

From segmentation to pricing, virtually all processes involved in marketing can now be automated. The ability to track individuals’ behaviour online and to merge data sources increasingly allows marketers to target consumers at a granular level. Thanks to machine learning-based algorithms, individuals can receive tailored product offers and advertisements – all in real time.

Such precise targeting boosts companies’ profitability while letting consumers enjoy convenience and offers that fit their needs. However, it may also have negative economic and psychological consequences for consumers. The question becomes: how do we make sure that marketing automation doesn’t create a dystopia?

Profit maximisation

Companies maximise profits when they sell their product or service at the top of what each customer is willing to pay. In the past, marketers couldn’t easily ascertain individual willingness-to-pay (WTP), a limitation that frequently allowed consumers to obtain good value for their money. Today, machine learning-based prediction algorithms can provide ever more accurate estimates of a consumer’s WTP.

In one experiment, recruitment company ZipRecruiter.com found it could boost its profits by more than 80 percent by adopting algorithm-based individualised pricing built on more than a hundred consumer variables. Uber reportedly uses machine learning to set route- and time-of-day-specific prices, and it could easily use customers’ ride histories and other personal data to personalise prices even further.
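To see how this works in principle, consider a stylised sketch of algorithm-based individualised pricing. It is a toy model in Python with synthetic data and hypothetical consumer features, not a description of ZipRecruiter’s or Uber’s actual systems: a model is trained to predict WTP, and each consumer is then quoted a price just below their predicted WTP.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Hypothetical consumer features (e.g. income proxy, past spend, engagement)
X = rng.normal(size=(1000, 3))

# Synthetic "true" willingness-to-pay; in practice a firm never observes this
# directly and must infer it from purchase histories and similar data
wtp = 50 + 10 * X[:, 0] + 5 * X[:, 1] + rng.normal(scale=5, size=1000)

# Train a prediction model of WTP from the observable features
model = GradientBoostingRegressor().fit(X, wtp)

# Personalised pricing: quote each consumer slightly below their predicted WTP
personalised_prices = 0.95 * model.predict(X)

# Compare revenue with a single uniform price; a consumer buys whenever the
# quoted price does not exceed their true WTP
uniform_price = np.median(wtp)
uniform_revenue = uniform_price * np.sum(wtp >= uniform_price)
personalised_revenue = personalised_prices[personalised_prices <= wtp].sum()
print(f"Uniform-price revenue:      {uniform_revenue:,.0f}")
print(f"Personalised-price revenue: {personalised_revenue:,.0f}")
```

Depending on how accurate the predictions are, the personalised scheme can capture far more of consumers’ surplus than any single uniform price, which is precisely what makes it attractive to firms.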

These developments can be alarming for consumers. While personalised pricing may benefit consumers with a lower WTP who might otherwise be priced out of the market, many consumers are likely to end up paying prices closer to their WTP.

Low compensation for personal data

Typically, consumers give away the information necessary to infer their preferences and WTP for free. But shouldn’t they be compensated for the downsides of personalisation? For their part, companies argue that consumers are rewarded with better offers and free services such as YouTube videos and social networking.

In research I conducted with INSEAD’s Daniel Walters and Geoff Tomaino, consumers were found to systematically underprice their private data when they bartered it away for goods or services as opposed to selling it for money. Take users of social media platforms. They “pay” for these services with private data, which the platforms use to generate advertising profits. Our experiments suggest that consumers undervalue their private data in such non-monetary exchange settings, despite knowing how profitable social media platforms are. This uneven exchange of value likely contributes to the extraordinary valuations of dominant tech firms.

Loss of autonomy

We all value being autonomous in our choices, free from external influence. But such autonomy requires privacy. Without privacy, we become predictable. Algorithms can then easily predict anything from our risk of credit default to our probability of purchasing certain products.
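As a toy illustration of that predictability, the sketch below (synthetic data, hypothetical behavioural signals; no real platform’s algorithm is implied) fits a simple model that turns tracked behaviour into an individual purchase probability:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical behavioural signals: browsing time, past purchases, ad clicks
X = rng.normal(size=(500, 3))

# Synthetic choice data: the likelihood of buying rises with each signal
logits = 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.4 * X[:, 2]
purchased = rng.random(500) < 1 / (1 + np.exp(-logits))

# Once enough behaviour is observed, individual choices become predictable
model = LogisticRegression().fit(X, purchased)

new_consumer = [[1.2, 0.5, -0.3]]  # the same three signals for one person
probability = model.predict_proba(new_consumer)[0, 1]
print(f"Predicted purchase probability: {probability:.2f}")
```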

Further experiments I conducted with Wharton’s Rom Schrift and Yonat Zwebner showed that consumers act as if they experience a threat to their autonomy when they understand that algorithms can predict their choices. When participants learnt that an algorithm could predict their choices, they chose less preferred options to re-establish their sense of autonomy. To maximise acceptance of prediction algorithms, marketers will need to frame them such that they don’t threaten consumers’ perceived autonomy.

Algorithms as a black box

The complexity of algorithms often makes them hard to explain. In addition, many cannot be made transparent for competitive reasons. Regulators worry – and consumers get upset – when they can’t understand why an algorithm does what it does, e.g. when it blocks a desired financial transaction or grants a specific credit limit.

GDPR Articles 13 through 15 require firms to provide customers with “meaningful information about the logic involved” in such automated decisions. In another set of experiments, informing rejected consumers about the goals of an algorithm proved just as meaningful to them as knowing how the algorithm arrived at its negative assessment. Consumers derived a sense of fairness from understanding the purpose of the algorithm.
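What might such “meaningful information” look like? The toy sketch below (synthetic data, hypothetical credit features, a deliberately simple linear model) pairs the two kinds of explanation from the experiments: the goal the algorithm pursues, and the per-feature contributions behind a rejected applicant’s score. It illustrates the idea only; no lender’s actual system is implied.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical credit features for 800 synthetic applicants
feature_names = ["income", "existing_debt", "payment_history"]
X = rng.normal(size=(800, 3))

# Synthetic approval outcomes: more income and a better payment history help,
# more existing debt hurts
true_logits = X[:, 0] - X[:, 1] + 0.5 * X[:, 2]
approved = rng.random(800) < 1 / (1 + np.exp(-true_logits))

model = LogisticRegression().fit(X, approved)

# A goal-based explanation plus per-feature contributions for one rejection
# (for a linear model, coefficient x feature value is a simple attribution)
applicant = np.array([-0.5, 1.8, 0.1])
contributions = model.coef_[0] * applicant
print("Goal: estimate repayment likelihood to decide on credit applications.")
print("Main drivers of this negative assessment:")
for name, c in sorted(zip(feature_names, contributions), key=lambda t: t[1]):
    print(f"  {name}: {c:+.2f}")
```

In line with the experimental findings, the goal statement alone may already give rejected consumers the sense of fairness they need.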

How to mitigate the dystopia associated with automated marketing

Preventing dystopian outcomes is typically the purview of regulators, but companies must put in place policies to address consumer concerns as well. Marketing automation poses complex challenges that call for an array of solutions. These include data privacy regulations, mechanisms to ensure efficient prices for personal data and the deployment of fair privacy policies by companies. The following measures should also have a mitigating effect.

Regulation to support both privacy and competition

To enhance market efficiency by preventing the collection of personal data without adequate compensation to consumers, regulators need to both protect consumer privacy and encourage competition. This poses a conundrum: Policymakers must safeguard innovation and competition among data-driven businesses so that companies can’t monopolise their markets too easily. But fostering competition requires sharing consumers’ personal data between companies, which implies less privacy (witness how Apple’s iOS requirement that apps obtain user permission before tracking them across other apps curtailed, among others, Facebook’s targeting ability). This paradox requires a fine balancing act. One solution might be to give consumers legal ownership of their data and create mechanisms for them to sell or rent it, thereby fostering competition.

Transparency about data

Instead of opposing regulators’ efforts, firms should give consumers more say over their own data. Transparency about the collection and use of personal data can help restore consumers’ faith in automated marketing routines. Losing some control over consumer data may limit price discrimination opportunities but will protect brands and profits in the long term.

Frame algorithms in a positive light

Although algorithms sometimes breed mistrust, they can be more efficient and accurate than humans and can improve our lives. However, companies need to address consumers’ and regulators’ concerns when designing them; otherwise, they risk triggering strong resistance. Rather than emphasising that algorithms can predict what a consumer will do, marketers should present them as tools that help consumers make choices consistent with their preferences. Algorithm transparency can further reduce scepticism; where full transparency isn’t feasible, explaining the goals of an algorithm can go a long way towards reducing fears associated with AI-driven decisions.

Avoiding a marketing automation dystopia is in the best interest of all market participants – at least in the long term. With that horizon in mind, companies must take consumer psychology into account and resist the temptation to maximise their short-term profits at the expense of consumers.

This article is an adaptation of an original piece published in the NIM Marketing Intelligence Review.

Klaus Wertenbroch is the Novartis Chaired Professor of Management and the Environment and a Professor of Marketing at INSEAD. He directs the Strategic Marketing Programme, one of INSEAD’s Executive Education programmes.

[This article is republished courtesy of INSEAD Knowledge, the portal to the latest business insights and views of The Business School of the World. Copyright INSEAD 2024]
