Digital procurement: For lasting value, go broad and deep

McKinsey Insights & Publications
To get the most from procurement digitization, leaders must raise their ambitions along with their skills.
MIT Sloan Management Review

Recommendation engines influence the choices we make every day — what book to read next, which song to download, which person to date.

At their best, smart systems serve buyers and sellers alike: Consumers save the time and effort of wading through the vast possibilities of the digital marketplace, and businesses build loyalty and drive sales through differentiated experiences.

But, as with many other new technologies, digital recommendations are also a source of unintended consequences. Our research shows that recommendations do more than just reflect consumer preferences — they actually shape them. If this sounds like a subtle distinction, it is not. Recommendation systems have the potential to fuel biases and affect sales in unexpected ways. Our findings have important implications for recommendation engine design, not just in the music industry — the basis of our study — but in any setting where retailers use recommendation algorithms to improve customer experience and drive sales.

Consumer Choice in a Crowded Marketplace

E-commerce has dramatically affected consumer choice. Unconstrained by physical limitations of the brick-and-mortar model, businesses can offer virtually unlimited selections of products online, giving consumers access not only to popular items but to obscure, niche ones as well. There are both more needles and more hay. As consumers face a radically wider set of options, they must exercise greater care in evaluating potential products for purchase or consumption. Experience-based (or taste-based) goods such as music, books, and movies are particularly complex: Consumers must spend time experiencing them before they know if they like them. Even if the sticker prices for goods aren’t high, or the goods are included as part of subscription services, the time consumers must spend to evaluate each of them is valuable. Worse, the sunk cost of evaluation time is unrecoverable: Consumers can’t unlisten, unread, or unwatch goods that turn out to be a poor fit.

In this context, sophisticated algorithms capable of making effective personalized recommendations provide sizable benefits. They reduce search and evaluation time, drive sales, and introduce new items to consumers. Some 30% of Amazon’s page views result from recommendations,1 more than 80% of the content watched by Netflix subscribers comes through personalized recommendations,2 and more than 40 million Spotify listeners can now access personalized playlists generated by its “Discover Weekly” module, which leads to more than half of the monthly listens for over 8,000 artists.3

More Than Just a Recommendation

For the consumer, the way systems arrive at personal recommendations is relatively easy to understand. Based on a customer’s past activities and stated preferences, these systems present new options: queues of potential items of interest, like Netflix’s “Other movies you may enjoy” and Amazon’s “Customers who bought this also bought.” As our research shows, however, such personalized recommendations do more than introduce consumers to new products; they also shape their future preferences and behaviors in unexpected ways. (See “Related Research.”)

Related Research: G. Adomavicius, J.C. Bockstedt, S.P. Curley, and J. Zhang, “Effects of Online Recommendations on Consumers’ Willingness to Pay,” Information Systems Research 29, no. 1 (March 2018): 84-102.

We looked at how personalized recommendations influence preferences and willingness to pay for a common experience-based digital good: music. As in other industries involving experience-based goods, new models (such as Spotify and Apple Music) are disrupting the music industry. Digital distribution channels, including paid subscriptions, on-demand streaming, and digital downloads, are currently about 80% of the U.S. music market. Regardless of the distribution channel, algorithms and recommendation engines significantly affect the digital consumption of music, as recommendations add value in identifying unknown songs that are more likely to strike a chord with the consumer. Surprisingly, recommendation systems alter how much consumers are willing to pay for a product that they just listened to. Consumers don’t just prefer what they have experienced and know they enjoy; they prefer what the system said they would like. This is surprising since consumers shouldn’t need a system to tell them how much they enjoyed a song they just heard. The advent of recommendation systems may leave us questioning our own taste. We move from asking ourselves, “Do I like this?” to asking, “Should I like this?”

Researching Music Consumers

Our findings are based on three laboratory experiments with a total of 169 music consumers, all college students. In the first experiment, participants listened to songs and told us how much they would pay for each one. We randomly assigned recommendation ratings to selected songs and presented these ratings (between 1 and 5 stars) as a recommendation system’s predictions of their preference for each song. If they desired, participants could listen to song samples to reduce their uncertainty about how much they liked the music. Participants were unaware that we had randomly generated the recommendation ratings; they believed the ratings were calculated from their past preference data. Recommendations significantly altered willingness to pay: a 1-star increase in recommendation rating created an average 12% to 17% increase in willingness to pay. This result is compelling, since the random recommendations were unrelated to the participants’ actual preferences.
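
To see what an effect of this size means in practice, here is a minimal sketch, not the authors’ actual analysis: if willingness to pay rises by a roughly constant percentage per displayed star, the relationship is log-linear, and the per-star effect can be recovered by regressing the log of willingness to pay on the shown rating. All numbers below (the assumed 14% effect, the $1 baseline, the noise level) are illustrative assumptions.

```python
# A minimal sketch (not the authors' analysis) of how a "percent change in
# willingness to pay per displayed star" could be estimated: simulate data in
# which log(WTP) rises linearly with a randomly assigned 1-5 star rating, then
# recover the effect with ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
n = 169                               # sample size borrowed from the article
stars = rng.integers(1, 6, size=n)    # randomly assigned ratings, 1-5 stars
true_effect = 0.14                    # assumed ~14% increase per star
base_wtp = 1.00                       # assumed baseline price in dollars
log_wtp = np.log(base_wtp) + true_effect * stars + rng.normal(0, 0.3, size=n)

slope, intercept = np.polyfit(stars, log_wtp, 1)
print(f"Estimated change in willingness to pay per extra star: {np.expm1(slope):.1%}")
```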

The same effects exist for real recommendations that contain errors. In our second experiment, we used real song recommendations derived from a widely used, state-of-the-art algorithm. But we intentionally introduced random error in the predicted ratings, ranging from -1.5 stars to +1.5 stars. Again, participants were unaware of the recommendation manipulation and could listen to song samples. An intentional boost in the real recommendation rating by 1 star increased willingness to pay by 10% to 13%, on average.

In the first two experiments, participants could listen to 30-second song samples (since sampling is a common practice on retail music sites), but listening was not mandatory. Thus, the changes in willingness to pay could stem from participants using the ratings in lieu of listening themselves, but we don’t know to what degree. Although important and useful to know, it’s unsurprising that recommendations affect willingness to pay when consumers know less about a product; that is, after all, the explicit purpose of recommendation systems. In our third experiment, as a way to explore the more interesting and possibly problematic effect when consumers have familiarity with a product, we required participants to listen to all song samples before they indicated their willingness to pay. Again, randomly generated recommendation ratings significantly affected consumers’ willingness to pay. We saw an approximate 8% to 12% increase in willingness to pay for each 1-star increase in shown recommendation ratings. The effect of recommendations on willingness to pay remains strong even immediately after mandatory consumption of the recommended item, when much less preference uncertainty should exist for the consumer.

Consequences for Consumers and Retailers

For consumers, recommendation engines have a potential dark side — they can manipulate preferences in ways consumers don’t realize. After all, the details underlying recommendation algorithms are far from transparent. Faulty recommendation engines that inaccurately estimate consumers’ true preferences stand to pull down willingness to pay for some items and increase it for others, regardless of the likelihood of actual fit. This may tempt less ethical organizations to inflate recommendations artificially. Even aside from the disreputable practice of direct manipulation, random error is a real problem for all recommendation systems. For example, the best-performing recommendation systems in the $1 million Netflix Prize competition, using the latest machine learning developments in recommendation algorithms at the time, were off in their rating predictions on average by 20% of the rating scale (that is, an error of about 0.8 on a scale of 1 to 5 stars).

Both over- and underestimation are problems. Inflated ratings induce consumers to buy products they might not otherwise consider and can leave them disappointed when expectations go unmet. Deflated ratings can turn consumers away from products they might otherwise have purchased. Mistakes hurt in both directions.

And the effects persist beyond dissatisfaction with a single purchase. They compound over time. After consumers experience a product, their feedback (like product ratings or purchases) influences future personalized predictions. Biased feedback can contaminate the system and lead to a vicious cycle of bias — the online retail equivalent of squealing audio feedback. Designers could also get an artificially inflated view of prediction accuracy, compromising their ability to improve systems. Even worse, unscrupulous agents could use such vulnerabilities to manipulate recommendation systems.
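
As a rough illustration of how anchored feedback can also flatter a system’s measured accuracy, here is a toy simulation, not taken from the study: if the displayed rating pulls users’ reported ratings partway toward itself, the gap between predictions and feedback understates the gap between predictions and users’ unanchored tastes. The anchoring weight, error size, and sample size below are all assumptions.

```python
# Toy illustration (not from the study): anchored feedback makes a recommender's
# measured error look smaller than its error against users' unanchored tastes.
import numpy as np

rng = np.random.default_rng(1)
n_users = 10_000
true_ratings = rng.uniform(1, 5, n_users)        # what users would report unanchored
prediction_error = rng.normal(0, 0.8, n_users)   # system off by ~0.8 stars (cf. Netflix Prize)
shown = np.clip(true_ratings + prediction_error, 1, 5)

anchoring = 0.3                                  # assumed pull of the shown rating
reported = (1 - anchoring) * true_ratings + anchoring * shown

true_rmse = np.sqrt(np.mean((shown - true_ratings) ** 2))
apparent_rmse = np.sqrt(np.mean((shown - reported) ** 2))
print(f"prediction error vs. unanchored taste:  {true_rmse:.2f} stars")
print(f"prediction error vs. anchored feedback: {apparent_rmse:.2f} stars")
```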

Given that perfect prediction is not possible, retailers and managers must be aware of the potential discord created by unintended side effects of their recommendations. Our findings highlight the importance of reducing bias in recommendation systems, for example through innovations in algorithm and user interface design and through human oversight, as an ongoing priority.

HBR.org

Last month, a rebel attack in Beni, the epicenter of the ongoing Ebola outbreak near the eastern border of the Democratic Republic of Congo (DRC), once again halted the efforts of response teams working to contain the virus. With over 10 major episodes of violence since the outbreak was declared in August, insecurity and community mistrust have made it difficult to gauge the true extent of Ebola’s spread. Though the outbreak could still be limited, cases appear to be increasing — especially in Beni, where cases have doubled in recent weeks — with 80% of new infections arising among people with no link to “known transmission chains” (where everyone who is infected is known and you can track who has been exposed with some accuracy). This means that we might only be seeing the tip of an iceberg of hidden transmissions, and the outbreak could spiral out of control and spread into neighboring countries. Given this danger, the current strategy for containing the disease needs to be adjusted.

Eastern DRC has been home to one of the deadliest and most intractable conflicts in modern history; over 50 armed groups are still active in the region. Originally formed to protect their communities, many of these rebel militias have become entangled in the messy web of politics, shifting allegiances, and underhanded mining deals that fuel the conflict.

This backdrop, combined with the inability of the government and international agencies to assure basic safety, much less meet basic needs, has entrenched a distrust of formal institutions in the population. These dynamics are further complicated by the fact that DRC is supposed to hold elections in December that have already been delayed twice since 2016.

Given that outbreaks can grow quickly and exponentially, definitive action is needed now.

The current plan for stopping this outbreak is based on contact tracing (the identification and monitoring of people who were exposed to Ebola-infected individuals for the 21 days during which they may develop infection) and “ring” vaccination (immunizing these contacts and those close to them with an experimental Ebola vaccine). This approach efficiently contained an Ebola outbreak in western DRC just a few months ago but requires a comprehensive and precise understanding of who is infected and who their contacts are — something that necessitates having unimpeded daily access to their communities for months.
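
To make the “ring” idea concrete, here is a minimal sketch, not an operational protocol: treat exposures as a contact graph and take everyone within two contact steps of a confirmed case (the contacts, plus those close to them) as the ring to be monitored and offered vaccination. The graph, the names, and the vaccination_ring helper below are entirely hypothetical.

```python
# Minimal sketch of the "ring" idea described above (not an operational protocol):
# walk a hypothetical contact graph two steps out from a confirmed case.
from collections import deque

contacts = {                      # hypothetical contact lists
    "case_0": ["a", "b"],
    "a": ["case_0", "c"],
    "b": ["case_0", "d", "e"],
    "c": ["a"], "d": ["b"], "e": ["b"],
}

def vaccination_ring(graph, case, depth=2):
    """Return everyone within `depth` contact steps of a confirmed case."""
    ring, seen = set(), {case}
    frontier = deque([(case, 0)])
    while frontier:
        person, dist = frontier.popleft()
        if dist == depth:
            continue
        for contact in graph.get(person, []):
            if contact not in seen:
                seen.add(contact)
                ring.add(contact)
                frontier.append((contact, dist + 1))
    return ring

print(sorted(vaccination_ring(contacts, "case_0")))   # ['a', 'b', 'c', 'd', 'e']
```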

That has not been possible this time around: Areas affected by violence have been inaccessible for days at a time. Therefore, while contact tracing and ring vaccination should continue where transmissions can be tracked, mass vaccination of larger portions of the population should be considered in areas where that is not possible, such as Beni, which has a population of about 230,000. Expanding vaccination in this manner could immediately halt the spread of the disease.

While such a mass vaccination sounds ambitious, the World Health Organization (WHO) and others have executed much larger national campaigns in over 40 low-income countries, including DRC, where millions of children were immunized against polio or measles within a single week. These campaigns were also implemented successfully during conflicts in Somalia, Afghanistan, and Liberia. Though a mass-vaccination effort targets an entire population, it need only reach the proportion required for “herd immunity” — immunizing enough people so that the virus cannot spread. Early studies of the Ebola vaccine found that it might be possible to achieve herd immunity by vaccinating as little as 42% of the population.
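
For context on where a figure like 42% can come from, here is a rough back-of-the-envelope check, not the cited study’s method: the classic herd-immunity threshold of 1 − 1/R0, which assumes a perfectly effective vaccine and uniform mixing, lands near that level for a reproduction number R0 of roughly 1.7, within the range often estimated for Ebola.

```python
# Back-of-the-envelope check (not the study's method): the classic herd-immunity
# threshold 1 - 1/R0, assuming a perfectly effective vaccine and uniform mixing.
for r0 in (1.5, 1.7, 2.0, 2.5):
    threshold = 1 - 1 / r0
    print(f"R0 = {r0}: herd-immunity threshold ~ {threshold:.0%}")
```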

To be successful, the mass vaccination effort would require buy-in from the communities and the ability of Ebola response teams to securely access the areas in question for the day or two it would take to immunize everyone. Promisingly, a recent study showed that even communities with high levels of distrust appear to be open to vaccination.

Anthropologists are already on the ground working tirelessly to engage community leaders and armed groups. In areas not amenable to outreach, a neutral “white helmet” security force, ideally drawn from the African Union or from countries without past involvement in the DRC conflict, should be deployed with the sole mission of securing vaccination efforts. It should be made abundantly clear to the population that this force has no allegiance to any political or institutional actors and is there only to deter violence against responders. At the end of the day, communities and militias do not want their loved ones to die from Ebola and would respect such a presence if reassured that its mission is strictly medical.

Mass vaccination will also require an adequate supply of the Ebola vaccine. Its manufacturer, Merck, has committed to maintaining a supply of 300,000 doses at all times. Doing so could become difficult if vaccination efforts are expanded, but at the current juncture, the number of people who would need to be vaccinated in order to stunt the outbreak still appears to be within the range of existing stockpiles. Nonetheless, production of the vaccine should be increased and the bottlenecks to doing so should be assessed and cleared to ensure an adequate supply.

It’s true that the Ebola vaccine is still experimental and its health risks are not yet fully known. However, for people living in areas where everyone who is infected is not known, the heightened risk of unknowingly contracting a fatal Ebola infection may, at this point, outweigh the potential danger posed by the vaccine.

After the West African Ebola epidemic spiraled out of control, many wondered why more aggressive measures were not taken sooner. We may be at a similar make-or-break point in this outbreak.


Why is it easier to see the best solution to other people’s dilemmas than to our own? Whether it’s someone deciding whether to pursue a new job or ask for a raise, or someone simply mulling over which ice cream flavor to choose, we seem to see the best solution with a clarity and decisiveness that is often absent when we face our own quandaries.

People have a different mindset when choosing for others: an adventurous one that stands in contrast to the more cautious mindset that emerges when people make their own choices. In my research with Yi Liu and Yongfang Liu of East China Normal University and Jiangli Jiao of Xinjiang Normal University, both in China, we looked at how people make decisions for themselves and for others. We were interested in the process and the quantity of information a decision maker uses when choosing for others versus choosing for the self. We wanted to know: Do people search for more information when they choose for others than when they choose for themselves, and does the way they evaluate that information change based on whom they are choosing for?

To test our hypotheses, we performed eight studies with over a thousand participants. In this series of randomized studies, participants were given a list of restaurants, job options, or dating profiles, each with detailed information, and were then asked to make choices for themselves or for someone else based on that information.

What we found was twofold: Not only did participants make different choices for themselves than for someone else, but the way they went about choosing was also different. When choosing for themselves, participants focused at a more granular level, zeroing in on the minutiae, something we described in our research as a cautious mindset. Employing a cautious mindset when making a choice means being more reserved, deliberate, and risk averse. Rather than exploring and collecting a plethora of options, the cautious mindset prefers to consider a few at a time on a deeper level, examining a cross-section of the larger whole.

But when it came to deciding for others, study participants looked more broadly at the array of options and focused on their overall impression. They were bolder, operating from what we called an adventurous mindset. An adventurous mindset prioritizes novelty over a deeper dive into what the options actually consist of; the availability of numerous choices is more appealing than their viability. Simply put, participants choosing for others preferred and examined more information before making a choice, and, as my previous research has shown, they recommended their choice to others with more gusto.

These findings align with my earlier work with Kyle Emich of the University of Delaware on how people are more creative on behalf of others. When we brainstorm ideas for other people’s problems, we’re inspired; we have a free flow of ideas to spread out on the table without judgment, second-guessing, or overthinking.

Upon reflection, these results should feel familiar. Think about the most recent time you asked for a raise. Many people are initially afraid to ask (employing a cautious mindset); yet these same people are often very supportive in recommending that others, such as their friends or colleagues, ask (employing an adventurous mindset). When people recommend what others should do, they come up with ideas, choices, and solutions that are more optimistic and action-oriented, focus on more positive information, and imagine more favorable consequences. Meanwhile, when making their own choices, people tend to envision everything that could go wrong, leading to doubt and second-guessing.

How can this research be applied? First, we believe it suggests that everyone should have a mentor or a blunt friend who can help them see and act on better evidence.

We should also work to distance ourselves from our own problems by adopting a fly-on-the-wall perspective. In this mindset, we can act as our own advisors; indeed, it may even be effective to refer to yourself in the third person when considering an important decision, as though you were addressing someone else. Instead of asking yourself, “What should I do?” ask, “What should you do?”

Another distancing technique is to pretend that your decision is someone else’s and to visualize it from his or her perspective. This can be especially easy when thinking of famous exemplars, such as imagining how Steve Jobs would make your decision. By imagining how someone else would tackle your problem, you may unwittingly help yourself.

Perhaps the easiest solution is to let others make our decisions for us. By outsourcing our choices, we can take advantage of a growing market of firms and apps that make it increasingly easy for people to “pitch” their decisions to others. For example, people can have their clothes, food, books, or home decor options chosen for them by others.

Our research underscores a basic human desire: We want to feel like we’ve made a difference. We are wired for connection with others, and an appealing part of making decisions for other people is the possibility of having a bigger impact. Since managers and leaders are tasked with making decisions for others at multiple levels, from daily minutiae to personnel conflicts to long-term strategic planning, our results point the way to helping them find greater creativity, effectiveness, and fulfillment in their work.
