Do People Trust Algorithms More Than Companies Realize?

HBR.org

Many companies have jumped on the “big data” bandwagon.  They’re hiring data scientists, mining employee and customer data for insights, and creating algorithms to optimize their recommendations.  Yet, these same companies often assume that customers are wary of their algorithms — and they go to great lengths to hide or humanize them.

For example, Stitch Fix, the online shopping subscription service that combines human and algorithmic judgment, highlights the human touch of its service in its marketing. The website explains that for each customer, a “stylist will curate 5 pieces [of clothing].” It refers to its service as “your partner in personal style” and “your new personal stylist” and describes its recommendations as “personalized” and “handpicked.” To top it off, a note from your stylist accompanies each shipment of clothes. Nowhere on the website can you find the term “data-driven,” even though Stitch Fix is known for its data science approach and is often called the “Netflix of fashion.”

It seems that the more companies expect users to engage with their product or service, the more they anthropomorphize their algorithms. Consider how companies give their virtual assistants human names, like Siri and Alexa. And how the creators of Jibo, “the world’s first social robot,” designed an unabashedly adorable piece of plastic that laughs, sings, has one cute blinking eye, and moves in a way that mimics dancing.

But is it good practice for companies to mask their algorithms in this way? Are marketing dollars well-spent creating names for Alexa and facial features for Jibo? Why are we so sure that people are put off by algorithms and their advice?  Our recent research questioned this assumption.

The power of algorithms

First, a bit of background.  Since the 1950s, researchers have documented the many types of predictions in which algorithms outperform humans.  Algorithms beat doctors and pathologists in predicting the survival of cancer patients, occurrence of heart attacks, and severity of diseases.  Algorithms predict recidivism of parolees better than parole boards.  And they predict whether a business will go bankrupt better than loan officers.

According to anecdotes in a classic book on the accuracy of algorithms, many of these earliest findings were met with skepticism.  Experts in the 1950s were reluctant to believe that a simple mathematical calculation could outperform their own professional judgment.  This skepticism persisted, and morphed into the received wisdom that people will not trust and use advice from an algorithm.  That’s one reason why so many articles today still advise business leaders on how to overcome aversion to algorithms.

Do we still see distrust of algorithms today?

In our recent research, we found that people do not dislike algorithms as much as prior scholarship might have us believe. In fact, people show “algorithm appreciation” and rely more on the same advice when they think it comes from an algorithm than from a person. Across six studies, we asked representative samples of 1,260 online participants in the U.S. to make a variety of predictions. For example, we asked some people to forecast the occurrence of business and geopolitical events (e.g., the probability of North America or the EU imposing sanctions on a country in response to cyber attacks); we asked others to predict the rank of songs on the Billboard Hot 100; and we had one group of participants play online matchmaker (they read a person’s dating profile, saw a photograph of her potential date, and predicted how much she would enjoy a date with him).

In all of our studies, participants were asked to make a numerical prediction, based on their best guess.  After their initial guess, they received advice and had the chance to revise their prediction.  For example, participants answered: “What is the probability that Tesla Motors will deliver more than 80,000 battery-powered electric vehicles (BEVs) to customers in the calendar year 2016?” by typing a percentage from 0 to 100%.

When participants received advice, it came in the form of another prediction, which was labeled as either another person’s or an algorithm’s. We produced the numeric advice using simple math that combined multiple human judgments. Doing so allowed us to truthfully present the same advice as either “human” or “algorithmic.” We incentivized participants to revise their predictions — the closer their prediction was to the actual answer, the greater their chances of receiving a monetary bonus.

Then, we measured how much people changed their estimate, after receiving the advice.  For each participant, we captured a percentage from 0% to 100% to reflect how much they changed their estimate from their initial guess.  Specifically, 0% means they completely disregarded the advice and stuck to their original estimate, 50% means they changed their estimate halfway toward the advice, and 100% means they matched the advice completely.
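
To make this measure concrete, here is a minimal sketch (in Python, with illustrative variable names; this is not the study’s actual analysis code) of how such a weight-of-advice score can be computed from an initial estimate, the advice shown, and the final estimate. Clipping the score to the 0–100% range is an assumption for illustration.

```python
def weight_of_advice(initial, advice, final):
    """Fraction of the distance from the initial guess toward the advice that
    the final estimate covers: 0.0 = advice ignored, 1.0 = advice fully adopted.
    A simplified, illustrative measure, not the study's actual analysis code."""
    if advice == initial:
        return None  # advice matched the initial guess, so the shift is undefined
    shift = (final - initial) / (advice - initial)
    return min(max(shift, 0.0), 1.0)  # clip to the 0-100% range described above

# Example: initial guess of 40%, advice of 60%, revised answer of 50%
# -> 0.5, i.e., the participant moved halfway toward the advice.
print(weight_of_advice(40, 60, 50))  # 0.5
```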

To our surprise, we found that people relied more on the same advice when they thought it came from an algorithm than from other people.  These results were consistent across our studies, regardless of the different kinds of numerical predictions.  We found this algorithm appreciation especially interesting as we did not provide much information about the algorithm.  We presented the algorithmic advice this way because algorithms regularly appear in daily life without a description (called ‘black box’ algorithms); most people aren’t privy to the inner workings of algorithms that predict things affecting them (like the weather or the economy).

We wondered whether our results were due to people’s increased familiarity with algorithms today.  If so, age might account for people’s openness to algorithmic advice.  Instead, we found that our participants’ age did not influence their willingness to rely on the algorithm.  In our studies, older people used the algorithmic advice just as much as younger people.  What did matter was how comfortable participants were with numbers, which we measured by asking them to take an 11-question numeracy test.  The more numerate our participants (i.e., the more math questions they answered correctly on the 11-item test), the more they listened to the algorithmic advice.

Next, we wanted to test whether the idea that people won’t trust algorithms is still relevant today – and whether contemporary researchers would still predict that people would dislike algorithms. In an additional study, we invited 119 researchers who study human judgment to predict how much participants would listen to the advice when it came from a person vs. an algorithm. We gave the researchers the same survey materials that our participants had seen for the matchmaker study. These researchers, consistent with what many companies have assumed, predicted that people would show aversion to algorithms and would trust human advice more, the opposite of our actual findings.

We were also curious about whether the expertise of the decision-maker might influence algorithmic appreciation.  We recruited a separate sample of 70 national security professionals who work for the U.S. government.  These professionals are experts at forecasting, because they make predictions on a regular basis.  We asked them to predict different geopolitical and business events and had an additional sample of non-experts (301 online participants) do the same.  As in our other studies, both groups made a prediction, received advice labeled as either human or algorithmic, and then were given the chance to revise their prediction to make a final estimate.  They were informed that the more accurate their answers, the better their chances of winning a prize.

The non-experts acted like our earlier participants – they relied more on the same advice when they thought it came from an algorithm than a person for each of the forecasts.  The experts, however, discounted both the advice from the algorithm and the advice from people.  They seemed to trust their own expertise the most, and made minimal revisions to their original predictions.

We needed to wait about a year to score the accuracy of the predictions, based on whether the event had actually occurred or not. We found that the experts and non-experts made similarly accurate predictions when they received advice from people, because they equally discounted that advice.  But when they received advice from an algorithm, the experts made less accurate predictions than the non-experts, because the experts were unwilling to listen to the algorithmic advice. In other words, while our non-expert participants trusted algorithmic advice, the national security experts didn’t, and it cost them in terms of accuracy. It seemed that their expertise made them especially confident in their forecasting, leading them to more or less ignore the algorithm’s judgment.

Another study we ran corroborates this potential explanation.  We tested whether faith in one’s own knowledge might prevent people from appreciating algorithms. When participants had to choose between relying on an algorithm or relying on advice from another person, we again found that people preferred the algorithm.  However, when they had to choose whether to rely on their own judgment or the advice of an algorithm, the algorithm’s popularity declined. Although people are comfortable acknowledging the strengths of algorithmic over human judgment, their trust in algorithms seems to decrease when they compare it directly to their own judgment. In other words, people seem to appreciate algorithms more when they’re choosing between an algorithm’s judgment and someone else’s than when they’re choosing between an algorithm’s judgment and their own.

Other researchers have found that the context of the decision-making matters for how people respond to algorithms.  For instance, one paper found that when people see an algorithm make a mistake, they are less likely to trust it, which hurts their accuracy.  Other researchers found that people prefer to get joke recommendations from a close friend over an algorithm, even though the algorithm does a better job.  Another paper found that people are less likely to trust advice from an algorithm when it comes to moral decisions about self-driving cars and medicine.

Our studies suggest that people are often comfortable accepting guidance from algorithms, and sometimes even trust them more than other people. That is not to say that customers don’t sometimes appreciate “the human touch” behind products and services; but it does suggest that it may not be necessary to invest in emphasizing the human element of a process wholly or partially driven by algorithms. In fact, the more elaborate the artifice, the more customers may feel deceived when they learn they were actually guided by an algorithm. Google Duplex, which calls businesses to schedule appointments and make reservations, generated instant backlash because it sounded “too” human and people felt deceived.

Transparency may pay off.  Maybe companies that present themselves as primarily driven by algorithms, like Netflix and Pandora, have the right idea.


A study of surgical patients across 168 hospitals showed that 23% of patients experience a major complication during their stay. We like to think of complications as atypical events. However, the unfortunate truth is that they are quite common. While most medical complications are easily identified and treated in a timely manner, not all are recognized soon enough. And delayed intervention means fewer treatment options and poorer outcomes.

My mother, Florence Rothman, was one of these patients whose complications were recognized too late; she died in a hospital in 2003 of avoidable causes. Her deterioration went unnoticed, and my brother and I have spent the last 15 years working to help prevent the next avoidable death.

There is one question that a clinician does not want to have to answer: “Why didn’t we see this patient’s problem sooner?” To deliver better care, doctors and nurses need to fully understand the patient’s current status in order to predict potential problems. Yet many hospitals in the United States rely only on vital signs as status indicators and do not capitalize on the full complement of available patient information, especially nursing assessments — each nurse’s careful evaluation of his or her patient’s condition, conveniently recorded in the electronic medical record (EMR). With this data, I believe it is possible to implement an “unblinking eye”: a 24/7 evaluation of patient status that leverages patient data more completely. In an age of such tremendous technological innovation, health care must step up and change outdated processes, embracing all patient data to identify deterioration sooner and help save lives.


By making better use of patient data and predictive models, we can identify patients who are “smoldering” (in the words of one nurse): those who may appear fine but have unseen damaging processes occurring internally. Sepsis can be such a process. It can tragically afflict otherwise well patients and is often identified too late, which is why it is the focus of a major worldwide effort to reduce its death toll.

But making a prediction is not enough. For a prediction to affect patient outcomes, it must meet the criteria that I term the prediction trifecta: It must be correct, timely, and provide new information.

Many prediction models in use at hospitals today rely on vital signs to satisfy the first criterion, “correct,” in identifying an impending crisis. However, while identifying a patient who is deteriorating is easily done with vital signs alone, it is far more difficult for such a system to meet the next criterion, “timely,” and provide a warning when there are still options available to halt the deterioration.

Systems relying on vital signs rarely meet the third criterion, “new,” which is providing information that is not already known to the physicians and nurses. While vital signs are important and valuable, there are three intrinsic shortcomings in focusing on them for early warning:

1. Vital signs tend to be lagging indicators. The human body is built to maintain equilibrium in its basic, vital operating parameters, and it does so by sacrificing functionality: appetite fades, digestion shuts down, fluid builds in the extremities. All of this can happen while the vitals remain unalarming, so by the time the vitals fail and decompensation is seen, it tends to be too late for effective intervention.

2. Vitals are generally available to a predictive model only when a nurse or a technician enters the data. The nurse is therefore ahead of the model, and a model based on vitals rarely provides “new” information. Any predictive model based on vitals alone is thus unlikely to be either timely or new.

3. Normal variation in patients who are not in trouble tends to swamp the signal from the few who are, leading to high rates of false positives, alert fatigue, and clinician tune-out.

The goal of achieving this prediction trifecta is not to replicate what we see in frantic hospital TV dramas, with nurses and physicians racing to a patient’s bedside for a “code blue.” That patient has about a 17% chance of going home, if he or she is even revived.

Clinicians need predictions that are meaningful and that arrive early in the deterioration process. Continuing with the sepsis example: In the time lag from inception to treatment, it’s critical to administer a bolus of fluids and IV antibiotics. One estimate suggests that the mortality rate rises with every hour that treatment is delayed. Mortality rates for early-detected sepsis are about 5%, but if it is allowed to progress, mortality rates approach 50%.

Nursing Assessments: The Key to Meaningful Prediction

Fortunately, there is another source of physiological data recorded periodically for every patient in the hospital’s EMR system. Nursing assessments, the structured evaluation of a patient’s physiological systems, can identify a patient’s deterioration from sepsis or other conditions and complications before it’s evident in vital signs or laboratory data. Yet, many current prediction models do not include this information.

Nurses conduct what’s termed a “head-to-toe” assessment on each patient, every day, every shift, in every hospital. It includes, for example, cardiac, respiratory, gastrointestinal, neurological, skin, psychosocial, and musculoskeletal assessments. For each evaluation, a nurse interacts with the patient to conduct and document a structured, hands-on review. If all the underlying factors of that assessment are normal, then the nurse deems it passed or met; if one or more of the factors is viewed as abnormal, then the assessment will be failed, or not met. For example, your skin is an organ. The skin nursing assessment reviews changes in skin texture, continuity, and color. In sepsis cases, skin failure may be an early indicator.

Every nursing assessment requires human, clinical judgment and provides both insight into the patient’s current state and predictive power to help identify patients who are at an elevated risk of an adverse outcome. Effective predictive models must include these leading indicators.
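
As a rough illustration of how these assessments could be included as model inputs, the sketch below (in Python) encodes a shift’s head-to-toe assessments as met/not-met flags and surfaces a patient when failures accumulate. The list of systems mirrors the examples above, but the threshold and the simple count-based rule are hypothetical placeholders, not the Rothman Index’s actual logic.

```python
# Illustrative sketch only: encode nursing assessments as binary features and
# raise a flag when failed assessments accumulate. The threshold and scoring
# rule are hypothetical; this is not how the Rothman Index actually works.
ASSESSED_SYSTEMS = ["cardiac", "respiratory", "gastrointestinal",
                    "neurological", "skin", "psychosocial", "musculoskeletal"]

def encode_assessments(assessments):
    """Map each assessed system to 0 (met/normal) or 1 (not met/abnormal)."""
    return {system: 0 if assessments.get(system, "met") == "met" else 1
            for system in ASSESSED_SYSTEMS}

def needs_review(assessments, threshold=2):
    """Flag the patient when the number of failed assessments reaches the threshold."""
    return sum(encode_assessments(assessments).values()) >= threshold

# Example shift: the skin and gastrointestinal assessments were not met.
shift_notes = {"skin": "not met", "gastrointestinal": "not met", "cardiac": "met"}
print(needs_review(shift_notes))  # True -> surface this patient to the care team
```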

In an attempt to prevent what happened to our mother from happening again, my brother Steven and I developed a tool called the Rothman Index (RI), which incorporates nursing assessments and can help provide an “unblinking eye” to support clinicians. Through the RI and the help of nursing protocols, Yale New Haven Health System has reduced in-hospital mortality by 20% to 30%, with particular benefits in reducing sepsis mortality. Meaningful prediction, hitting the trifecta, has also helped the organization see a reduction in cost per sepsis case.

The inclusion of nursing data in predictive models makes profound sense: The nurse understands the patient’s condition. If we capture that nursing gestalt, and especially if we can do it electronically, we are on our way to reducing that critical time lag between inception of a possibly life-threatening complication, and action.

All models must be tested not only for their value in providing prediction but for their value in providing meaningful prediction. It’s a concept that was inspired by one life, but as a standard practice, it can put us on the path to saving countless others.

Business.com
Unconventional Management Strategies That Work

When you look at successful businesses, you often see a strong team behind that success. From high-level executives to interns, businesses with cooperative and supportive team members tend to perform better than groups where team members are more focused on individual interests. Team cohesion relates directly to the role of managers. Exceptional managers find ways to piece together the different components of teams to create a functional unit that works toward business goals.

In many cases, managers lead in a conventional style. Meetings run in a stereotypical way, people hold strict job responsibilities, and there's a clear hierarchy within a company. While sticking to routines isn't always a bad thing, there are less-conventional management strategies and practices that can lead to better team cohesion and performance. We spoke to a handful of successful managers who think creatively to boost team morale and motivation, and they shared insights into how they lead their teams.

Be present outside of work.

On the surface, being involved in your employees' lives outside of work might seem like a risky proposition. While there's certainly a balance needed, it's not a bad idea to be present outside of the workplace.

Linda Ding, the director of strategic marketing at Laserfiche, believes her company's focus on the local community helps build a stronger team. The business, which holds offices in Long Beach, California, created its Laserfiche Cares program to enrich the lives of employees and community members. Through this program, employees volunteer together and help the community.

"I volunteered teaching Mandarin to three groups of students at the YMCA on my birthday," Ding said. "I couldn't think of a better way to celebrate my birthday. It's such a heartwarming experience."

Although Ding volunteered alone, creating the Laserfiche Cares program gives employees an opportunity to do something personal with a clear connection to the company. Ding's positive experience gave her a chance for personal growth through a company program, and the business's brand benefits from her service to the community.

Volunteering opportunities can be paired with team bonding activities to drive home a company's belief in community and teamwork.

Walking the line between offering meaningful activities outside of work and making employees spend too much time together poses challenges for managers, but there's certainly merit to team bonding. Asking employees to participate in mandatory team activities once a week might cause employees to resent you, but holding optional team activities once a month can produce beneficial results. Even taking a Friday to hold a team activity gives your team the opportunity to grow closer together.

"The day after our monthly corporate meetings, we regularly participate in offsite team-building exercises," said Kyle Bailey, CEO and founder of NuVinAir. "This has included Topgolf, go-karts, bowling, or an arcade, and things have gotten really fun … and competitive!"

Getting employees to feel more comfortable with each other might be as easy as hosting an office happy hour. Regardless of how you do it, getting everyone together outside the office to gain a stronger connection can benefit your business.

Encourage candid dialogue.

We've all heard the saying, "The truth hurts." This saying can prove itself correct in the workplace when employees or managers are faced with criticism.

Criticism tends to be easier to accept and understand when it comes from those close to you. Trust is a critical element of successful work teams, and if you have a close-knit group, it's easier to openly share constructive criticism without hurting feelings.

If your team members trust each other, it's much easier to be candid in the office; and once that trust is established, candor becomes critically important. In "Creativity, Inc.," Pixar and Walt Disney Animation Studios President Ed Catmull shares the inner workings of Pixar and shines a light on the company's commitment to candor throughout much of the book.

"You don't want to be at a company where there is more candor in the hallways than in the rooms where fundamental ideas or matters of policy are being hashed out," Catmull writes. "Seek out people who are willing to level with you, and when you find them, hold them close." 

With people frequently withholding their opinions for fear of hurting a colleague's feelings, it can be difficult to be candid in meetings, but Catmull suggests this level of transparency is better off in meetings than behind people's backs during side conversations in the hall. A willingness to be candid requires removing personal biases and caring about the good of the team, even if that leads to difficult conversations.

"In a meeting earlier this year, after onboarding a bunch of new people 30 days earlier, I asked, 'Who in the room are you least likely to get along with?' Bailey said. "It led to some pretty honest conversations and produced an incredibly effective result in generating teamwork."

Bailey's tactic might be a bit extreme for your business depending on how much trust your employees share with one another, but his mindset deserves attention. Bailey's group tackles its team dynamic head-on in an unconventional way. Instead of beating around the bush, Bailey urges everyone in the room to be open with each other and speak candidly about relationships that may cause friction. By discussing working relationships within a month of hiring people, it's easier to address the team dynamic and make necessary changes than it would be if the new hires spent months entrenched in a different, less compatible team dynamic.

You don't have to ask your employees whom they're least likely to get along with to encourage candid discussions, but placing an emphasis on constructive criticism and honest feedback can improve your team's overall performance. Being an excellent communicator is one trait that makes a great boss. Don't be afraid to share candid remarks with team members.

Make praise a priority.

Praising good work is a standard practice. Making praise a priority within your organization, however, is much less conventional than many realize. According to Gallup research, "only three in 10 U.S. employees strongly agree that in the last seven days they have received recognition or praise for doing good work," and "just one out of five employees strongly agree their performance is managed in a way that motivates them to do outstanding work." While many managers may feel they properly praise employees, the numbers suggest otherwise.

Giving adequate praise isn't easy. We turned to Susan Kuczmarski, social scientist and leadership expert at Kuczmarski Innovation, for tips on how to better lift the spirits of employees within your organization. Kuczmarski shared seven pieces of praise-related advice:

"Know that praise and recognition can help to make everyone in an organization feel valued." "Personalize praise – match the right kind and amount of praise to each recipient." "Recognize the power of indirect praise." "Use written and other tangible forms of recognition, not just verbal praise, and give one's time too." "Praise on the scene and behind the scenes." "Praise both the effort and the outcome." "Create a system for giving praise and be creative and consistent."

As Kuczmarski's comments illustrate, giving proper praise takes more than an occasional "thank you." Impactful praise requires a commitment to recognizing your employees for successful actions. When employees feel their efforts are valued, they're much more willing to give increased effort. Giving praise is a conventional managerial tactic, but an intense commitment to focused praise in both written and verbal form is a practice most businesses don't follow. If your business decides to make a larger commitment to giving adequate praise, there's a good chance you'll see increased motivation levels among employees.

Implementing unconventional management techniques doesn't mean you need to rely on extravagant tactics to see improved team performance. An unconventional managerial technique can be as simple as reworking how your organization praises employees. By taking seemingly simple managerial tasks and looking at them in an innovative fashion, your business can improve.

MIT Sloan Management Review

More and more consumers are engaging with customer service through digital channels, including websites, email, texts, live chat, and social media. In 2017, only half of customer experiences with companies involved face-to-face or voice-based interactions, and digital interactions are expected to represent two-thirds of customer experiences within the next few years.1 The vast majority of customer service interactions around the world begin in online channels.2

Despite the convenience and speed of such interactions, they lack some of the most important aspects of offline customer service. In-person interactions are rich in nonverbal expressions and gestures, which can signal deep engagement, and an agent’s tone of voice can convey empathy and focus in phone conversations. Over time, these interpersonal touches help companies build and sustain relationships with customers.

But can some of that benefit be captured in the world of digital customer service? We argue that it can — with the right words. Our focus on words is consistent with a growing recognition among businesses that language matters, digitally or otherwise. Apple, for example, has explicit policies detailing which words can and cannot be used, and how they should be used when interacting with customers.3 The use of customer service scripts is also commonplace in service contexts, where employees are encouraged to use specific words when interacting with customers.4

However, we find that most companies are taking a misguided approach in their emails, texts, and social media communications with customers. They’re using words that, while designed to engage customers, can sometimes alienate them.

Our research5 focuses on personal pronouns (I, we, you), which psychologists have linked to critical personal and social outcomes.6 Customer service agents use personal pronouns in nearly every sentence they utter, whether it’s “We’re happy to help you” or “I think we do have something in your size.” Our research shows that simple shifts in employee language can enhance customer satisfaction and purchase behavior.

The Power of Pronouns

Conventional wisdom says that being customer-oriented is critical to customer satisfaction. That’s why phrases like “We’re happy to help you” have become so popular in service settings. Agents are often taught to lean on the pronoun “you” and to avoid saying “I,” and our survey of more than 500 customer service managers and employees shows that they’ve taken those prescriptions to heart. (See “About the Research.”)

About the Research

Our research looks at language in digital customer service interactions. To test conventional wisdom and practices regarding the use of personal pronouns in text-based exchanges, we surveyed more than 500 customer service managers or agents and analyzed more than 1,000 customer service emails from 41 of the top 100 global online retailers. We also conducted controlled experiments with 2,819 North American adult participants in an online panel including managers, agents, and general-population consumers, and lab experiments with undergraduate students.

Our results reveal that service employees not only believe they should, but actually do frequently refer to the customer as “you” and to the company as “we,” and they tend to leave themselves as individuals (“I”) out of the conversation. What’s more, when we compared service agent pronoun use with natural English-language base rates, we found that employees are using far more “we” and “you” pronouns in service settings than people do in almost any other context. Customer service language seems to have evolved into its own kind of discourse.
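
As a back-of-the-envelope illustration of how pronoun use can be measured and compared against a base rate, the snippet below counts how often “I,” “we,” and “you” appear per word in a small set of agent messages. The sample messages are invented, and a real comparison would use the study’s corpora and published English base rates rather than these placeholders.

```python
import re
from collections import Counter

def pronoun_rates(messages):
    """Return the share of all words accounted for by "i", "we", and "you".
    Contractions such as "we're" are split crudely at the apostrophe."""
    words = [w for m in messages for w in re.findall(r"[a-z]+", m.lower())]
    counts = Counter(w for w in words if w in {"i", "we", "you"})
    total = len(words)
    return {p: counts[p] / total for p in ("i", "we", "you")}

# Invented example messages, not data from the study.
agent_messages = [
    "We're happy to help you with that order.",
    "We will look into it and get back to you shortly.",
]
print(pronoun_rates(agent_messages))  # {'i': 0.0, 'we': 0.1, 'you': 0.1}
```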

To find out if this discourse is optimal, we took a subset of the customer service responses we had collected, which showed high use of “we” pronouns, and constructed alternative responses, replacing “we” with “I.” For example, “We are happy to help” easily became “I am happy to help” without changing the basic message. We also removed references to the customer in some responses. For example, “How do the shoes fit you?” became “How do the shoes fit?” We then randomly assigned individuals to read either the company’s response or our edited response and assessed their satisfaction with the company and the agent, as well as their purchase intentions.

We found some surprising results that are inconsistent with current approaches.

Using ‘I’ Conveys Empathy and Action

In all cases, our modified responses with “I” pronouns significantly outperformed the “we” pronouns that real service agents were using. Relative to using “we,” the benefit of using “I” stems from the fact that customers perceive the employee to be (a) more empathetic and (b) more agentic, or acting on the customer’s behalf.

We also examined these language features in a large data set of more than 1,000 customer service email interactions from a large multinational retailer of entertainment and information products. We matched these email interactions with customer purchase data. Econometric analyses revealed the same positive results of using “I” pronouns: A 10% increase in “I” pronoun use by company agents corresponded to a 0.8% increase in customer purchase volume after controlling for other factors. Our analysis suggests that companies could achieve an incremental sales lift of more than 5%, and still fall within natural language norms, by increasing their service agents’ use of “I” pronouns where possible.
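
For a rough sense of how the 5% figure follows from the estimate above, here is the back-of-the-envelope arithmetic, assuming, purely for illustration, that the estimated relationship scales linearly:

```python
# Back-of-the-envelope check, assuming the reported relationship scales linearly.
lift_per_10pct_more_i = 0.008   # 0.8% more purchase volume per 10% increase in "I" use
target_lift = 0.05              # the "more than 5%" incremental sales lift cited above

required_increase = target_lift / lift_per_10pct_more_i * 0.10
print(f"{required_increase:.1%}")  # 62.5% -- agents would need to use "I" roughly
                                   # 60-65% more often, which the article suggests
                                   # would still fall within natural language norms
```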

Why is “I” a more powerful pronoun in agents’ interactions? After all, saying “I” too much can signal self-centeredness,7 and many leaders are, in fact, criticized for speaking too much about themselves.

However, CEO speeches and corporate earnings reports are not one-on-one interactions, where, as linguists point out, the effect can be the opposite:8 When two people are communicating with each other, “I” suggests a personal focus on the issue at hand. Specifically, our research on customer service finds that saying “I” signals that the agent is feeling and acting on the customer’s behalf. For example, telling a customer “I am working on that” conveys a greater sense of ownership than “We are working on that,” which can imply a diffusion of responsibility. Similarly, “I understand the issue” shows more empathy than “We understand the issue.”

Ultimately, customers need to know that the agents with whom they are interacting care and are working on their behalf. Research has consistently shown that customer perceptions of empathy and agency drive satisfaction, sales, and profits,9 and our studies show that “I” fosters these perceptions to a significantly greater degree than “we.”

Using ‘You’ Can Backfire

While “I” is clearly better than “we” when referring to who is providing service, what about using the word “you”? Our studies suggest that service managers and employees believe “you” conveys a customer orientation. We also found that agents use it more frequently than natural language would warrant.10

However, peppering conversations with “you” offers little benefit, because customers are already the implied focus of these interactions. In fact, adding or removing references to “you” (the customer) tended to have no positive effect in our studies. We replicated these results across a total of nine experiments (more than 1,200 participants total, about 55% female, 45% male) using a variety of language stimuli covering a range of typical customer service interactions. In our studies, the use of “you” to refer to the customer as the recipient of the agent’s actions — such as “I can look that up for you” — did nothing to improve satisfaction, purchase intentions, or customer feelings that the agent was acting with either empathy or agency.

Say ‘I’ for Service Success

Customer service agents tend to use the “we” pronoun, but using the “I” pronoun leads to greater customer satisfaction and an increase in purchases.

[Exhibit: chat bubble diagram contrasting how agents typically talk to customers with the more effective approach.]

Sometimes, using the word “you” can actually have a negative effect on company and customer outcomes. For example, we found that saying to a customer, “Sorry your product was defective,” rather than “Sorry the product was defective,” resulted in decreased satisfaction and purchase intentions. This result was driven in part by perceptions that the employee wasn’t being accountable (that is, lacked agency), potentially shifting the responsibility or blame toward the customer.

In short, the usual prescriptions and practices of referring to the company as “we” and emphasizing “you,” the customer, fail to reap the benefits that managers expect. It’s more effective when agents speak from a personal, singular perspective — treating customer interactions as one-to-one, rather than many-to-one, dialogues. So front-line service employees should be coached to do that. There are simple language changes that any company can implement. (See “Say ‘I’ for Service Success.”) By making these changes to customer service language, organizations can create more meaningful interactions with their customers — and improve the bottom line.
