Artificial Intelligence Coaches for Sales Agents: Caveats and Solutions

August 1, 2021

Xueming Luo, Marco Shaojun Qin, Zheng Fang, and Zhe Qu

Link: https://doi.org/10.1177%2F0022242920956676

Artificial intelligence (AI) is increasingly capable of serving firms as a sales coach, leveraging deep learning algorithms and cognitive speech analytics to analyze agents’ conversations with customers and provide feedback. Modern AI coaches offer greater computational power, scalability, and cost efficiency than ever before, making them an attractive alternative to human trainers.

But AI coaches have drawbacks. First, their big-data analytics power can lead to feedback overload: sales agents may not be able to assimilate the comprehensive training AI coaches provide, and bottom-ranked agents, such as rookies and other inexperienced agents, may suffer most. Second, AI coaches lack interpersonal communication skills, which may result in aversion to AI among top-ranked sales agents.

The existing sales management literature has largely ignored the potential downsides of AI for agents, particularly those at different points on the performance curve. Studies have primarily suggested a linear relationship between AI coaching and salesforce performance. The literature has also only begun to address the value of AI in assisting, rather than replacing, human sales agents.

This study extends prior literature on the negative impact of AI and on customer aversion to computer automation and algorithms. It also highlights a novel AI-human coaching collaboration that outperforms either type of trainer alone.

Specifically, the study addresses three research questions regarding AI’s role in effective salesforce coaching:

  1. Which types of agents benefit most and least from AI versus human coaches, and what is the relationship between AI coaching and agent performance?
  2. What is the underlying mechanism driving the impact of AI coaches?
  3. Can AI and human coaches working together improve performance among distinct sales agent types?

The research tests four hypotheses using three randomized field experiments. First, the researchers posit that the incremental impact of an AI coach over a human coach takes the form of an inverted U across agents’ prior performance ranks: middle-ranked agents improve performance by the largest amount, while bottom- and top-ranked agents show limited gains. Second, this inverted-U relationship is mediated by how much agents learn from the coaching feedback. Third, an AI coach offering restricted feedback will have a significant, positive impact on bottom-ranked agents’ sales performance. And fourth, an AI-human coaching team will have a positive impact on all sales agents, including top-ranked performers.
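
The hypothesized inverted U can be made concrete as a regression with a quadratic treatment interaction. The sketch below is illustrative only, not the authors’ specification; the data frame and column names (sales, ai_coach, prior_rank) are assumptions.

```python
# Illustrative sketch, not the paper's specification: an inverted-U incremental
# effect of the AI coach would appear as a negative coefficient on the
# ai_coach x prior_rank^2 interaction. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

def fit_inverted_u_model(df: pd.DataFrame):
    """df columns (assumed): sales (post-training sales), ai_coach (1 = AI coach,
    0 = human coach), prior_rank (pre-experiment performance percentile in [0, 1])."""
    model = smf.ols(
        "sales ~ ai_coach * prior_rank + ai_coach * I(prior_rank ** 2)",
        data=df,
    )
    return model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
```

Under this illustrative specification, a positive estimate on the ai_coach:prior_rank term together with a negative estimate on the ai_coach:I(prior_rank ** 2) term would be consistent with the hypothesized inverted-U pattern.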

Experiment 1 examines 429 sales agents from a large financial technology firm. The agents are randomly assigned to on-the-job sales training with an AI or human coach. Within each group, agents are categorized according to their previous performance as bottom-, middle-, or top-ranked. Data is collected on performance, demographics, and voice characteristics for one month of the agents’ sales calls.
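
One minimal way to analyze such a design, sketched below under assumed column names (sales, coach, rank_tier), is to compare mean sales between the AI-coach and human-coach groups separately within each performance tier. This is an illustration of the comparison, not the study’s actual estimation procedure.

```python
# Illustrative sketch of the Experiment 1 comparison: the incremental effect of
# the AI coach over the human coach, estimated separately for bottom-, middle-,
# and top-ranked agents. Column names (sales, coach, rank_tier) are hypothetical.
import pandas as pd
from scipy import stats

def incremental_effect_by_tier(df: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for tier, grp in df.groupby("rank_tier"):  # "bottom", "middle", "top"
        ai = grp.loc[grp["coach"] == "ai", "sales"]
        human = grp.loc[grp["coach"] == "human", "sales"]
        t_stat, p_value = stats.ttest_ind(ai, human, equal_var=False)  # Welch's t-test
        rows.append({
            "tier": tier,
            "incremental_effect": ai.mean() - human.mean(),
            "t_stat": t_stat,
            "p_value": p_value,
        })
    return pd.DataFrame(rows)
```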

The results of Experiment 1 confirm the study’s first hypothesis. The incremental impact of the AI coach over the human coach is heterogeneous in an inverted-U shape: middle-ranked agents improve performance by a significant amount, while bottom- and top-ranked agents show limited incremental gains. The findings suggest the pattern is driven by a learning-based mechanism. Bottom-ranked agents suffer from information overload when working with an AI coach, and top-ranked agents are skeptical of and averse to AI as a trainer.

Experiment 2 focuses on a separate sample of 100 bottom-ranked fintech sales agents. Half of the agents are randomly assigned to a control group that receives unrestricted AI feedback similar to that in Experiment 1. The remaining agents form the treatment group and receive restricted feedback from the AI coach: just the single most important suggestion each day. The treatment group improves performance by 50%, suggesting that for bottom-ranked agents, restricted AI feedback is more effective at improving job performance than unrestricted feedback. Furthermore, the AI coach offering restricted feedback indeed reduces information overload, confirming that information overload mediates the effect on bottom-ranked agents’ performance.
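
To make the mediation logic concrete, the sketch below shows a generic bootstrap test of an indirect effect: restricted feedback should reduce a measured information-overload score, which in turn should lift sales. The column names (restricted, overload, sales) and the procedure itself are illustrative assumptions, not the authors’ analysis.

```python
# Illustrative bootstrap mediation sketch (not the paper's procedure): does an
# information-overload measure carry the effect of restricted vs. unrestricted
# AI feedback on sales? Columns (restricted, overload, sales) are hypothetical;
# restricted is a 0/1 indicator (1 = restricted feedback).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def indirect_effect(df: pd.DataFrame) -> float:
    a = smf.ols("overload ~ restricted", data=df).fit().params["restricted"]
    b = smf.ols("sales ~ overload + restricted", data=df).fit().params["overload"]
    return a * b  # indirect (mediated) path

def bootstrap_ci(df: pd.DataFrame, n_boot: int = 2000, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    n = len(df)
    draws = [indirect_effect(df.iloc[rng.integers(0, n, size=n)]) for _ in range(n_boot)]
    return np.percentile(draws, [2.5, 97.5])  # 95% percentile bootstrap interval
```

An indirect-effect confidence interval that excludes zero would be consistent with information overload mediating the benefit of restricted feedback.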

Experiment 3 addresses the limitations of AI and human coaches acting alone. A new sample of 451 bottom- and top-ranked agents is randomly assigned to one of three conditions: AI coach, human coach, or AI-human coaching team. The results suggest both bottom- and top-ranked agents in the AI-human condition perform better than their counterparts learning from either a human or an AI coach alone. While bottom-ranked agents improve more than top-ranked agents under the hybrid coaching team, combining the soft communication skills of human managers with the hard data analytics of AI appears to solve the problems faced by both bottom- and top-ranked agents.

This research shows companies how to introduce AI coaches without suffering from the technology’s traditional drawbacks. Instead of assigning an AI coach to their workforce indiscriminately, managers can use the study’s results to design an optimal program for targeted agents. Companies can also use the results as a reason to move beyond the traditional AI-human coaching dichotomy: by carefully assembling a coaching program around both AI and human trainers, they can achieve higher workforce productivity and reap more value from their AI investments.