Measuring the Productivity Impact of Generative AI
Customer support agents using an AI tool to guide their conversations saw a nearly 14 percent increase in productivity, with improvements of 35 percent for the lowest-skilled and least experienced workers and zero or small negative effects on the most experienced and most able workers, Erik Brynjolfsson, Danielle Li, and Lindsey R. Raymond report in Generative AI at Work (NBER Working Paper 31161).
Using call data from roughly 5,000 agents working for a Fortune 500 software company, the researchers tracked the duration, quality, and outcome of customer support interactions as the company introduced a Generative Pre-trained Transformer (GPT) AI tool. The tool was rolled out to the agents gradually, mostly between November 2020 and February 2021. As a control group, the researchers also collected data over 2020 and 2021 from agents who did not receive the tool. The AI tool was intended to support the work of human customer support agents, offering them potential responses to customer queries. The agents could choose to accept those suggestions or ignore them and enter their own responses.
With AI assistance, customer service agents could handle more calls per hour and increase their resolution rate.
The researchers find that customer support agents using the AI tool increased the number of customer issues resolved per hour by 13.8 percent. They attribute the increase to three factors: agents spent about 9 percent less time per chat, handled about 14 percent more chats per hour (in part because they could participate in multiple chats at once), and successfully resolved about 1.3 percent more chats overall. Measures of customer satisfaction showed no significant change, suggesting that the productivity improvements did not come at the expense of interaction quality.
The researchers divide the data by agents’ length of tenure and pre-AI productivity, and find that the benefits of using the AI tool were greatest among less experienced and lower-skilled workers, who saw gains of 35 percent, with little to no negative effect on the most experienced and top-performing workers. An agent using the AI tool who had just two months’ tenure at the firm performed as well as an agent with six months’ tenure working without the tool. The researchers suggest that newer and lower-skilled workers may have more to learn than higher-skilled and more established workers, and that AI tools can help them adopt the skills and behaviors of more experienced workers more quickly. Text analysis of agents’ conversations supports this interpretation.
All agents changed their communication patterns after beginning to use the AI tool, but the change among lower-performing agents was greater. This may be because the AI tool based its suggestions on the work style and outputs of the company’s most productive agents, and therefore spread their patterns of behavior to newer and less skilled workers. For instance, the developers of the AI tool found that top performers were able to determine the underlying technical issue from a customer’s description twice as fast as lower performers. The AI tool, trained on the best examples of resolved queries, learned to connect specific query phrases to useful diagnostic questions and potential solutions. The AI tool was also able to give more frequent feedback than a human manager. This gave new hires and lower performers the opportunity to improve faster than they would have without the tool, iterating with each call rather than only after managerial reviews.
The researchers also noticed that customers were more likely to express positive sentiments, and less likely to request help from a supervisor, when interacting with agents using AI assistance than when interacting with those who were not. Perhaps reflecting the improved tenor of the exchanges, attrition rates among agents with access to the AI tool were 8.6 percent lower than the comparable rates for agents without such access.
—Emma Salomon