Microsoft Corp.’s LinkedIn boosted its subscription revenue by 8 percent after equipping its sales force with artificial intelligence software that not only predicts which customers are at risk of canceling, but also explains how it reaches its conclusions.

The system, which launched last July and will be described in a LinkedIn blog post on Wednesday, marks a breakthrough in getting artificial intelligence to “show its work” in a useful way.

While AI scientists have no problem designing systems that accurately predict all sorts of business outcomes, they are finding that for these tools to be more effective for human operators, the AI may need to explain itself through another algorithm.

As startups and cloud giants race to make opaque software easier to understand, the burgeoning field of “explainable artificial intelligence,” or XAI, has spurred heavy investment in Silicon Valley and stoked discussion in Washington and Brussels, where regulators want to ensure that automated decision-making is done fairly and transparently.

AI can perpetuate societal biases around race, gender and culture. Some AI scientists see explanations as a crucial part of mitigating those problematic outcomes.

U.S. consumer protection watchdogs, including the Federal Trade Commission, have warned over the past two years that artificial intelligence that cannot be explained could be investigated. The EU may pass the Artificial Intelligence Act next year, a comprehensive set of requirements that includes users being able to obtain interpretations of automated predictions.

Proponents of explainable AI say it has helped make AI applications more effective in fields such as healthcare and sales. Google Cloud sells explainable AI services that, for example, tell customers trying to sharpen their systems which pixels, and soon which training examples, mattered most in predicting the subject of a photo.


But critics say explanations of why AI predicts what it does are still too unreliable, because the technology for interpreting the machines is not yet good enough.

LinkedIn and other companies developing explainable AI acknowledge that every step in the process — analyzing predictions, generating explanations, confirming their accuracy and making them actionable for users — still has room for improvement.

But after two years of trial and error in a relatively low-stakes application, LinkedIn says the technology has yielded practical value. Its evidence is an 8% increase in renewal bookings during the current fiscal year, above what would normally be expected. LinkedIn declined to specify the benefit in dollars, but described it as sizable.

Previously, LinkedIn salespeople relied on their own intuition and a few automated alerts about customer adoption of the service.

Now, artificial intelligence handles the research quickly. Dubbed CrystalCandle by LinkedIn, it points out unnoticed trends, and its reasoning helps salespeople hone their strategies for keeping at-risk customers on board and pitching upgrades to others.

LinkedIn said explanation-based recommendations have expanded to its more than 5,000 sales employees, covering recruiting, advertising, marketing and educational products.

Parvez Ahammad, head of machine learning and applied science research at LinkedIn, said: “It helps experienced salespeople by arming them with specific insights for their conversations with prospects. It also helps new salespeople get started right away.”

To explain or not to explain?

LinkedIn first provided predictions without explanations in 2020. A score with about 80% accuracy indicates the likelihood that a customer will upgrade, hold steady or cancel.


Salespeople weren’t entirely convinced. The teams selling LinkedIn’s Talent Solutions recruiting and hiring software couldn’t be sure how to adjust their strategy, especially when the odds of a client not renewing were no better than a coin flip.

Last July, they started seeing a short auto-generated paragraph highlighting factors that affected the score.

For example, the AI concluded that one customer was likely to upgrade because it had added 240 employees in the past year and candidates had become 146% more responsive to it in the last month.

In addition, an index that measures a client’s overall success with LinkedIn’s recruiting tools surged 25% in the last three months.
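The general pattern behind such explanations can be illustrated with a short sketch: score a customer, compute per-feature contributions to that score, and turn the largest ones into a templated sentence. The code below is a minimal, hypothetical illustration of that XAI idea, not LinkedIn’s CrystalCandle; the model, feature names and numbers are all invented.

```python
# Minimal sketch: churn/renewal score plus a templated, factor-based explanation.
# Hypothetical illustration only; feature names and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Toy features: [headcount growth, candidate response change, success-index change]
X = rng.normal(size=(500, 3))
# Synthetic label: customers that grow and engage more tend to renew or upgrade.
y = (X @ np.array([1.2, 0.8, 1.0]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
feature_names = ["headcount growth", "candidate response speed", "recruiting success index"]

def explain(customer: np.ndarray, top_k: int = 2) -> str:
    """Return a renewal-likelihood score plus the top contributing factors."""
    prob = model.predict_proba(customer.reshape(1, -1))[0, 1]
    # For a linear model, coefficient * feature value approximates each
    # feature's contribution to the log-odds of renewing.
    contributions = model.coef_[0] * customer
    top = np.argsort(-np.abs(contributions))[:top_k]
    reasons = [
        f"{feature_names[i]} {'raised' if contributions[i] > 0 else 'lowered'} the score"
        for i in top
    ]
    return f"Likelihood of renewal or upgrade: {prob:.0%}. Key factors: " + "; ".join(reasons) + "."

print(explain(np.array([1.5, 0.9, -0.3])))
```

Production systems typically pair far richer models with model-agnostic attribution methods and carefully validated explanation templates, but the score-attribute-narrate loop is the same.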

Based on those explanations, sales reps now direct customers to training, support and services that improve their experience and keep them spending, said Lekha Doshi, LinkedIn’s vice president of global operations.

But some AI experts question whether explanations are necessary. They could even do harm, creating a false sense of security in AI or prompting design sacrifices that make predictions less accurate, researchers say.

Fei-Fei Li, co-director of Stanford University’s Institute for Human-Centered Artificial Intelligence, said people use products such as Tylenol and Google Maps whose inner workings are not neatly understood. In such cases, rigorous testing and monitoring have dispelled most doubts about their efficacy.

Likewise, Daniel Roy, an associate professor of statistics at the University of Toronto, says that AI systems in general can be considered fair even when individual decisions are difficult to understand.

LinkedIn says an algorithm’s integrity cannot be assessed without understanding its thinking.

It also maintains that tools like CrystalCandle could help AI users in other fields. Doctors could learn why AI predicts that someone is at greater risk of a disease, and people could be told why AI recommended they be denied a credit card.


The hope, says Been Kim, an AI researcher at Google, is that explanations reveal whether a system aligns with the concepts and values people want to promote.

“I think interpretability ultimately enables the dialogue between machines and humans,” she said. “If we really want to achieve human-machine collaboration, we need it.”

© Thomson Reuters 2022