
Can chatbots make workers more ethical?



Convercent CEO Patrick Quinlan and the company’s chief ethics officer, Katie Smith, decided to use their ethics and compliance technology to create a new kind of Code of Conduct. Tapping the power of emerging AI like chatbots, predictive analytics, and natural language processing, they developed an interactive Code of Ethics for themselves as a test case.

The first version, says Quinlan, “was like souped-up web content.” Highly interactive, the code led users through clickable sections rather than making them read page after page of a static document. The second version debuted “Finn” (named after an employee’s dog), a chatbot that popped up to respond to questions or let the employee report an issue directly in the chat. Finn is depicted as a gender-neutral robot, which Quinlan says was intentional.

The backend featured an analytics dashboard that could alert company leaders to potential issues by notifying them when there was an uptick in activity, like: “30% of the marketing department in New York accessed the sexual harassment page six times in the last four days.”
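
To make that mechanism concrete, here is a minimal sketch in Python of how such a threshold alert might be computed. The page-view record shape, department rosters, and the share-over-a-window thresholds are assumptions for illustration; Convercent hasn’t published its analytics internals.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of the uptick alert described above. The page-view
# record shape, department sizes, and thresholds are illustrative
# assumptions, not Convercent's actual design.
def uptick_alerts(page_views, dept_sizes, window_days=4, share_threshold=0.30):
    """Flag (department, page) pairs where a large share of a department's
    employees visited a policy page within the lookback window."""
    cutoff = datetime.now() - timedelta(days=window_days)

    # Count distinct employees per (department, page) pair in the window.
    visitors = {}
    for view in page_views:
        if view["ts"] >= cutoff:
            key = (view["dept"], view["page"])
            visitors.setdefault(key, set()).add(view["employee_id"])

    alerts = []
    for (dept, page), employees in visitors.items():
        share = len(employees) / dept_sizes[dept]
        if share >= share_threshold:
            alerts.append(f"{share:.0%} of {dept} accessed the {page} page "
                          f"in the last {window_days} days")
    return alerts
```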

Although a number of bots, apps, and platforms have been launched in the last couple of years to combat workplace harassment, discrimination, fraud, and other issues, Convercent’s offering stands apart by pairing each company’s unique code of conduct with an AI component.

Among the first companies to pilot the interactive code was Kimberly-Clark. Kann says that, based on employee feedback, the company realized it had an opportunity to substantially improve how policies were organized and stored. Among the concerns were how best to engage and support day-to-day decision-making, and how to provide helpful, real-time information and business-friendly guidance. The result was a multimedia experience that includes policies, training, videos, and interactive search. The company’s chatbot was branded KayCee (get it?).

[Image: courtesy of Convercent]

Getting Answers

Since the test was rolled out last year using Convercent’s interactive Code, more than 21,000 people at Kimberly-Clark have used it across more than 60 countries in 19 languages. It drove a 2.5x increase in helpline questions, and employees spent an average of 3.45 minutes on a page. They also initiated more than 3,000 chats with KayCee.

The chats are critical to the mission of getting employees’ questions answered and their reports handled promptly and with care. Quinlan says that 34% of Convercent helpline reports fall into the critical category, which includes harassment, discrimination and abuse of power, fraud and bribery, ethics and compliance violations, wrongful termination and retaliation, or violence.

[Image: courtesy of Convercent]

The bot itself opens with a fairly basic question, designed to buck a traditional industry practice that asked the reporting party to do much of the categorizing of their allegation. “It takes great courage to speak up in the midst of anguish,” Quinlan observes, “so it doesn’t help when they’re asked to explain whether they are calling out bribery versus corruption.” This “Simple Intake” just says “Tell us what happened,” allowing for open-ended reporting. The system reads the response and can route the report to HR, legal, or other teams. With this feature, Convercent saw 70% more text come through in reporting descriptions across its customer base, which covers roughly 6.7 million people globally.
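
A rough sketch of how that free-text routing could work, assuming hypothetical keyword rules and team names (Convercent’s actual pipeline is not public):

```python
# Illustrative only: these keyword stems and team names are invented
# assumptions, not Convercent's actual routing rules.
ROUTES = {
    "HR": ["harass", "discriminat", "retaliat", "terminat"],
    "Legal": ["bribe", "fraud", "corrupt", "kickback"],
    "Security": ["violence", "threat", "weapon"],
}

def route_report(text):
    """Take an open-ended "Tell us what happened" report and pick a team.

    The reporter never categorizes the allegation themselves: a first-pass
    keyword score chooses a destination, and a report that matches nothing
    goes to a human triager instead.
    """
    lowered = text.lower()
    scores = {team: sum(1 for stem in stems if stem in lowered)
              for team, stems in ROUTES.items()}
    team, score = max(scores.items(), key=lambda kv: kv[1])
    return team if score > 0 else "Triage"

print(route_report("A vendor offered our buyer a kickback on the contract"))  # Legal
```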

Philip Winterburn, Convercent’s chief strategy officer, believes that a chat interface removes a number of barriers, both real and perceived, when it becomes the first point of entry for employees to raise concerns, ask questions, and have a meaningful dialogue.

“Currently our chatbot is ‘rules-based’ and leverages a combination of customizable questions and answers as well as keyword matching to offer assistance,” Winterburn explains. But Quinlan adds that they’re adopting machine learning and natural language processing to improve the conversations. “We’re still on the one-yard line,” he maintains. Winterburn adds that machine learning will also help the bot know when it should hand the conversation over to a real human.
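
As a concrete picture of that rules-based approach, here is a minimal sketch with customizable question-and-answer pairs matched by keywords, plus the human handoff Winterburn mentions. The FAQ entries and the overlap rule are invented for illustration.

```python
import re

# Minimal sketch of a rules-based bot: customizable Q&A pairs matched by
# keyword overlap, with a handoff when no rule fires. The policy answers
# below are invented examples, not Convercent content.
FAQ = [
    ({"gift", "vendor", "supplier"},
     "Gifts from vendors above a nominal value should be declined or disclosed."),
    ({"expense", "travel", "reimbursement"},
     "See the travel policy; submit receipts within 30 days."),
    ({"report", "concern", "harassment"},
     "You can report a concern right here in the chat. Tell us what happened."),
]

def answer(question, min_overlap=1):
    """Return the best-matching canned answer, or hand off to a human."""
    words = set(re.findall(r"[a-z']+", question.lower()))
    keywords, reply = max(FAQ, key=lambda entry: len(entry[0] & words))
    if len(keywords & words) >= min_overlap:
        return reply
    return "Let me connect you with a member of the ethics team."

print(answer("Can I accept a gift from a supplier?"))
```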

Machine learning is a tricky thing to manage, as Microsoft learned a couple of years ago when users gamed its chatbot Tay into generating hate speech. Quinlan admits that this is a recent conversation within his company as it tries to be proactive in ensuring people aren’t weaponizing these interactive codes of ethics, for example, to take out a competitor for a new job. “We don’t know of an instance where that happened yet,” Quinlan maintains, but he concedes that with more than 250,000 cases a year, the law of inevitability could change that.

Still, Kann says that on balance the experience has been a good one for the company and its people. “Employees are no longer limited to searching for what they need,” Kann says. “Rather, we are able to push information based on the analytics.” Kimberly-Clark’s ambition is to provide the “right content at the right time to the right employee.”

Quinlan puts the onus on employers, which at one point were simply looking for sign-offs on their codes of conduct, just to protect themselves legally. He contends that now it’s about building trust: what the follow-up is when people engage, he posits, and what organizational justice looks like. “We can’t just expect people to speak up” and not have their employer take satisfactory action.

Ultimately, Kann says, “the Interactive Code’s greatest value is driving our ethics and compliance core mission: business engagement.”
