Using AI to minimise bias in an employee performance review
Liz Melton and Grant Riewe
Additional contact information
Liz Melton: Strategic Partnerships Manager, USA
Grant Riewe: Chief Technology Officer, USA
Journal of AI, Robotics & Workplace Automation, 2022, vol. 2, issue 1, 17-23
Abstract:
Performance reviews are intended to be objective, but all humans experience bias. While many companies opt for group reviews as a way to de-bias and challenge the status quo, what is said in those meetings, how those comments are delivered and the context for those remarks are just as important. At the same time, most people's attention span is shorter than a review, and being promoted depends on what bosses remember about their direct reports, their subjective measure of employee success, and their ability to convince others that employee accomplishments deserve a reward. As a result of these compounding factors, meta-bias patterns emerge in company culture. Combine those limitations with the fact that reviews are often a breeding ground for subtle (and not-so-subtle) bias, and the question arises: why are we not using technology to help? With developments in natural language processing (NLP) and conversational AI (CAI), computers can identify biased phrases in real time. Although these technologies have a long way to go to match human nuance, they can at least flag problematic phrases during something as significant as a performance review. And with the right inputs, rooted in social science and normalised for geography, contextual relationships and culture, such systems could surface insidious bias throughout organisations. This paper examines how a future CAI tool could reduce bias and, eventually, teach people to re-evaluate and reframe their thinking. In a performance review setting, the system would flag problematic phrases as they are said, and committee heads would pause the conversation. The committee would then evaluate the comment, ask the presenter for further information, and continue only once there is sufficient clarity. Once the discussion concludes, the review cycle would resume until another phrase is identified.
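The flag-pause-clarify loop the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' system: the lexicon, function names and transcript below are all hypothetical, and a real implementation would rely on an NLP model normalised for geography, relationships and culture rather than literal string matching.

```python
# Hypothetical lexicon of bias-associated phrases. Entries are
# illustrative only; the paper envisions an NLP/CAI model here.
BIAS_PATTERNS = {
    "abrasive": "personality critique often applied unevenly",
    "not a culture fit": "vague, unverifiable judgement",
    "too emotional": "temperament stereotype",
}

def flag_bias(utterance: str):
    """Return (phrase, reason) pairs found in a single spoken remark."""
    lowered = utterance.lower()
    return [(phrase, reason)
            for phrase, reason in BIAS_PATTERNS.items()
            if phrase in lowered]

def review_stream(utterances):
    """Yield each remark with any flags, so a committee head can pause
    the conversation whenever a flag appears, as the paper proposes."""
    for speaker, utterance in utterances:
        yield speaker, utterance, flag_bias(utterance)

transcript = [
    ("Manager A", "She hit every goal this quarter."),
    ("Manager B", "True, but she can be abrasive in meetings."),
]
for speaker, text, flags in review_stream(transcript):
    if flags:
        print(f"PAUSE - {speaker}: {flags}")
```

Keyword matching is only a stand-in for the real-time CAI component; the point is the control flow, in which flagged remarks halt the review until the committee reaches sufficient clarity.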
The system remains persistently aware throughout all conversations, highlighting potential bias for everyone to learn from. Beyond pointing out biased phrases during a performance review, a combination of NLP and CAI can serve as the foundation for company-wide analytics. Organisations can track who speaks in the majority of meetings, what was said, who challenges biased phrases, whether certain groups are misrepresented in reviews more or less often, and so on. All of this information gives a fundamentally new picture of what is happening inside a company, laying the groundwork for human resources (HR) metrics that individuals, and the company as a whole, can improve over time.
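The company-wide analytics described above amount to simple aggregation over the event stream the flagging layer produces. A minimal sketch, assuming a hypothetical record format of (meeting id, speaker, whether the remark was flagged, who challenged it):

```python
from collections import Counter

# Hypothetical event log; in the proposed system these records would
# come from the NLP/CAI layer monitoring each review conversation.
events = [
    ("review-1", "Manager A", False, None),
    ("review-1", "Manager B", True,  "Manager C"),
    ("review-2", "Manager B", False, None),
    ("review-2", "Manager B", True,  None),
]

# Who is speaking in a majority of meetings.
speaking_turns = Counter(speaker for _, speaker, _, _ in events)
# Whose remarks get flagged, and who challenges biased phrases.
flagged_remarks = Counter(speaker for _, speaker, flagged, _ in events if flagged)
challenges = Counter(ch for _, _, _, ch in events if ch)

print("Speaking turns:", dict(speaking_turns))
print("Flagged remarks:", dict(flagged_remarks))
print("Challenges made:", dict(challenges))
```

Counts like these are the raw material for the HR metrics the paper anticipates, such as tracking whether flag rates or challenge rates shift over successive review cycles.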
Keywords: bias; bias detection tool; bias detection system; AI; performance evaluations; review process; performance reviews; performance review
JEL-codes: G2 M15
Date: 2022
Downloads: (external link)
https://hstalks.com/article/7358/download/ (application/pdf)
https://hstalks.com/article/7358/ (text/html)
Requires a paid subscription for full access.
Persistent link: https://EconPapers.repec.org/RePEc:aza:airwa0:y:2022:v:2:i:1:p:17-23
More articles in Journal of AI, Robotics & Workplace Automation from Henry Stewart Publications
Bibliographic data for series maintained by Henry Stewart Talks.