Embrace technology with an eye on fairness and worker well-being

As the great enabler of humans, artificial intelligence touches just about every aspect of our lives today. AI suggests where to have our next meal, which movies we might want to see this weekend, when to take a walk because we've become too sedentary and when to leave the house to get to work on time.

And when we're in the office, AI is helping to prioritize our days, write emails and learn new skills. For those of us in HR and recruiting, AI can already help us find, screen, schedule and assess candidates. In fact, AI is empowering workers at such an unprecedented rate that the excitement over its promise sometimes overshadows the risks it may pose in the workplace.

What are these risks? Some might view AI as an unbiased and benevolent force being harnessed to serve all of us at work, but the ethical considerations in developing and deploying AI are not universally observed, let alone universally defined. Even for well-meaning employers, deploying robots and automation in the workplace risks producing adverse results when developers fail to carefully examine the implications of implementing AI in their workflows. Ethical AI deployment also depends on the users and how they plan to configure it into their workflows.

You might have read about growing concerns over the use of AI in the recruitment process. One of the most worrisome developments is the potential for embedded bias in AI-powered technologies used to screen, interview, assess and hire workers. These concerns aren't limited to recruitment, or even to the workplace. In fact, the level of anxiety around fair AI development has prompted dozens of OECD member and partner countries to adopt international standards to ensure AI systems are fair, safe, trustworthy and robust.

AI is transforming society, as OECD Secretary-General Angel Gurría said in announcing the adoption earlier this year, but "it raises new challenges and is also fueling anxieties and ethical concerns." As part of a global effort to calm these fears and concerns, the OECD has outlined steps governments around the world can take to spur a more comprehensive approach to developing fair and transparent AI systems.

While this is an ongoing effort, a growing number of companies, ironically, are turning to AI for help minimizing bias in their recruitment processes. Most companies know that bias can creep into practically every stage of the recruitment journey. Whether it's how job descriptions are composed, how interviews are conducted or how applications and CVs are reviewed, there are many ways in which unconscious and overt bias can occur. AI today is being used to "blind" the voices and appearances of job seekers, help companies write more inclusive job ads and expand sourcing to include more diverse candidates.
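To make the "blinding" idea concrete, here is a deliberately minimal Python sketch that redacts a candidate's name and gendered pronouns from a CV before human review. It is purely illustrative: the names and rules here are invented for the example, and real anonymization products handle far more signals (photos, voice, school names, employment dates).

```python
import re

# Toy illustration of "blinding" a CV before human review.
# Real products handle far more signals than a name and pronouns.
GENDERED_TERMS = {
    r"\bhe\b": "they",
    r"\bshe\b": "they",
    r"\bhis\b": "their",
    r"\bher\b": "their",
    r"\bhim\b": "them",
}

def blind_cv(text: str, candidate_name: str) -> str:
    """Redact the candidate's name and gendered pronouns from CV text."""
    redacted = re.sub(re.escape(candidate_name), "[CANDIDATE]", text,
                      flags=re.IGNORECASE)
    for pattern, replacement in GENDERED_TERMS.items():
        redacted = re.sub(pattern, replacement, redacted, flags=re.IGNORECASE)
    return redacted

cv = "Jane Smith led a team of five. She doubled her team's output."
print(blind_cv(cv, "Jane Smith"))
# -> [CANDIDATE] led a team of five. they doubled their team's output.
```

Even this toy version shows why the approach appeals to employers: the reviewer sees the accomplishment without the identity signals that tend to trigger unconscious bias.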

While AI can help organizations to minimize biases, the algorithms used to do this can also be inherently prejudiced. Meredith Whittaker, co-founder of the AI Now Institute, stresses that “AI is not impartial or neutral. In the case of systems meant to automate candidate search and hiring, we need to ask ourselves: What assumptions about worth, ability and potential do these systems reflect and reproduce? Who was at the table when these assumptions were encoded?”

One of the most important considerations is ensuring the data used by AI systems is well managed, because poorly managed data is a major culprit in creating biased algorithms. It is critical for companies to be inclusive in their collection of data, measured in how they label it, and to employ a diverse team to make decisions about the data and determine which metrics are used in building the AI. These are just some of the steps to building a more ethical and fair AI-powered system.
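As a rough illustration of what such a data audit might look like in practice, the sketch below summarizes how each group is represented in a hypothetical historical hiring dataset and how the outcome labels are distributed within it. The column names (`gender`, `hired`) and all the data are invented for the example; they do not come from any particular vendor's system.

```python
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_col: str,
                        label_col: str) -> pd.DataFrame:
    """Summarize each group's share of the data and its historical outcome rate."""
    summary = df.groupby(group_col)[label_col].agg(
        count="size",          # examples per group
        positive_rate="mean",  # share of positive (e.g., "hired") labels
    )
    summary["share_of_data"] = summary["count"] / len(df)
    return summary

# Invented historical hiring data; column names are illustrative.
candidates = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "M"],
    "hired":  [0, 1, 0, 1, 1, 0, 1, 1],
})

print(audit_training_data(candidates, "gender", "hired"))
```

In this invented dataset, one group supplies only 37.5% of the examples and has a much lower historical hire rate, exactly the kind of skew that a model trained on the data would learn and reproduce unless the team intervenes.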

With many HR solution developers now offering tools that attempt to minimize bias, it can be challenging for employers to choose the right ones to deploy in their recruitment efforts. Moreover, how can an organization ensure the AI system it picks will address a specific form of bias, such as a gender imbalance within its workforce? Ironically, algorithms may be the answer to correcting algorithmic bias, according to Frida Polli, the CEO of pymetrics, an AI-based assessment technology company.

She explained that if a company realizes its AI recruitment systems are resulting in, for example, fewer women being hired, the algorithms can be rebuilt to account for biases in sourcing or assessments. An important related consideration is whether to make the algorithms open source so the development process is transparent.
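One common yardstick for the kind of outcome check Polli describes is the "four-fifths rule" from U.S. employment guidelines: if one group's selection rate falls below 80% of the highest group's rate, the process is flagged for review. The sketch below computes that ratio from a list of (group, hired) outcomes; it illustrates the general technique and is not a description of pymetrics' methodology, and the data is invented.

```python
from collections import Counter

def selection_rates(outcomes):
    """Selection rate per group: number selected divided by applicants."""
    applicants, hires = Counter(), Counter()
    for group, hired in outcomes:
        applicants[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / applicants[g] for g in applicants}

def disparate_impact_ratio(rates, group):
    """Ratio of a group's selection rate to the highest group's rate.
    Values below 0.8 fail the common four-fifths screen."""
    return rates[group] / max(rates.values())

# Invented outcomes from an AI screening stage: (group, was_selected).
outcomes = [("F", True), ("F", False), ("F", False), ("F", False),
            ("M", True), ("M", True), ("M", False), ("M", True)]

rates = selection_rates(outcomes)
print(rates)                               # {'F': 0.25, 'M': 0.75}
print(disparate_impact_ratio(rates, "F"))  # 0.33... -> flags the process for review
```

When a check like this fails, the remediation Polli describes happens upstream: rebalance the sourcing pipeline or rebuild the assessment model, then run the same measurement again.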

Measuring for and mitigating bias is an important aspect of ethical AI, and a cornerstone of the responsible use of technology for recruitment. However, it is critical that you also take into account broader ethical AI considerations, such as those developed by The Institute for Ethical AI & Machine Learning, which include data risk awareness and privacy concerns. Your company can also explore developing its own AI principles and creating an AI ethics committee to ensure those principles are consistently adhered to.

The underlying concepts of these principles aren't new, but as more companies implement AI systems to augment their hiring processes, human capital leaders should remain mindful of the slippery slope they may face when embracing such technologies in the workplace. Whether that means addressing redundancy concerns through upskilling and reskilling or taking greater steps to safeguard the privacy of workers and job applicants, deploying AI shouldn't be done on blind faith. There must always be a strategy and a plan of action for ensuring the practical and ethical use of these technologies, today and into the future.

Glen Cathey

Head of Digital Strategy

Glen Cathey is a globally recognized sourcing and recruiting leader, blogger, and corporate/keynote speaker. Glen currently focuses on researching, evaluating, and implementing innovative digital approaches and solutions such as conversational UX, ethical AI, and blockchain.