
AI in Hiring: Getting the Best from Humans and Artificial Intelligence

The best of both worlds for humans and AI in hiring

What is AI?

AI, or artificial intelligence, refers to the simulation of human intelligence in machines or computer systems. Through algorithms and software, AI enables machines to mimic human intelligence to perform tasks, make decisions, and solve problems.

When it comes to AI and hiring, talent acquisition teams are looking to AI technology for tasks such as resume screening, candidate sourcing, video interviewing, employee selection, and much more. But while AI brings advantages to the hiring process, it also carries certain risks, particularly the heightened demand for critical thinking and human judgment, the potential for bias and discrimination, and a lack of transparency.

How do candidates perceive AI?

Candidates have mixed feelings about AI's role in recruitment. Research reveals a phenomenon known as "emotional creepiness," where candidates express an unsettling feeling of things being "off." A feeling of creepiness arises when individuals find themselves in unfamiliar situations and are unsure how to respond – an understandable reaction given how recently AI has entered the workplace (ChatGPT, for example, debuted only in November 2022). This "emotional creepiness" was particularly pronounced when AI was introduced at later hiring stages, such as AI-supported telephone or video interviews, intensifying candidates' sense of discomfort and uncertainty.

Candidates have also reported concerns about AI injecting bias into the hiring process. Research indicates that candidates tend to perceive a selection process as fairer and more equitable if they believe a human, rather than an algorithm, reviewed their application. Furthermore, candidates have expressed concern that, compared with human interaction, AI gives them less opportunity to showcase their true selves.

On the other hand, while emotional creepiness might surface toward the end of the selection process, research reveals a lack of negative emotional responses to AI support during earlier stages, such as preselection. In fact, in certain scenarios, candidates perceive AI as a potentially more objective evaluator in comparison to human assessment. Notably, women working in the tech sector have raised concerns regarding potential biases stemming from human evaluations in contrast to AI.

Strengths versus weaknesses

Humans and AI bring distinct strengths to the hiring process. Human strengths lie in discerning which problems need solving and recognizing when a different approach is warranted – a facet known as metacognitive skill. This involves reflecting on and adapting one's approach to a given issue, such as scrutinizing data points for accuracy before acting on them. Unlike AI, humans can pause and question a given course of action, and they generally excel in the interpersonal aspects of decision-making. When it comes to convincing a candidate to accept a job, the personal touch and genuine investment in their success are irreplaceable – something AI cannot provide.

On the other hand, AI's strength lies in its swift and consistent processing of extensive data sets. Unlike humans, who can only process a limited amount of information at once, AI can effortlessly analyze a thousand data points to make precise decisions or predictions. When properly configured, AI consistently yields the same outcome for identical input data, eliminating the biases and discrepancies that can occur with human decision-makers (the operative phrase being when properly configured). This fosters fairness and equitability in the hiring process. For example, one interviewer may rate the same interview answers differently on day one than on day two, and two interviewers can score the same candidate's answers differently. That inconsistency is unfair – and it is exactly where AI can bring consistency to decision-making.

(Check out our recent blog on what the future holds for AI hiring bias regulation, and what the New York AEDT Rule means for organizations and hiring).

Generative language models

All AI systems mirror the biases and patterns present in their training data; Generative AI, specifically, mirrors the biases and trends found in human language. While Generative AI may give the appearance of being an impartial judge, its responses are shaped by the collective biases of that data. Its easy, interactive demeanor can foster a sense of trust in the person providing prompts – but its outputs are nevertheless derived from biased and incomplete training data.

When used as a tool, Generative AI can be a valuable time-saver, as long as its limitations are understood and accounted for. For example, when using ChatGPT or a similar tool to help draft a job description, users should be mindful of whether their language and prompts contain any biases – unconscious or not.

The best of both worlds

Both humans and AI bring strengths and weaknesses to HR decision making. Human intelligence brings nuanced judgment, empathy, and an understanding of the complexities of interpersonal dynamics. In contrast, AI can streamline administrative tasks and process vast amounts of data, allowing organizations to focus on more strategic aspects of hiring. When combined, human intelligence and AI can create a robust decision-making framework that marries the best of both worlds, fostering efficient and insightful practices that not only identify top talent but also nurture a thriving workforce.

