Historically, HR has made qualitative rather than quantitative hiring decisions, relying on intuition or subjective data, which has introduced bias into the process (whether conscious or not).
Today, it is critical that HR uses objective, role-relevant data to inform decision-making, not just to reduce bias but also to make better talent decisions. However, it is essential to understand that while data and AI can significantly help inform better and faster hiring decisions, they do not replace a human. Humans are still critical to making the best decision for an organisation, and should always be kept in the loop. Just as no single skill measures the success of a “perfect” candidate for a particular role, employers need more than one data point to inform their talent decisions. Every role demands a range of important skills, and by looking at that data holistically, hiring organisations can increase the probability of success when predicting day-one job performance.
With this in mind, Debbie Walton, Editor at TALiNT Partners, talked to Ben Porr, Harver’s Chief Customer Officer, about the reliance on good data to make better decisions.
Debbie Walton: How has the historical reliance on qualitative hiring decisions in HR contributed to biases in the talent acquisition process?
Ben Porr: Humans are inherently biased, often unconsciously, unaware that they are using faulty inputs to make judgments. Machines are also biased, because they are built by humans. But the nice thing about machines is that you can track and measure everything they do, so you can see whether their decisions cause disparate impact and where it occurred. This improves fairness because you can identify the bias and correct it.
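To make that concrete, here is a minimal sketch of the kind of disparate impact check a team could run on its selection data, assuming the widely used four-fifths (80%) rule as the threshold; the group labels and counts are illustrative placeholders, not Harver’s data or method.

```python
# Minimal sketch: flag potential adverse impact using the four-fifths (80%) rule.
# The group names and applicant counts below are illustrative placeholders.

def selection_rate(hired: int, applied: int) -> float:
    """Proportion of applicants from a group who were hired."""
    return hired / applied if applied else 0.0

# Hypothetical pipeline counts per group: (hired, applied)
groups = {
    "group_a": (48, 120),  # selection rate 0.40
    "group_b": (27, 90),   # selection rate 0.30
}

rates = {name: selection_rate(h, a) for name, (h, a) in groups.items()}
reference = max(rates.values())  # highest selection rate is the comparison point

for name, rate in rates.items():
    impact_ratio = rate / reference if reference else 0.0
    flag = "check for adverse impact" if impact_ratio < 0.80 else "ok"
    print(f"{name}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} -> {flag}")
```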
Time and time again, we see that structured interviews are effective at reducing bias and predicting success. Why? Not because you’re asking all applicants the same questions. It’s because you are using the same rating schema to make a judgment on their knowledge and skills.
Without defining the rating schema you’re using, the interviewer is going to default to a pseudo similar-to-me effect, where they judge the person based on what they have experienced in the past. Strengths that have worked for them will be given inflated weight, and so will weaknesses they have seen people fail on or overcome. And all of this is instantaneous, then justified by “logic” afterwards.
DW: In what ways can data and AI be effectively utilised to reduce bias in talent decisions within the HR landscape?
BP: Like any technological advancement, utilising data and AI will have a twofold effect. On one hand, it will help with efficiency by automating the more mundane and repetitive tasks. On the other hand, it will also increase the amount of information that can be used. Remember, more data doesn’t necessarily mean better data. Just like any normal distribution, there will be more good data and more bad data.
For example, consider the Large Language Models (LLMs) behind generative AI tools like ChatGPT. From an efficiency standpoint, they create a new opportunity for less technical people to use qualitative data or free text data instead of just quantitative data. When a recruiter wants to hire for a job, they will typically talk to the hiring manager to understand the critical skillset they need.
For some organizations, this is already predefined, so LLMs might not be as beneficial. But for new jobs or organizations that don’t have these predefined job descriptions, a recruiter can now generate a summary of critical skillsets from the LLM as a starting point and then refine it in discussion with the hiring manager.
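As a rough illustration of that starting point, the sketch below assumes the OpenAI Python client and a generic chat model; the model name, prompt, and role notes are hypothetical, and the result is only a draft for the recruiter to refine with the hiring manager, not a description of any Harver feature.

```python
# Illustrative sketch: draft a critical-skills summary for a new role with an LLM.
# Assumes the OpenAI Python client with an API key in the environment; the model
# name, prompt, and role notes are placeholders, and the output is only a draft
# to review with the hiring manager.
from openai import OpenAI

client = OpenAI()

role_notes = """
New role: contact centre agent.
Hiring manager notes: handles live chat, resolves basic technical issues,
spots upsell opportunities, works to service-level targets.
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are helping a recruiter draft a skills profile for a job. "
                "List five critical skills with a one-line rationale for each."
            ),
        },
        {"role": "user", "content": role_notes},
    ],
)

draft_skills = response.choices[0].message.content
print(draft_skills)  # a starting point to refine, not a finished job profile
```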
All in all, I think it will help with efficiency in the near future, but actually improving prediction is much further out.
DW: While data is crucial, it’s mentioned that humans remain critical in decision-making. How can HR professionals strike the right balance between data-driven insights and human judgment in talent decisions?
BP: To start, make sure your inputs are valid and relevant for the role at hand. If job descriptions, critical skills, and other upfront elements are off, then any resulting output will be off, too. Some of Harver’s customers use AI-powered tools to debias job description language so they aren’t artificially narrowing their candidate pool. Human recruiters and hiring managers are still involved throughout the hiring process.
Another way to strike the right balance is to include rigor and structure throughout your hiring process. If you aren’t leveling the playing field for all job seekers, your talent decisions will suffer. Coming back to structured interviews and everyone using the same rating schema mitigates potential for adverse impact, while still making the most of human judgment and data-driven insights in talent decisions.
DW: What challenges might organisations face in adopting a more data-centric approach to talent decisions, and how can these challenges be overcome?
BP: Harnessing data efficiently and effectively will be a success factor common to employers that acquire and manage talent well in 2024. We actually discuss this, AI, and other key 2024 HR trends during our recent webinar, “2024 HR Trends: Predictions, Pitfalls & Paradigm Shifts”. One big challenge is analysis paralysis. What if you have all this data but don’t know how to begin? You don’t need to boil the ocean – start with one role, get that right, and then build from there.
If you don’t have good, robust, or trustworthy data, then your first step is to just track data consistently. I can’t tell you how many times I’ve heard HR leaders say they can’t trust the data so they can’t make inferences from it. Well, until you start tracking consistently, you can’t really identify what is faulty in the data and what is not.
Once you track consistently, you have to start connecting data. As a psychologist, I’m all about hypothesis testing. I think that these 3-5 skills are related to successful performance on the job. Ok, well, what data do I have to measure these skills and what data do I have to show that performance has improved?
Then, once you determine that performance has improved, you can quantify how much it has improved and show the business the ROI on your team’s work. Performance is a difficult one, but start small. Did they show up on day 1? Did they complete onboarding? Have they taken LMS courses? Did they make it through the first 90 days? Did they get promoted?
Once you track this consistently, you can start identifying which skills lead to good outcomes and which skill gaps lead to poor outcomes. One shift we have seen is in contact center roles. Employers used to want either a sales, customer service, or technical support agent. Now, they are finding that they need people who can do all three! Customer service is the critical piece, but they want people who can perform at least tier 1 level support and upsell/cross-sell where they identify opportunities. These require different skillsets than traditional customer service roles.
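As a simple illustration of that kind of hypothesis testing, the sketch below relates hypothetical scores on those three contact-center skillsets to one of the milestones mentioned earlier (making it through the first 90 days); the column names and values are placeholders, and a real validation study would use a much larger, properly benchmarked sample.

```python
# Illustrative sketch: relate hypothetical skill scores to a simple outcome milestone.
# The dataframe columns and values are placeholders, not real candidate data.
import pandas as pd

hires = pd.DataFrame({
    "customer_service": [72, 85, 60, 90, 55, 78, 88, 64],
    "tech_support":     [65, 80, 58, 75, 50, 70, 82, 61],
    "sales_aptitude":   [55, 70, 62, 88, 45, 66, 79, 58],
    "passed_90_days":   [1,  1,  0,  1,  0,  1,  1,  0],  # 1 = still employed at day 90
})

skills = ["customer_service", "tech_support", "sales_aptitude"]

for skill in skills:
    # Pearson correlation with a binary outcome (i.e. a point-biserial correlation),
    # plus the mean score gap between hires who stayed and those who did not.
    corr = hires[skill].corr(hires["passed_90_days"])
    gap = (hires.loc[hires["passed_90_days"] == 1, skill].mean()
           - hires.loc[hires["passed_90_days"] == 0, skill].mean())
    print(f"{skill}: correlation {corr:.2f}, mean score gap {gap:.1f}")
```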
I know not everyone has the time or resources to analyze skills or conduct incumbent benchmarking studies. If that’s the case for you, add a robust and rigorous People Science team to your checklist when looking for a solution provider. Leverage their I-O psychologists to get you started and iterate from there.
DW: Could you provide examples of specific skills or attributes that are often overlooked in traditional hiring processes, and how data can help in identifying and evaluating these?
BP: I started my career about 20 years ago and thought we were moving to skills-based hiring back then. We love rebranding things to make them sound new. When we discuss skills-based hiring, we are thinking about whether the person has the knowledge, skill, ability, personality, and motivation to do the job.
Some organizations are keen to assess hard skills, like typing for a live chat agent at a contact center or ability with certain coding languages for a software developer role. I’m excited to see more employers also focusing on soft skills, or the durable interpersonal skills and traits that can be more transferable to different roles or functions.
The basic principle is that people have transferable skills that span job families. The skills that make you successful in one field can also be applied in other roles where those same skills are strongly correlated with success. Skills-based hiring is really expanding people’s minds to the idea that you don’t have to have experience, or even education, in that specific area to be successful in the job. When we look at it this way, we open up more opportunities for our candidates, recruiters and employees.
We know that the reason we ask people to draft and submit a resume is because we are making a leap that their education and experience prove they have acquired the knowledge and skills needed to perform that work. But, as we know, just because a person has experience – to say nothing of the inflation that happens on a resume – doesn’t mean they have attained the skills. We have always known that there are ways to measure a person’s skill level in certain critical areas so we can match people to jobs.
DW: What role does continuous learning and adaptation play in the context of using data to inform talent decisions, especially as the job market and skill requirements evolve?
BP: As the job market, macroeconomic factors, and skills and technology evolve, continuous learning and adaptation are critical for preparing for and thriving during volatility and uncertainty. Employers need organisational agility and workforce mobility, which means they need resilient talent with curiosity and learning agility. That requires skills data as the foundation for stability. Ask my son at football practice: You can’t pivot in a new direction if you don’t have stable footing to begin with.
In a sense, data and people can both enhance and check each other. As an example, one of Harver’s customers was so inspired by early results with Harver’s assessment data that they audited their whole hiring journey to mitigate bias. Along the way, data helped speed up decisions and increase quality and fairness. At the same time, their People Analytics team checks data at each stage of the hiring process to identify potential adverse impact so we can resolve it.
DW: How can organisations ensure that their approach to data-driven talent decisions aligns with ethical considerations, and what safeguards should be in place to prevent unintended consequences or biases in the process?
BP: Whether we’re talking about talent selection or leveraging AI, keep humans in the loop and don’t lose sight of the need to continually validate and debias. Some Harver customers start with a double-blind pilot where they gather assessment data in the background but rely only on prior evaluation methods and processes. When they compare their actual hiring results with what could have been had they focused on the job seekers we identified as the best matches, they see how objective data from validated assessments improves quality of hire and recruiting efficiency.
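As a rough sketch of how the results of such a double-blind pilot could be compared afterwards, the example below assumes hypothetical columns for the assessment’s match flag and two outcome measures; it is purely illustrative, not Harver’s evaluation method.

```python
# Illustrative sketch: after a double-blind pilot, compare outcomes for hires the
# assessment flagged as best matches versus those it did not. Columns are placeholders.
import pandas as pd

pilot = pd.DataFrame({
    "assessment_match": [1, 1, 0, 1, 0, 0, 1, 0, 1, 0],  # 1 = flagged as a best match
    "passed_90_days":   [1, 1, 0, 1, 1, 0, 1, 0, 1, 0],
    "days_to_hire":     [18, 21, 30, 19, 27, 33, 20, 29, 22, 31],
})

summary = pilot.groupby("assessment_match").agg(
    hires=("passed_90_days", "size"),
    retention_rate=("passed_90_days", "mean"),
    avg_days_to_hire=("days_to_hire", "mean"),
)
print(summary)  # a gap in retention or time-to-hire suggests the assessment adds value
```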
In AI terms, think of relying on those prior, unvalidated methods as training a machine on biased, incomplete, or otherwise faulty data. As a longtime supporter of fair hiring that benefits employers and job seekers alike, Harver is committed to leveraging AI to reduce – never introduce – bias in talent decisions. Rigor and keeping humans in the loop throughout any selection process or usage of AI are paramount.
We’ve seen what can happen otherwise, like a few years ago with Amazon’s failed AI-powered recruiting engine. It was trained to screen candidates based on a decade of resumes submitted to the company. But because men tend to be the dominant group in tech, Amazon’s AI unfairly impacted female job seekers. Rigorous human oversight from the start of training and throughout testing could have debiased the tech before it got to that point.