In today’s business world, two powerful trends in employee recruitment seem to be working against each other. The first is an emphasis on culture. Many of the world’s leading companies are placing greater weight on character over skill set and upside over experience, both during and after the hiring process. Former CEO Tom Monahan probably captured it best when he said, “Find ballplayers, not those who look good in baseball hats.”
The second trend is the growing presence of AI. According to research from Cornell University, 18 companies globally currently offer AI-driven recruitment software, with funding ranging from $1 million to $93 million. The prospect of having an algorithm automatically screen resumes, rank candidates and schedule them for interviews is undeniably appealing, but it raises an interesting question: can we code for culture?
AI for the hiring process
While candidates are not yet shaking hands with C-3PO in their interviews, it’s a mistake to think of AI as belonging to the distant future. Software such as Mya can already reach out to both passive and active candidates, as well as interview and follow up with them via a conversational chatbot. Other programs, like TapRecruit, help optimise job postings by analysing the title and wording of the listing and offering more targeted suggestions.
These systems claim to improve the experience for both employers and jobseekers, by more effectively matching the right people to the right role.
The most significant claim of these AI systems is that they eliminate bias from the hiring process. The argument runs something like this: if we cut the human element out of recruitment, we also eliminate the preconceptions that prevent us from hiring the best person for the job, which is essential to building a diverse and creative workplace. If recruitment becomes a technical, rational and computational process, surely it will also become a fairer one. Right?
Wrong(ish). My contention is that, as the last decade of AI development has shown, technology can’t be so easily separated from the society that created it. In 2015, Google’s Photos app mistakenly classified images of black people as gorillas, while Nikon’s camera software misread images of Asian people as blinking.
These problems are not limited to photos. In 2018, the Washington Post published a report which found that Amazon’s Alexa made 30% more mistakes when given instructions by someone with a non-native accent.
Other areas of AI
Reflecting on these examples, Kate Crawford, a lead researcher at Microsoft and co-director of the AI Now Institute, said, “Predictive programs are only as good as the data they are trained on, and that data has a complex history.” Crawford points to the lack of diversity in the AI industry, which is 88% male and predominantly white. She continues, “Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters.”
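Crawford’s point can be made concrete with a toy sketch. The data, groups and hire rates below are entirely hypothetical, not drawn from any real vendor’s system: a model “trained” on historical hiring decisions that favoured one group at identical skill levels will simply reproduce that skew when scoring new, equally qualified candidates.

```python
# A minimal, hypothetical sketch of how a predictive model inherits
# bias from its training data. Suppose past hiring decisions favoured
# "group_a" over "group_b" even at identical skill levels.
from collections import defaultdict

# Synthetic historical records: (group, skill_score, was_hired)
history = [
    ("group_a", 7, True), ("group_a", 7, True), ("group_a", 7, False),
    ("group_b", 7, True), ("group_b", 7, False), ("group_b", 7, False),
]

# "Train" the simplest possible model: the historical hire rate
# per group, ignoring that every candidate has the same skill score.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, skill, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predicted_hire_probability(group):
    hired, total = counts[group]
    return hired / total

# Two identical candidates from different groups: the model replicates
# the historical skew rather than judging skill alone.
print(round(predicted_hire_probability("group_a"), 2))  # 0.67
print(round(predicted_hire_probability("group_b"), 2))  # 0.33
```

No explicit rule about group membership was ever written into the model; the bias arrives entirely through the training data, which is precisely Crawford’s “complex history”.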
For business leaders considering AI in their hiring process, Crawford’s comments are worth bearing in mind. We need to be wary of the idea that technology will offer a ready-made solution to human bias. I believe that if we’re going to implement AI, we should use it to improve rather than replace the human element of the hiring process.
If screening resumes with AI creates more time for in-depth interviews with the best candidates, then that is a positive. The best way to eliminate bias is to give applicants the best possible opportunity to showcase their abilities. At Finder, we make a point of hiring people, not profiles; creativity, not categories; and disruptors, not conformers. We don’t hire someone to complete a task, we hire them to pursue a vision.
A recent study forecast that 85% of the jobs people will be doing in 2030 don’t exist yet. If businesses want to take advantage of new industries and opportunities over the next decade, they should be wary of cutting humans out of the hiring process altogether.