Five years ago we were all talking about machine learning. Then along came big data, paving the way for deep learning. But today's new buzzword is AI. Artificial intelligence has moved from the realms of fantasy into reality, to such an extent that it is now receiving massive injections of public funding. It even sparked the Twitter battle of the century between Elon Musk and Mark Zuckerberg, who traded a series of barbed tweets over its pros and cons. We're constantly being told that AI is taking over the world: our service industries are being Uberized, the economy is plagued by one banking crisis after another, and professions like accounting, sales and trading, and even cab driving are on the decline. Facebook's chief AI scientist, Yann LeCun, has no doubt that AI will help to improve and even save lives in the future. Google's co-founder Sergey Brin explains how "We will be able to make machines that can reason, think and do things better than we can." And Ray Kurzweil, Google's Director of Engineering, goes one step further still: "By 2045, computers will be a billion times more powerful than all of the human brains on Earth."

However intelligent these machines may be though, are they really capable of assessing and evaluating humans? What will become of recruiters? Will they find themselves out of a job, replaced by a million different algorithms?

AI Can Certainly Add Value to the Hiring Process…

As keen advocates of the use of technology in HR, we are firm believers in the benefits of AI and what it can bring to the hiring table. Undoubtedly, AI can simplify the whole process for both recruiters and candidates, smoothing the path through what can often seem like a minefield.

AI can add value in many ways: matching talent with openings, screening job seekers with the help of chatbots, and powering automated video interview platforms, to name but a few.

…But it Can Never Fully Replace Human Input

AI is already widely used in image recognition in a number of ways: lip reading, recognizing certain emotions (like happiness, sadness, or anger), and identifying age or sex. Some of the leading video interview providers are now offering automated interpretation technologies to analyze a candidate’s performance and emotions.

This interpretation analyzes three main components:

  • Content: This involves the use of speech-to-text technology to transcribe what the candidate says into text. The content is then analyzed to provide useful statistics such as the number of words used by a candidate, and the number of words per minute.
  • Tone of voice: Known in scientific terms as prosody, this consists of analyzing the variations in pitch and rhythm in a person’s voice.
  • Facial microexpressions: Assuming the video is of a high enough quality, it is also possible to detect certain facial expressions like fear, disgust, or happiness.
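To make the first component above more concrete, here is a minimal sketch of how the content statistics mentioned (word count and words per minute) might be computed from a transcript. The function name is hypothetical, and a real platform would feed in the output of an actual speech-to-text service rather than a raw string:

```python
# Minimal sketch: basic content statistics from a (hypothetical)
# speech-to-text transcript of a candidate's video answer.

def transcript_stats(transcript: str, duration_seconds: float) -> dict:
    """Return simple content statistics for one interview answer."""
    words = transcript.split()
    word_count = len(words)
    minutes = duration_seconds / 60 if duration_seconds else 0
    words_per_minute = word_count / minutes if minutes else 0.0
    return {
        "word_count": word_count,
        "words_per_minute": round(words_per_minute, 1),
    }

# Example: a 90-second answer containing 150 words
stats = transcript_stats(
    "I led a small team that rebuilt our onboarding flow " * 15,
    duration_seconds=90,
)
```

Note that statistics like these are only as good as the transcription feeding them, which is precisely the weakness discussed below.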

However, we have experienced first-hand some of the limitations of AI in interpreting candidate video interviews:


Automated Analysis Technologies Are Unreliable

If you’ve ever tried to use voice recognition on your smartphone, or YouTube’s automatic captioning function, you’ll know just how far from perfect speech-to-text technologies are. As for analyzing facial expressions, it is worth pointing out that an algorithm needs 10 to 15 million images to be able to distinguish a cat from a dog, whereas a three-year-old child needs only three. Although AI can detect basic emotions like happiness, fear, or disgust, these are rarely exhibited during a job interview. How would AI cope with detecting more complex, understated emotions? As Gwennaël Gâté, co-founder of Angus.ai, a French startup specializing in AI image and audio processing technologies, explains: “Real emotions are subtle: feelings like satisfaction and doubt are difficult enough for humans to detect, let alone for algorithms, which can really only detect beaming smiles or very obvious surprise.” And let’s not forget that the videos recorded by candidates are often of fairly poor quality, which further limits their potential for analysis.

In short, the data extracted from videos is incomplete, and sometimes downright inaccurate.

AI Takes No Account of Context

If a candidate is interrupted during a video interview, or lowers their eyes, or hesitates, this can be interpreted by AI as a sign of discomfort, embarrassment, or a lie. But it could be that the candidate has simply been disturbed by a car sounding its horn outside, a child entering the room, or a sudden shaft of bright sunlight streaming in through the window. Whereas the cause of the reaction would be obvious to a recruiter, AI is incapable of contextualizing it. And because candidates don’t conduct their video interviews in sterile environments, there is every likelihood they will be affected by external cues, meaning AI cannot be trusted to produce reliable data in such situations.

Algorithms Are Not Neutral

What conclusions can actually be drawn using data extracted from a video? Is a candidate who uses 140 words a minute more dynamic than someone who only uses 100 words? Or can this simply be put down to differing stress levels? Did one candidate perhaps simply have more to say? Or did they have poor listening skills? If one candidate talks louder than another, does this mean they are more confident? AI cannot possibly provide a scientific response to any of these questions.

One common approach to interpreting candidate performance consists of comparing data taken from applicant videos with data from videos completed by some of the company’s top performers. Cross-analysis of this data is supposedly able to predict whether a candidate is right for a particular job. But it’s not hard to imagine the pitfalls of this approach. Algorithms are far from neutral; in fact, they tend to reproduce recruiter bias, something that Amazon became only too aware of recently when it had to scrap its sexist hiring algorithm.
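The cross-analysis described above can be pictured as comparing a candidate's extracted feature vector against an average profile built from top performers' videos. The sketch below assumes such features have already been extracted and uses plain cosine similarity; all feature names and values are hypothetical:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical features: [words_per_minute, pitch_variation, smile_ratio]
top_performer_profile = [120.0, 0.4, 0.3]  # averaged from top performers' videos
candidate_features    = [100.0, 0.5, 0.2]

score = cosine_similarity(candidate_features, top_performer_profile)
# A high score would be read as "resembles our top performers" -- which is
# exactly how such a system can silently reproduce existing recruiter bias:
# it rewards similarity to past hires, not suitability for the job.
```

The design choice itself is the pitfall: whatever distance measure is used, the model optimizes for resemblance to the incumbent workforce.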

What Do Candidates Think About It All?

Perhaps the most important consideration relating to this kind of automated video analysis is, just how ethical is it?

When a candidate completes a personality test, they do so voluntarily; analyzing their personality via video without their knowledge has a completely different set of moral implications. Of course, there is the option (or should it be an obligation?) for recruiters to inform candidates about this process. But would candidates be pleased to know that their personality was going to be assessed in the space of three short minutes by an algorithm? This surely runs the risk of unsettling candidates before the interview, and could even damage a company’s reputation in the battle to hire top talent.

It seems safe to conclude, then, that although artificial intelligence clearly has extraordinary potential to enhance and streamline the hiring process, it can never replace the human element!
