You might have seen this video around the web over the past few weeks. It’s a new technology being developed and tested in Sweden called “Tengai”:
Think about the last time you interviewed a candidate. What was your initial reaction to them, before they even opened their mouth?
It was probably something like this:
- “They are decent looking enough”
- “Shorter than I thought, but skinny”
- “That hair could use a trim”
- “Smells good”
- “Dress choice was on point, very flattering”
- “Nice smile, probably could whiten those teeth a bit”
- “Love the shoes!”
All of that happened in the first ten seconds. Then you shook their hand, and it was damp because they were nervous, and you were disgusted, and in that moment you had pretty much decided not to hire them.
Welcome to the reality of how humans interview.
This is why I believe something like Tengai has promise. Now, we love to believe that a robot will not have any bias, but we know that with machine learning, robots can and will learn bias if left unchecked.
What robots will not do is judge us on all this superficial stuff! They will judge us on the quality of our answers, our intelligence, our word choice, how long we take to respond compared to others answering the same question, our facial expressions at every moment, the inflection in our voice when we might be exaggerating, and so on.
Humans love to interview based on feel.
We lie and say we don’t. We do behavioral interview training and claim we’re looking at past performance, all the while thinking as the candidate speaks, “Will her personality work with the team? She might come off too strong. Ugh, I don’t want the team fighting with the new person. I do need a strong female on my team, though. Her hair is really great. Plus she went to State, and I love State, and she mentioned she likes dogs, and I like dogs…” Oh, yes, great answer to that technical question I just asked.
A robot would not care about fit.
That’s hard for humans because we know fit matters. We also know that fit inherently causes our worst biases to come out.
We won’t ever come out and say the reason we aren’t picking you is that we didn’t like how you used slang and sounded “ghetto” in that one answer. That would be biased. Instead, we’ll tell you that your experience was good, but we found someone else who had “better experience.”
A robot interviewer will move forward the best candidates based on data and the criteria you set. It will then be up to you to inject bias somewhere else in the process and screw up the robot’s selection!
What do you think? Can or will a robot interview better than a human?