We are all human, and so are recruiters. Imagine a face-to-face meeting with a candidate applying to become your new sales director. You, the recruiter, are quite punctual and a well-organized clean-desk fan. The candidate, nervous to make a good impression, fiddles with a sugar cube, leaving crumbs and coffee drips on your white desk. Honestly, will this (irrelevant?) behavior influence your idea of the candidate's eligibility, even though he or she ticks all the boxes on the job profile?
Even though most recruiters will most likely not be biased 'on purpose', bias is unavoidable in decision making based on human interaction. Artificial Intelligence does not take your race, your smell, your habits and the like into account when screening your skills and matching them to job profiles. According to its believers, AI excludes arbitrariness. As such, AI is far more objective, and hence 'fairer', than humans can ever be: it uses objective tools such as deep learning and data analytics to draw its conclusions.
Seasoning our dishes
Hurray. Or not? xMP, Actonomy's flagship software tool for searching and matching jobs and candidates using artificial intelligence, large ontologies and self-learning techniques, looks at candidates and jobs in a 100% objective way: each candidate is treated in exactly the same manner. In most cases, this is the best thing to do. On the other hand, some personal elements will not be taken into account even though they could affect the final decision. Certain skills, such as quick learning or a high level of empathy, typically go unused by AI-based systems. A friend of ours was told by recruiters that he was a job hopper. In reality, he was a fast learner, always eager to explore new professional horizons. His CV, packed with different employers, would probably not be rewarded by AI's objectiveness alone. A pity for him and, probably, for his potential new employers.
Machine learning, by definition, strives for the average, the numerical mean. Extremes at either end will not be rewarded: being unbiased intrinsically implies favoring averageness. But do not misunderstand us: we are not against unbiased decision making; we are simply asking how much relevant information about a candidate we are missing by calculating means. Would it not be wise to somehow also use the sometimes fuzzy domain of subjective elements, skills and values that make candidates differ from one another? Isn't it precisely this subjectivity of individuals that is seasoning our dishes?
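The pull toward the mean can be made concrete with a tiny sketch. The candidate scores below are invented purely for illustration; the point is a general one: the single prediction that minimizes mean squared error is the mean itself, so any model optimized that way systematically undervalues the outlier.

```python
# Minimal illustration: a predictor trained to minimize squared error
# converges on the mean, so exceptional candidates are undervalued.

# Hypothetical "suitability" scores; the last candidate is the outlier.
scores = [6.0, 6.5, 7.0, 6.8, 9.8]

# The constant prediction minimizing mean squared error is the mean.
best_constant = round(sum(scores) / len(scores), 2)  # 7.22

# How far each candidate sits from the mean-based prediction:
errors = [round(s - best_constant, 2) for s in scores]

print(best_constant)  # 7.22
print(errors)         # [-1.22, -0.72, -0.22, -0.42, 2.58]
```

The exceptional candidate is under-predicted by 2.58 points, more than anyone else: exactly the "averageness" effect described above.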
Maybe we should draw a comparison with politics. Living in a country with a limited number of political parties and families, we quickly generalize and leave the fine-grained details aside. If you like trees, you vote green. If you want a basic income for all, you vote socialist. Liberals are capitalists, and capitalists do not like trees. And so on. Obviously, reality is far more nuanced. Systems that consistently analyze large amounts of data and average them will typically miss many of these far from unimportant details. The more data machine learning is based on, the more 'average' the decisions you will get. Once again, without drawing conclusions ourselves: does the pursuit of objectiveness lead to the loss of uniqueness?
Take the case of a large European recruitment agency with hundreds of thousands of résumés and heaps of assessment reports. Using data analytics, it can now predict, solely on the basis of a CV, who is most likely to get a concrete job offer, making the assessment phase almost obsolete. Now, how much sense does this make? From a technical point of view, the approach is fully understandable. But as humans, we are left with a rather philosophical question: does it lead to the loss of each individual's uniqueness, the uniqueness that could flavor the overall taste of the labour market? It is fair to say that computers play an ever more important role in the (automated) matching of jobs and candidates in an unbiased way. But to some extent we might want to keep a certain level of bias in order to maintain the subjective character of humans.
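A sketch of why such CV-only prediction worries us. The training data and the "job hopper" threshold below are entirely invented; they stand in for whatever features a real agency would extract. A predictor built purely on historical outcomes can only reproduce the patterns of the past:

```python
# Sketch: a purely data-driven predictor inherits historical patterns.
# Hypothetical history: (number of past employers, got a job offer?)
history = [
    (2, True), (2, True), (3, True), (2, False),
    (6, False), (7, False), (6, False), (5, True),
]

def offer_rate(is_hopper):
    """Historical offer rate for 'job hoppers' (>= 5 employers) vs. others."""
    outcomes = [offer for n, offer in history if (n >= 5) == is_hopper]
    return sum(outcomes) / len(outcomes)

# A fast learner with many past employers inherits the low historical
# rate of the hopper bucket, regardless of WHY the CV looks that way.
print(offer_rate(False))  # 0.75
print(offer_rate(True))   # 0.25
```

The fast learner from the anecdote above would land in the 0.25 bucket: the model has no notion of the subjective qualities that made his CV look the way it does.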
Just asking …