AI vs. Recruiters - is this going to be a thing?
John Henry (painting by Palmer Hayden)

“John Henry was hammering on the right side,

The big steam drill on the left,

Before that steam drill could beat him down,

He hammered his fool self to death.”

- the ballad of John Henry


One of my favorite legendary figures from Americana is John Henry, champion of the working world, proud of his trade, powerful and majestic archetypal hero of the everyman laborer. Would that I could match his work ethic and heart. But his legend comes from a real inflection point in history, when the average worker was at risk of being replaced by machines, changing the landscape of industry forever.

We're at such an inflection point again with the advent of GenAI, as we witness a very real quest for the "singularity" of Artificial Intelligence. And like many other corners of business, my own world of staffing has a question that keeps forming on everyone's lips: Will AI replace recruiters?

Let's examine what that might look like from two perspectives: As-Is and To-Be in a world with Artificial Intelligence.

 

As-Is:

Artificial Intelligence has certainly become more conversational and intuitive to use. But the term Artificial Intelligence continues to be a bit of a misnomer. There is no reasoning behind the code yet. There's no judgment, no sense of ethics, no intuition, and certainly no self-awareness. It mimics conversation and learns how to provide increasingly relevant information (yes, with the odd bit of hallucinatory content, but that will improve over time, hence the learning part).

There are chatbots and there is GenAI. To be clear, these are two different things, but no doubt the gap is closing, such that soon every chatbot will be driven by a large language model and able to do more and more impressive things. But as we move further into this world, there will invariably be regulation, or at least self-regulation, to keep companies who use these tools out of trouble. One such requirement: it behooves companies to be transparent about their use of AI or chatbots so as not to be seen as tricking customers and partners about who they're doing business with. So I have no doubt people will continue to enter these conversations with open eyes, knowing they're talking with a robot of some sort.

So let's think about your experiences with that so far. No matter how good the AI interface is, no matter how fun it is to engage it in conversation, do you feel comfortable trusting it with important tasks and information, such as your personal information or your career? Your compensation goals? The fact that you're even looking for a job?

How many times have you been secretly disappointed that the customer service chatbot that pops up is your first point of contact? Let's say you're upset about a service or a product you've invested a fair amount of money in. Aren't you instinctively going to try to drive past the chatbot, or even a cutting-edge AI, to get to a real, accountable human being? Is your career worth any less than that pricey flat screen you purchased?

Point being: if it's important, you want to work with a human. Humans can relate on a personal level. Humans feel accountability. Humans have the capacity (if not always the will) to make something wrong right. And humans have the ability to understand who you are and what your value is.

The As-Is state of AI does not measure up to the standards we hold when important topics are on the line. It might be sufficient if you want to return some yogurt to Walmart. It's not sufficient if you want to find the next step in your career.

And as for the specific tasks involved in recruiting talent, it can't perform any of the complex work a recruiter does. It can't negotiate salaries. It can't identify the pitfalls of compliance. It can't spot that certain spark in a candidate who might otherwise fall outside the hiring manager's profile, and it can't justify that spark to any other stakeholder. It can't have empathy for a candidate and advocate for their experience. It can't apply creative approaches to sourcing candidates. It not only fails to engender trust, it simply can't do the heavy lifting of the job.

No doubt you may be thinking I'm prematurely lauding some staffing version of our legendary hero John Henry as he puts the steam drill to shame. I'm also cognizant of what happened to John at the end of that story. Trust me, we aren't there yet. I'll beat the steam drill, and I'll live to do it again.

To-Be:

Artificial Intelligence will continue to make impressive strides at an alarming rate. We common folk don't even understand what's around the corner, but it's clear the captains of industry do, and they're excited and terrified at the same time. I'm not sure exactly what nightmare Skynet scenario they're concerned about, but it's ended up in front of Congress and on the minds of everyone who has a chip in the game. It's not just going to make everyone a lot of money; it's clear it's going to change the world, perhaps more than the Internet did.

So let's assume that the "singularity" (the moment, by the way, when AI becomes self-aware and sentient) is on the horizon, perhaps in the next decade. What changes about your need to entrust important information and tasks to a trustworthy agent?

In the future, what if AI:

  • Has the mental dexterity to negotiate a salary package that all parties feel good about?
  • Can read a candidate's personal fears and desires about their career and work to address those?
  • Can identify a nuance in the candidate's personality that makes them a fit for the culture of your company?
  • Can apply increasingly accurate algorithms to identify top talent from multiple sources - far more sources than a human could handle at the same time?
  • Can communicate in such a convincingly human way that you can almost forget you're talking to a machine?

Almost...

Except there's that original premise I brought up. Ethically and legally, companies will still likely have to reveal who is human and what is machine. To do any less might be to invite litigation.

Do you trust AI with your career?  Do hiring managers trust AI to think in the best interest of the company? Does AI have any business making decisions about people?  Or do we instinctively want, actually demand, that decisions made about human beings be made by other human beings, even if they're flawed and can't work at the same pace as the electronic mind?

Here's what I think. The answer will be somewhere in between. Right now, it's legally problematic for AI to make any decisions about candidates. For instance, at Stack Overflow, all those decisions are made by 100% human brains. In the future, companies may be allowed to use AI to do some of the initial heavy lifting in identifying possible passive candidates (note I didn't say applicants) from various sources. But every step thereafter will invariably involve fallible flesh and blood, because candidates and many hiring managers will demand it.

It's appropriately about Human Nature.  Humans don't want anyone other than a human across the virtual table from them when it comes to making a living in this world.  Because the alternative is a world without trust.  No business of any kind can take place without trust.


Good luck and happy hunting out there.
