Imagine ye olden hero of antiquity, Theseus.
Amongst his chopping up of Minotaurs and other shenanigans, he had a well-maintained boat.
It was probably his Dad’s boat, so he wouldn’t mind spending more on fixing it than leasing a brand new x-schooner.
Every time something broke, he’d replace it.
Eventually, every part would have been swapped out in the endeavour of having an optimal ship.
At no point was the ship itself replaced, only its parts.
Yet, if none of the original ship remains, is it still his Dad’s?
And if it isn’t, wouldn’t it have been better to just buy a new one in the first place?
You probably know by now, I’m prone to naval gazing, seeking to inform better decisions.
What may surprise you is that I’m also a technophile, given that much of my content is about the philosophy, psychology, process, principle and people of recruitment, offering criticism and scepticism about how tech is deployed.
I love keeping abreast of and playing with new tech, but I’m a slow adopter for two simple reasons:
- What problem does this solve?
- What are the unexpected consequences or byproducts?
In 2000, after graduating, I joined my dad’s exec search business full-time - he urgently needed help keeping the ship on an even keel, for obscure reasons I won’t go into.
I’ve tried googling them, and they aren’t easy to find.
I’d already supported his business during my gap year and school holidays, even updating filing cabinets when I was 16, at a weekend.
Having recently graduated with a Maths degree and a geeky tech itch to scratch, I partially digitised his paper-based office through the simple implementation of ‘DT search’ and macros for parsing applications from Outlook.
Between that and improving their pre-existing Access database (nightmares of one-to-many) and their skills thesaurus, we had a functioning, if ugly, ATS.
I wrote a ‘playbook’, or as we liked to call it, a ‘document’, on how to use Boolean searches effectively in DT search and on job boards. LinkedIn hadn’t been invented yet.
I learnt rudimentary HTML coding to enhance our website, and how to configure NT 4.0 across our 13 PCs, two locations and home offices, with technical support for a variety of user issues. Chris(topher) really wanted a laptop.
I also handled antivirus configuration, updates and strategy, which became particularly salient following the Anna Kournikova incident.
We saved a lot of admin time and improved searchability for potential candidates, and I repeated the exercise a couple of years later on a temporary assignment for another search firm.
Saving time and enabling work is one of tech’s great opportunities, something much lauded in AI marketing.
When I haven’t been in recruitment, I’ve been in consulting sales. In 2003, we had a wonderful purpose-built CRM, integrated with what was then an innovative automated email system that brought in many inbound leads.
Yet I noticed something interesting - by relying on the CRM for notes, I became a CRM-driven salesperson.
“Not in, left message, try again in three days.”
“Not interested for now, happy for a follow-up call in six months.”
Then I’d call in six months, and the prospect had bought a competitor’s product three months earlier because I was crap at sales.
What might have happened if I hadn’t thought writing notes in a CRM was a key part of my job? I was 24 at the time, so I had a lot to learn.
Such as, what if you didn’t use a CRM at all and instead found a better system for qualifying and disqualifying in transactional prospecting?
Of course, a blend of the two is better.
Flash forward a couple of decades, and AI is the razzle dazzle destined to replace every part of the recruitment industry, making it shipshape for maximum productivity.
Or is it?
Last week, I ran a short experiment rewriting some of my top LinkedIn posts with different AIs.
I was curious about the output, whether people would notice and how it might affect engagement.
It's too short an experiment for any robust analysis, but it was an interesting exercise nonetheless.
Some figures:
Post 1: "Making it easy for candidates to get in touch"
Original - 520k views, 487 likes, 129 comments
ChatGPT - 49k views, 92 likes, 25 comments
Post 2: "He turned down a £55k offer for one at £50k"
Original - 193k views, 90 likes, 24 comments
Gemini - 12k views, 36 likes, 24 comments
Post 3: "Why 400 applications isn't what you think"
Original - 90k views, 100 likes, 37 comments
Claude - 2k views, 22 likes, 16 comments
(Last year, LinkedIn seemed to put a weird priority on my posts, given the likes and comments are low for that level of views, although who even knows what an impression is anyway)
I wrote an original post in the middle, which garnered 5k views.
Takeaways:
- I kept to basic prompting and had to rewrite the AI posts to make sense. Each AI shaved the edges off and added unnecessary wordage. They all deleted situational insight that added nuance to the posts
- Claude produced the best output and seemed more responsive to adjusted prompts
- Engagement is significantly down across my posts, so the AI posts' lower figures don't mean much on their own; in fact, they performed better than my current average (normal posts were 6k views last year, less than 1k at the moment)
- The time saved in AI-ifying was lost in editing
- Only one person commented on the content being bot-written. What about those who didn’t comment? I chatted to Daryl Hewison about it privately, and he said it was obviously AI-augmented.
Using AI in this way made me feel icky, and I took a little time to reflect on why.
I see words as the consequence of thoughts and systems of ideas.
Writing them with meaning for your audience requires you to process these in a way that challenges and embeds them.
In a sense, you have to interpret your own words and translate them into how others might read them.
As my daughter wonderfully put it, "Can I teach you what I've learnt to help me better learn?"
This, for me, is the major flaw of ‘AI’ right now.
The more we abdicate creative thought in favour of efficiency, the less we learn and sharpen our own ideas, losing the opportunity to improve what's within.
If situational insight is required for effective engagement, AI currently lacks the tools to find this. It may happen with more sophisticated prompting, but why not spend that effort in creation instead?
Critical thinking.
Maybe that isn't important.
AI may be great for administration and automating manual tasks, but not so much where relationships are key.
So for advertising and creative communications, such as those used in recruitment to start conversations, it may facilitate better generic content but remains flawed.
Let’s face it: much AI content is better than what the average human writes, even if it feels artificially intelligent.
It’s based on averaged common human content, configured against your prompt, and is much faster to generate, if content alone is the priority.
My priority is to bring the right people forward, and everything I write about commercially has the intent of doing that - whether that’s an advert, any follow-up, or something else.
I don’t worry about wholly unsuitable readers, whose level of investment I reciprocate.
It doesn’t matter how many wholly unsuitable applications I receive; what matters is starting conversations with qualified candidates.
Automation can save a lot of time in rejecting unsuitable candidates, and there’s an argument that AI can tailor feedback individually.
But I don’t think this is a problem for me to solve.
Wholly unsuitable applicants, at the application stage, get a templated email saying they did not meet the criteria set out in the advert, an invitation to reply if they disagree, and a link to my free job search resources.
I can do this because my adverts are written specifically, with clear suitability criteria.
I reciprocate their level of investment.
Anyone past the application stage gets a call or email from me, depending on their preference, out of a duty of care.
Tailored automated messages aren’t needed.
Content isn’t just about advert writing - what about how we represent candidates?
Copywriting skill holds here: our assessment of a candidate is the brief, and how we represent them (whether in writing, on a call or in a video) is the copy.
Meanwhile, interview arrangements, company info, even contracts are an opportunity to cement and build relationships if they work on the principles of timeliness, clarity and answering questions before they are asked (you already know most of the questions your audience should ask, so why not answer them up front?).
Whether AI currently has substantive value in content depends, for me, on how important candidate relationships are to the recruitment process.
Does building trust and loyalty affect the outcomes you want to reach?
If the answer is genuinely yes, then your process needs to be built around this.
If the answer is no, such as in common-skill business-as-usual recruitment, where it doesn’t matter who does the job because the job itself is all that matters,
then you can get away with taking people out of the recruitment process and fully automating it.
It’s a challenge at scale - with under-resourced TA teams managing volume vacancies, automation is key. It’s also a prophecy that may fulfil itself.
The argument currently is that people won’t trust AI or automation. But AI will continue to improve, and, generally speaking, what is the quality of experience candidates get from people?
A couple of examples that we may have experienced outside of recruitment:
I buy transactional items from humanless systems like Amazon. If I need to touch and feel something to trust a purchase, the final point of sale will be with people.
I go to Abbeygate Cinema for the experience. The bar downstairs is buzzing, they serve great food, the screens are small with comfy chairs - it’s quite inexpensive, mainly because they get their money from food and drink.
I don’t go to Cineworld because it’s soulless - I’d rather wait for it to come out on Netflix - plus then I don’t have to hang out near people. #winning
How many endeavours have gone from human-centric to fully automated, without us noticing?
It’s already been done in recruitment.
Many times.
Well before AI.
Such as an experiment Amazon ran a few years back with a humanless recruitment process for Data Scientists, according to my new ex-Amazon TA friend.
Here’s the rub. If we can fully automate recruitment, which people are actually needed to run it?
The more human parts you take out of recruitment, the less human is needed.
What will a recruiter look like when there are no manual inputs, irrespective of the outcomes achieved?
Will we still be recruiters, or are we greater than the sum of our parts?
Or will we be configuration managers and business analysts, specialising in recruitment process?
Ah, but technology will allow us to focus on the human parts of recruitment, you may say.
How long for?
For my part, I am concentrating on being the Abbeygate Cinema of recruitment, the artisanal baker (not the craft brewer, that’s taken), the curated experience, because I think this will have a longer shelf life.
And because the feedback I get from candidates, jobseekers and clients inspires me to do more.
I’ll look at the opportunities AI provides to enable my work, such as data scraping for technical searches, automated market mapping, and better AI sourcing.
These things aren’t ready yet, IMO, so I’m upskilling my sourcing instead to find what they miss.
I won’t be investing in AI transcription (interpretation is more important), AI interviewing (it’s behaviour over time for me, combined with ‘what will candidates think’) and other areas - I’ll explain why in more detail over the following weeks, though you can glean why from this article.
Maybe I’m wrong, maybe we should all dive headlong into AI, but I’m happy to rock the boat to see what falls out.
Thanks for reading.
Regards,
Greg
p.s. Hi Ross, how’s your week been?