This series is changing over time, as I read more about AI and as more news comes out.
I had planned a back-and-forth, with the People argument counter-balanced by the AI’s corner.
I had planned for this edition to cover the ethics of AI, particularly around the use of IP and data scraping, and maybe environmental impact.
However, Mobley v. Workday is a notable event, and a sign of things to come.
But first.
A couple of weeks back I made a family favourite - Spinach and Artichoke dip. It’s not as healthy as it sounds, and comes from a recipe my wife brought when she came to the UK from Canada twenty-odd years ago.
A recipe which uses a devilish measure - cups - rather than grams.
And so I googled a conversion for each step of the recipe, and the result was slightly more watery than it should have been.
On Sunday I had another crack, and this time I nailed it. And yet I noticed something curious: the AI results I relied on were different each time, despite my search being exactly the same.
A 20% difference in two steps might not seem much, and yet our experience was vastly different.
I don’t know about you, but I rely on automated calculation and computation to return consistent and reliable results.
Imagine providing such a tax return to HMRC!
And what about the recipes that make up recruitment?
Back to the subject at hand - I encourage you to do your own reading on the class action lawsuit, as there is much good analysis out there already, including an excellent article from Jan Tegze.
For the purpose of this post there are three notable points:
That AI is trained by our preferences - it’s a warped mirror of our intent.
Consider how, if we have bias in our process, an ‘unbiased’ platform which gives us what it thinks we want may become biased purely through our unintentional training.
We’ve seen this time and time again with LLMs - “Tell me some famous philosophers”; “How biased are these results?”
That AI may not consistently give the same results each and every time you give an instruction.
While my dip might not be life-changing (it is), the outcome was significantly different across the two iterations.
How might bias and inconsistency lead to problems at scale?
That how end users experience AI may not relate to how it’s programmed.
I would love to analyse Mobley’s applications, his CV and his career.
In my job seeker support work, I talk to a lot of people who’ve had similar experiences, including with hidden disabilities.
While they do experience systemic issues, there are also many times when what they describe amounts to not being a qualified applicant - my advice here is not to apply if you aren’t qualified, because it improves everyone’s experience.
To this last point, I can’t help but think that candidate resentment and bad advice from whiffy careers coaches (the ATS ate your CV / must be 103% compliant, etc.) are factors which aren’t directly related to Workday, the employer, or hiring in general.
From what I understand, the primary issues may be filter questions and how applications are ranked, both of which are human configuration choices.
And so ironically, I don’t think the problem in this case is AI at all - it’s people.
For AI to be effectively deployed, and for bias to be removed, it would rely on its configuration and design being wholly unbiased.
It seems to me this case highlights the real problem with technology.
That it papers over the cracks in poor process, rather than solving problems at root.
And so, as AI solutions ramp up, we’ll see more and more unexpected consequences.
Because AI output remains what we see in a hall of mirrors, reflecting an imperfect society.
Thanks for reading.
Regards,
Greg