Paul Cantrell

There’s a lot to chew on in this short article (ht @ajsadauskas):
bbc.com/worklife/article/20240

“An AI resume screener…trained on CVs of employees already at the firm” gave candidates extra marks if they listed male-associated sports, and downgraded female-associated sports.

Bias like this is enraging, but completely unsurprising to anybody who knows half a thing about how machine learning works. Which apparently doesn’t include a lot of execs and HR folks.

1/

Paul Cantrell

Years ago, before the current AI craze, I helped a student prepare a talk on an AI project. Her team asked whether it’s possible to distinguish rooms with positive vs. negative affect — “That place is so nice / so depressing” — using the room’s color palette alone.

They gathered various photos of rooms on campus, and manually tagged them as having positive or negative affect. They wrote software to extract color palettes. And they trained an ML system on that dataset.

2/
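A minimal sketch of what such a pipeline might look like. Everything here is invented for illustration (the feature choice, the classifier, and the synthetic "photos"): reduce each image to an average color, then train the simplest possible nearest-centroid classifier on the tagged examples.

```python
# Hypothetical sketch of the team's approach: collapse each room photo into a
# coarse color feature (here, just the mean RGB of its pixels) and classify
# by nearest centroid. Real images are replaced with tiny synthetic pixel lists.

def mean_rgb(pixels):
    """Collapse a list of (r, g, b) pixels into one average color."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def train_nearest_centroid(examples):
    """examples: list of (pixels, label). Returns one centroid color per label."""
    sums, counts = {}, {}
    for pixels, label in examples:
        avg = mean_rgb(pixels)
        s = sums.setdefault(label, [0.0, 0.0, 0.0])
        for i in range(3):
            s[i] += avg[i]
        counts[label] = counts.get(label, 0) + 1
    return {lab: tuple(v / counts[lab] for v in s) for lab, s in sums.items()}

def classify(centroids, pixels):
    """Assign the label whose centroid color is closest to this photo's average."""
    avg = mean_rgb(pixels)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(avg, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Toy data: "positive" rooms skew bright, "negative" rooms skew dim.
positive = [([(200, 180, 160)] * 4, "positive"), ([(220, 200, 170)] * 4, "positive")]
negative = [([(80, 80, 90)] * 4, "negative"), ([(60, 70, 75)] * 4, "negative")]

centroids = train_nearest_centroid(positive + negative)
print(classify(centroids, [(210, 190, 165)] * 4))  # → positive
```

The point isn't the particular classifier; any learner fed these features will latch onto whatever color pattern separates the two tagged groups.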

Paul Cantrell

@flowchainsenseisocial I have not. I take it the relevance is the environment → affect connection?

Paul Cantrell

Guess what? Their software succeeded!…at identifying photos taken by Macalester’s admissions dept.

It turns out that all the publicity photos, massaged and prepped for recruiting material, had more vivid colors than the photos the students took themselves. And they’d mostly used publicity photos for the “happy” rooms and their own photos for the “sad” rooms (which generally aren’t in publicity materials).

They’d encoded a bias in their dataset, and machine learning dutifully picked up the pattern.

Oops.

3/
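The failure mode is easy to reproduce in miniature. In this fabricated dataset, every “positive” example happens to come from a high-saturation publicity photo, so a trivial saturation threshold scores perfectly without learning anything about the rooms themselves:

```python
# Illustrative reconstruction of the confounder (all numbers invented):
# the affect label is perfectly correlated with the photo's *source*, so a
# model keying on saturation gets 100% accuracy while only detecting who
# took the photo.

# (saturation, photo_source, affect_label)
dataset = [
    (0.85, "publicity", "positive"),
    (0.90, "publicity", "positive"),
    (0.35, "student",   "negative"),
    (0.40, "student",   "negative"),
]

def predict(saturation, threshold=0.6):
    """A one-feature 'model': vivid photo → happy room."""
    return "positive" if saturation > threshold else "negative"

accuracy = sum(predict(s) == y for s, _, y in dataset) / len(dataset)
print(accuracy)  # → 1.0, but the rule only detects the photo's source
```

On a dataset where label and confounder are perfectly entangled, no evaluation on that same data can tell the two apart.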

Paul Cantrell

The student had a dilemma: she had to present her research, but the results sucked! the project failed! she was embarrassed! Should she try to fix it at the last minute?? Rush a totally different project?!?

I nipped that in the bud. “You have a •great• presentation here.” Failure is fascinating. Bad results are fascinating. And people •need• to understand how these AI / ML systems break.

4/

DELETED

@inthehands thanks for sharing Paul, these studies are invaluable. A scientist’s job isn’t to “prove” something works such that disproving is a failure; it’s to take a hypothesis, test it, and then report the results.

I was wondering about the circumstances though: wouldn’t the results have been invalidated from the start due to “manual tagging”? That’s already bias in your dataset; your AI can only decide what the people who tagged it think a good room looks like. Or is that expected/accepted/ignored because that’s just how things are built?

Paul Cantrell

@james It’s an approach called “supervised learning:”

en.wikipedia.org/wiki/Supervis

It can be totally valid. The trick (well, the first one, and after that I’m out of my depth) is that you can’t evaluate the results against the training data, so you train the system on only X% of your tagged data, then check how well it matches the desired output for the remaining Y% it hasn’t “seen” before.
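A bare-bones illustration of that holdout idea (plain Python, invented data): shuffle the tagged examples, train on 80%, and evaluate only on the 20% the model has never “seen.”

```python
# Minimal train/test split: the held-out slice is the only honest place to
# measure how well a supervised model generalizes.
import random

def train_test_split(data, test_fraction=0.2, seed=42):
    """Shuffle deterministically, then cut into train and test slices."""
    data = list(data)
    random.Random(seed).shuffle(data)
    cut = int(len(data) * (1 - test_fraction))
    return data[:cut], data[cut:]

# Hypothetical tagged dataset: (photo id, affect label).
tagged = [(f"photo_{i}", "positive" if i % 2 else "negative") for i in range(10)]

train, test = train_test_split(tagged)
print(len(train), len(test))  # → 8 2
```

Note this only catches overfitting to particular examples; it can’t catch the confounder in the story above, because the bias (publicity vs. student photos) is present in the held-out slice too.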

Aaron

@james @inthehands I have seen some efforts to identify, quantify, and mitigate bias in the human-generated labels, if that's what you're getting at. I would say, yes, there will *always* be bias in manually tagged data. The question is, do the biases present in that data affect the job you want the model to do? Often the only source of truth for whether a task has been performed correctly is human judgment. In those cases, we can identify secondary biases (like gender or race in hiring decisions) that we want to specifically mitigate, but what we are training the model to learn is literally a bias itself, e.g. the bias towards candidates that hiring managers think will do well in the position.

Dawn Ahukanna

@inthehands @james
Observations:
1. There are not enough (disposable) developers to churn out code, so use “1-shot-imprint statistical engine”[1SISE] to generate all the code we want, how hard can it be?
2. There are not enough (disposable) data scientists & ML engineers to supervise “imprinting”, so use the entire internet for “1SISE”. Job done, right?
3. There are not enough (disposable) “natural resources” to power the “1SISE”. Oops!
4. “1SISE” only has 1 “biased” perspective, quel surprise!

Paul Cantrell

She dutifully gave the talk on the project as is, complete with the rug pull at the end: “Here’s our results! They’re so broken! Look, it learned the bias in our dataset! Surprise!” It got an audible reaction from the audience. People •loved• her talk.

I wish there had been some HR folks at her talk.

Train an AI on your discriminatory hiring practices, and guess what it learns? That should be a rhetorical question, but I’ll spell it out: it learns how to infer the gender of applicants.

5/

Paul Cantrell

An interesting angle I’m sure someone is studying properly: when we feed these tabula rasa ML systems a bunch of data about the world as it is, and they come back puking out patterns of discrimination; can that serve as •evidence of bias• not just in AI, but in •society itself•?

If training an ML system on a company’s past hiring decision makes it think that baseball > softball for an office job, isn’t that compelling evidence of hiring discrimination?

6/
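One way a researcher might start quantifying this (with entirely fabricated data, not from the article): check whether a feature that should be job-irrelevant, like which sport a résumé lists, predicts the company’s historical hiring outcomes.

```python
# Toy audit: if a job-irrelevant feature predicts past hiring decisions,
# that's a red flag for discrimination in the process that generated the
# data — before any model is ever trained on it. All rows are fabricated.

past_hires = [
    # (sport_listed, hired)
    ("baseball", True), ("baseball", True), ("baseball", False),
    ("softball", False), ("softball", False), ("softball", True),
]

def hire_rate(sport):
    """Fraction of applicants listing this sport who were hired."""
    rows = [hired for s, hired in past_hires if s == sport]
    return sum(rows) / len(rows)

gap = hire_rate("baseball") - hire_rate("softball")
print(round(gap, 2))  # → 0.33: a job-irrelevant feature predicts the outcome
```

A real study would of course need controls for confounders and a proper significance test; the sketch only shows where the question lives in the data.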

Jack Jackson

@inthehands interesting proposition - which would, I imagine, be responded to with goalpost-moving or No True Scotsman-ing from True Believers if you actually tried it.

Paul Cantrell

@scubbo Indeed, which is why it needs to be studied by some researcher (which to be clear is not me) qualified to investigate the question in a robust way that withstands scrutiny.

Paul Cantrell

There’s an ugly question hovering over that previous post: What if the men •are• intrinsically better? What if discrimination is correct?? What if the AI, with its Perfect Machine Logic, is bypassing all the DEI woke whatever to find The Actual Truth??!?

Um, yeah…no.

A delightful tidbit from the article: a researcher studying a hiring AI “received a high rating in the interview, despite speaking nonsense German when she was supposed to be speaking English.”

These systems are garbage.

7/

Paul Cantrell

I mean, maaaaaaybe AI can help with applicant screening, but I’d need to see some •damn• good evidence that the net effect is positive. Identifying and countering training set bias, evaluating results, teasing out confounders and false successes — these are •hard• problems, problems that researchers work long months and years to overcome.

Do I believe for a hot minute that companies selling these hiring AIs are properly doing that work? No. No, I do not.

8/

Paul Cantrell

AI’s Shiny New Thing promise of “your expensive employees are suddenly replaceable” is just too much of a candy / crack cocaine / FOMO promise for business leaders desperate to cut costs. Good sense cannot survive the onslaught.

Lots of businesses right now are digging themselves into holes that they’re going to spend years climbing out of.

9/

Paul Cantrell

Doing sloppy, biased resume screening is the •easy• part of HR. Generating lots of sort-of-almost-working code is the •easy• part of programming. Producing text that •sounds• generally like the correct words but is a subtle mixture of obvious, empty, and flat-out wrong — that’s the •easy• part of writing.

And a bunch of folks in businesses are going to spend the coming years learning all that the hard way.

10/

maya_b

@inthehands ie. you still have to do your homework, and getting something else to do it for you isn't likely to get it right

Paul Cantrell

At this point, I don’t think it’s even worth trying to talk down business leaders who’ve drunk the Kool-Aid. They need to make their own mistakes.

BUT

I do think there’s a competitive advantage here for companies willing to seize it. Which great candidates are getting overlooked by biased hiring? If you can identify them, hire them, and retain them — if! — then I suspect that payoff quickly outstrips the cost savings of having an AI automate your garbage hiring practices.

/end

Paul Cantrell replied to Paul

Yeah. The spam arms race is playing out in many spheres, and it feels kind of desperate right now tbh. A defining feature of our present moment.

From @JMMaok:
mastodon.online/@JMMaok/111953

OddOpinions5 replied to Paul

@inthehands

we often hear about bad decisions made by local, state, or federal governments,
and a large part of this is because gov't info is public (at least in the US)

but we rarely hear the details of bad decisions by corporations

spend 100 million on a new website that is so bad it gets buried?

No one ever knows, because that is private info

and no one seems aware of this

[ edit ] of course, the right wing spends a lot of time & $ harassing the media about this

DELETED replied to Paul

@inthehands Hiring managers are overwhelmingly white women and men and already so biased that AI is only a representation of themselves. Either way the ones who deserve the most, lose out. I wouldn't be able to go through the things that white people have said to me in interviews and how they act on the job but its no short of unprofessional, white supremacist, narcissistic, unintelligible and so far removed from humanity I don't know how they were raised because its hard to be that ignorant

Pusher Of Pixels replied to Paul

@inthehands Definitely agree. The applicant screening part is a huge problem given how biased the AI systems inherently are.

I'm not sure the 'opportunity' of gathering the AI rejects is viable though. AI applicant screenings will find large numbers of qualified candidates. Just much whiter, male and homogeneous in nature.

That will *eventually* harm the companies, but we aren't starting from a fair playing field. So in the short term it's yet another block on one side of the societal unfairness balance.

Paul Cantrell replied to Pusher Of Pixels

@pixelpusher220
Right. What I’m describing isn’t easy. But to the extent that hiring processes are flawed, there is a competitive advantage there to be found.

Pusher Of Pixels replied to Paul

@inthehands Agreed. Hopefully it can be used successfully!

DELETED replied to Paul

@inthehands AI is never going to hire a candidate named Devonte who was the local black student union president in favor of Theodore William Authier III who was polo president in college. 😂

Eubie Drew (Spore 🦣)

@inthehands

This is how competitive systems learn: the language of death. In this case corporate death.

Politics careens from one failure to the next. Movement death, often learning something, but it only lasts a while.

Biological evolution: same thing. Species death.

Medicine too, though we work very hard to deny it. Death.

Technology is wrong more often than right. Progress still happens because the failed bubbles guide us violently.

DELETED

@inthehands The white bruhs that code will always see a resume that says Director as more powerful than a resume that says Assistant despite the director really being a buzz word that has nothing to do with a job and assistant meaning assistant manager of a retail chain that requires more people skills and work ethic than "director". AI is always going to weigh their own white male bias more highly than #womenofcolor in #hiring.

DELETED

@inthehands Hiring managers are so unskilled that in studies, choosing random resumes resulted in a more competent and happy workforce. Hiring managers generally hire people they like or remind them of themselves which always results in a bullied workforce because managers aren't exactly the best workers or nicest people. #hiringmanager. They lack the self awareness to know when they should hire the coal covered in dust vs the shiny diamond.

DELETED

@inthehands

They are betting that AI/ML is going to get better. From a historical view of technology, they are probably right.

I detest the trend as well, but if it replaces basic clerking jobs, that saves people from tedium too.

Paul Cantrell

@abreaction Better? Yes. Sure.

“Better” in the sense of “fundamentally different by nature?” I really, really doubt that.

The problems I mention in this post are •intrinsic• problems, baked into the nature of the tech: hachyderm.io/@inthehands/11195 They don’t vanish just because the tech gets better, any more than making a car go faster can make it play the piano.

OddOpinions5 replied to Paul

@inthehands @abreaction

looking at the truly mind-bending progress in computers in the last 50 years, it would be, IMO, a very very brave person willing to predict whether in the next 10 or 20 years AI turns out to be a flying car or something truly radical

DELETED replied to OddOpinions5

@failedLyndonLaRouchite @inthehands

That's true.

My napkin calculations say that AI is going to require lots of little rules and modifications to work, and it will plateau at some point, but it's going to be very effective for certain repetitive jobs.

I think they most want it for manufacturing. Could be really useful to have robots that notice anomalies and can correct them.

Tidings of Comfort & Jo

@inthehands big corporations are using AI to crush other businesses, with the temptation to greed over quality, and devaluing all the people and labor that built it. It takes away human education and evolution and replaces it with infinite monkeys in the machine. Simias ex machina.

DELETED

@inthehands

Any business that prioritizes profit over workers rights deserves to suffer and fail.

J Miller

@inthehands

Good thread!

This is all made even harder by the fact that applicants are simultaneously adopting LLMs. This reduces the effort needed to apply, resulting in larger applicant pools with different signals. Heck, the applicants will start to get advised to change softball to baseball. And in the pantheon of resume lies, that’s trivial.

But this shift by applicants also means I can’t entirely blame companies for trying some machine learning.

J Miller

@inthehands

I worked in HR at a large tech company when applying online first became a thing (early 2000s). They were getting a million applications a year, many requiring visa sponsorship where that would not be feasible. I’m not sure what numbers are like now. Legally, there was a change in the definition of an applicant. But a million per year was a big change for them, requiring a whole staff of contractors to scan and do data entry of resumes. This feels similar.

axoplasm

@JMMaok @inthehands this so much! We spent the last year trying to hire for a senior position and eventually just gave up. 1000+ applications, barely 10 worth interviewing

This is for an IT position at a nonprofit, not a tech co. A human reads *every* application

axoplasm

@inthehands @JMMaok …and talking to the recruiters this is not (yet) happening for non-tech positions at the org

DELETED

@inthehands AI is overwhelmingly made by white men so the bias will always be white supremacy, always. White male IT bruh's tend to lack even small amounts of empathy and are extremely sheltered so they can't see anyone else's perspective. They also work hard to destroy the careers of the few women and POC that work in large organizations. It's resulted in such a gaggle fuck that the top tech companies had to disband their AI hiring because of potential bad press. Though I think it was intention

nen

@inthehands So true. For me the most interesting thing about LLMs has been to to break them and then try to understand why they break in such strange ways (sadly, I didn't learn much about that)...

Sven A. Schmidt

@inthehands Reminds me of a lesson I learned about 30 years ago in a physics course. In pairs we had to run experiments a full day and then prepare an analysis.

Our results were garbage. We tried everything to explain them, all attempts failed. In the end we went in to present our “results” and expected to be roasted.

On the contrary, our tutor was delighted. Turned out an essential part of the experiment was broken and he praised us for doing all the “false negative” analysis 😮

Christine M.

@finestructure @inthehands

"False negative analysis" and being brave enough to say "We don't know - yet." - both valuable positions when the situation warrants.

And: Don't let yourself be discouraged.

buherator

@finestructure @inthehands I heard a legend about a lab exercise at our uni where students were tasked to figure out the contents of a box by electrical measurements on some external connectors. Sometimes the box contained a potato wired up.

Captain Superfluous

@inthehands

Though I am a man, I too do not fit the superfluous parameters of today's hiring gods.

@ajsadauskas

Paul Cantrell

@CptSuperlative It’s brutal out there, and I know a lot of people who are just beside themselves trying to navigate the current hiring environment chaos.

Annelies Kamran

@inthehands @CptSuperlative This has been the case for several years now. I used to ask people I knew for help in applying because I knew I was getting weeded out, but I would just get told I was well qualified and to apply on the website. As if I couldn't see that that wasn't how they had got *their* jobs. So I gave up.

axoplasm

@inthehands @ajsadauskas mirror phenomenon where applicants use ML to write resumes. Arms race to the bottom. I can (and have) told stories. I’ve seen it get much worse in the last 2-3 yrs

Paul_IPv6

@inthehands @ajsadauskas

considering how badly just HR chews up tech hiring, i can't even imagine how bad AI resume screeners are. we've had decades to try to come up with algorithmic ways to replace a qualified human doing this and failed...

Paul_IPv6

@inthehands @ajsadauskas

so. linkedin & AI resume reader story.

i have had one single "manager" job in my entire career. was sr director of a tools group. lasted 6 months. entire career, IC/SME. no other management titles other than the one.

got a cold call email from a company developing AI-based resume-reading software: "based on your resume, our software says you'd be a perfect candidate for VP of Eng".

i declined. i did also decide not to tell them just how bad their software must be, based on that "hit". that was in the last 2 years. i doubt it's improved noticeably.

sidereal

@paul_ipv6 @inthehands @ajsadauskas I made up a company and said I was a manager of it on LinkedIn and actually got job offers. Faking it til you make it really works with these fools.

CivicWhitaker

@inthehands I’m already seeing a lot of resumes where it’s clear it’s written for an AI screener and not a human. I had one with a table in it! Which makes *perfect* sense if you’re trying to get through a screener. Companies should probably start disclosing if they use a screener or not so you submit both a AI-Resume and a Human-resume

OddOpinions5

@inthehands @ajsadauskas

I haven't had to look for a job for 15 years (thank god) but my memory of job application software is that the main goal of said software is to impress on the candidate that they are entirely expendable, and their main job is to do whatever the company wants

Ben Fulton

@inthehands @ajsadauskas I mean, I've been telling people to change their name to Luke and say they play lacrosse for years.

Lyude🌹#BLM

@inthehands @federicomena @ajsadauskas they say it "judges things based on body language"
that is so painfully neurotypical it hurts holy shit

Shephallmassive

@inthehands @ajsadauskas no maybe about it. It's no good saying you're a company that encourages diversity if you pay companies to run selection algorithms to weed out the people there have always been prejudices against. Paying so you don't waste time interviewing "people who are not like us." Powerful people deciding who is valid. Commissioning algorithms to be prejudiced for you, so you don't risk being caught exercising your nasty illegal prejudices, should still be a crime.

Some Guy Named Chris

@inthehands @ajsadauskas Fantastic info and much respect for the way you handled that student's project.

It's so vital that young scientists really internalize how much we learn from failure.

Lien Rag

@inthehands Since when do HR and execs know anything of value ?

Phil Wolff

@inthehands @ajsadauskas If you get through the ATS gauntlet, human screeners average an 8-second glance at your resume.

DanCast

@inthehands @ajsadauskas Last time I was looking for a job, I remember being contacted by a local company that built an ML-based sourcing tool. I headed over to LinkedIn and was completely unsurprised to discover that out of 150 employees, only three weren't white.

Dana Fried

@inthehands @ajsadauskas Amazon tried and scrapped the same approach years ago (for the exact same reason!); this is a well-known story; I have no idea how people can be making the same mistakes again.

Sir Egg of Nogg 🎅 🎄

@inthehands @ajsadauskas

HR filtering is already an absolute trainwreck. Waiting for AI to make it all but impenetrable.

I had run into a person that I wanted to hire. Showed him an ad that we had for the position. He wrote a cover letter, and massaged his resume to cover the requirements and submitted it.

I never heard anything for over a week. Contacted the HR rep covering my department. They claimed ignorance. Contacted the applicant, he confirmed when he'd submitted it.

He sent me copies of his resume and cover letter. It should have been a slam dunk.

Got back in touch with HR. All they could confirm was that the applicant had been filtered out, but they could not determine why. Beyond Fucking useless.

Dave Mc

@inthehands @ajsadauskas I did a project that put variable speed limits on some highways to help flow. We created a traffic sim to see if it would work elsewhere. Used regression to tweak the driver behaviour model so they behaved as we saw drivers respond to the variable limits in the reality. Seemed work. Until one run, someone forgot to turn the signs on and the modelled drivers still acted just as well. We'd made sim'd drivers respond better to congestion, not to react to the signs. Doh!

Sky, Cozy Goth Prince of Cats

@inthehands As a former class-action lawyer, this is potentially fantastic news, depending on how the development of liability law shakes out. Even a relatively black-box machine learning algorithm is more documentable than what fifty different HR folks are silently thinking and feeling.

A land fit for all our futures

@inthehands @ajsadauskas

that's one of those "no shit, sherlock" headlines if I ever saw one

sipuliina

@inthehands @ajsadauskas I dont think any amount of fixing the biases of these systems by "implementing guardrails" or something like that will make things much better. These things simply shouldn't be done with an AI. And it isn't only about race, even though it is a prominent bias. There will always be biases, many of which will be harder to detect than race.

Weekend Editor

@inthehands @ajsadauskas

If your dataset contains biases, then ANYTHING you train on it will inherit those biases (absent specific corrective action).

Pretty much every textbook tells you that your models will sometimes train on non-obvious "details". This applies to AI. It applies to machine learning. It applies to statistics, even simple old regression and classification.

If AI/statistics practitioners know this, why do we have to keep re-learning this lesson the hard way?

Perhaps managements need a couple knocks to the side of the head to beat this fact into them?
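That point holds even for the simplest statistics, no deep learning required. A toy example with fabricated ratings: if reviewers historically scored group B two points lower than equally qualified group A, then even a model that just predicts each group’s mean rating reproduces the penalty exactly.

```python
# Fabricated historical ratings: equally qualified people, but reviewers
# systematically scored group B two points lower. "Training" here is nothing
# fancier than computing per-group means — and the bias survives intact.

history = [("A", 7), ("A", 8), ("A", 9), ("B", 5), ("B", 6), ("B", 7)]

def fit(history):
    """Predict each group's mean historical rating."""
    totals, counts = {}, {}
    for group, rating in history:
        totals[group] = totals.get(group, 0) + rating
        counts[group] = counts.get(group, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

model = fit(history)
print(model["A"] - model["B"])  # → 2.0: the learned model reproduces the bias
```

Any more sophisticated estimator fit to the same labels inherits the same gap, absent a deliberate correction.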

Bec

@inthehands @ajsadauskas

Hello. May I share this thread on LinkedIn?

Negative12DollarBill

@inthehands @ajsadauskas

It's kind of a feedback loop, isn't it? The kind of tech-bro who thought this was a good idea is also the kind of tech-bro who would assume the all-male candidates it selected were chosen on "merit".

DELETED

@inthehands @ajsadauskas It could be argued that AI performed as expected. It was trained on the existing staff and the article implies that successful males were in senior positions. Biased input equals biased output. Of course, if that is the reason, then essentially you have "echo chamber" hiring.

DFY fan for life (she/her)

@inthehands argh

it's so preventable if they just were even paying a little bit of attention

And the headlines say "what's DEI good for again?"

Catherine Berry

@inthehands @ajsadauskas

It's not even an ML-specific problem. The oldest axiom of computer programming is "Garbage in, garbage out".

sabik

@inthehands @ajsadauskas @pyoor
Or maybe that's a feature, to certain people

Kevin Karhan :verified:

@inthehands @ajsadauskas

Pretty sure affected candidates may be eligible for compensation amidst this blatant #discrimination...

Because that's some shite that would get #HR folks fired and sued by their employers in any reasonable jurisdiction.

Gordon W

@inthehands @ajsadauskas
@emmettoconnell
AI tools don’t write well, don’t convey consistently accurate information, are biased to those already in power, why use them for a hiring process!?. Ease of use? It’s no mystery how to find the best mix of candidates. Good HR and Hiring managers already do it. Artificial Intelligence provides no marginal benefit.
