Paul Cantrell

An interesting angle I’m sure someone is studying properly: when we feed these tabula rasa ML systems a bunch of data about the world as it is, and they come back puking out patterns of discrimination, can that serve as •evidence of bias• not just in AI, but in •society itself•?

If training an ML system on a company’s past hiring decisions makes it think that baseball > softball for an office job, isn’t that compelling evidence of hiring discrimination?

6/
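Paul’s hypothetical can be made concrete in a few lines. This is a toy sketch with entirely invented data (not from the article): a tiny logistic regression is fit to synthetic “past hiring decisions” that unfairly gave one proxy trait a bonus, and the model dutifully learns that bias as a visible positive weight.

```python
# Toy sketch, invented data: fit a tiny logistic regression to synthetic
# "past hiring decisions" that unfairly favored baseball players, then
# watch the model learn that bias.
import math
import random

random.seed(0)

# Each row: (skill 0-10, plays_baseball flag, hired). In this fake history,
# hiring depended on skill BUT also gave baseball players a +2 bonus.
data = []
for _ in range(1000):
    skill = random.uniform(0, 10)
    baseball = 1.0 if random.random() < 0.5 else 0.0
    hired = 1.0 if skill + 2.0 * baseball + random.gauss(0, 1) > 6 else 0.0
    data.append((skill, baseball, hired))

# Plain gradient-descent logistic regression, no external libraries.
w_skill = w_base = intercept = 0.0
lr, n = 0.1, len(data)
for _ in range(500):
    gs = gb = g0 = 0.0
    for skill, base, hired in data:
        p = 1 / (1 + math.exp(-(w_skill * skill + w_base * base + intercept)))
        gs += (p - hired) * skill
        gb += (p - hired) * base
        g0 += (p - hired)
    w_skill -= lr * gs / n
    w_base -= lr * gb / n
    intercept -= lr * g0 / n

# A positive weight on the baseball flag is the historical discrimination,
# now baked into the model and plainly legible.
print(f"skill weight {w_skill:.2f}, baseball weight {w_base:.2f}")
```

The point isn’t the model; it’s that the learned weight on an irrelevant trait is exactly the kind of quantified evidence of past bias Paul is describing.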

Jack Jackson

@inthehands interesting proposition - which would, I imagine, be responded to with goalpost-moving or No True Scotsman-ing from True Believers if you actually tried it.

Paul Cantrell

@scubbo Indeed, which is why it needs to be studied by some researcher (which to be clear is not me) qualified to investigate the question in a robust way that withstands scrutiny.

Paul Cantrell

There’s an ugly question hovering over that previous post: What if the men •are• intrinsically better? What if discrimination is correct?? What if the AI, with its Perfect Machine Logic, is bypassing all the DEI woke whatever to find The Actual Truth??!?

Um, yeah…no.

A delightful tidbit from the article: a researcher studying a hiring AI “received a high rating in the interview, despite speaking nonsense German when she was supposed to be speaking English.”

These systems are garbage.

7/

Paul Cantrell

I mean, maaaaaaybe AI can help with applicant screening, but I’d need to see some •damn• good evidence that the net effect is positive. Identifying and countering training set bias, evaluating results, teasing out confounders and false successes — these are •hard• problems, problems that researchers spend long months and years overcoming.

Do I believe for a hot minute that companies selling these hiring AIs are properly doing that work? No. No, I do not.

8/
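One concrete piece of the evaluation work Paul is pointing at: US regulators use a simple “four-fifths rule” as a first screening test for disparate impact in selection rates. A minimal sketch, with invented groups and numbers:

```python
# Minimal sketch of a disparate-impact check: the "four-fifths rule".
# Any group selected at under 80% of the top group's rate gets flagged.
# Groups and counts below are invented for illustration.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, passed_screen) pairs."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        passed[group] += int(ok)
    return {g: passed[g] / total[g] for g in total}

def four_fifths_ok(decisions):
    """True for groups within 80% of the best selection rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Toy screening outcomes: group A passes 60%, group B passes 30%.
screen = ([("A", True)] * 60 + [("A", False)] * 40
          + [("B", True)] * 30 + [("B", False)] * 70)
result = four_fifths_ok(screen)
print(result)  # B's rate is 0.30/0.60 = 50% of A's, well under 80%
```

This is the •easy• part of an audit; the hard part is everything Paul lists: confounders, feedback loops, and deciding what counts as ground truth.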

Paul Cantrell

AI’s Shiny New Thing promise of “your expensive employees are suddenly replaceable” is just too much of a candy / crack cocaine / FOMO promise for business leaders desperate to cut costs. Good sense cannot survive the onslaught.

Lots of businesses are digging themselves into holes right now that they’re going to spend years climbing out of.

9/

Paul Cantrell

Doing sloppy, biased resume screening is the •easy• part of HR. Generating lots of sort-of-almost-working code is the •easy• part of programming. Producing text that •sounds• generally like the correct words but is a subtle mixture of obvious, empty, and flat-out wrong — that’s the •easy• part of writing.

And a bunch of folks in businesses are going to spend the coming years learning all that the hard way.

10/

maya_b

@inthehands i.e., you still have to do your homework, and getting something else to do it for you isn't likely to get it right

Paul Cantrell

At this point, I don’t think it’s even worth trying to talk down business leaders who’ve drunk the Kool-Aid. They need to make their own mistakes.

BUT

I do think there’s a competitive advantage here for companies willing to seize it. Which great candidates are getting overlooked by biased hiring? If you can identify them, hire them, and retain them — if! — then I suspect that payoff quickly outstrips the cost savings of having an AI automate your garbage hiring practices.

/end

Paul Cantrell replied to Paul

Yeah. The spam arms race is playing out in many spheres, and it feels kind of desperate right now tbh. A defining feature of our present moment.

From @JMMaok:
mastodon.online/@JMMaok/111953

OddOpinions5 replied to Paul

@inthehands

we often hear about bad decisions made by local, state, or federal governments,
and a large part of this is because gov't info is public (at least in the US)

but we rarely hear the details of bad decisions by corporations

spend $100 million on a new website that is so bad it gets buried?

No one ever knows, because that is private info

and no one seems aware of this

[ edit ] of course, the right wing spends a lot of time & $ harassing the media about this

DELETED replied to Paul

@inthehands Hiring managers are overwhelmingly white women and men and already so biased that AI is only a representation of themselves. Either way the ones who deserve the most lose out. I wouldn't be able to go through the things that white people have said to me in interviews and how they act on the job, but it's nothing short of unprofessional, white supremacist, narcissistic, unintelligible and so far removed from humanity I don't know how they were raised, because it's hard to be that ignorant

P J Evans replied to DELETED

@gentrifiedrose @inthehands
I had a lead person who was nice enough (though self-promoting) in groups, but was unqualified for the work they were doing, and genuinely a bigot when not in a group or where they could be overheard. (HR required two witnesses.) They had the pieces of paper, though, and I didn't...but I think they were afraid I'd try for their job. (Didn't want it; I loved the one I was hired for.)

DELETED replied to P J Evans

@PJ_Evans @inthehands Over 80% of #HR are white women so hiring will always be biased and the fear that someone wants their job is no different than saying the mexican immigrants want to rape and steal. The paranoia comes from #narcissism where they think they'll be treated the way they treat others 😂 no one wants their job. #hire

P J Evans replied to DELETED

@gentrifiedrose @inthehands
At that company non-whites were well represented at all level, so that wasn't a problem. It was that the one person was able to game their bigotry so they couldn't be reported and fired. (It was visible in the group: fewer Latinos and no blacks.)

DELETED replied to P J Evans

@PJ_Evans @inthehands That happened in my government job, where I thought I was lucky to work in the most diverse company, but the minority were white men and women who had outsized power. As studies show, the fewer the whites, the bigger the discrimination and the more damage they unleash.

Pusher Of Pixels replied to Paul

@inthehands Definitely agree. The applicant screening part is a huge problem given how biased the AI systems inherently are.

I'm not sure the 'opportunity' of gathering the AI rejects is viable though. AI applicant screenings will find large numbers of qualified candidates. Just much whiter, male and homogeneous in nature.

That will *eventually* harm the companies, but we aren't starting from a fair playing field. So in the short term it's yet another block on one side of the societal unfairness balance.

Paul Cantrell replied to Pusher Of Pixels

@pixelpusher220
Right. What I’m describing isn’t easy. But to the extent that hiring processes are flawed, there is a competitive advantage there to be found.

Pusher Of Pixels replied to Paul

@inthehands Agreed. Hopefully it can be used successfully!

DELETED replied to Paul

@inthehands AI is never going to hire a candidate named Devonte who was the local black student union president in favor of Theodore William Authier III who was polo president in college. 😂

Martha Howell replied to Paul

@inthehands
Backing way up, how many jobs require skills that are relevant to a specific sport? (And no, "teamwork" isn't an answer. There are a million non-sports examples of teamwork that can be highlighted in the average person's work history.)

Paul Cantrell replied to Martha

@MHowell For sure. I mean, the premise is to paint a “whole person” picture that fosters useful conversation in the interview, but I’m sure as often as not things like this become a discrimination vector. Conversely, though, I don’t think it’s possible to scrub enough personal identity characteristics from a resume to prevent discrimination.
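Paul’s point that scrubbing resumes can’t prevent discrimination comes down to proxy leakage: remove the identity field, and a correlated leftover field still recovers it. A toy illustration with invented numbers:

```python
# Toy illustration, invented numbers: why scrubbing explicit identity
# fields from resumes falls short. A leftover "proxy" field that merely
# correlates with the scrubbed attribute can still recover it.
import random

random.seed(1)

rows = []
for _ in range(1000):
    gender = random.choice(["f", "m"])
    # Invented correlation: softball appears on ~90% of one group's
    # resumes and only ~10% of the other's.
    plays_softball = (random.random() < 0.9) == (gender == "f")
    rows.append((gender, plays_softball))

# "Scrub" gender, keep the sport - then guess gender from sport alone.
correct = sum(1 for g, softball in rows if softball == (g == "f"))
accuracy = correct / len(rows)
print(f"scrubbed attribute recovered from proxy alone: {accuracy:.0%}")
```

With a 90/10 split, the “anonymized” field alone recovers the removed attribute about 90% of the time, which is why a screening model can discriminate without ever seeing the protected field.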

Analog AI replied to Paul

@inthehands You can just buy the same AI, and interview only people whose resumes were rejected.

Eubie Drew (Spore 🦣)

@inthehands

This is how competitive systems learn: the language of death. In this case corporate death.

Politics careens from one failure to the next. Movement death, often learning something, but it only lasts a while.

Biological evolution: same thing. Species death.

Medicine too, though we work very hard to deny it. Death.

Technology is wrong more often than right. Progress still happens because the failed bubbles guide us violently.

DELETED

@inthehands The white bruhs that code will always see a resume that says Director as more powerful than a resume that says Assistant despite the director really being a buzz word that has nothing to do with a job and assistant meaning assistant manager of a retail chain that requires more people skills and work ethic than "director". AI is always going to weigh their own white male bias more highly than #womenofcolor in #hiring.

P J Evans replied to DELETED

@gentrifiedrose @inthehands
Doing QC is "less than" being lead person, even though it requires a lot of knowledge and experience.

DELETED

@inthehands Hiring managers are so unskilled that in studies, choosing random resumes resulted in a more competent and happy workforce. Hiring managers generally hire people they like or remind them of themselves which always results in a bullied workforce because managers aren't exactly the best workers or nicest people. #hiringmanager. They lack the self awareness to know when they should hire the coal covered in dust vs the shiny diamond.

Matt McIrvin

@inthehands I think there is one exception--for a lot of people in creative fields who may have some kind of borderline ADHD condition, getting past the blank page or the digital equivalent is a real struggle. And if there's something that can push them past that step from nothing to something, they'll find it useful.

There's a powerful temptation to just use version zero, though, especially if you're not the creator but the person paying the creator.

Paul Cantrell replied to Matt

@mattmcirvin Indeed, I ran a successful exercise much along these lines with one of my classes (see student remarks downthread):
hachyderm.io/@inthehands/10947

I think there really is a “there” there with LLMs; it just bears close to no resemblance to the wildly overhyped Magic Bean hysteria currently sweeping biz. Generating bullshit does actually have useful applications. But until the dust settles, how much harm will it cause?

JP

@inthehands The victims will learn the hard way. The people doing it will learn the easy way: making huge amounts of money and then skipping town before the poisoned soil kills all the crops, ready to do it again to a fresh set of victims.

StevenSavage

@inthehands in a discussion I saw someone noted that a "removing AI from workflow" consulting company would soon be viable.

DELETED

@inthehands

They are betting that AI/ML is going to get better. From a historical view of technology, they are probably right.

I detest the trend as well, but if it replaces basic clerking jobs, that saves people from tedium too.

Paul Cantrell

@abreaction Better? Yes. Sure.

“Better” in the sense of “fundamentally different by nature?” I really, really doubt that.

The problems I mention in this post are •intrinsic• problems, baked into the nature of the tech: hachyderm.io/@inthehands/11195 They don’t vanish just because the tech gets better, any more than making a car go faster can make it play the piano.

OddOpinions5 replied to Paul

@inthehands @abreaction

looking at the truly mind-bending progress in computers over the last 50 years, it would be, IMO, a very, very brave person who’d predict whether, in the next 10 or 20 years, AI turns out to be a flying car or something truly radical

DELETED replied to OddOpinions5

@failedLyndonLaRouchite @inthehands

That's true.

My napkin calculations say that AI is going to require lots of little rules and modifications to work, and it will plateau at some point, but it's going to be very effective for certain repetitive jobs.

I think they most want it for manufacturing. Could be really useful to have robots that notice anomalies and can correct them.

Robotistry replied to DELETED

@abreaction @failedLyndonLaRouchite @inthehands That is a much harder, more expensive problem than "improvements in AI" - the flaws in AI often boil down to "failure to correctly ground the concept in the real world".

The model hallucinates because without grounding that understands the concepts of "food" and "color" as subjective experiences, "blue" and "blueberry" are almost the same.

Robots *require* grounding to connect their actions to their task.

Tidings of Comfort & Jo

@inthehands big corporations are using AI to crush other businesses, with the temptation to greed over quality, and devaluing all the people and labor that built it. It takes away human education and evolution and replaces it with infinite monkeys in the machine. Simias ex machina.

DELETED

@inthehands

Any business that prioritizes profit over workers rights deserves to suffer and fail.

J Miller

@inthehands

Good thread!

This is all made even harder by the fact that applicants are simultaneously adopting LLMs. This reduces the effort needed to apply, resulting in larger applicant pools with different signals. Heck, the applicants will start to get advised to change softball to baseball. And in the pantheon of resume lies, that’s trivial.

But this shift by applicants also means I can’t entirely blame companies for trying some machine learning.

J Miller

@inthehands

I worked in HR at a large tech company when applying online first became a thing (early 2000s). They were getting a million applications a year, many requiring visa sponsorship where that would not be feasible. I’m not sure what numbers are like now. Legally, there was a change in the definition of an applicant. But a million per year was a big change for them, requiring a whole staff of contractors to scan and do data entry of resumes. This feels similar.

axoplasm

@JMMaok @inthehands this so much! We spent the last year trying to hire for a senior position and eventually just gave up. 1000+ applications, barely 10 worth interviewing

This is for an IT position at a nonprofit, not a tech co. A human reads *every* application

axoplasm

@inthehands @JMMaok …and talking to the recruiters this is not (yet) happening for non-tech positions at the org

DELETED

@inthehands AI is overwhelmingly made by white men so the bias will always be white supremacy, always. White male IT bruh's tend to lack even small amounts of empathy and are extremely sheltered so they can't see anyone else's perspective. They also work hard to destroy the careers of the few women and POC that work in large organizations. It's resulted in such a gaggle fuck that the top tech companies had to disband their AI hiring because of potential bad press. Though I think it was intention

Stu

@inthehands one interesting side effect of all this HR AI is that, as I potentially need to reenter the job hunting game, I'm thinking of creating a resume formatted not for clarity to people, but as plain text absolutely optimized for algorithms.

Sigh.
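What Stu is optimizing for is, reputedly, little more than naive keyword matching in many ATS resume screens. A hypothetical sketch (keywords and resumes invented) of why a plain-text, keyword-dense resume can outrank a nicely designed one:

```python
# Hypothetical sketch of naive ATS-style keyword matching. Keywords and
# resume lines are invented; real systems vary, but the incentive to
# write for the tokenizer instead of the reader is the same.
import re

def keyword_score(resume_text, job_keywords):
    """Fraction of required keywords found as whole tokens in the resume."""
    tokens = set(re.findall(r"[a-z0-9+#]+", resume_text.lower()))
    hits = [kw for kw in job_keywords if kw in tokens]
    return len(hits) / len(job_keywords)

job = ["python", "kubernetes", "terraform", "postgresql"]
pretty = "Seasoned engineer | Cloud & data platforms | A decade of impact"
plain = "python kubernetes terraform postgresql aws linux ci cd"

print(keyword_score(pretty, job))  # 0.0 - reads well, matches nothing
print(keyword_score(plain, job))   # 1.0 - unreadable, matches everything
```

Under a scorer like this, the human-friendly resume loses to the keyword dump every time; hence the sigh.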

Dieu

@inthehands maybe it's simply about ridding oneself of the awful decision making. Throwing dice in a way that allows one to convince oneself one's not just rolling dice.

Paul Cantrell

@hllizi
Yes, much of the corporate appeal of AI is whitewashing bias.

Ivan Sagalaev :flag_wbw:

@inthehands first of all, thank you!

Now, reading through this thread prompted a related but different thought: the current generation of Tesla's self-driving AI eschews codified decision-making in favor of learning how to drive purely from human drivers. Which should obviously be a bad idea if your stated goal is to devise better-than-human behavior. But everyone is just closing their eyes and saying "well, I guess they know better what they're doing". They don't.

Paul Cantrell

@isagalaev True. Their stubborn focus on vision over other types of input is also baffling. Tesla’s whole approach to self-driving makes no sense to me; looks like a bottomless money pit from where I sit.

(Note that Boston Dynamics doesn’t use ML of this type at all, IIRC.)

Ivan Sagalaev :flag_wbw:

@inthehands I think their vision layer is okay. It can reliably identify and classify objects and their placement. It's what to do with this information that has always been the problem: you've got this car over there moving that way and that car standing over here; what input do you apply to the pedals and the steering wheel? That part turned out to be harder than vision. And now they're trying to solve it with AI as well, which just swaps one set of edge cases for another and can't be debugged.

Paul Cantrell

@isagalaev At least some of the embarrassing Tesla self-driving fails I’ve seen in videos online are situations where cross-checking multiple forms of input (radar, map, etc) would probably have helped a lot.

Ivan Sagalaev :flag_wbw:

@inthehands I own one, and I can tell that when they switched from radar to vision to detect obstacles in front of the car, it became much smoother and more reliable. Radar is too low-res and just produces a noisy signal you can't rely on.

Another thing that's missing is memory. Musk likes to talk about human eyes as sensors, but we also rely on memory *a lot*. After going through a turn a few times, a human is much better at predicting behavior there. But Tesla goes into every interaction tabula rasa.
