Paul Cantrell

Years ago, before the current AI craze, I helped a student prepare a talk on an AI project. Her team asked whether it’s possible to distinguish rooms with positive vs. negative affect — “That place is so nice / so depressing” — using the room’s color palette alone.

They gathered various photos of rooms on campus, and manually tagged them as having positive or negative affect. They wrote software to extract color palettes. And they trained an ML system on that dataset.
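(A hedged reconstruction of what such a pipeline could look like; the k-means palette extraction, the classifier choice, and the file names below are illustrative guesses, not the team's actual code.)

```python
# Hypothetical sketch of the pipeline described above: extract each photo's
# dominant colors with k-means, then fit a classifier to the hand-tagged labels.
import numpy as np
from PIL import Image
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def color_palette(path, n_colors=5):
    """Return the image's n_colors dominant RGB colors as one flat feature vector."""
    pixels = np.asarray(Image.open(path).convert("RGB").resize((64, 64)))
    pixels = pixels.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10).fit(pixels)
    order = np.argsort(-np.bincount(km.labels_))  # largest cluster first
    return km.cluster_centers_[order].ravel()

# Invented example data: (photo path, tag) pairs, 1 = positive affect, 0 = negative.
photos = [("rooms/atrium.jpg", 1), ("rooms/basement.jpg", 0)]
X = np.array([color_palette(path) for path, _ in photos])
y = np.array([tag for _, tag in photos])
model = RandomForestClassifier().fit(X, y)
```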

2/

Paul Cantrell

@flowchainsenseisocial I have not. I take it the relevance is the environment → affect connection?

Paul Cantrell

Guess what? Their software succeeded!…at identifying photos taken by Macalester’s admissions dept.

It turns out that all the publicity photos, massaged and prepped for recruiting material, had more vivid colors than the photos the students took themselves. And they’d mostly used publicity photos for the “happy” rooms and their own photos for the “sad” rooms (which generally aren’t in publicity materials).

They’d encoded a bias in their dataset, and machine learning dutifully picked up the pattern.

Oops.

3/

Paul Cantrell

The student had a dilemma: she had to present her research, but the results sucked! The project failed! She was embarrassed! Should she try to fix it at the last minute?? Rush a totally different project?!?

I nipped that in the bud. “You have a •great• presentation here.” Failure is fascinating. Bad results are fascinating. And people •need• to understand how these AI / ML systems break.

4/

DELETED

@inthehands Thanks for sharing, Paul; these studies are invaluable. A scientist’s job isn’t to “prove” something works such that disproving it is a failure; it’s to take a hypothesis, test it, and then report the results.

I was wondering about the circumstances, though: wouldn’t the results have been invalidated from the start due to the “manual tagging”? That already builds bias into your dataset; your AI can only decide what the people who tagged it think a good room looks like. Or is that expected/accepted/ignored because that’s just how things are built?

Paul Cantrell

@james It’s an approach called “supervised learning”:

en.wikipedia.org/wiki/Supervis

It can be totally valid. The trick (well, the first one, and after that I’m out of my depth) is that you can’t evaluate the results against the training data, so you train the system on only X% of your tagged data, then check how well it matches the desired output for the remaining Y% it hasn’t “seen” before.
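A minimal sketch of that hold-out evaluation, on synthetic stand-in data (the 80/20 split and the classifier are illustrative choices, not anything Paul specifies):

```python
# Train on 80% of the tagged data, then score on the 20% the model
# has never "seen". The data here is random stand-in values.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((200, 15))    # stand-in features (e.g., color palettes)
y = rng.integers(0, 2, 200)  # stand-in human-assigned tags

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier().fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Note that a split like this only guards against memorization: if the whole dataset carries a confounder (publicity photos vs. the students’ own photos), the held-out slice carries it too, which is exactly why the project’s results could look like a success.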

Aaron

@james @inthehands I have seen some efforts to identify, quantify, and mitigate bias in the human-generated labels, if that's what you're getting at. I would say, yes, there will *always* be bias in manually tagged data. The question is, do the biases present in that data affect the job you want the model to do? Often the only source of truth for whether a task has been performed correctly is human judgment. In those cases, we can identify secondary biases (like gender or race in hiring decisions) that we want to specifically mitigate, but what we are training the model to learn is literally a bias itself, e.g. the bias towards candidates that hiring managers think will do well in the position.
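(One standard way to put a number on that label subjectivity, my illustration rather than a method Aaron names, is to have two people tag the same items and compute chance-corrected agreement such as Cohen’s kappa:)

```python
# Hypothetical example: measure how far two annotators' tags agree beyond
# chance. A low kappa warns that the "ground truth" is largely opinion.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 1, 0, 1, 0, 0, 1, 0]  # invented tags from rater A
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0]  # the same items tagged by rater B

print(f"kappa = {cohen_kappa_score(annotator_a, annotator_b):.2f}")
# ~1.0: strong agreement; ~0.0: no better than chance.
```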

Dawn Ahukanna

@inthehands @james
Observations:
1. There are not enough (disposable) developers to churn out code, so use “1-shot-imprint statistical engine”[1SISE] to generate all the code we want, how hard can it be?
2. There are not enough (disposable) data scientists & ML engineers to supervise “imprinting”, so use the entire internet for “1SISE”. Job done, right?
3. There are not enough (disposable) “natural resources” to power the “1SISE”. Oops!
4. “1SISE” only has 1 “biased” perspective, quelle surprise!

Paul Cantrell

She dutifully gave the talk on the project as is, complete with the rug pull at the end: “Here’s our results! They’re so broken! Look, it learned the bias in our dataset! Surprise!” It got an audible reaction from the audience. People •loved• her talk.

I wish there had been some HR folks at her talk.

Train an AI on your discriminatory hiring practices, and guess what it learns? That should be a rhetorical question, but I’ll spell it out: it learns how to infer the gender of applicants.

5/

Paul Cantrell

An interesting angle I’m sure someone is studying properly: when we feed these tabula rasa ML systems a bunch of data about the world as it is, and they come back puking out patterns of discrimination, can that serve as •evidence of bias• not just in AI, but in •society itself•?

If training an ML system on a company’s past hiring decisions makes it think that baseball > softball for an office job, isn’t that compelling evidence of hiring discrimination?
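(A toy demonstration of that proxy effect, on fully synthetic data; the softball/baseball token comes from Paul’s example, everything else is invented. Even with gender absent from the features, a model trained on biased outcomes learns the word that tracks it:)

```python
# Synthetic resumes differing only in one sport word that tracks gender.
# Historical "hired" labels are biased; gender itself is never a feature.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
resumes, hired = [], []
for _ in range(500):
    is_woman = rng.random() < 0.5
    sport = "softball" if is_woman else "baseball"
    resumes.append(f"captain of the {sport} team, strong references")
    # Biased historical outcome: women hired far less often.
    hired.append(int(rng.random() < (0.2 if is_woman else 0.8)))

vec = CountVectorizer()
model = LogisticRegression().fit(vec.fit_transform(resumes), hired)

weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print("baseball:", round(weights["baseball"], 2))  # strongly positive
print("softball:", round(weights["softball"], 2))  # strongly negative
```

The fitted weights are themselves the kind of evidence Paul is pointing at: the model’s coefficients record the discrimination pattern baked into the historical decisions.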

6/

Jack Jackson

@inthehands interesting proposition - which would, I imagine, be responded to with goalpost-moving or No True Scotsman-ing from True Believers if you actually tried it.

Paul Cantrell

@scubbo Indeed, which is why it needs to be studied by some researcher (which to be clear is not me) qualified to investigate the question in a robust way that withstands scrutiny.

Paul Cantrell

There’s an ugly question hovering over that previous post: What if the men •are• intrinsically better? What if discrimination is correct?? What if the AI, with its Perfect Machine Logic, is bypassing all the DEI woke whatever to find The Actual Truth??!?

Um, yeah…no.

A delightful tidbit from the article: a researcher studying a hiring AI “received a high rating in the interview, despite speaking nonsense German when she was supposed to be speaking English.”

These systems are garbage.

7/

Paul Cantrell

I mean, maaaaaaybe AI can help with applicant screening, but I’d need to see some •damn• good evidence that the net effect is positive. Identifying and countering training set bias, evaluating results, teasing out confounders and false successes — these are •hard• problems, problems that researchers work long months and years to overcome.

Do I believe for a hot minute that companies selling these hiring AIs are properly doing that work? No. No, I do not.

8/

Paul Cantrell

AI’s Shiny New Thing promise of “your expensive employees are suddenly replaceable” is just too much of a candy / crack cocaine / FOMO promise for business leaders desperate to cut costs. Good sense cannot survive the onslaught.

Lots of businesses are digging themselves into holes right now that they’re going to spend years climbing out of.

9/

Paul Cantrell

Doing sloppy, biased resume screening is the •easy• part of HR. Generating lots of sort-of-almost-working code is the •easy• part of programming. Producing text that •sounds• generally like the correct words but is a subtle mixture of obvious, empty, and flat-out wrong — that’s the •easy• part of writing.

And a bunch of folks in businesses are going to spend the coming years learning all that the hard way.

10/

maya_b

@inthehands i.e. you still have to do your homework, and getting something else to do it for you isn’t likely to get it right

Paul Cantrell

At this point, I don’t think it’s even worth trying to talk down business leaders who’ve drunk the Kool-Aid. They need to make their own mistakes.

BUT

I do think there’s a competitive advantage here for companies willing to seize it. Which great candidates are getting overlooked by biased hiring? If you can identify them, hire them, and retain them — if! — then I suspect that payoff quickly outstrips the cost savings of having an AI automate your garbage hiring practices.

/end

Paul Cantrell replied to Paul

Yeah. The spam arms race is playing out in many spheres, and it feels kind of desperate right now tbh. A defining feature of our present moment.

From @JMMaok:
mastodon.online/@JMMaok/111953

OddOpinions5 replied to Paul

@inthehands

we often hear about bad decisions made by local, state, or federal governments, and a large part of this is because gov't info is public (at least in the US)

but we rarely hear the details of bad decisions by corporations

spend 100 million on a new website that is so bad it gets buried?

No one ever knows, because that is private info

and no one seems aware of this

[ edit ] of course, the right wing spends a lot of time & $ harassing the media about this

DELETED replied to Paul

@inthehands Hiring managers are overwhelmingly white women and men and already so biased that AI is only a representation of themselves. Either way, the ones who deserve the most lose out. I couldn’t even recount the things that white people have said to me in interviews and how they act on the job, but it’s nothing short of unprofessional, white supremacist, narcissistic, unintelligible, and so far removed from humanity that I don’t know how they were raised, because it’s hard to be that ignorant

Pusher Of Pixels replied to Paul

@inthehands Definitely agree. The applicant screening part is a huge problem given how biased the AI systems inherently are.

I'm not sure the 'opportunity' of gathering the AI rejects is viable, though. AI applicant screenings will still find large numbers of qualified candidates; they'll just be much whiter, more male, and more homogeneous.

That will *eventually* harm the companies, but we aren't starting from a fair playing field. So in the short term it's yet another block on one side of the societal unfairness balance.

Paul Cantrell replied to Pusher Of Pixels

@pixelpusher220
Right. What I’m describing isn’t easy. But to the extent that hiring processes are flawed, there is a competitive advantage there to be found.

Pusher Of Pixels replied to Paul

@inthehands Agreed. Hopefully it can be used successfully!

DELETED replied to Paul

@inthehands AI is never going to hire a candidate named Devonte who was the local black student union president in favor of Theodore William Authier III who was polo president in college. 😂

Eubie Drew (Spore 🦣)

@inthehands

This is how competitive systems learn: the language of death. In this case corporate death.

Politics careens from one failure to the next. Movement death, often learning something, but it only lasts a while.

Biological evolution: same thing. Species death.

Medicine too, though we work very hard to deny it. Death.

Technology is wrong more often than right. Progress still happens because the failed bubbles guide us violently.

DELETED

@inthehands The white bruhs that code will always see a resume that says Director as more powerful than a resume that says Assistant, despite “director” really being a buzzword that has nothing to do with the job, and “assistant” meaning assistant manager of a retail chain, which requires more people skills and work ethic than “director.” AI is always going to weight its makers’ white male bias more highly than #womenofcolor in #hiring.

DELETED

@inthehands Hiring managers are so unskilled that in studies, choosing random resumes resulted in a more competent and happier workforce. Hiring managers generally hire people they like or who remind them of themselves, which always results in a bullied workforce, because managers aren’t exactly the best workers or nicest people. #hiringmanager They lack the self-awareness to know when they should hire the coal covered in dust vs. the shiny diamond.

Matt McIrvin

@inthehands I think there is one exception--for a lot of people in creative fields who may have some kind of borderline ADHD condition, getting past the blank page or the digital equivalent is a real struggle. And if there's something that can push them past that step from nothing to something, they'll find it useful.

There's a powerful temptation to just use version zero, though, especially if you're not the creator but the person paying the creator.

Paul Cantrell replied to Matt

@mattmcirvin Indeed, I ran a successful exercise much along these lines with one of my classes (see student remarks downthread):
hachyderm.io/@inthehands/10947

I think there really is a “there” there with LLMs; it just bears close to no resemblance to the wildly overhyped Magic Bean hysteria currently sweeping biz. Generating bullshit does actually have useful applications. But until the dust settles, how much harm will it cause?

DELETED

@inthehands

They are betting that AI/ML is going to get better. From a historical view of technology, they are probably right.

I detest the trend as well, but if it replaces basic clerking jobs, that saves people from tedium too.

Paul Cantrell

@abreaction Better? Yes. Sure.

“Better” in the sense of “fundamentally different by nature?” I really, really doubt that.

The problems I mention in this post are •intrinsic• problems, baked into the nature of the tech: hachyderm.io/@inthehands/11195 They don’t vanish just because the tech gets better, any more than making a car go faster can make it play the piano.

OddOpinions5 replied to Paul

@inthehands @abreaction

looking at the truly mind-bending progress in computers over the last 50 years, it would take, IMO, a very, very brave person to predict whether, in the next 10 or 20 years, AI turns out to be a flying car or something truly radical

DELETED replied to OddOpinions5

@failedLyndonLaRouchite @inthehands

That's true.

My napkin calculations say that AI is going to require lots of little rules and modifications to work, and it will plateau at some point, but it's going to be very effective for certain repetitive jobs.

I think they most want it for manufacturing. Could be really useful to have robots that notice anomalies and can correct them.

Tidings of Comfort & Jo

@inthehands big corporations are using AI to crush other businesses, choosing greed over quality and devaluing all the people and labor that built it. It takes away human education and evolution and replaces it with infinite monkeys in the machine. Simias ex machina.

DELETED

@inthehands

Any business that prioritizes profit over workers rights deserves to suffer and fail.

J Miller

@inthehands

Good thread!

This is all made even harder by the fact that applicants are simultaneously adopting LLMs. This reduces the effort needed to apply, resulting in larger applicant pools with different signals. Heck, the applicants will start to get advised to change softball to baseball. And in the pantheon of resume lies, that’s trivial.

But this shift by applicants also means I can’t entirely blame companies for trying some machine learning.

J Miller

@inthehands

I worked in HR at a large tech company when applying online first became a thing (early 2000s). They were getting a million applications a year, many requiring visa sponsorship where that would not be feasible. I’m not sure what numbers are like now. Legally, there was a change in the definition of an applicant. But a million per year was a big change for them, requiring a whole staff of contractors to scan and do data entry of resumes. This feels similar.

axoplasm

@JMMaok @inthehands this so much! We spent the last year trying to hire for a senior position and eventually just gave up. 1000+ applications, barely 10 worth interviewing

This is for an IT position at a nonprofit, not a tech co. A human reads *every* application

axoplasm

@inthehands @JMMaok …and talking to the recruiters this is not (yet) happening for non-tech positions at the org

DELETED

@inthehands AI is overwhelmingly made by white men, so the bias will always be white supremacy, always. White male IT bruhs tend to lack even small amounts of empathy and are extremely sheltered, so they can’t see anyone else’s perspective. They also work hard to destroy the careers of the few women and POC that work in large organizations. It’s resulted in such a gaggle fuck that the top tech companies had to disband their AI hiring because of potential bad press. Though I think it was intentional

Stu

@inthehands one interesting side effect of all this HR AI is that, as I potentially need to reenter the job-hunting game, I’m thinking of creating a resume formatted not for clarity to people, but as plain text absolutely optimized for algorithms.

Sigh.

Dieu

@inthehands maybe it's simply about ridding oneself of the awful decision making. Throwing dice in a way that allows one to convince oneself one's not just rolling dice.

Paul Cantrell

@hllizi
Yes, much of the corporate appeal of AI is whitewashing bias.

Ivan Sagalaev :flag_wbw:

@inthehands first of all, thank you!

Now, reading through this thread prompted a related but different thought: the current generation of Tesla’s self-driving AI eschews codified decision-making in favor of learning how to drive purely from human drivers. Which should obviously be a bad idea if your stated goal is to devise better-than-human behavior. But everyone is just closing their eyes and saying “well, I guess they know what they’re doing.” They don’t.

nen

@inthehands So true. For me the most interesting thing about LLMs has been to break them and then try to understand why they break in such strange ways (sadly, I didn’t learn much about that)...

Sven A. Schmidt

@inthehands Reminds me of a lesson I learned about 30 years ago in a physics course. In pairs, we had to run experiments for a full day and then prepare an analysis.

Our results were garbage. We tried everything to explain them; all attempts failed. In the end we went in to present our “results” and expected to be roasted.

On the contrary, our tutor was delighted. It turned out an essential part of the experiment was broken, and he praised us for doing all the “false negative” analysis 😮

Christine M.

@finestructure @inthehands

"False negative analysis" and being brave enough to say "We don't know - yet." - both valuable positions when the situation warrants.

And: Don't let yourself be discouraged.

buherator

@finestructure @inthehands I heard a legend about a lab exercise at our uni where students were tasked with figuring out the contents of a box by electrical measurements on some external connectors. Sometimes the box contained a potato wired up.