Dr. Damien P. Williams, Magus

This is the EXACT kind of shit we have been warning you about: "A.I." tools trained on data filled with assumptions and prejudices about marginalized people, then deployed in situations with literal life-&-death implications for the people involved. In this case, disabled parents being more likely to be flagged as "unfit."

This is utter nightmare shit.

apnews.com/article/child-prote

39 comments
Cassidy James :eos: :gg: :fh:

@Wolven someone taking my kid away is nightmare fuel. I just can’t even start to think about what I would do—I care about them more than anyone in the world, and I freak out even just thinking about the scenario.

I can’t begin to fathom what these parents have gone through.

JimmyB (he/him)

@Wolven taking kids away from parents should be a very last resort, after many other routes have been tried. When you read this shit about the #USA, you know the "land of the free" stuff is complete BS.

This is dystopian

:mastodon: Mike Amundsen

@Wolven

"They wonder if an artificial intelligence tool that the Allegheny County Department of Human Services uses to predict which children could be at risk of harm singled them out..."

so Allegheny has their own "pre-cogs" à la "Minority Report"?

Dr. Damien P. Williams, Magus

@mamund predictive risk assessments have been used in CPS social work for a long time; it's just that now they let the automated algorithms run on their own

unrelatedwaffle

@Wolven as long as it cuts labor costs, i guess. paying people to evaluate child welfare is just too onerous for us, so we'll let a computer decide what families remain intact. jesus fucking christ.

Itwasntme

@Wolven I can’t comment on the legality of this scheme, however a very recent Royal Commission in Australia just examined a scheme that targeted welfare recipients. It was given the name Robodebt. It was found to be partly illegal and grossly unfair.

Fifi Lamoura

@itwasntme @Wolven Robodebt collection literally killed people too; the additional stress and despair resulted in deaths. These experiments with automated "social" sorting are extremely inhumane, evil even if we consider evil to be a lack of empathy. Machines have no empathy because we don't build them to be kind; we build them very explicitly to be cruel and to automate our own cruelty.

BrianOnBarrington

@Wolven Years ago, I was involved in an AI experiment involving a significant financial institution. The institution experimented with an underwriting model (theoretical only — no real mortgages were underwritten) to see if AI could accelerate underwriting and improve quality.

Instead, the plug got pulled in less than two weeks when the model ingested the historical underwriting data and began systematically redlining. People forget that AI echoes human failings.
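
To make that failure mode concrete, here is a minimal, purely hypothetical sketch (nothing like the institution's actual model): a toy classifier is fit to synthetic "historical approvals" in which past decisions were suppressed in certain zip codes, and it reproduces that bias for otherwise identical applicants. All feature names and numbers are invented for illustration.

```python
# Hypothetical illustration only: synthetic "historical approvals" encode a
# past bias against a redlined zip code; a model trained on them inherits it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
income = rng.normal(60, 15, n)            # applicant income, in $k (invented)
redlined_zip = rng.integers(0, 2, n)      # 1 = historically redlined area

# Historical decisions tracked income, but approvals were also suppressed in
# redlined areas regardless of the applicant's actual finances.
p_approve = 1 / (1 + np.exp(-(income - 55) / 10))
p_approve *= np.where(redlined_zip == 1, 0.4, 1.0)
approved = rng.random(n) < p_approve

X = np.column_stack([income, redlined_zip])
model = LogisticRegression(max_iter=1000).fit(X, approved)

# Two identical applicants who differ only by zip code:
print(model.predict_proba([[70, 0], [70, 1]])[:, 1])
# The second (redlined) applicant scores markedly lower: the model has
# faithfully "learned" the historical discrimination, not creditworthiness.
```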

Dr. Damien P. Williams, Magus

@brianonbarrington yup.

And at least THEY *stopped*. Some groups went right on ahead with it, just abstracted through proxy metrics

BrianOnBarrington

@Wolven We knew something was up when we did a Google Maps mashup of the mortgage data with a map of the City of Philadelphia and just looked at each other with eyes the size of saucers. When we brought the data back, the experiment was shut down immediately. The institution then started thinking about latent bias in underwriting so it came to a good result, but it also made me incredibly skeptical of the premise of AI-driven underwriting.

Captain Janegay 🫖

@brianonbarrington @Wolven Here's another similar story, this time using AI to flag people for welfare fraud investigation: wired.com/story/welfare-state-

Almost ten years ago I had a friend who was researching AI bias for his master's thesis. I had a sense that it was very prescient research but I couldn't quite get my head around what the real world implications would be. Welp!

Chip Butty

@Wolven @brianonbarrington another model that mathematically demonstrates historical prejudice? I do think this is the lesson we should be taking from AI research

Torbjörn Björkman

@brianonbarrington @Wolven Why would a financial institution even be interested in predicting what a human would do in the first place here? I thought the whole point of having computers work the maths for these things was explicitly to *cut out* human thinking.

BrianOnBarrington

@TorbjornBjorkman @Wolven The experiment was to see if underwriting could be largely automated; the data set used to determine underwriting quality was about 30 years of mortgage approval data from public records for mortgages that did not end up in foreclosure or public distress. Of course, human beings underwrote those mortgages so their bad habits became patterns that were emulated by the algorithm.
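
A side effect of building a training set that way, sketched hypothetically below: only loans that were approved and performed well ever appear, so applicants the human underwriters turned away (for fair reasons or not) contribute no counterexamples. The column names and records here are invented.

```python
# Hypothetical sketch of the selection problem in a training set built only
# from approved, non-distressed mortgages.
import pandas as pd

records = pd.DataFrame({
    "applicant_id": [1, 2, 3, 4],
    "approved":     [True, True, False, False],   # historical human decision
    "performed":    [True, False, None, None],    # outcome unknowable if denied
})

# Training data = approved AND performed well; denied applicants vanish.
training_set = records[records["approved"] & records["performed"].eq(True)]
print(training_set)   # only applicant 1 remains
# Anyone denied for biased reasons never shows up as a "good" example, so the
# model has no way to learn that those denials were mistakes.
```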

Torbjörn Björkman

@brianonbarrington @Wolven Ah, OK. An *attempt* to cut out the human, initially forgetting that you don't actually have any non-human data to fit to.

Are people often discussing the problem of time here, by the way? If you're fitting to 30-year-old data, you're fitting to a different society. 30 years ago is not a very reliable guide to what's going on in any specific neighbourhood (at least not here in Helsinki).

TinDrum

@TorbjornBjorkman @brianonbarrington @Wolven How else should an AI/algorithm be trained? If not on existing datasets (the larger the better) then how?

The cost of orienting a tool like this to an otherwise human task would likely be enormous, no? And also likely wouldn’t solve the problem.

The gap is in knowledge/understanding of human bias, which requires awareness/acknowledgement of bias.

How easy is it to assemble a team of engineers who are versed in such things? A team skilled in countering them might well require fundamentally diverse backgrounds and experience, but how does that kind of approach square with a typical management team or, indeed, the culture more broadly?

Seems a lot like a paradigmatic shift is required.

TinDrum

@TorbjornBjorkman @brianonbarrington @Wolven Or maybe some kind of audit process would work (clearly I have no expertise whatever).

It still seems a lot like the problem in most cases is acknowledging there’s a problem at all.

Torbjörn Björkman

@oscarjiminy @brianonbarrington @Wolven I think the paradigmatic shift needed is to not think that throwing mathematics at things necessarily helps.

And to regulate the ways in which risk-calculating businesses are allowed to take and spread their risks. Set boundary conditions for their optimization problems such that their solutions generate a sane outcome when aggregated.

TinDrum

@TorbjornBjorkman @brianonbarrington @Wolven Ok but that brings us back to excluding the tool altogether.

I don’t object to regulation, but I doubt that’s likely in the US, for example, anytime soon.

Torbjörn Björkman

@oscarjiminy @brianonbarrington @Wolven It could well be that it won't be very soon, agreed. But I still think that path will work sooner than waiting for US financial institutions to spontaneously develop sound thinking about the various risks they take on behalf of all of society.

BrianOnBarrington

@oscarjiminy @TorbjornBjorkman @Wolven That’s the big problem with FICO. Back during the Great Financial Crisis there was a reverse correlation between owners surrendering their homes to mortgage holders and FICO. Deep underwater borrowers with high FICOs were more likely to do the financially rational thing and walk away from their houses… not exactly what the predictive model intended. Now there are so many versions that it’s rather a joke.

BrianOnBarrington

@TorbjornBjorkman @Wolven Oh without question, old data will skew. But that’s how AI “learns.” Without a data set of “what to do,” it struggles to develop an outcome. I’m not an AI expert but at the time, “greenfield AI for underwriting” was science fiction. It probably still is. The most interesting innovation I’m aware of in the space was what SoFi did with student loans.

NilaJones

@Wolven

Every time I see a story like this, I think of the hundreds of people who have the same horrific experience, but don't have the connection to get a journalist to write an article

Amadi Lovelace

@Wolven this is where I live and I had no idea this was in play here but now I understand why so many CYS cases that should’ve been shut down from the jump were allowed to drag on for months and years. I’m sick. Absolutely sick.

Scott Matter

@Wolven

Not even a half shuffle step from there to straight up eugenics.

Angela

@Wolven Oh, fantastic, the mother in question has ADHD, like myself, one of the developers is here in NZ, and

"The developers have started new projects with child welfare agencies in Northampton County, Pennsylvania, and Arapahoe County, Colorado. The states of California and Pennsylvania, as well as New Zealand and Chile, also asked them to do preliminary work."

So good to know I'm going to be declared unfit any day now by a machine.

Angela

@Wolven (My son is autistic and highly sensory-seeking, he's basically a miniature Johnny Knoxville, and we've already had one hospital visit with a suspected concussion, so yeah, I think I have grounds to be worried, lmao, fuck...)

Alyssa

@seawall @Wolven as a leader in the tech space, a parent, and someone with some complicated issues, this is scary and deeply conflicting.

🇳🇿 💉💉💉💉Roger Parkinson

@Wolven no one talks about rule-based systems anymore, but they had a defined set of rules that could be read and, if necessary, argued about. I used to build loan approval systems with them. No racial or disability bias allowed; those rules just weren't coded in. They would be screamingly obvious if they were there. This case should have been handled by agreed rules, automated or not.
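
For contrast, a minimal, invented sketch of what such an explicit rule set can look like; the thresholds and field names are made up, not taken from any real system, but every rule sits in plain view where it can be read and argued about.

```python
# Hypothetical, auditable rule-based check: each rule is explicit and named,
# so any bias would have to be written down where everyone can see it.
from dataclasses import dataclass

@dataclass
class Application:
    monthly_income: float
    monthly_debt: float
    requested_payment: float

RULES = [
    ("total debt-to-income below 43%",
     lambda a: (a.monthly_debt + a.requested_payment) / a.monthly_income < 0.43),
    ("proposed payment below 31% of income",
     lambda a: a.requested_payment / a.monthly_income < 0.31),
]

def evaluate(app: Application):
    trace = [(name, rule(app)) for name, rule in RULES]
    return all(ok for _, ok in trace), trace

approved, trace = evaluate(Application(5000, 800, 1200))
print(approved)                        # True
for name, ok in trace:
    print(f"  {name}: {'pass' if ok else 'fail'}")
```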

bencourtice

@Wolven that is even more horrifying than the advertisement-smothered, unreadable APNews website

bencourtice

@Wolven (but more seriously: I worry that the weight of tech capital will mean that these kinds of crimes get brushed off by the techbros mumbling something about needing to tweak their algorithm, that it's almost there, we can't ditch it now after so much investment, etc., when actually the whole thing should be sent to the dustbin)

Steven Bodzin bike & subscribe

@Wolven You might want to follow Garance Burke, whose whole beat is algorithms

KLB

@Wolven ok writers. Here’s a novel or screenplay idea for you. Along the lines of #NeverLetMeGo by Ishiguro.

Nick Bathum

@Wolven
> real-world laboratory for testing AI-driven child welfare tools

Utterly nightmarish indeed.

Alakest

@Wolven @dweinberger is likely to have a useful take on this subject.

This article, "Alien Knowledge", from 2017 is a great intro to the dynamic and issues.

wired.com/story/our-machines-n

In a nutshell (my hot take): how do we treat sources of answers that are useful but arrived at in ways we can't (ever) practically check, and how do we insulate against pernicious bias, ulterior or inadvertent?

A.L. Blacklyn

@Wolven

Points are deducted for accessing health services, which means parents are penalized for trying to improve their families' lives.

No government official actually confirms that the parents are endangering the child (despite "innocent until proven guilty" and the presumption of human judgement) before stealing the child away into notoriously risky care.

Because of an automated program that reduced jobs?

Holy shit, this is horrifically wrong.
