72 comments
publictorsten

Side note on this story: the original photo actually had the desired framing. I suspect the image was cropped at one point, and then a different aspect ratio was suddenly wanted at another.

That is senseless, and thus probably the predominant future use of AI: creating unnecessary tasks and hoping nothing goes too badly wrong.

Florian Idelberger

@publictorsten you don't have to guess, it says so right there

Niko Trimmel :veriqueer:

@fl0_id @publictorsten The underlying problem is not that unusual: person A receives the photo for the website, crops it to a format, and either never files the original or buries it in an impenetrable folder structure. Person B, who does social media, never gets to see the original. These are organizational problems you will find in most larger organizations.

Nfoonf

@publictorsten Two years ago: "Guys, who put the Pornhub full backup on the drive with the model training data?"

happyborg

@publictorsten Pretty poor that, in examining the photo, she didn't notice this and think: oops, not like that.

Tom Stoneham

@publictorsten What is so shocking about this is *not* that the genAI did exactly what it is designed to do, namely create the most likely looking image, but that the person who used it switched off their brain at the sight of the words 'AI'.

Before AI, if a social media person had had to do a major edit of a photo which left them deciding 'bra or no bra', they would have asked the sitter. Now that they use genAI to be 'more productive', they have stopped doing their own job properly.

Konrad Rudolph

@tomstoneham @publictorsten I am noticing this a lot: GenAI being used as an excuse/accelerator for a massive erosion of professionalism, with the excuse of “increased productivity” (“getting things done”).

Matthias Krämer

@klmr @tomstoneham @publictorsten That seems to be what it's made for: Productivity without responsibility.

MylesRyden

@klmr @tomstoneham @publictorsten

This is in keeping with the TechBro mantra, "Move fast and break things."

Which is how we have quickly gotten to a world full of broken things.

tyx

@klmr @tomstoneham @publictorsten
That's my favorite joke, from the typewriter era long before AI.
HR, interviewing a candidate for a secretary position:
"You write in your resume that you can type 1,250 words per minute. That's a lot. How did you achieve this?"
"That was rather easy, I just don't care what gibberish comes out."

G. Allais

@tomstoneham @publictorsten I would personally not take for granted that, taking a professional shot of a woman as its input, "the most likely looking image" is one that sexualises her.

There are built-in biases and the blame does not solely fall at the feet of whoever is in charge of socials. The people who built the machine to do that and marketed it as a magic tool everyone should use have a huge responsibility.

Tom Stoneham

@gallais @publictorsten Good points.

1) I was assuming that the genAI was picking up on images you see everywhere of a style of female business attire we might call 'smart sexy', a distant echo of sex-positive feminism. Presenting like that is a personal choice made by both men and women (e.g. Obama with two shirt buttons undone).

2) I agree absolutely. The purveyors of this crap are trying to make people less good at their jobs so as to make the AI indispensable.

petrosyan

@publictorsten Not only did she not consent to this edit; her photo was also uploaded to an AI tool without her consent. I really don't know which is worse.

Kathleen

@petrosyan @publictorsten Sexualizing her photo was a special touch. Honestly, it's essential to fire the individual who did this.

Ben Evans

@KathleenC @petrosyan @publictorsten The point is that no human did this - the AI tool that the social media person used did it "automatically".

It's an example of the harmful biases ("pictures of women should be sexually suggestive") encoded into every one of these damn things, and why none of them are fit for purpose.

Yann 不停 Heurtaux :antifa:

@kittylyst @KathleenC @petrosyan @publictorsten The social media person was left on their own, not trained enough, not supervised enough. Fire them, and most of all their manager.

Dave "Wear A Goddamn Mask" Cochran :donor:

@shalf @kittylyst @KathleenC @petrosyan @publictorsten I understand where you're coming from, and you are right to be upset. This was not an act of malice; it was addressed professionally and responsibly by management once they became aware of the situation, and - based solely on the original post, since that's all the context I have - the injured party appears satisfied with the outcome.

If we fired people for not noticing things that IT tools do, every time it happened, there would literally not be anyone left working in IT. No one in their right mind thinks an "expand image" plugin is going to sexualize the image's contents; would you punish someone for printing flyers that have hidden metadata from the printer (snopes.com/fact-check/househol) in them?

The correct course of action, imo, is to start exactly as they've done by removing the offending image (and apologizing to the injured party), and to then use that as an educational moment so that it doesn't happen a second time.

Now, if it DOES happen a second time, fuck 'em. Unless/until, though, let people learn from their mistakes.

Yann 不停 Heurtaux :antifa:

@dave_cochran Alright. Give the social media worker a solid warning and better training. Fire their manager.
To be clearer: reputational and legal risk are way underestimated in this industry. That has to change. Yes, that means people at the responsibility and strategy level who think AI is harmless and allow or promote its unsupervised use, mostly for cost reasons (while ignoring the externalities), losing their jobs.

Dave "Wear A Goddamn Mask" Cochran :donor:

@shalf have you seen this report? nngroup.com/articles/computer-

Admittedly, it's a few years old now, but I'd bet all the money in all of my pockets against all the money in all of yours that the data would be essentially the same if it were done again today.

The short version is that people, as a group, are WAY worse at using computers than people, as a group, think. Like, by virtue of having and using a Mastodon account at all, you are probably in the top 20% or so of computer users worldwide.

The point being: punishing people for things they cannot reasonably be expected to know about, or even to know that there is ANYTHING TO LEARN about, is counterproductive at best and actively harmful at worst.

It's probably worth keeping in mind that we're talking about the folks running a conference here, and not, like, Facebook. If anything, this could become a phenomenal talk at the very conference it happened at since they're talking about UI/UX stuff (if I'm remembering the OP correctly)!

I just don't think that people acting in good faith should be penalized for things that they had absolutely no reason to think would cause harm, y'know?

Yann 不停 Heurtaux :antifa:

@dave_cochran “Absolutely no reason to think it would cause harm”, in 2024, in a design-oriented communications and events community and business. Alright. Let me elaborate: it was the manager's job to a. have heard and learned from Monteiro, e.g. 11 years ago (see Webstock 2013, iirc), and b. have heard and learned from e.g. the stories and work of Gebru et al. Defending workers is right, hence my leaning towards your point. But managers are paid to know better. It's about time for some accountability in those fields.

Yann 不停 Heurtaux :antifa:

@dave_cochran To be honest and fair: if I were left in charge, I would also fire producers of indoor conferences that have neither mask mandates nor best-in-class air renewal and filtering infrastructure and practices, for being stupid and having learned nothing. That's to explain how I approach taking care of people attending events, or producing/volunteering/speaking there. YMMV.

Yann 不停 Heurtaux :antifa: replied to Dave "Wear A Goddamn Mask" Cochran :donor:

@dave_cochran Then you cannot fail to see the issue with a woman speaker being sexualized through the automated editing of a picture she provided but never consented to have edited, never being shown the edited version, and nothing internally preventing that edit from a. happening, b. not being thrown away in due time, or c. ever being published.

How is it taking good care of your speaker community? How can it not show failure at a high level of core biz competencies? How can someone in charge not be fired?

Kathleen

@dave_cochran @shalf @kittylyst @petrosyan @publictorsten

Nope. No excuses for creating and posting a SEXUALIZED photo of this individual without permission. No excuses. This isn't "not noticing every single piddly thing IT tools ever do," you straw man you, this is altering the professional photo of a woman to SEXUALIZE IT.

Wake up.

Kathleen

@dave_cochran @shalf @kittylyst @petrosyan @publictorsten Yeah, because "hidden metadata" is exactly the same as showing a faked image of what they pretended were her BREASTS.
Do you even listen to yourself?

jelte

@kittylyst @KathleenC @petrosyan @publictorsten the change itself is such an example, yes, but the mistake is using the tool in the first place.

Rich Felker

@kittylyst @KathleenC @petrosyan @publictorsten A human did do it. A human decided to use the shit "AI" tool, and uncritically accepted the result that came out of it for use in the material they were posting.

What YOU are doing is exactly the reason bad people love "AI". You are failing to account for the human agency of how the tool was used, and laundering the blame onto the tool that can't be held accountable.

Amoshias

@kittylyst @KathleenC @petrosyan @publictorsten a human chose to use the tool.

You can't wash bad decisions by saying "an AI tool did it."

Karsten Johansson

@kittylyst @KathleenC @petrosyan @publictorsten The person who "did this" was the person who accepted the AI output even with the most obvious difference from the original.

AI is not to blame. Blindly accepting what it produces is to blame. There is a whole team that should have caught that at some point, but the one who specifically did the action should definitely be raked over the coals.

We've taught an entire generation to check their Wikipedia-found data before using it. Why did this change when it comes to AI output?

Jon Het-CIS.

@KathleenC @petrosyan @publictorsten It was done by A.I. There is no one to fire. I guess you could fire a random person who ordered the A.I. to modify the photo to fit the space, or the team that programmed the A.I. I am not sure that would help anything, though.

Susan Kaye Quinn 🌱(she/her)

@Jon_Kramer if you start holding people accountable for using AI and causing harms with it, I guarantee you that will "help"

@KathleenC @petrosyan @publictorsten

Rich Felker

@Jon_Kramer @KathleenC @petrosyan @publictorsten C&P'ing the exact same reply I gave to someone else with the same bad argument:

A human did do it. A human decided to use the shit "AI" tool, and uncritically accepted the result that came out of it for use in the material they were posting.

What YOU are doing is exactly the reason bad people love "AI". You are failing to account for the human agency of how the tool was used, and laundering the blame onto the tool that can't be held accountable.

Becky

@Jon_Kramer @KathleenC @petrosyan @publictorsten Sure, but someone used the AI tool and would have (or should have) reviewed its output. That person should be held accountable.

It’s one thing to use an AI tool, but you can’t just shrug when it does something in your name.

Jon Het-CIS.

@RenewedRebecca @KathleenC @petrosyan @publictorsten Have YOU looked at the output? When you do, compare it to the input and tell me if you can figure out what is wrong. Remember, you can't refer to the original uncropped photo. Just the cropped photo and the AI-enhanced photo.

Becky

@Jon_Kramer @KathleenC @petrosyan @publictorsten Did the person involved have both the cropped photo and the enhanced one? Did the cropped photo have the invented details? Why is this hard for you?

Jon Het-CIS.

@RenewedRebecca @KathleenC @petrosyan @publictorsten She had the cropped photo. The artist, through the use of A.I., created the 'enhanced' version. These details are not in dispute, or in any way 'hard'.

Becky

@Jon_Kramer @KathleenC @petrosyan @publictorsten Here’s the thing… she was responsible for the photo. If using AI makes it impossible to verify the details in it, AI shouldn’t be used.

Markus Sugarhill :breadpats:

@Jon_Kramer The photo should not have been uploaded to any AI tool without consent (and I seriously doubt it was done on-premises). This is where the wrongdoing started. @KathleenC @petrosyan @publictorsten

Auld Ma Twəgg

@Jon_Kramer @KathleenC @petrosyan @publictorsten the person who used the tool is responsible for its output. This is how it has always worked. When I was making videos for corporates I wouldn't be able to get away with using random stock footage, much of which is highly sexualised. I couldn't just say "this is what the shutterstock search returned". This is obviously absurd.

Jon Het-CIS.

@quinsibell @KathleenC @petrosyan @publictorsten LOOK at the output as a standalone piece, and tell me what is objectionable about it. The phrase "highly sexualized" indicates to me that you have not seen the photo.

I think many people commenting on this thread have not seen the photo, and therefore fail to understand what this discussion IS. This leap to vilify a woman who was just attempting to do her job the best way she could is... petty.

And I am done with it.

Auld Ma Twəgg

@Jon_Kramer The woman whose face it was objected to it. At the very least, edited photos require approval. Like I said, I couldn't get away with this conduct prior to AI image generators. Someone selected and posted the image. They are responsible.

Jon Het-CIS.

@quinsibell Great. Now go look at the picture and tell me what is objectionable.

Don't get back to me, I'm done with this idiotic conversation about nothing.

Auld Ma Twəgg

@Jon_Kramer it doesn't matter what I consider objectionable. A photo was edited and distributed in a way that the person in the photo objected to without consultation.

YankeeDoodleAndy

@petrosyan @publictorsten There needs to be more conversation around this IMO.

Kilian Evang

@publictorsten Side note: "respectable guy with 5 kids at home", dafuq?

Matthias Krämer

@texttheater He's not a childless cat man. And now we know what that can communicate about men.

Karsten Johansson

@Kraemer_HB @texttheater This is a version of the Republicans' recent idea that a woman is useless if she didn't bear her own children.

Somehow having children bestows magical qualities on the parents, that are unavailable to everyone else.

siona

@publictorsten kinda sad that AI is now trying to undress us all :P

PocketHunk🏳️‍🌈

@publictorsten
A good graphic designer or photographer knows how to add length without AI. One of my tasks working for conference clients was working magic with terrible headshots, all before AI could help.

Steven Sandoval

@publictorsten It's wild what proprietary softwareland is getting up to these days.

For I am CJ :screwattack: :black_sparkling_heart: :screwattack:

@publictorsten

It is almost assuredly no coincidence that the kind of information that ML model was trained on also happens to reflect the viewpoints of the many toxic people who are (organizational) 'Leaders' in this area

small, primitive, wingless

@publictorsten that's.... certainly an informative presentation on UX + AI!

Mark Connolly 🍻 🚴🏼‍♀️ (he, him, his)

@publictorsten @EricLawton What the actual fuck?!? I’ll add that one of my proudest moments was when Elizabeth spoke at Fluxible, and did so with her recently-born baby in a carrier strapped to her front. As someone who has organized UX events, it’s profoundly disappointing to see that this photo shit happened to such a wonderful person.

Miakoda

@publictorsten
@Kishi

So programmers have taught AI the male gaze?

Jima :Compromise_bi_flag:

@hellomiakoda @publictorsten @Kishi Jesus.

The expression in your profile pic really summarizes what should be the general response to this news. 😒

kasperd

This makes me wonder where they got their training material.

TransitBiker

@publictorsten the unethical nature of this stupid technology on full display here.
