Has anyone written about how textual generative AI feels strangely close to toxic masculinity in some respects? The absolute confidence in everything stated, the lack of understanding of the consequences of getting that confidence wrong for important questions, the semi-gaslighty feeling when it “corrects” itself when you call it out on something. It so often feels like talking to someone one would despise and avoid in “real life.” I’m curious if anyone did some writing on this.
@mwichary Yes, there are studies about the social and ethical impact of biased #AI #LLM #generativeAI, especially regarding masculinism, racism, and homophobia. It's a fact that the popular models are trained mainly by men (with the #SiliconValley "philosophy"***) on male-dominated content. The latest is this #UNESCO study: https://cepis.org/unesco-study-exposes-gender-and-other-bias-in-ai-language-models/
This test became quite well-known in 2023: https://rio.websummit.com/blog/society/chatgpt-gpt4-midjourney-dalle-ai-ethics-bias-women-tech/
#bias #biased #GenderBias #gender #AIEthics
@mwichary Well yeah, if there's one thing that embodies LLMs it's speaking complete fictions with the unearned confidence of a mediocre white man.
@mwichary When a large part of the conversational corpus of LLMs is taken from places like Twitter and Reddit, then their ... flavour of discourse is the output.