jonny (good kind)

@Cheeseness I think there's a genuine mental model disconnect, where a (surprisingly to me) reasonably large number of people who aren't in any sort of "mega pro-AI" ideological camp, but are just regular tool-using people, don't see it as being any different from any other information source like StackExchange or Wikipedia.

jonny (good kind)

@Cheeseness like, they would never imagine themselves actually logging in and writing anything down on any of those other websites, so their social reality is totally meaningless to them. they're just always there and have always been there. having to type a question into ChatGPT is about as different as typing a question into Ask Jeeves was from typing an abbreviated keyword query into Google

Cheeseness

@jonny 100% agree.

It also makes me a little uncomfortable about the role Wikipedia or StackExchange play within culture, in terms of critical thinking vs. just taking whatever's there at face value without any thought or consideration.

jonny (good kind)

@Cheeseness for the public record, in case of any federation weirdness: i typed this at the exact same time. mind twinning neuromatch.social/@jonny/11132

Cheeseness

@jonny Both messages show the exact same timestamp at my end :D

jonny (good kind)

@Cheeseness oh wait, i forgot i mentioned stackexchange and wikipedia in the first post, and thought we had both come up with those as examples of informational institutions we were uncomfortable with for this exact reason. so not as one-in-a-million as i was thinking, but still, yes, mind twins

Selena

@Cheeseness @jonny
There is such a thing as overdoing critical thinking: 'question everything' sounds good in theory, but I'd much rather work with someone who believes Wikipedia than someone who wants to constantly weigh it against other evidence (evidence from Facebook or Quora).

ChatGPT is a bit like StackOverflow in that it will turn up a lot of bullshit and half-truths, and it's up to the user to sift through that and find the thing that's potentially useful: you usually can't just copy-paste.

Cheeseness

@Selena @jonny To be clear about where I'm coming from, I'd be wary of working with anybody who doesn't even think of glancing over cited sources for further reading when processing the content of a Wikipedia article. Wikipedia exists to summarise knowledge, and observing others assume that there isn't more to think about/learn is what makes me uncomfortable.

It's not quite "question everything" so much as "be interested/engaged with the stuff one is discovering."

Cheeseness

@Selena @jonny For synthesised text/images/whatever else, I can't imagine finding interest or value in it if I can't delve into the training corpus and think about how the nuances of that are reflected in the output.

AdeptVeritatis

@Selena @Cheeseness @jonny

ChatGPT is definitely not like StackOverflow. There are no consequences for wrong answers.

Nobody downvotes bad answers. Nobody gets points for helpful answers.
ChatGPT isn't held accountable for wrong information, and it doesn't show a history of its answers that would let us check whether it has any reputation for a specific topic.
