Selena

@Cheeseness @jonny
There is such a thing as overdoing critical thinking: 'question everything' sounds good in theory, but I'd much rather work with someone who believes Wikipedia than someone who wants to constantly weigh it against other evidence (evidence from Facebook or Quora)

ChatGPT is a bit like StackOverflow in that it will turn up a lot of bullshit and half-truths, and it's up to the user to sift through that and find the thing that's potentially useful: you usually can't just copy-paste it

Cheeseness

@Selena @jonny To be clear about where I'm coming from, I'd be wary of working with anybody who doesn't even think of glancing over cited sources for further reading when processing the content of a Wikipedia article. Wikipedia exists to summarise knowledge, and observing others assume there isn't more to think about or learn is what makes me uncomfortable.

It's not quite "question everything" so much as "be interested/engaged with the stuff one is discovering."

Cheeseness

@Selena @jonny For synthesised text/images/whatever else, I can't imagine finding interest or value in it if I can't delve into the training corpus and think about how the nuances of that are reflected in the output.

AdeptVeritatis

@Selena @Cheeseness @jonny

ChatGPT is definitely not like StackOverflow. There are no consequences for wrong answers.

Nobody downvotes bad answers. Nobody gets points for helpful answers.
ChatGPT isn't held accountable for wrong information, and it doesn't show a history of its answers, so we can't check whether it has any reputation for a specific topic.
