Carl T. Bergstrom

We're so fucked.

"Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar shows that many are about applied, often controversial topics susceptible to disinformation: the environment, health, and computing. The resulting enhanced potential for malicious manipulation of society’s evidence base, particularly in politically divisive domains, is a growing concern."

misinforeview.hks.harvard.edu/

cobalt

@ct_bergstrom Oh wow, the GPT-fabricated papers are already a majority of those retrieved: "Roughly two-thirds of the retrieved papers were found to have been produced, at least in part, through undisclosed, potentially deceptive use of GPT. The majority (57%) of these questionable papers dealt with policy-relevant subjects (i.e., environment, health, computing), susceptible to influence operations. Most were available in several copies on different domains (e.g., social media, archives, and repositories)"

Bob Calder

@cobalt @ct_bergstrom
The policy implications are concerning; however, there are 440 retracted COVID-19 papers on Retraction Watch. AI is just part of the problem.

Faye

@Blob_Calder @cobalt @ct_bergstrom Well, indeed… The reason why we do what we do is the real problem. It's not only our endeavour to describe and understand our world, as scientists do, but the competitive relations we've created to sustain ourselves in doing so…
AI could help speed up data analysis, but we use it for _creating_ data. AI is our Frankenstein's monster. We've started to describe and understand our own creation.

Michael Busch

@cobalt @ct_bergstrom I note "A sample of scientific papers with signs of GPT-use found on Google Scholar was retrieved".

So a majority of the papers that were flagged as potentially fraudulent were in fact made using the text generator.

That is already quite bad enough, since the number of fraudulent papers indexed by Google Scholar with just a couple of obvious tells of ChatGPT is apparently >100.

Kid Mania

@504DR @ct_bergstrom
If you haven't watched (or read) "The Three-Body Problem"... first contact with an alien race takes place in Mao's China...
youtu.be/Ycs6JRx-pxk?si=ksFxzN

Kid Mania

@504DR @ct_bergstrom

Intriguing thought experiment...I sometimes wonder about that button.

504 Battery Dr

@clintruin @ct_bergstrom

Now I'm gonna have to go watch it again.

The perks of having an old, forgetful COVID brain - watching the same movie over and over again, and it's like the first time every time. 😏

John Timaeus

@ct_bergstrom

Is it time to find a proper Butler and begin a Jihad?

I always thought Herbert was a little over the top. But now I'm not sure. He may have been on to something.

maegul

@johntimaeus @ct_bergstrom

> Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.

> The target of the Jihad was a machine-attitude as much as the machines ... Humans had set those machines to usurp our sense of beauty, our necessary selfdom out of which we make living judgments.

It’s funny: both Tolkien and Herbert had strong strains of Luddism in their stories, and many AI nerds are likely fans.

chris martens

@johntimaeus @ct_bergstrom @maegul see also the unveiling of the Torment Nexus, inspired by the classic sci-fi novel "Don't Invent the Torment Nexus"

ˈdälfən™🐬 💥 🌊

@ct_bergstrom Yet academia has embraced it, especially in certain disciplines (like business).

Amy Petty

@dalfen @ct_bergstrom

Perhaps some sectors have, but I don't think it's fair to say that of academia as a whole. Most academics I listen to have been sounding the alarm about AI, not embracing it.

bhahne

@ct_bergstrom Does Google maintain and publish robust upload logging so that it's possible to determine what actors are flooding the ecosystem with this poison? Or is it just "a random paper has appeared, it came out of nowhere"?

Margret Kuarell

@ct_bergstrom We should start printing reliable sources on paper again to save them from AI corruption.

:blahaj: Why Not Zoidberg? 🦑

@ct_bergstrom I am becoming quite radicalized by this; I am starting to feel that in order to save society, we must classify AI development as terrorism.

Becinator

@WhyNotZoidberg @ct_bergstrom I'm a teacher at a university. I strongly believe that the emergence of AI/ChatGPT etc. is going to set back humanity's development by at least a decade or two. It seems that a lot of people are becoming reliant on it and losing the ability to think for themselves.

ShadowInTheVoid

@artdragon86 @WhyNotZoidberg @ct_bergstrom To be fair, the previous tech fads (especially crypto/NFTs) and the way social media has been weaponised by the far right have already set us back about 20 years.

Arcane Alchemist

@ShadowInTheVoid @artdragon86 @WhyNotZoidberg @ct_bergstrom The dynamics are similar. Many people think they are resistant to misinformation, but we don't see it, or don't want to see it, when it confirms our worldview. Add to that the fact that AI often spits out nonsense in an environment where people expect facts (a search engine), and you have the perfect feedback loop.

Steve

@artdragon86 @WhyNotZoidberg @ct_bergstrom Hot take: people who become reliant on AI couldn't think for themselves to begin with.

Tig3rch3n

@ct_bergstrom
This reads like The Three-Body Problem - sophons and the end of science o_O
But compared to that, the AI shit would be easier to mitigate...
Just stop publishing AI crap.

Pēteris Krišjānis

@ct_bergstrom While it is very disheartening, it doesn't change much in reality, does it? Like global warming - if you claim to be a skeptic, no amount of valid papers is gonna convince you. Same with other issues.
While cooking up "evidence" for a political agenda is nothing new, and I don't see how it's gonna change people's minds, it is saddening.

Amy Petty

@peteriskrisjanis Yes, yes it absolutely does. What kind of argument is this?

Carl T. Bergstrom

@peteriskrisjanis Systematic reviews. Search engines. Training data for future LLMs. Daubert decisions. Automated discovery. Need I continue?

altruios phasma

@ct_bergstrom

The issue here is human bias being proliferated at an exponential rate. These problems were here before. How many scientists in the '50s-'70s said lead in gasoline was fine?

People point at AI because it’s new.

tschenkel

@altruios @ct_bergstrom

I point at AI not because it's new, but because it exponentially increases the power of misinformation. Creating misinformation by hand is limited in speed and reach. Creating misinformation by automated means (of which AI is the most powerful) is essentially unlimited.

maybenot

@tschenkel @altruios @ct_bergstrom

this, this is the thing. It tilts the field even further, makes all the "old" problems worse, and the various ways in which they're made worse are cumulative.

The lie has not only gone around the world before truth managed to put on its shoes; it went around a dozen times and spawned two generations of offspring, all of them avid travelers.

altruios phasma

@tschenkel @ct_bergstrom

Automation and AI are not the same thing…

I generally agree with your point, just not exactly where your point is aimed.

The issues have been accelerating since 2016 and Cambridge Analytica.

altruios phasma

@tschenkel @ct_bergstrom

Automation has been going on much longer than AI.
2016 and Cambridge Analytica are where things started really ramping up. The political misinformation machines were already doing automated A/B testing.
The methodology then and now looks nearly identical: besides image generation (something that would probably take a few passes in Photoshop to clean up), I just don't see where the efficiency gains are compared to what was already happening.

Anne Ominous

@ct_bergstrom Honestly, add this to the many reasons. We stopped having functional societies in most places on Earth a long time ago, IMO.

Hermannus Stegeman

@ct_bergstrom Just a wild guess: these same papers produced by AI are used by LLMs to give answers to search questions? "Garbage in is even more garbage out"?

Simon Zerafa :donor: :verified:

@ct_bergstrom

All scientific publications and journals should immediately require a declaration from authors that no LLM/AI systems were used in the generation of results or the paper being published.

Strypey

@simonzerafa
> All scientific publications and journals should immediately require a declaration from authors that no LLM/AI systems were used in the generation of results or the paper being published

... and convincing evidence of a false declaration ought to be grounds for being banned from publishing by all credible academic journals, for at least 5 years, if not permanently.

@ct_bergstrom

Ken Hallenbeck

@simonzerafa @ct_bergstrom Many do, but only some authors comply. That casts doubt on all submissions, unfortunately. And, at least for now, we can only really identify LLM use when authors submit with obvious editing failures. That's what this paper caught - and what I document here as best I can:
peeraireview.com/search.html

The future is pretty simple... We have to view all papers starting in 2023 as potentially containing GenAI content, unless we know and trust the authors.

Benedikt

@ct_bergstrom
Here's an extremely brash example of such a paper (link below).
The problem is not only the spread of this 'content' in general, but also for-profit paper mills like #Frontiers which let that shit happen.
arstechnica.com/science/2024/0

Theriac

@ct_bergstrom@fediscience.org
I dunno - I think Google and any other corporate entity trying for content in perpetuum have fucked themselves.

As it stands, authors will email copies of their papers if asked - they want people reading their stuff more than they want people jumping through the hoops of the sites "publishing" the papers.

Academia will simply move away from publishing entities and companies that pump AI slop into everything.

Xela

@ct_bergstrom it's getting even harder to fight misinformation.
Yes, I think we're fucked. 😬

xs4me2

@ct_bergstrom

This is already a big problem. The next big thing after the IT revolution… or the next phase of it…

Paul Tourville

@ct_bergstrom Of course, no one could have seen this coming... cripes.

FPS DEV GUY

@ct_bergstrom You know what's bad? The fact that they are not reviewed properly. If they were, we wouldn't be here in the first place.

Carl T. Bergstrom

@tengkuizdihar Yeah, so read the article.

By and large this is not peer-reviewed literature.

Strypey

(1/2)

@ct_bergstrom
> Our analysis of a selection of questionable GPT-fabricated scientific papers found in Google Scholar

BTW I'm guessing this problem is even worse in China, where there have long been questionable attitudes towards academic publishing, influenced by decades of the CCP's MiniTru approach to information, and where training MOLE (Machine Operated Learning Engines) is even more hyped and funded, and less regulated, except for MiniTru purposes.

Strypey

(2/2)

"China continues to have problems with research integrity and has the highest number of retractions of any country due to plagiarism, invented data and fake peer review, but is seeking to improve this by removing cash incentives and use of plagiarism software."

#KenHyland, 2023

doi.org/10.1002/leap.1545

Nini

@ct_bergstrom It's poisoning the well; misinformation and ignorance are great for creating an underclass to exploit, one that isn't smart enough to argue back.

cwicseolfor

@nini @ct_bergstrom And there's that bit about reaching the point where the aim is no longer twisting or replacing the truth but destroying the notion of truth altogether.

Vanishing all signal in a total wave of noise means inviting all who fear, or are overwhelmed by, the responsibility of decision-making to amputate all higher mental function, reason, and values, offering carte blanche to run with whatever is most gratifying, easiest, most comfortable. To become livestock.

Marcello Seri

@ct_bergstrom and that is just the beginning. Have you seen this? sakana.ai/ai-scientist/ (link to arXiv is also in the post)

craignicol

@ct_bergstrom funny how the people who have the most money in AI are also the ones who have funded or made money from disinformation 🤔

Antoine Alberti

@ct_bergstrom OK, so not only do LLMs suck more energy than we can decently produce, and destroy the internet by flooding it with more BS than we can debunk, but their next target is now science.
Of course, LLMs are not a sentient thing trying to fuck with humanity. We are doing this to ourselves, with enlightened consent.
