Alex Wild

Several of us overly online biologists spent years quietly doing an experiment on Twitter, trying to find out if tweeting about new studies from a set of mid-range journals caused an increase in later citations, compared to a set of untweeted control articles.

Turns out we had no noticeable effect; the tweeted papers were cited at the same rate as the control set.

Our paper, headed by Trevor Branch, was published today in PLOS One:

#SciComm #Twitter #X #Science

journals.plos.org/plosone/arti
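[A minimal sketch of what a tweeted-versus-control citation comparison can look like, using made-up citation counts rather than the study's data; the rank-based test is just one reasonable choice, not necessarily the paper's method.]

```python
# Minimal sketch of a tweeted-vs-control citation comparison.
# The counts below are made-up placeholders, not data from the study.
from scipy.stats import mannwhitneyu

tweeted_citations = [3, 0, 7, 2, 5, 1, 4, 6, 2, 3]   # hypothetical citations per tweeted paper
control_citations = [2, 1, 6, 3, 4, 0, 5, 5, 1, 4]   # hypothetical citations per control paper

mean_tweeted = sum(tweeted_citations) / len(tweeted_citations)
mean_control = sum(control_citations) / len(control_citations)
print(f"mean citations, tweeted: {mean_tweeted:.2f}, control: {mean_control:.2f}")

# Citation counts are skewed, so a rank-based test is a common conservative
# choice for asking whether the two groups differ.
stat, p = mannwhitneyu(tweeted_citations, control_citations, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```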

34 comments
millennial falcon

@alexwild makes sense. twitter not exactly famous for empirical information ay.

Alex Wild

We were able to increase the number of views and downloads those papers got, though. We could get eyeballs on the science.

I guess you can lead a horse to paper, but you can't make him cite.

Alex Wild

The good news is, if you thought Twitter's descent into Musk-filled madness might be detrimental to your efforts to get other scientists to cite your work, fear not. In this regard, Twitter was not actually that useful.

Alex Wild

There have been several studies showing that highly tweeted papers are also highly cited.

I think that's right. But not because tweeting causes citations.

In light of our results, it seems more likely that both social media communicators and publishing scientists recognize impactful work when they see it. Good science just gets talked about more, regardless of the medium.

I find it quite reassuring that scientific research impact can't easily be gamed by social media.

aggualaqisaaq🇦🇶

@alexwild

It's worth recalling that, as NPR discovered a few months ago, Twitter turned out to be pretty useless for journalists as well. And I wouldn't be the least bit surprised to learn that the same holds for all the businesses and advertisers who left.

So, really, there's no reason for anyone to keep patronizing that fascist sh**hole!

niemanreports.org/articles/npr

Nathan Lowell

@alexwild

The problem with social media is that it's soylent green all the way down.

Janne Moren

@alexwild
So, people who would cite your paper will usually find it without any social media self-promotion on your part. That'll be a relief for quite a few people.

David Benfell, Ph.D.

@jannem @alexwild I would expect most scholars to perform their literature searches using databases at academic libraries. This work doesn't confirm that but it's what students are taught to do.

Ron Parsons

@alexwild
I use social media to see interesting papers outside of my usual reading. So I'm probably accessing things I may never cite in a formal paper, but it helps in other ways (teaching, general knowledge, etc.).

Michael Emerman

@ronpar @alexwild Yes, totally agree with this. Social media is better for finding papers outside my expertise that I would not otherwise notice, but would never cite since they are not in my area. Things in my field, I would see anyway.

Climate Jenny 2.0

@alexwild My eyeballs appreciate the social media posting. It does nothing for anyone’s science career, but it does improve science communication.

Starraven

@alexwild
While worthwhile for the evidential data, I really would never have considered that people who need to cite a scientific article would be looking for it on Twitter.

Dan Goodman

@alexwild brilliant work, thank you for doing this experiment! I'm wondering if you might expect a different result for tweets of your own work rather than someone else's? The reasoning being that your followers are more likely to be the sort of people that might cite your work in future than the followers of a general science communicator. I would for sure argue that for most people the majority of their followers are unlikely to cite, but for your own work you might be reaching the critical audience. One of the reasons I'm on twitter as a follower rather than tweeter is that I see papers from people in my extended community that I wouldn't have seen in the journal that it got published in. It would seem surprising if this effect was totally negligible given that twitter is the source of a substantial fraction of the papers I read. (Although less recently, the quality of twitter really has noticeably declined of late.)

Michelle

@alexwild @jd Thanks for sharing! I've got it open in a to-read tab!

Steve Gisselbrecht

@alexwild
This is wonderful. Everything about it.

Dr. Evan J. Gowan

@alexwild In my experience, tweeting/posting about other people's studies rarely gets a ton of attention (in terms of likes and retweets and equivalents), at least compared to posting about my own studies. People are maybe more likely to remember papers posted on social media if they have a closer connection to the author. I definitely felt that in the response to some of my studies. I am still going to post about papers I read and find interesting, though. ;)

Scott Matter

@alexwild

Nice one! Really cool study and interesting result.

It’s helpful to think about where and how to focus my efforts in relation to the kind of metrics my institution will recognize and value.

Tangent: any recommendations on studies that look at other forms of impact (i.e., harder-to-measure things like collaboration, non-publication outcomes, etc.)?

Cameron Neylon

@alexwild

Nice work! This takes me back to speculative musings on the time domain behaviour of these interventions (hdl.handle.net/20.500.11937/32, your Figure 2 made me think of my Figure 4)

You've really captured the immediacy of the viewing effect and I'm wondering whether a citation effect might be clearer if analysed in a more time dependent way rather than at a three year census point...

...but you've given us the necessary information to make that analysis possible, which is fabulous! (whether I have the time is another question)

The other question I've got is whether the citations might show greater diversity (reaching a wider range of scholars) because they are coming through a set of followers that might have wider geographic or disciplinary diversity. And we can test that as well! (same caveats apply...)
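[A rough sketch of the time-resolved view suggested here: cumulative citations by month since publication for the two arms. The events table is hypothetical; the study's data would first need to be reshaped into this form.]

```python
# Sketch of cumulative citation curves by month since publication, tweeted vs control.
# The events table below is hypothetical, not data from the study.
import pandas as pd

events = pd.DataFrame({
    "month": [2, 5, 5, 8, 11, 14, 3, 6, 9, 9, 12, 15],   # month of each citation, post-publication
    "arm":   ["tweeted"] * 6 + ["control"] * 6,           # which experimental arm the cited paper was in
})

monthly = (events.groupby(["arm", "month"]).size()
                 .unstack(fill_value=0)                          # arms as rows, months as columns
                 .reindex(columns=range(0, 37), fill_value=0))   # pad out to a 36-month window
cumulative = monthly.cumsum(axis=1)   # running citation total per arm
print(cumulative.T)                   # month-by-month curves, ready to plot or compare
```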

John Towse

@alexwild @cameronneylon
Yes, this makes sense…
There are direct citations (citing X because I'm replicating X / extending X by taking the next step / assimilating X into a theory) and there are more indirect citations (citing X because it's interesting & cool & maybe it can link to these data Y). Social media might be expected to pull more of the latter, but evidently not noticeably so. A deeper dive into the non-significant citation gain might examine this diversity.

Cameron Neylon

@johnntowse @alexwild The other point is that using a bigger citation data source might give a different result if there is a real effect but the effect size isn't huge and the statistical power not quite there. That's another thing that would be relatively easy to test with OpenCitations and the DOIs (I'll put it on the list...)
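[For anyone curious what that OpenCitations check might look like, a rough sketch that asks the COCI index for a citation count per DOI. The endpoint path is assumed from the COCI REST API and should be verified against the current docs; the DOI is a placeholder, not the paper's.]

```python
# Rough sketch of pulling a citation count for a DOI from OpenCitations (COCI index).
# The endpoint path is assumed from the COCI REST API; check the current docs before relying on it.
import requests

COCI_COUNT = "https://opencitations.net/index/coci/api/v1/citation-count/{doi}"

def citation_count(doi: str) -> int:
    """Return the number of citing works COCI records for a DOI (0 if none found)."""
    resp = requests.get(COCI_COUNT.format(doi=doi), timeout=30)
    resp.raise_for_status()
    data = resp.json()
    return int(data[0]["count"]) if data else 0

print(citation_count("10.1371/journal.pone.0000000"))  # placeholder DOI, not the paper's
```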

John Towse

@cameronneylon @alexwild
The issue of power is discussed in the paper of course, but I am sympathetic to the argument that the effect size is not that impactful (even if it exists at the population level, it's not making much difference for the individuals who do or don't tweet about their papers).

Cameron Neylon

@johnntowse @alexwild

Agreed, my counter would be that in many of these cases the distribution of effects amongst individual outputs is wild, so effect sizes may look small on average but the effect when it happens can be quite large. And I would always have expected any effect to be large, but only for a subset of papers.

Obviously randomised controlled trials like this tend to smear some of those effects out by design.

I feel that a Hidden Markov Model or time domain analysis would ultimately help in understanding the underlying pathways. But I also get that those approaches tell us about probabilistic associations, not causality - which is where the approach here is strong

And all of that said your main point is well supported - that for any specific paper, being tweeted about doesn't (didn't?) lead to significantly more citations on average

John Towse

@alexwild @cameronneylon
Absolutely, these are really interesting questions to think about in response to a clever paper. (And in the meantime those who stay away from social media / certain social media can modulate their FOMO!)

llewelly

@alexwild
interesting. I think while on twitter I followed about 8 of the 11 authors, maybe more. (But, I am not a scientist of any kind.)

John Towse

@alexwild Nice read, thanks!
I’ve sometimes wondered about the value of the research conference circuit in getting others to know about work (an offline analogy to this), thinking about the availability heuristic (if you can recall something more easily it factors into your decision making). However, maybe the degree of engagement is different anyway (listening to a talk vs the time taken to retweet)?

Furqan Shah

@alexwild The elephant in the room is "if there are -big name- co-authors on a paper or not". If all the authors are relatively obscure/lesser-known names in their respective field(s), nobody is going to take their science seriously (no matter how groundbreaking or robust). And if so, these papers will get cited on merit, *if* they really must be cited, and *if* someone has been able to reproduce the findings. With information overload, the value of individual papers tanks substantially. Sorry!

Christopher Kyba

@alexwild It's a neat experiment, but the presentation suffers from a classic problem: the results are consistent with an effect, but it's presented as if there is none, because it's not "statistically significant".

Since the CI overlaps zero with a mean 12% citation increase, it likely also overlaps a 25% increase in citations. "Tweeting has no noticeable effect" and "Our results are consistent with tweeted papers having 1/4 more citations" are wildly different presentations of the same result.
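[A numeric illustration of that point, using made-up totals rather than the paper's numbers: a +12% point estimate whose 95% interval contains both "no effect" and roughly a 25% increase.]

```python
# Made-up totals chosen so the rate ratio is about 1.12; these are not the paper's numbers.
import math

tweeted_cites, tweeted_papers = 560, 550
control_cites, control_papers = 500, 550

ratio = (tweeted_cites / tweeted_papers) / (control_cites / control_papers)
# Wald interval for the log of the ratio of two Poisson totals.
se_log = math.sqrt(1 / tweeted_cites + 1 / control_cites)
low = math.exp(math.log(ratio) - 1.96 * se_log)
high = math.exp(math.log(ratio) + 1.96 * se_log)
print(f"rate ratio {ratio:.2f}, 95% CI [{low:.2f}, {high:.2f}]")
# Prints roughly: rate ratio 1.12, 95% CI [0.99, 1.26], an interval consistent
# both with no effect at all and with about a 25% increase in citations.
```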

Craig Aaen Stockdale

@alexwild Thank you for doing this! What about other types of impact? Citations in policy docs, for instance?

levampyre

@alexwild I wonder if it would be the same for posting on Mastodon. My feeling is that there are more science-interested folks around here.
