Faith has purple hair! :v_tg: :v_lb:

@Impossible_PhD While the study certainly needs to be repeated, the sample size isn't that bad given how definitive the results are. One of the funny things about statistics is that you really only need a sample size of 30-35 to get very precise results IF it's actually a random sample. That's a load-bearing "if", though. The randomization in a double-blind design should ensure that the two groups are randomly drawn from within the test group as a whole. The bigger question is whether the people who were desperate enough to sign up for HRT under those circumstances are representative of the trans-masc community at large. I'm not the right person to answer that question, so I won't speculate.
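
To make that concrete, here's a toy power simulation. The effect size (one standard deviation) and the noise are invented purely for illustration; nothing here comes from the actual paper.

```python
# Toy power simulation: with genuinely random assignment and a large
# effect, ~30 per group is already enough. Effect size (Cohen's d = 1)
# is a made-up illustration, not a number from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, hits = 30, 1_000, 0
for _ in range(trials):
    control = rng.normal(loc=0.0, scale=1.0, size=n)  # null group
    treated = rng.normal(loc=1.0, scale=1.0, size=n)  # large, "definitive" effect
    _, p = stats.ttest_ind(treated, control)
    hits += p < 0.05

print(f"fraction of trials reaching p < 0.05 at n={n}: {hits / trials:.2f}")
# ~0.97: a small but genuinely random sample reliably detects a big effect
```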

Still, it's a spectacular and encouraging result. Hopefully other research groups and other hospitals will repeat the experiment. Then hopefully regulators will listen and just give the trans dudes their T when they ask for it. 🤞

@KevinLikesMaps

Doc Impossible

@faithisleaping @KevinLikesMaps Oh yeah. I've done my own work with randomized samples, so that's definitely the case--it's just that the difference in strength between a 67-participant study and a 670-participant study is much more than an order of magnitude, in terms of confidence.

Faith has purple hair! :v_tg: :v_lb:

@Impossible_PhD Actually, no, it's not. That was kinda my point. For what the study is trying to show—a reduction in suicidal ideation in trans-mascs when they start T—you don't actually need a big sample size. You could run that study with 100k participants and it wouldn't meaningfully increase your statistical confidence. I know that doesn't seem to make sense on the face of it, but it's true. Statistics can be surprising sometimes.
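
Here's the back-of-the-envelope version, assuming a hypothetical true effect of one standard deviation: the standard error of a two-group difference shrinks like 1/sqrt(n), so once the effect sits several standard errors from zero, more participants don't change the verdict.

```python
# Standard error of a two-group mean difference shrinks like 1/sqrt(n).
# The assumed effect of 1 SD is illustrative, not from the study.
import math

effect = 1.0  # hypothetical true effect, in standard-deviation units
for n in (35, 67, 670, 100_000):
    se = math.sqrt(2 / n)  # SE of the difference, both groups with sd = 1
    print(f"n={n:>6} per group: SE={se:.4f}, effect sits {effect / se:5.1f} SEs from null")
# Past a few SEs, the verdict is already in; 100k just polishes decimals.
```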

If you're asking a question of the form "If I do X on a Y, does Z happen (vs. an appropriate null)?" where you can quickly and reliably observe Z, you don't need a big sample size. You need an actual random sample of Y—which is sometimes easier said than done. In fact, "OMG look at my sample size!" is one of the classic tools used to lie with statistics. A crap study with a million participants is still crap. A good study with 50 is better.
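
A toy demo of that load-bearing "if", with an invented population and an invented selection bias:

```python
# Toy demo: a biased sampling rule with a huge n is confidently wrong,
# while a small random sample is merely noisy around the truth.
# The population and the bias rule are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
population = rng.normal(loc=10.0, scale=2.0, size=1_000_000)  # true mean = 10

random_small = rng.choice(population, size=50, replace=False)
biased_huge = population[population > 10.0][:500_000]  # only "above average" people enroll

print(f"true mean:         {population.mean():6.2f}")
print(f"random, n=50:      {random_small.mean():6.2f}")  # noisy, but unbiased
print(f"biased, n=500,000: {biased_huge.mean():6.2f}")   # precise and wrong (~11.6)
```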

So why did the drug companies run 50k-person trials on the COVID vaccines? Why not run them on a few dozen and call it good? Two reasons:

1. Safety. The study size required to prove the vaccine works isn't huge (still bigger than 35, but a few thousand will do). The real reason for huge studies is to catch the rare side effects. Not only are 1:1000 side effects possible, but you need enough data to determine whether a given side effect is actually from the vaccine or from eating cucumbers for lunch. To ferret that out, you need the 1:1000 side effect to be repeated a bunch of times. That means a lot of people. (There's a quick sketch of the arithmetic after this list.)

2. Unless you're going to actually inject people with COVID—which would be highly unethical—you have to wait until people get it randomly. Given that a single person may never get COVID over the space of a year if they're reasonably careful—which most early vaccine trial candidates probably were—waiting for subjects to get randomly exposed takes a long time with a small sample size. The more people you have in your study, the more of them get exposed randomly, so it goes faster.
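
Here's the quick sketch of the point-1 arithmetic I promised. Both the 1:1000 rate and the "repeats needed" threshold are hypothetical stand-ins:

```python
# Expected count of a hypothetical 1-in-1000 side effect at various
# trial sizes; the threshold of 20 repeats is an invented stand-in for
# "enough cases to separate the vaccine from lunch cucumbers".
rate = 1 / 1000
needed = 20
for n in (35, 3_000, 50_000):
    expected = n * rate
    verdict = "enough to attribute" if expected >= needed else "not enough"
    print(f"n={n:>6}: expect {expected:6.1f} cases -> {verdict}")
```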

The suicidal-ideation study wasn't going for safety, so that removes point 1. As for point 2: when testing the effect of T on suicidal trans-mascs, you can just ask. It's important to note that the study measured a reduction in ideation, not in completed suicide. That's something you can test for simply by asking, and you get rapid and accurate data. If you were testing for completion, you would fall into case 2 and need a much bigger study.
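
If you want to see why "just asking" about a common outcome is so much cheaper than waiting on a rare one, here's the standard normal-approximation sample-size formula for comparing two proportions, with hypothetical rates plugged in (none of these numbers come from the study):

```python
# Normal-approximation sample size for comparing two proportions.
# All rates below are hypothetical, purely to show how required n
# explodes as the outcome gets rarer.
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

# Common outcome you can just ask about: 50% -> 25% (hypothetical)
print("ideation, 50% -> 25%:     ", n_per_group(0.50, 0.25), "per group")      # ~55
# Rare outcome you'd have to wait for: 0.5% -> 0.25% (hypothetical)
print("completion, 0.5% -> 0.25%:", n_per_group(0.005, 0.0025), "per group")   # ~9,400
```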

(Sorry if I'm going into math educator mode here. I've seen too much crap about statistics in the last 3-4 years. 😫)
