
All Questions

0 votes · 1 answer · 10 views

Filter one-sided tests before FDR

In this blog post, the author shows an example of a "weird" p-value histogram ("Scenario C"). One explanation offered is that a one-sided test was run, and the tests where the ...
Alexlok • 145
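For the question above, a minimal R sketch (with simulated data and an assumed "greater-than" alternative) of the kind of p-value histogram a one-sided test can produce, and of applying Benjamini-Hochberg directly to those one-sided p-values:

    set.seed(1)
    effect <- c(rnorm(950, 0, 1), rnorm(50, 3, 1))     # 950 null tests, 50 real effects
    p_one_sided <- pnorm(effect, lower.tail = FALSE)   # one-sided p, H1: effect > 0

    hist(p_one_sided, breaks = 40)                     # spike near 1 from "wrong-direction" tests

    p_bh <- p.adjust(p_one_sided, method = "BH")       # BH applied to the one-sided p-values
    sum(p_bh < 0.05)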
0 votes · 0 answers · 29 views

False Discovery Rate-Adjusted Multiple Confidence Intervals after p-value-adjustment

I apologize in advance if my question is incomprehensible; I am very new to this topic. My data consists of 6 dependent variables (DV1-6) and 5 independent variables (IV1-5) of interest. For reasons ...
Joe55711
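The question above appears to be after false coverage rate (FCR) adjusted intervals in the sense of Benjamini and Yekutieli (2005): select coefficients with BH at level q, then report the selected intervals at level 1 − R·q/m, where R is the number selected and m the number of tests. A sketch with invented estimates and standard errors:

    set.seed(2)
    m   <- 30
    est <- c(rnorm(25, 0, 1), rnorm(5, 4, 1))   # invented point estimates
    se  <- rep(1, m)
    p   <- 2 * pnorm(abs(est / se), lower.tail = FALSE)

    q        <- 0.05
    selected <- which(p.adjust(p, method = "BH") <= q)
    R        <- length(selected)

    level <- 1 - R * q / m                      # FCR-adjusted coverage for the selected set
    z     <- qnorm(1 - (1 - level) / 2)
    cbind(estimate = est[selected],
          lower    = est[selected] - z * se[selected],
          upper    = est[selected] + z * se[selected])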
2 votes · 1 answer · 64 views

Quantitatively distinguishing between multiple-comparison tests

As someone relatively new to applied statistics, I have been trying to better understand some of the best practices for applying statistical methods. Recently I have been trying to understand when to ...
Argon • 153
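One quantitative way into the question above is to run the common corrections side by side on the same p-values and see how the adjusted values diverge; the p-values below are arbitrary:

    p <- c(0.001, 0.008, 0.012, 0.03, 0.04, 0.2, 0.5)

    # FWER-controlling (Bonferroni, Holm) vs FDR-controlling (BH, BY) adjustments
    sapply(c("bonferroni", "holm", "BH", "BY"), function(m) p.adjust(p, method = m))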
0 votes · 0 answers · 30 views

FDR q-values interpretation

In performing 43 tests on two groups of individuals (independent t-tests) I obtained 27 significant results; then, adjusting the p-values with the BH procedure for controlling FDR, I obtained the so-called q-...
Ed9012 • 391
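A minimal sketch of the workflow the question above describes (many two-group t-tests followed by BH adjustment), on simulated data; the values returned by p.adjust(..., method = "BH") are the BH-adjusted p-values that are often loosely called q-values:

    set.seed(3)
    n_tests <- 43
    grp1 <- matrix(rnorm(n_tests * 20),             nrow = n_tests)   # 20 subjects per group
    grp2 <- matrix(rnorm(n_tests * 20, mean = 0.5), nrow = n_tests)

    p_raw <- sapply(seq_len(n_tests), function(i) t.test(grp1[i, ], grp2[i, ])$p.value)
    q_bh  <- p.adjust(p_raw, method = "BH")   # BH-adjusted p-values ("q-values")

    sum(p_raw < 0.05)   # significant before adjustment
    sum(q_bh  < 0.05)   # still significant after FDR control at 0.05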
3 votes · 1 answer · 287 views

FDR adjusted p-values and q-values

I am currently working on adjusting p-values in R using the False Discovery Rate (FDR) method and have encountered some confusion regarding the basic concepts of ...
Ed9012 • 391
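For the question above, computing the BH adjustment by hand and comparing it with p.adjust can make the definition concrete: sort the p-values, scale each by m divided by its rank, then enforce monotonicity from the largest down. A sketch with arbitrary p-values:

    p <- c(0.001, 0.01, 0.02, 0.04, 0.2)
    m <- length(p)

    # Manual BH: sort, scale by m / rank, enforce monotonicity from the top down
    o   <- order(p)
    adj <- pmin(1, rev(cummin(rev(p[o] * m / seq_len(m)))))
    adj[order(o)]                  # back in the original order

    p.adjust(p, method = "BH")     # identical result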
1 vote · 1 answer · 137 views

Adjusting for multiple comparisons after Kruskal-Wallis

I am comparing biomarker levels (50 total) obtained from immunoassay in a cohort divided into 3 groups (control vs. inactive vs. active). I want to use the Kruskal-Wallis test with Dunn’s test to ...
NG0429 • 11
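A sketch of the pipeline the question above describes, on invented data for a single biomarker. Dunn's test itself lives in add-on packages (e.g. dunn.test or FSA); the base-R pairwise Wilcoxon comparison below is only a stand-in to show where the BH adjustment enters:

    set.seed(4)
    d <- data.frame(
      group     = factor(rep(c("control", "inactive", "active"), each = 20)),
      biomarker = c(rnorm(20, 10), rnorm(20, 10.5), rnorm(20, 12))
    )

    kruskal.test(biomarker ~ group, data = d)   # omnibus test for one of the 50 biomarkers

    # Pairwise post-hoc comparisons with BH adjustment (stand-in for Dunn's test)
    with(d, pairwise.wilcox.test(biomarker, group, p.adjust.method = "BH"))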
2 votes · 1 answer · 67 views

Dealing with many positive-dependent insignificant results in the Benjamini-Hochberg procedure [closed]

I'm dealing with a statistical problem where I tested a bunch of hypotheses with very strong positive dependence within certain groups of them. Many of them didn't lead to significant results. Now, ...
mizanshu
1 vote · 0 answers · 15 views

Should one correct for multiple comparison for secondary outcomes in RCTs or clinical trials?

This might be a simple question answered elsewhere, but it comes up often in NHST discussions and I've heard conflicting statements. Imagine a meta-analysis of 50 trials on 100,000 patients total to ...
DRG • 323
1 vote · 0 answers · 15 views

FDR Correction needed on a pixel-wise comparison?

I have a question that has been puzzling me, but I can't find a robust basis for answering it. I have time-frequency data (let's say 5 columns x 5 rows) for each participant in two different groups. I have ...
KhonsKhandr
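If each time-frequency pixel yields one p-value for the group comparison, the BH correction can simply be applied to the flattened vector of pixel p-values and the adjusted values reshaped back into the map; a sketch with an invented 5 × 5 grid of uniform p-values:

    set.seed(5)
    p_map <- matrix(runif(25), nrow = 5, ncol = 5)   # one p-value per time-frequency pixel

    p_adj <- matrix(p.adjust(as.vector(p_map), method = "BH"), nrow = 5, ncol = 5)
    which(p_adj < 0.05, arr.ind = TRUE)              # pixels that survive FDR correction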
1 vote · 0 answers · 30 views

How to Determine the Number of Independent Tests When Considering Multiple Testing

I have a question regarding how to determine the number of independent tests when performing multiple testing corrections. Suppose my research hypothesis is whether age is related to cognitive decline ...
zjppdozen • 347
1 vote · 0 answers · 98 views

Can the critical P-value returned by the Benjamini-Krieger-Yekutieli (BKY) procedure be greater than the false discovery rate?

If I use the Benjamini-Krieger-Yekutieli (BKY) procedure for an FDR correction, is it possible for the critical P-value returned to be greater than the desired false discovery rate? I just tried ...
Joel • 61
0 votes · 0 answers · 38 views

Methods to control for false omission rate?

What methods are available to control for false omission rate in multiple comparisons, similar to the way Benjamini-Hochberg controls for false discovery rate? The goal is to filter out obvious ...
goweon • 253
1 vote · 1 answer · 315 views

P-value correction for multiple Mann-Whitney tests, some of them being dependent

I have performed multiple comparisons using Mann-Whitney U tests, and want to correct the p-values to know which results are worth reporting. The structure of the data is as follows: 2 experiments, ...
alpagarou
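For the question above, BH controls the FDR under independence and under positive regression dependence, while the Benjamini-Yekutieli (BY) variant is valid under arbitrary dependence at the cost of power; a sketch comparing the two on simulated Mann-Whitney p-values:

    set.seed(6)
    p_raw <- replicate(12, wilcox.test(rnorm(15), rnorm(15, 0.8))$p.value)

    data.frame(raw = round(p_raw, 4),
               BH  = round(p.adjust(p_raw, method = "BH"), 4),   # positive dependence OK
               BY  = round(p.adjust(p_raw, method = "BY"), 4))   # arbitrary dependence, conservative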
1 vote · 1 answer · 174 views

Adjust p-values for all of the output or just the variable of interest?

I am running linear mixed effect models using the lme4 package in R with the following output. ...
Jess H • 11
1 vote · 0 answers · 1k views

Do the p-values produced by lmerTest::lmer need to be adjusted for multiple comparisons?

I am running a fairly simple linear mixed model with multiple observations per patient, comparing a lab value to a reference group (healthy control). Here's some fake data: ...
HarD • 227
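A sketch of where the lmerTest p-values come from and how they could be adjusted if several group contrasts are tested against the reference level; the data and variable names (patient, group, lab) are invented:

    library(lmerTest)

    set.seed(7)
    dat <- data.frame(
      patient = factor(rep(1:30, each = 4)),                       # 4 observations per patient
      group   = factor(rep(c("control", "diseaseA", "diseaseB"), each = 40)),
      lab     = rnorm(120, mean = rep(c(5, 5.5, 6), each = 40))
    )

    fit   <- lmer(lab ~ group + (1 | patient), data = dat)
    coefs <- summary(fit)$coefficients          # Satterthwaite df and p-values per fixed effect

    # If several contrasts against the reference group are reported, one option is
    # to adjust only those p-values:
    p.adjust(coefs[-1, "Pr(>|t|)"], method = "BH")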
