
I have a large number of effect sizes, each estimated from the same linear model but with a different tested explanatory variable. Each model is fit in two different groups; it is the same model, with the same tested explanatory variable ($x_1$), in both groups.

Group A: $$ y_A = \alpha_A + \beta_{1A}x_{1A} + \beta_{2A}x_{2A} + ... + \epsilon_A $$ Group B: $$ y_B = \alpha_B + \beta_{1B}x_{1B} + \beta_{2B}x_{2B} + ... + \epsilon_B $$

So $x_1$ is the tested explanatory variable; $y$, $x_1$, and $x_2$ are the same variables in both models, but of course they take different values in the two groups.
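To make the setup concrete, here is a minimal sketch of what I mean (in Python; the column names `y`, `x2`, `group` and the list `tested_vars` are hypothetical placeholders for my data):

```python
# A sketch: fit the same OLS model separately in each group for every
# tested explanatory variable, and collect the paired beta_1 estimates.
import pandas as pd
import statsmodels.formula.api as smf

def paired_beta1(df, tested_vars):
    rows = []
    for v in tested_vars:
        betas = {}
        for g, sub in df.groupby("group"):       # groups "A" and "B"
            fit = smf.ols(f"y ~ {v} + x2", data=sub).fit()
            betas[g] = fit.params[v]              # effect size of the tested variable
        rows.append({"variable": v, "beta_A": betas["A"], "beta_B": betas["B"]})
    return pd.DataFrame(rows)
```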

I would like to compare the distributions of the $\beta_1$ values to see whether they differ between the two groups. If I only wanted to compare the means, I assume I could just do a t-test (after checking for normality). But what if I want to see whether the paired effect sizes, $\beta_{1A}$ and $\beta_{1B}$, differ consistently across the two groups? Could I use a test based on the rankings of the paired effect sizes, for example a paired Wilcoxon signed-rank test?

The idea is to get insight into whether the values of the paired effect sizes $\beta_{1A}$ and $\beta_{1B}$ differ between the two groups in a statistically significant way. Does this make sense?
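This is roughly the comparison I have in mind (Python/scipy; `pairs` is the hypothetical data frame of paired $\beta_1$ estimates returned by the sketch above):

```python
# A sketch of the two comparisons: a paired t-test on the mean difference,
# and a paired Wilcoxon signed-rank test that does not assume normality.
from scipy import stats

def compare_paired_betas(pairs):
    # Paired t-test: is the mean of (beta_1A - beta_1B) different from zero?
    t_res = stats.ttest_rel(pairs["beta_A"], pairs["beta_B"])
    # Paired Wilcoxon signed-rank test: are the paired differences
    # consistently shifted away from zero?
    w_res = stats.wilcoxon(pairs["beta_A"], pairs["beta_B"])
    return {"paired_t": (t_res.statistic, t_res.pvalue),
            "wilcoxon": (w_res.statistic, w_res.pvalue)}
```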
