Our lecturer said that if two random variables are independent, it should usually be "obvious" from their joint density. In the examples that followed, he indeed proceeded to prove independence by guessing what the marginal densities would be.
I am not sure why that works. Here is his reasoning (I tried to follow it as accurately as I could):
Suppose we can find two functions $g,h$ such that $$f_{X,Y}(x,y) = g(x)\,h(y).$$ Then we have $$f_X(x) = \int f_{X,Y}(x,y)\,dy = c\,g(x), \qquad c = \int h(y)\,dy,$$ $$f_Y(y) = \int f_{X,Y}(x,y)\,dx = d\,h(y), \qquad d = \int g(x)\,dx.$$ So the marginal densities equal our guesses up to multiplicative constants, and $$f_X(x)\,f_Y(y) = cd\,g(x)\,h(y) = cd\,f_{X,Y}(x,y) = f_{X,Y}(x,y),$$ because $$c\cdot d = \int g(x)\,dx \int h(y)\,dy = \iint g(x)\,h(y)\,dx\,dy = \iint f_{X,Y}(x,y)\,dx\,dy = 1,$$ since a joint density integrates to $1$.
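To convince myself, I checked the argument numerically on a toy example of my own (not one from the lecture): $f_{X,Y}(x,y) = 6xy^2$ on $[0,1]^2$, with the arbitrary split $g(x)=x$, $h(y)=6y^2$. The constants come out as $c=2$, $d=\tfrac12$, so $cd=1$ and $f_X(x)f_Y(y)$ reproduces the joint density, as claimed:

```python
# Toy numerical check of the factorization argument.
# Assumed example (mine, not the lecturer's): f(x, y) = 6*x*y**2 on [0, 1]^2,
# with guessed factors g(x) = x and h(y) = 6*y**2 (the split is arbitrary).

def integrate(func, a=0.0, b=1.0, n=100_000):
    """Midpoint-rule approximation of the integral of func over [a, b]."""
    step = (b - a) / n
    return sum(func(a + (i + 0.5) * step) for i in range(n)) * step

g = lambda x: x
h = lambda y: 6 * y**2

c = integrate(h)   # c = ∫ h(y) dy  -> 2
d = integrate(g)   # d = ∫ g(x) dx  -> 1/2

print(c * d)       # ≈ 1, because the joint density integrates to 1

# Marginals recovered from the guesses, up to those constants:
f_X = lambda x: c * g(x)    # f_X(x) = 2x
f_Y = lambda y: d * h(y)    # f_Y(y) = 3y^2

# The product of the marginals should reproduce the joint density:
x0, y0 = 0.3, 0.7
print(abs(f_X(x0) * f_Y(y0) - 6 * x0 * y0**2))   # ≈ 0
```

This is of course just one example, not a proof, but it matches the algebra above.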
Now, I can generally follow this, but I remain unconvinced/unsure. Here are my questions:
Mainly: is this correct? Does simply separating the joint density into a product of one-variable functions like that really prove independence?
Is this a common approach? I had never encountered it before, yet he now uses it multiple times per lecture, since he can usually guess the functions right away.
If I understand correctly, finding $g,h$ does not give me the marginal densities (for that I would also have to find $c$ and $d$), but it does prove independence. Is that correct?
Thank you.
edit: In light of BCLC's answer, I'll add this comment in case further discussion takes place: I am aware of measure theory and of what he calls advanced probability theory (I am a beginner, but I do understand how to build the probability measure "from scratch").