I am confused about the notion of $\sigma$-algebras representing information and what information is contained in $\sigma(X)$ for a random variable $X$.
Suppose $(\Omega, \mathcal{F}, \mathbb{P})$ is a probability triple, and $(Y_\gamma : \gamma \in C)$ is a collection of random variables. I am reading that $\sigma(Y_\gamma : \gamma \in C)$ consists of all the events $F \in \mathcal{F}$ such that for all $\omega \in \Omega$ it is possible to determine whether or not $\omega \in F$ given only the values $(Y_\gamma(\omega) : \gamma \in C)$.
I don't even see how this makes sense if we restrict ourselves to a single random variable $X$. Suppose I know $X(\omega)$. I can only be sure that $F$ occurred if $F \supseteq X^{-1}(\{X(\omega)\})$, and I can only be sure that $F$ didn't occur if $X(\omega) \notin X(F)$. But we know that $\sigma(X) = \sigma(\{X^{-1}(B) : B \in \mathcal{B}\})$. Where are all the other sets coming from?
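To make my confusion concrete, here is a small finite sanity check I put together (the function `sigma_of_X` and the example variable are my own names, not from the book): when $\Omega$ is finite, the preimages $X^{-1}(\{x\})$ partition $\Omega$, and the sets determinable from $X(\omega)$ are exactly the unions of those partition blocks, so they can be enumerated directly.

```python
from itertools import combinations

def sigma_of_X(omega, X):
    """Enumerate sigma(X) for a finite sample space omega (a sketch)."""
    # Partition omega into the preimage blocks X^{-1}({x}).
    blocks = {}
    for w in omega:
        blocks.setdefault(X(w), set()).add(w)
    blocks = list(blocks.values())
    # In the finite case, sigma(X) is exactly the set of all
    # unions of these blocks (including the empty union).
    sigma = set()
    for r in range(len(blocks) + 1):
        for combo in combinations(blocks, r):
            sigma.add(frozenset().union(*combo))
    return sigma

# Hypothetical example: Omega = {0,1,2,3}, X(w) = w mod 2.
omega = {0, 1, 2, 3}
X = lambda w: w % 2
for event in sorted(map(sorted, sigma_of_X(omega, X))):
    print(event)
```

For this $X$ the blocks are $\{0,2\}$ and $\{1,3\}$, and the enumeration produces only $\emptyset$, $\{0,2\}$, $\{1,3\}$, and $\Omega$, i.e. already a $\sigma$-algebra with no "other sets". This is what makes me wonder what the extra generated sets in the general definition are supposed to be.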