It's well-known that sets are "isomorphic" to logic: if we treat $\varphi(A_1, A_2)$ as shorthand for $\forall x: \varphi(x \in A_1, x \in A_2)$, then $A \land B \equiv A \cap B$, $A \rightarrow B \equiv A \subseteq B$, and so on.
I've noticed that a large number of true logical statements become events with probability 1 when interpreted probabilistically. For example, if $A \subseteq B$ ($\equiv A \rightarrow B$) then $\mathbb{P}(B \mid A) = 1$ (assuming $\mathbb{P}(A) > 0$, so that the conditional probability is defined). If you squint hard enough you should see modus ponens there.
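Here's the kind of sanity check I mean, on a small finite sample space with the uniform measure (the sets and names are just an example I made up):

```python
from fractions import Fraction

# Uniform probability measure on a small finite sample space.
omega = set(range(12))

def prob(event):
    return Fraction(len(event), len(omega))

def cond_prob(b, a):
    # P(B | A) = P(A ∩ B) / P(A); requires P(A) > 0.
    return prob(a & b) / prob(a)

A = {0, 1, 2}            # A ⊆ B, i.e. "A -> B" holds pointwise
B = {0, 1, 2, 3, 4, 5}
assert A <= B
print(cond_prob(B, A))   # -> 1
```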
To connect a Boolean algebra with a Boolean ring you set $x \lor y := x + y - xy$ (a Boolean ring has characteristic 2, so this is the same as the usual $x + y + xy$), and wouldn't you know it, $\mathbb{P}(A \cup B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cap B)$. That connection can't just (ahem) be a random event, can it? ;-)
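Concretely, for $0/1$-valued indicators $x + y - xy$ is exactly the indicator of the union, and taking expectations of indicators recovers the inclusion–exclusion formula (a quick check on a uniform space; the sets are arbitrary):

```python
import itertools
from fractions import Fraction

def join(x, y):
    # x ∨ y := x + y - xy
    return x + y - x * y

# On {0, 1}, join agrees with Boolean "or".
for x, y in itertools.product((0, 1), repeat=2):
    assert join(x, y) == (x or y)

# Indicator version of inclusion-exclusion on a finite uniform space:
# E[1_{A∪B}] = E[1_A] + E[1_B] - E[1_A · 1_B].
omega = range(10)
A, B = {0, 1, 2, 3}, {2, 3, 4, 5}
ind = lambda s: [1 if w in s else 0 for w in omega]
mean = lambda xs: Fraction(sum(xs), len(xs))

lhs = mean([join(a, b) for a, b in zip(ind(A), ind(B))])   # P(A ∪ B)
rhs = mean(ind(A)) + mean(ind(B)) \
      - mean([a * b for a, b in zip(ind(A), ind(B))])      # P(A) + P(B) - P(A ∩ B)
assert lhs == rhs == Fraction(len(A | B), 10)
```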
If we combine some propositional calculus and/or a Boolean algebra with measure/probability theory, can we get some theorems for free? Is it e.g. the case that if $\varphi$ is some tautology then the set-theoretic interpretation of $\varphi$ always has probability 1? Is there something stronger that's also true?
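To make the question concrete, this is the sort of brute-force evidence I have in mind: take a tautology (here the contrapositive law, chosen arbitrarily), interpret it pointwise as a set, and check that it comes out as all of $\Omega$ for every pair of events, hence has probability 1 under any measure:

```python
from itertools import combinations

omega = list(range(6))

def powerset(xs):
    return [set(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def implies(p, q):
    return (not p) or q

# Pointwise set interpretation of the tautology (A -> B) -> (¬B -> ¬A).
def interp(A, B):
    return {w for w in omega
            if implies(implies(w in A, w in B),
                       implies(w not in B, w not in A))}

# For every pair of events the interpretation is all of Ω,
# so it has probability 1 under *any* probability measure on Ω.
assert all(interp(A, B) == set(omega)
           for A in powerset(omega) for B in powerset(omega))
```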
I also notice that $\mathbf{0}$ and $\mathbf{1}$, by which I mean the empty set and the set of all outcomes, are independent of every other event, and that I run into problems with Huntington's equation when I set $\lnot x := 1 - x$ and try to make a Boolean algebra over $[0, 1] \subseteq \mathbb{R}$; the trouble seems to come from the higher-order terms.
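For what it's worth, here is the failure I mean: with $x \lor y := x + y - xy$ and $\lnot x := 1 - x$, Huntington's equation $\lnot(\lnot x \lor \lnot y) \lor \lnot(\lnot x \lor y) = x$ holds on $\{0, 1\}$ but already fails at $x = y = 1/2$, because the quadratic cross-terms don't cancel:

```python
def join(x, y):
    return x + y - x * y     # x ∨ y

def neg(x):
    return 1 - x             # ¬x

def huntington(x, y):
    # Huntington's equation: ¬(¬x ∨ ¬y) ∨ ¬(¬x ∨ y), which should equal x.
    return join(neg(join(neg(x), neg(y))), neg(join(neg(x), y)))

# Holds on the Boolean values {0, 1}...
for x in (0, 1):
    for y in (0, 1):
        assert huntington(x, y) == x

# ...but fails in the interior of [0, 1]:
print(huntington(0.5, 0.5))   # -> 0.4375, not 0.5
```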
What are the theorems I'm grasping at but not quite seeing?