Convergence in probability can be highly valuable precisely because it is weaker than almost sure convergence. In many practical scenarios, we can only establish convergence in probability, whereas almost sure convergence may not hold.
Strong and Weak LLN
For example, there are two notions of the LLN (law of large numbers): the strong law of large numbers and the weak law of large numbers. The former refers to almost sure convergence, while the latter refers to convergence in probability.
The simplest form of the LLN is for i.i.d. random variables $X_1,X_2,\cdots$ with a finite mean $\mathbb{E}[X_1]<\infty$. It can be proved that
$$
\frac{1}{n}\sum_{k=1}^nX_k\xrightarrow{\text{a.s.}}\mathbb{E}[X_1].
$$
The convergence holds almost surely, and also in $L^1$.
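As a quick numerical illustration (not a proof), the following sketch simulates a single sample path of i.i.d. Exponential(1) variables, whose mean is $\mathbb{E}[X_1]=1$, and tracks the running sample means:

```python
import numpy as np

rng = np.random.default_rng(0)

# i.i.d. Exponential(1) samples: E[X_1] = 1
n = 100_000
x = rng.exponential(scale=1.0, size=n)

# Running sample means (1/n) * sum_{k=1}^n X_k
running_mean = np.cumsum(x) / np.arange(1, n + 1)

print(running_mean[99])    # after 100 samples
print(running_mean[-1])    # after 100,000 samples: close to 1
```

A single path of course cannot distinguish a.s. convergence from convergence in probability; the point of the strong law is that almost every sample path behaves this way.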
Another fundamental LLN concerns a stationary Markov chain $\{X_n\}$. If $\{X_n\}$ is ergodic, the convergence of the sample averages also holds in both the a.s. and the $L^1$ sense. However, things are different for an ergodic Markov chain $\{X_n\}$ that is not necessarily stationary: in general, we can only prove convergence in probability.
For reference, see Kulik (2018), pp. 175–176.
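The ergodic case can also be illustrated numerically. The sketch below (with a hypothetical two-state transition matrix chosen only for illustration) starts the chain away from stationarity and checks that the time average of $\mathbf{1}\{X_k = 1\}$ approaches the stationary probability $\pi_1$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ergodic two-state chain with transition matrix P.
# Its stationary distribution solves pi P = pi, giving pi = (2/3, 1/3).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

n = 100_000
state = 0            # start from a point mass at state 0, not from pi
visits_to_1 = 0
for _ in range(n):
    state = rng.choice(2, p=P[state])
    visits_to_1 += state

# Time average of 1{X_k = 1}: should be near pi_1 = 1/3
print(visits_to_1 / n)
```

Even though the chain is not started from its stationary distribution, it forgets the initial state geometrically fast here, so the time average still settles near $\pi_1$.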
Strong and Weak Consistency
A statistical estimator that converges to the true parameter value is called strongly (respectively, weakly) consistent if the convergence holds a.s. (respectively, in probability).
A long list of strongly and weakly consistent estimators can be found in Chapter 6 of the book by Bhattacharya (2016).
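Weak consistency can be made concrete by estimating the miss probability $\mathbb{P}(|\hat{p}_n - p| > \varepsilon)$ across many replications. A minimal sketch, using the sample proportion as an estimator of a Bernoulli parameter $p$ (the values of $p$, $\varepsilon$, and the sample sizes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

p, eps = 0.3, 0.05
reps = 2000

# Empirical P(|p_hat_n - p| > eps) for increasing n:
# weak consistency means this probability tends to 0.
miss_probs = []
for n in (50, 500, 5000):
    p_hat = rng.binomial(n, p, size=reps) / n   # reps sample proportions
    miss = np.mean(np.abs(p_hat - p) > eps)
    miss_probs.append(miss)
    print(n, miss)
```

The estimated miss probability shrinks toward zero as $n$ grows, which is exactly the statement of convergence in probability for each fixed $\varepsilon > 0$.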