The following is a lecture slide from a machine learning class:
I already have a basic understanding of probability, including continuous random variables, and I'm familiar with the typical explanations of why the probability of a continuous random variable taking any single value in its support is $0$. However, I'd like further explanation of the limit expression used in the above slide. I've never encountered such an expression (with limits and $dx$), and I think seeing a new way of reasoning about the concept would deepen my understanding.
I would greatly appreciate it if people could please take the time to explain this (in a generalised way).
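To make the idea concrete, here is a small numerical sketch (my own illustration, not from the slide) using the standard normal distribution: as $dx \to 0$, the interval probability $P(x < X \le x + dx)$ shrinks to $0$, while the ratio $P(x < X \le x + dx)/dx$ approaches the density $f(x)$.

```python
import math

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_pdf(x):
    """Standard normal density f(x)."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

x = 1.0
for dx in (1e-1, 1e-3, 1e-5):
    # P(x < X <= x + dx) as a difference of CDF values
    p = normal_cdf(x + dx) - normal_cdf(x)
    print(f"dx={dx:g}  P={p:.3e}  P/dx={p / dx:.5f}")
```

The printed `P` values head to $0$ (consistent with $P(X = x) = 0$), while `P/dx` converges to $f(1) \approx 0.24197$, which is one common way the $dx$ notation is motivated informally.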
EDIT: I found a section in my textbook, *Introduction to Probability* by Blitzstein and Hwang, which indicates that the slide's use of $dx$ is incorrect: