Consider the following well-known function: $$ \operatorname{sinc}(x) = \begin{cases} \sin(x)/x & \text{for } x \ne 0 \\ 1 & \text{for } x =0 \end{cases} $$ In physics, the sinc function has applications in, for example, spectroscopy. Mathematically speaking, there is no objection to an alternative like this: $$ \operatorname{suck}(x) = \begin{cases} \sin(x)/x & \text{for } x \ne 0 \\ 0 & \text{for } x =0 \end{cases} $$ But in physics such an alternative would be devoid of applications. It is silently assumed that $\operatorname{sinc}(x)$ is continuous at $x=0$; physicists do not even think about a discontinuous alternative.
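A quick numerical sketch (Python; not part of the original argument, just an illustration) showing why $1$ is the only value at $x=0$ compatible with the neighbouring values of $\sin(x)/x$:

```python
import math

def sinc(x):
    """The continuous completion: sinc(0) = 1."""
    return math.sin(x) / x if x != 0 else 1.0

# As x shrinks toward 0, sin(x)/x creeps toward 1:
for x in [1e-1, 1e-3, 1e-6]:
    print(x, sinc(x))

assert sinc(0) == 1.0
assert abs(sinc(1e-6) - 1.0) < 1e-9  # already indistinguishable from 1
```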
The sinc function is just one instance of a far more general claim made by one of my heroes, the great mathematician L.E.J. Brouwer. It is (not very well) known as Brouwer's Continuity Theorem, which roughly states that every real-valued function is continuous. More precisely, as quoted from Strong Counterexamples: "In intuitionistic mathematics, the Brouwer Continuity Theorem states that all total real functions are (uniformly) continuous on the unit interval".
Real-valued physical quantities have uncertainties; that is one of the fundamental properties of physics, and it isn't just due to quantum considerations. Take an ordinary metal bar: it has no exact length. There are, for example, temperature fluctuations (atoms in motion) which cause the bar's length to fluctuate. This effectively means that any real number in physics is accompanied by an uncertainty, an error, often denoted $\delta$ or $\varepsilon$.
Consider the classical mathematical definition of continuity of a function, with all numbers assumed to be real. A function $f(x)$ is said to be continuous at $x=a$ if and only if for every $\varepsilon > 0$ there exists a $\delta > 0$ such that $|x-a| < \delta$ implies $|f(x)-f(a)| < \varepsilon$ (where $x \ne a$ is allowed).
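The $\varepsilon$-$\delta$ definition can be spot-checked numerically for sinc at $a=0$. A sketch (Python; the choice $\delta = \sqrt{6\varepsilon}$ is my own, based on the Taylor bound $|\sin(x)/x - 1| \le x^2/6$ for small $x$, and is not from the post):

```python
import math

def sinc(x):
    return math.sin(x) / x if x != 0 else 1.0

def delta_for(eps):
    # Candidate delta from the alternating-series bound |sinc(x) - 1| <= x^2/6,
    # valid for small x: if |x| < sqrt(6*eps) then |sinc(x) - 1| < eps.
    return math.sqrt(6 * eps)

for eps in [1e-2, 1e-4, 1e-8]:
    d = delta_for(eps)
    # Sample points with |x - 0| < delta and check |sinc(x) - sinc(0)| < eps.
    samples = [d * k / 1000 for k in range(-999, 1000)]
    assert all(abs(sinc(x) - 1.0) < eps for x in samples)
```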
A physical interpretation might be formulated as follows: an error in a continuous function can be made as small as desired by adapting the error in the function's argument accordingly. Because of these errors, $|x-a|<\delta$ is physically the same as $x\approx a$ ($x$ equals $a$ approximately), and the same can be said for $f(x)$ and $f(a)$. So we can even write $\; x\approx a \,\Longrightarrow\, f(x)\approx f(a)\;$ as a (sloppy) definition of continuity. The latter formulation comes even closer to Brouwer's Continuity Theorem if we replace the $\,\approx\,$ by a common equals sign: $\; x=a \,\Longrightarrow\, f(x)=f(a)\;$, expressing the idea that a function is continuous where it really is ... a function!
Now consider again the above suck function. However small it may be, there is inevitably an error in the argument, meaning that $x=0$ should actually be replaced by an interval $|x-0| < \delta$. That interval contains values $x\ne 0$, though, and $\,\lim_{x\to 0} \operatorname{suck}(x) = 1$. Hence, physically speaking, $\operatorname{suck}(0) = 1\,$ and $\operatorname{suck}(0) = 0\,$ would have to be true at the same time, which is impossible. IMHO this is the reason why the value $1$ at the origin is adopted automatically in physics, resulting inevitably in our old friend the sinc function and nothing else.
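The measurement-error argument can be simulated. A sketch (Python; the Gaussian noise model and its width are my own assumptions, not from the post): a "measured zero" is $0$ plus a tiny random error, and with probability $1$ it is not exactly $0$, so the suck function returns values near $1$, never its declared value $\operatorname{suck}(0)=0$.

```python
import math
import random

def suck(x):
    # Discontinuous variant: declared value 0 at the origin.
    return math.sin(x) / x if x != 0 else 0.0

random.seed(0)
# 1000 noisy measurements of "zero" (assumed spread: 1e-6)
measurements = [random.gauss(0.0, 1e-6) for _ in range(1000)]
values = [suck(x) for x in measurements]

# Every measured value lands off the origin, so suck behaves like sinc there:
assert all(abs(v - 1.0) < 1e-9 for v in values)
assert suck(0) == 0.0  # the declared value is never observed
```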
I'm well aware that this way of physical reasoning does not cover all the sorts of continuity that mathematicians might think of. So the question is: which notions of continuity are sensitive to the automatism present in the sinc function, and which are not? It's a somewhat vague question, but I am a humble physicist by education and I know of no better way to formulate it.
EDIT. A far simpler example of a function with the same sort of "automatism" as the $\operatorname{sinc}$ function is: $$ f(x) = \begin{cases} (x^2-1)/(x-1) & \text{for } x \ne 1 \\ 2 & \text{for } x=1 \end{cases} $$ which is physically the same as $\,f(x) = x+1$. A counterexample is the function $\,g(x) = 1/x$, much like the one given by snulty. So it seems that some singularities are "essential" (physically speaking) while others are not. Can someone be more specific? I find it a can of worms, as is exemplified by related Q&A on MSE and elsewhere:
- Cauchy distribution instead of Coulomb law?
- Could this be called Renormalization?
- Does this limit exist and if so what is it's value?
- Can monsters of real analysis be tamed in this way?
- Computability, Continuity and Constructivism
- Delta function that obeys inverse square law outside its (-1; 1) range and has no 1/0 infinity
- Critical Mass Flow
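The distinction drawn in the EDIT above can be checked numerically. A sketch (Python; an illustration of my own, not from the post): the simpler example $f$ agrees with $x+1$ everywhere, so its singularity at $x=1$ is removable, while $g(x)=1/x$ has values that escape every bound near $0$, so no choice of $g(0)$ can repair it.

```python
def f(x):
    # Removable singularity: the declared value 2 equals the limit at x = 1.
    return (x**2 - 1) / (x - 1) if x != 1 else 2.0

# f is "physically" the same as x + 1, including at the filled-in point:
for x in [0.0, 0.5, 0.999, 1.0, 1.001, 2.0]:
    assert abs(f(x) - (x + 1)) < 1e-9

# By contrast, g blows up near 0: its values exceed any bound, so the
# singularity is "essential" in the physical sense of the question.
g = lambda x: 1 / x
assert abs(g(1e-12)) > 1e9
```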