A normally distributed random variable $X$ has an associated probability density function (pdf) given by
$$ f(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}},$$
where $\mu$ is the mean or expected value of the random variable, and $\sigma$ is the standard deviation. You might try picking some values for $\mu$ and $\sigma$ and graphing the result—in each case, you should see a bell-shaped curve, though the location and height of the curve will vary depending on the parameters you choose. The probability that the random variable falls between two values is the area under the curve between those two values. We use integrals to find these areas, so
$$ P(x_1 < x < x_2) = \int_{x_1}^{x_2} \frac{1}{\sqrt{2\pi\sigma^2}} \mathrm{e}^{-\frac{(x-\mu)^2}{2\sigma^2}}\, \mathrm{d}x. $$
It turns out that the integrand here does not have an antiderivative in terms of elementary functions. I'm not going to go into the details of what this means, but the practical upshot is that we cannot compute this integral "exactly" by finding an antiderivative. The best that we can do is use numerical methods to approximate the values that this integral takes. We can get approximations that are as good as we like (if we spend enough time and/or computing power on it), but all we'll ever have are approximations. We can then print up tables and tables of these approximations, and use those tables for calculations.
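To make this concrete, here is a minimal sketch in Python of what "numerical methods" can look like: it evaluates the pdf from above and approximates the probability integral with a midpoint Riemann sum. The particular numbers ($\mu = 100$, $\sigma = 15$, and the interval from $85$ to $115$) are made up purely for illustration.

```python
import math

def normal_pdf(x, mu, sigma):
    """The normal density f(x) from the formula above."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def normal_prob(x1, x2, mu, sigma, n=10_000):
    """Approximate P(x1 < X < x2) with a midpoint Riemann sum using n slices."""
    width = (x2 - x1) / n
    midpoints = (x1 + (i + 0.5) * width for i in range(n))
    return sum(normal_pdf(m, mu, sigma) for m in midpoints) * width

# Hypothetical example: mu = 100, sigma = 15, probability of landing within one
# standard deviation of the mean.
print(normal_prob(85, 115, 100, 15))   # roughly 0.6827
```

Even this crude method agrees with published tables to several decimal places; fancier methods (Simpson's rule, Gaussian quadrature) squeeze out more digits with less work.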
But there is a problem with this idea! The value of the integral depends on the parameters $\mu$ and $\sigma$! This means that if we change these values even a little, then we have to compute an entirely new table. This is clearly untenable, so we need another trick. The trick is to "standardize" our normal random variables. It turns out that if $X$ is a normal random variable with mean $\mu$ and standard deviation $\sigma$, then
$$ Z = \frac{X-\mu}{\sigma} $$
is a standard normal random variable—it is normal with mean $0$ and standard deviation $1$. Note that this is the formula used to compute the $z$-score of a normal random variable!
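If you want some evidence for this claim without working through the change of variables, a quick simulation will do: draw a large sample from a normal distribution with some made-up $\mu$ and $\sigma$, standardize every value, and check that the standardized values have mean close to $0$ and standard deviation close to $1$. Here is one way to do that in Python.

```python
import random
import statistics

mu, sigma = 100, 15          # hypothetical parameters, same as before
xs = [random.gauss(mu, sigma) for _ in range(100_000)]
zs = [(x - mu) / sigma for x in xs]  # standardize each sample value

print(statistics.mean(zs))   # should be close to 0
print(statistics.stdev(zs))  # should be close to 1
```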
Since we can turn any normal random variable into a standard normal random variable, we only need one table of values! Yay! The basic idea is that we compute a huge table of values for $P(Z < z_0)$ (using computers at this point in history), then standardize a normal random variable when we want to work with it.
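Here is a toy version of that plan in Python. It builds a small table of $P(Z < z_0)$ values (using the error function, one standard way to evaluate standard normal probabilities numerically), then answers a question about a non-standard normal random variable by standardizing and looking up the result. The numbers $\mu = 100$, $\sigma = 15$, and $x = 130$ are again just for illustration.

```python
import math

def phi(z):
    """Standard normal CDF, P(Z < z), computed via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# A tiny "table" of P(Z < z) in steps of 0.01, like the back of a statistics book.
# Keys are z in hundredths (0, 1, ..., 400) to avoid floating-point lookup issues.
table = {k: phi(k / 100) for k in range(0, 401)}

# Hypothetical example: P(X < 130) for a normal X with mu = 100, sigma = 15.
mu, sigma, x = 100, 15, 130
z = (x - mu) / sigma            # standardize: z = 2.0
print(table[round(z * 100)])    # about 0.9772
```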
Long story short: Getting exact values for probabilities associated to normal random variables is generally not possible. Computers can be used to find very good approximations, but we don't want to have a different table for every set of parameters. Since we can standardize any normal random variable, we only need to generate one table in order to work with any normal random variable.
The rest of the story: All of the above basically harks back to the pre-computer era, or the calculator-free classroom. Modern computers can produce 7- or 15-digit approximations in a fraction of a second, and most statistical (and even spreadsheet!) software has normal distributions built in. The user inputs the $x$-value, the mean, and the standard deviation, and the computer turns the crank and spits out a numerical approximation almost instantly. I would guess that the computer first standardizes the input and actually performs the approximation for a standard normal distribution, but I don't know the nitty-gritty of statistical or spreadsheet software, or calculator programming.
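For what it's worth, even Python's standard library has this built in (since version 3.8): you hand it the mean and standard deviation, and it does the approximating for you. The parameter values below are the same made-up ones as before.

```python
from statistics import NormalDist

# Built-in normal distribution: give it mu and sigma and it handles the rest.
X = NormalDist(mu=100, sigma=15)     # hypothetical parameters again
print(X.cdf(130))                    # P(X < 130), about 0.9772
print(X.cdf(115) - X.cdf(85))        # P(85 < X < 115), about 0.6827
```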
The moral of this story is that tables are an anachronism, and have been replaced by computers in real-world use.