If you know that the distribution is in fact normal, then tests derived under normality are optimal. The Z-test (with known variance) achieves exactly this property for the one-parameter normal distribution: mean unknown, variance known.
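As a minimal sketch of what that looks like in practice (the function name `z_test` and the simulated data are my own illustration, not from any particular library):

```python
import math
import random

def z_test(data, mu0, sigma):
    """One-sample Z-test for the mean with known standard deviation sigma.

    Returns the Z statistic. Under H0 (true mean == mu0) with normal data
    it is exactly standard normal, so |Z| > 1.96 rejects at the 5% level.
    """
    n = len(data)
    xbar = sum(data) / n
    return (xbar - mu0) / (sigma / math.sqrt(n))

# illustrative data: normal with true mean 0.5, known sigma = 1
rng = random.Random(7)
data = [rng.gauss(0.5, 1.0) for _ in range(100)]
z = z_test(data, mu0=0.0, sigma=1.0)
```

With the true mean at 0.5 and n = 100, the statistic lands around 5, comfortably past the 1.96 cutoff.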
Parametric tests can be derived for essentially any distribution via maximum likelihood. If the data are Poisson, exponential, etc., a likelihood ratio test with 1 degree of freedom serves as a two-sample test. The link between T-tests and regression with adjustment for a binary group variable extends to generalized linear models, giving two-sample tests for data with a known but non-normal distribution.
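For a concrete example, here is a sketch of the two-sample Poisson likelihood ratio test (the helper names `poisson_loglik`, `poisson_lrt`, and `rpois` are my own; under H0 the statistic is asymptotically chi-squared with 1 degree of freedom, so you would compare it to the 5% cutoff of 3.84):

```python
import math
import random

def poisson_loglik(data, lam):
    # Poisson log-likelihood, dropping the -sum(log(x!)) constant,
    # which cancels in the likelihood ratio
    return sum(x * math.log(lam) - lam for x in data)

def poisson_lrt(x, y):
    """Two-sample LRT of H0: both samples share one Poisson rate.

    Returns 2 * (loglik under separate rates - loglik under a common
    rate), asymptotically chi-squared with 1 df under H0.
    """
    lam_x = sum(x) / len(x)                   # MLE under the alternative
    lam_y = sum(y) / len(y)
    lam_0 = sum(x + y) / (len(x) + len(y))    # MLE under the null
    ll_alt = poisson_loglik(x, lam_x) + poisson_loglik(y, lam_y)
    ll_null = poisson_loglik(x + y, lam_0)
    return 2 * (ll_alt - ll_null)

def rpois(lam, rng):
    # Knuth's simple Poisson sampler (stdlib random has none)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

rng = random.Random(0)
x = [rpois(3.0, rng) for _ in range(200)]  # rate 3
y = [rpois(5.0, rng) for _ in range(200)]  # rate 5
stat = poisson_lrt(x, y)
```

With rates 3 versus 5 and 200 observations per group, the statistic is far beyond the 3.84 threshold, so the test rejects equality of rates.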
It's way more interesting to think about the case where we don't know the distribution of the data. I mean, if you don't even know the mean, what sense does it make to say, "I know this is a 3-parameter bimodal normal mixture model!"
The T-test has the interesting property that it is also asymptotically efficient for a general class of finite-variance distributions. This is a consequence of the central limit theorem: the sampling distribution of the mean converges to normal as the sample size grows, and the approximation is often good even in modest samples. Another way of describing the T-test is as an asymptotic test, because you are approximating the long-run sampling distribution of the mean.
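You can see the CLT at work with a quick simulation: draw many samples of size 30 from a skewed exponential distribution and look at the sampling distribution of their means (all names and constants here are illustrative):

```python
import math
import random

rng = random.Random(42)
n, reps = 30, 2000

# sample means of n exponential(1) draws; true mean 1, true SE 1/sqrt(n)
means = [sum(rng.expovariate(1.0) for _ in range(n)) / n for _ in range(reps)]

grand_mean = sum(means) / reps
sd = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / reps)

# fraction of sample means within 1.96 standard errors of the true mean;
# close to 0.95 if the normal approximation is working
cover = sum(abs(m - 1.0) <= 1.96 / math.sqrt(n) for m in means) / reps
```

Even though the underlying data are strongly skewed, the means cluster around 1 with spread near $1/\sqrt{30} \approx 0.18$, and roughly 95% of them fall within 1.96 standard errors of the true mean.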
Some test statistics, especially minima and maxima, do not converge to normal distributions, so tests based on their limiting distributions are instead compared to exponential (Huzurbazar), Gumbel, or other extreme value distributions as $n \rightarrow \infty$.
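A small simulation makes the point for maxima (the setup is my own illustration): the maximum of $n$ exponential(1) draws, centered by $\log n$, converges to a standard Gumbel distribution with CDF $\exp(-e^{-x})$, not to a normal.

```python
import math
import random

rng = random.Random(1)
n, reps = 100, 3000

# maxima of n exponential(1) draws, centered by log(n); the limiting
# distribution is the standard Gumbel, with CDF exp(-exp(-x))
maxima = [max(rng.expovariate(1.0) for _ in range(n)) - math.log(n)
          for _ in range(reps)]

# the empirical CDF at 0 should be near the Gumbel value exp(-1) ~ 0.368,
# whereas a centered normal would put probability 0.5 there
frac = sum(m <= 0 for m in maxima) / reps
```

The empirical fraction sits near $e^{-1} \approx 0.368$ rather than 0.5, which is exactly the asymmetry a Gumbel limit predicts and a normal limit cannot.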
In general, we would never knowingly apply a parametric test to data of the wrong parametric form; it is simply the less optimal solution.