If you are not working with the asymptotic case, the phrase "around the best fit" is key. If a function is twice differentiable, it is not only "locally linear" but also "locally quadratic": the quadratic approximation becomes arbitrarily good as you shrink the region of approximation around any given point.
This means that every twice-differentiable log-likelihood surface is approximately quadratic in a sufficiently small region around the best fit. It does not, however, imply that the MLE is approximately Normally distributed (to any given degree of accuracy), because, loosely speaking, that "sufficiently small region" can be much smaller than the region in which the MLE might plausibly fall.
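A small numerical sketch of this point (my own illustration, not from any reference: it assumes an exponential model with rate $\lambda$, for which the MLE and observed information have closed forms). Near the MLE the quadratic approximation to the log-likelihood is excellent, yet with only $n = 5$ observations the sampling distribution of the MLE is strongly right-skewed, i.e., far from Normal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Exponential model: log-likelihood l(lam) = n*log(lam) - lam*sum(x),
# so the MLE is lam_hat = n / sum(x).
n = 5
x = rng.exponential(scale=1.0, size=n)
s = x.sum()
mle = n / s

def loglik(lam):
    return n * np.log(lam) - lam * s

# Observed information at the MLE: -l''(lam_hat) = n / lam_hat^2.
info = n / mle**2

def quad_approx(lam):
    # Second-order Taylor expansion around the MLE (first-order term is zero).
    return loglik(mle) - 0.5 * info * (lam - mle) ** 2

# The approximation error grows as we move away from the MLE...
err_close = abs(loglik(mle * 1.01) - quad_approx(mle * 1.01))
err_far = abs(loglik(mle * 2.0) - quad_approx(mle * 2.0))
print(err_close < err_far)  # True

# ...and with n = 5 the MLE's sampling distribution is clearly skewed,
# despite the log-likelihood being locally quadratic in every sample.
mles = n / rng.exponential(scale=1.0, size=(20000, n)).sum(axis=1)
z = (mles - mles.mean()) / mles.std()
print(float(np.mean(z**3)) > 0.5)  # True: pronounced right skew
```

The "sufficiently small region" where the quadratic fit is good is tiny relative to the spread of `mles`, which is exactly why local quadraticity alone buys no Normality.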
If you are working with the asymptotic case, things are different. If, asymptotically, your log-likelihood surface becomes quadratic around the best fit, then the corresponding MLE is, as you suspect, asymptotically Normally distributed. To see this, note that one of the standard proofs of asymptotic Normality of the MLE takes a Taylor expansion of the log-likelihood function and discards all terms above the second-order term; see for example http://www.stat.cmu.edu/~larry/=stat705/Lecture9.pdf, page 8. The validity of doing so requires that those higher-order terms actually be negligible, i.e., that the log-likelihood surface becomes quadratic around the true parameter value as the sample size goes to $\infty$.
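To make the asymptotic claim concrete, here is a companion sketch (again my own illustration, assuming the same exponential model). Since the score vanishes at the MLE, the expansion reads $\ell(\theta) \approx \ell(\hat\theta) - \tfrac{1}{2} I_n(\hat\theta)(\theta - \hat\theta)^2$, which is the log-density of a Normal in $\theta$; with $n = 1000$ the standardized MLE behaves accordingly:

```python
import numpy as np

rng = np.random.default_rng(1)

# Exponential(rate lam) model: the MLE is lam_hat = 1 / mean(x), and the
# Fisher information gives an asymptotic variance of lam^2 / n.
lam, n, reps = 2.0, 1000, 10000
x = rng.exponential(scale=1.0 / lam, size=(reps, n))
mles = 1.0 / x.mean(axis=1)

# Standardize with the asymptotic standard error lam / sqrt(n).
z = (mles - lam) / (lam / np.sqrt(n))

# A nominal 95% Normal interval has close to nominal coverage...
coverage = float(np.mean(np.abs(z) < 1.96))
print(abs(coverage - 0.95) < 0.015)  # True

# ...and the standardized MLE is nearly symmetric, as a Normal limit requires.
skew = float(np.mean(((z - z.mean()) / z.std()) ** 3))
print(abs(skew) < 0.25)  # True
```

Compare with the $n = 5$ case: the same estimator whose small-sample distribution is badly skewed becomes effectively Normal once the quadratic term dominates the expansion.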