Suppose that a function $f: \mathbb{R}^n \to \mathbb{R}^n$ is continuously differentiable on $\mathbb{R}^n$, and that the Jacobian determinant of $f$ at a point $x_0 \in \mathbb{R}^n$ is non-zero, that is, \begin{equation} \begin{vmatrix} \frac{\partial f_1\left(x_{0}\right)}{\partial x_{1}} & \frac{\partial f_1\left(x_{0}\right)}{\partial x_{2}} & \dots & \frac{\partial f_1\left(x_{0}\right)}{\partial x_{n}} \\ \frac{\partial f_2\left(x_{0}\right)}{\partial x_{1}} & \frac{\partial f_2\left(x_{0}\right)}{\partial x_{2}} & \dots & \frac{\partial f_2\left(x_{0}\right)}{\partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial f_n\left(x_{0}\right)}{\partial x_{1}} & \frac{\partial f_n\left(x_{0}\right)}{\partial x_{2}} & \dots & \frac{\partial f_n\left(x_{0}\right)}{\partial x_{n}}\\ \end{vmatrix} \neq 0. \end{equation} How can one prove the following proposition: there exists a neighborhood of $x_{0}$ on which the Jacobian determinant of $f$ is non-zero? That is, the following needs to be deduced: \begin{equation} \exists \delta > 0, \forall x \in \mathbb{R}^{n},\lVert x-x_{0} \rVert < \delta \longrightarrow \begin{vmatrix} \frac{\partial f_{1}\left(x\right)}{\partial x_{1}} & \frac{\partial f_{1}\left(x\right)}{\partial x_{2}} & \dots & \frac{\partial f_{1}\left(x\right)}{\partial x_{n}} \\ \frac{\partial f_{2}\left(x\right)}{\partial x_{1}} & \frac{\partial f_{2}\left(x\right)}{\partial x_{2}} & \dots & \frac{\partial f_{2}\left(x\right)}{\partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial f_{n}\left(x\right)}{\partial x_{1}} & \frac{\partial f_{n}\left(x\right)}{\partial x_{2}} & \dots & \frac{\partial f_{n}\left(x\right)}{\partial x_{n}}\\ \end{vmatrix} \neq 0. \end{equation}
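For concreteness, in the case $n=2$ the hypothesis at $x_{0}$ reads
\begin{equation}
\begin{vmatrix} \frac{\partial f_{1}\left(x_{0}\right)}{\partial x_{1}} & \frac{\partial f_{1}\left(x_{0}\right)}{\partial x_{2}} \\ \frac{\partial f_{2}\left(x_{0}\right)}{\partial x_{1}} & \frac{\partial f_{2}\left(x_{0}\right)}{\partial x_{2}} \end{vmatrix} = \frac{\partial f_{1}\left(x_{0}\right)}{\partial x_{1}}\frac{\partial f_{2}\left(x_{0}\right)}{\partial x_{2}} - \frac{\partial f_{1}\left(x_{0}\right)}{\partial x_{2}}\frac{\partial f_{2}\left(x_{0}\right)}{\partial x_{1}} \neq 0,
\end{equation}
so the determinant is an explicit product-and-sum expression in the four partial derivatives.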
Below is my own thinking and doubts about the proof.
Since $f$ is continuously differentiable, each partial derivative is continuous at $x_{0}$, which gives the following set of propositions: \begin{equation} \forall \epsilon > 0, \exists \delta > 0, \forall x \in \mathbb{R}^{n}, \lVert x - x_{0} \rVert < \delta \longrightarrow \left\vert \frac{\partial f_{i}\left(x\right)}{\partial x_{j}} - \frac{\partial f_{i}\left(x_{0}\right)}{\partial x_{j}} \right\vert < \epsilon, \quad i,j = 1,2,\dots,n. \end{equation} To argue by contradiction, we assume the negation of the desired conclusion: \begin{equation} \forall \delta > 0, \exists x \in \mathbb{R}^{n},\lVert x-x_{0} \rVert < \delta \wedge \begin{vmatrix} \frac{\partial f_{1}\left(x\right)}{\partial x_{1}} & \frac{\partial f_{1}\left(x\right)}{\partial x_{2}} & \dots & \frac{\partial f_{1}\left(x\right)}{\partial x_{n}} \\ \frac{\partial f_{2}\left(x\right)}{\partial x_{1}} & \frac{\partial f_{2}\left(x\right)}{\partial x_{2}} & \dots & \frac{\partial f_{2}\left(x\right)}{\partial x_{n}} \\ \vdots & \vdots & \ddots & \vdots\\ \frac{\partial f_{n}\left(x\right)}{\partial x_{1}} & \frac{\partial f_{n}\left(x\right)}{\partial x_{2}} & \dots & \frac{\partial f_{n}\left(x\right)}{\partial x_{n}}\\ \end{vmatrix} = 0, \end{equation} and derive a contradiction. However, I am not sure what trick to use to relate these partial derivatives to the Jacobian determinant. Expanding the determinant via its linear-algebra definition looks formidable, yet there seems to be little hope of avoiding the expansion.
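For reference, the linear-algebra definition mentioned above (the Leibniz formula) makes the dependence of the determinant on the $n^{2}$ entries explicit:
\begin{equation}
\det\left(\frac{\partial f_{i}\left(x\right)}{\partial x_{j}}\right)_{i,j=1}^{n} = \sum_{\sigma \in S_{n}} \operatorname{sgn}\left(\sigma\right) \prod_{i=1}^{n} \frac{\partial f_{i}\left(x\right)}{\partial x_{\sigma\left(i\right)}},
\end{equation}
where $S_{n}$ is the set of permutations of $\{1,\dots,n\}$. Since the right-hand side is a polynomial (finitely many sums and products) in the entries, one possible route, sketched here rather than worked out, is to bound how much this expression can change when each entry changes by less than $\epsilon$, using the continuity statement above.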