Here is an algorithm:
input matrix M
(in-place)
divide each row of M by its norm
divide each column of M by its norm
repeat
What will M look like after this has been repeated many times? Can we say anything about what this process does?
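For concreteness, the procedure can be sketched in Python (assuming the Euclidean norm, and that no row or column is entirely zero):

```python
import math

def normalize_rows(M):
    # divide each row by its Euclidean norm (in place)
    for row in M:
        n = math.sqrt(sum(x * x for x in row))
        for j in range(len(row)):
            row[j] /= n

def normalize_cols(M):
    # divide each column by its Euclidean norm (in place)
    for j in range(len(M[0])):
        n = math.sqrt(sum(row[j] * row[j] for row in M))
        for row in M:
            row[j] /= n

def iterate(M, steps):
    for _ in range(steps):
        normalize_rows(M)
        normalize_cols(M)
    return M

M = [[1.0, 2.0], [3.0, 4.0]]
iterate(M, 50)
# after each pass the columns have exactly unit norm;
# if the iteration converges, the rows approach unit norm too
```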
Here is a partial answer:
This iterative procedure of alternately normalising the row sums and the column sums is known as the Sinkhorn-Knopp algorithm, which is used to compute the "$DAD$ form" in Sinkhorn's theorem. (Here $A_k$ denotes the matrix of squared entries of $M_k$: dividing a row of $M$ by its Euclidean norm is the same as dividing the corresponding row of $A$ by its sum.) It is known that $\{A_k\}_{k\ge0}$ converges if and only if $A_0$ contains a positive diagonal, i.e. iff for some permutation matrix $P$, all entries of $A_0$ at the positions of the ones in $P$ are positive. See
R. Sinkhorn and P. Knopp (1967), Concerning Nonnegative Matrices and Doubly Stochastic Matrices, Pacific J. Math., 21(2):343-348.
So, $\{A_k\}_{k\ge0}$ may not converge. In fact, cycles can exist. Example (the first arrow normalises the rows, the second the columns): $$ \pmatrix{0&\frac12&0\\ 1&0&1\\ 0&\frac12&0} \rightarrow\pmatrix{0&1&0\\ \frac12&0&\frac12\\ 0&1&0} \rightarrow\pmatrix{0&\frac12&0\\ 1&0&1\\ 0&\frac12&0} $$ (To get an example for $\{M_k\}_{k\ge0}$, just take the entrywise square roots of the above matrices.)
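One can check the $\{M_k\}$ version of this cycle numerically (a Python sketch using the entrywise square roots; `s` is $\sqrt{1/2}$):

```python
import math

def row_norm(M):
    # divide each row by its Euclidean norm, returning a new matrix
    return [[x / math.sqrt(sum(v * v for v in r)) for x in r] for r in M]

def col_norm(M):
    # normalize columns by row-normalizing the transpose, then transposing back
    T = [list(c) for c in zip(*M)]
    return [list(r) for r in zip(*row_norm(T))]

s = math.sqrt(0.5)
A = [[0.0, s, 0.0], [1.0, 0.0, 1.0], [0.0, s, 0.0]]
B = row_norm(A)       # entrywise square root of the second matrix above
A_back = col_norm(B)  # returns to A: a genuine 2-cycle
```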
I will supplement mvw's analytical answer with a numerical one.
First, I implemented this row and column normalization in Mathematica and applied it to randomly generated $4\times 4$ matrices with entries in $[-1,1]$. The algorithm appears to converge quite quickly. (While this is not a proof of convergence, note that every term in the sequence except possibly the first lies in the bounded set of matrices with entries in $[-1,1]$, so a convergent subsequence exists.) The mean of the matrix norm of the difference between the 10th and the 100th iterates was less than $0.001$. After many iterations, the matrices had, as you would expect, nearly normalized rows and columns.
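A rough re-creation of this experiment in plain Python (the seed, the number of trials, and the Frobenius norm for measuring the difference are my choices, not the original Mathematica setup):

```python
import math, random

def step(M):
    # one pass: divide each row, then each column, by its Euclidean norm
    M = [[x / math.sqrt(sum(v * v for v in r)) for x in r] for r in M]
    cols = [[x / math.sqrt(sum(v * v for v in c)) for x in c] for c in zip(*M)]
    return [list(r) for r in zip(*cols)]

def dist(A, B):
    # Frobenius norm of A - B
    return math.sqrt(sum((a - b) ** 2
                         for ra, rb in zip(A, B) for a, b in zip(ra, rb)))

random.seed(0)
diffs = []
for _ in range(20):
    M = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
    for _ in range(10):
        M = step(M)
    M10 = M                      # iterate number 10
    for _ in range(90):
        M = step(M)              # iterate number 100
    diffs.append(dist(M10, M))
mean_diff = sum(diffs) / len(diffs)
```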
Matrices with exactly normalized rows and columns are obviously fixed points of your algorithm; let us call these unit row-column matrices. An interesting question is whether your algorithm has any fixed points that are not unit row-column matrices. I expect that if any such fixed points exist, they are unstable.
For $2\times 2$ matrices, we can pretty easily see what all the unit row-column matrices are. Writing the upper left element as $\cos t$, we may make exactly two choices for the upper right element, exactly two choices for the lower left element, and two choices for the lower right element as well. Therefore, we have eight subclasses of matrix: $$\left(\begin{matrix} \cos t & \pm\sin t\\ \pm\sin t & \pm\cos t\end{matrix}\right),$$ where the three signs may be chosen independently.
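A quick numerical check that all eight sign choices do give unit row-column matrices (Python; the angle `t` is arbitrary):

```python
import math
from itertools import product

t = 0.7  # arbitrary test angle
c, s = math.cos(t), math.sin(t)
for s1, s2, s3 in product((1, -1), repeat=3):
    M = [[c, s1 * s], [s2 * s, s3 * c]]
    # every row and every column should have unit Euclidean norm
    for row in M:
        assert abs(math.hypot(*row) - 1.0) < 1e-12
    for col in zip(*M):
        assert abs(math.hypot(*col) - 1.0) < 1e-12
```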
I have created the following interesting diagram:
Above you see the unit circle with protruding "hairs." The far end of each hair is the top row of a random $2\times 2$ matrix, and the hair traces the path taken by that top row as the matrix converges to a point on the unit circle. As you can see, the convergence is sensitive to the initial conditions.
You have $$ F(A) = C(R(A)) $$ with $$ R(A) = D_R(A)^{-1} A, \quad D_R(A) = \operatorname{diag}\big(\lVert A^T e_1\rVert, \dots, \lVert A^T e_n\rVert\big) \\ C(A) = A\, D_C(A)^{-1}, \quad D_C(A) = \operatorname{diag}\big(\lVert A e_1\rVert, \dots, \lVert A e_n\rVert\big) $$ i.e. $R$ divides row $i$ by its norm $\lVert e_i^T A\rVert = \lVert A^T e_i\rVert$ and $C$ divides column $j$ by its norm $\lVert A e_j\rVert$. Because the scaling factors depend on $A$, this is a nonlinear operation.
In practice it defines an iteration $$ A_n = F^n(A_0) $$ which has at least the fixed point $A = I$.
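A small numerical check of these definitions (a Python sketch with $R$ and $C$ implemented entrywise; it also illustrates the nonlinearity, since $F(2A) = F(A) \ne 2F(A)$):

```python
import math

def R(A):
    # divide each row of A by its Euclidean norm
    return [[x / math.sqrt(sum(v * v for v in r)) for x in r] for r in A]

def C(A):
    # divide each column of A by its Euclidean norm
    # (implemented as row normalization of the transpose)
    T = [list(c) for c in zip(*A)]
    return [list(r) for r in zip(*R(T))]

def F(A):
    return C(R(A))

I = [[1.0, 0.0], [0.0, 1.0]]
assert F(I) == I  # the identity is a fixed point

# F is scale-invariant, hence not linear: F(2A) agrees with F(A)
A = [[1.0, 2.0], [3.0, 4.0]]
```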
The algorithm you proposed is essentially the same as Ruiz's matrix equilibration scheme (see http://www.numerical.rl.ac.uk/reports/drRAL2001034.pdf).
In numerical linear algebra, it is often beneficial to scale a matrix so that every row and column has norm 1 before solving a linear system with it, as this tends to make the solve more numerically stable. Ruiz's equilibration normalizes the max-norm of every row and column to 1, and he shows that the scheme also applies to any $p$-norm under certain conditions.
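A minimal sketch of the max-norm version in Python (each pass divides row $i$ and column $j$ by the square roots of their max-norms; a real implementation would also accumulate the diagonal scalings so they can be undone after the solve):

```python
def ruiz_equilibrate(A, iters=30):
    # scale rows and columns by the inverse square roots of their max-norms
    n, m = len(A), len(A[0])
    A = [row[:] for row in A]
    for _ in range(iters):
        r = [max(abs(x) for x in row) ** 0.5 for row in A]
        c = [max(abs(A[i][j]) for i in range(n)) ** 0.5 for j in range(m)]
        A = [[A[i][j] / (r[i] * c[j]) for j in range(m)] for i in range(n)]
    return A

# a badly scaled example matrix: entries span four orders of magnitude
B = ruiz_equilibrate([[100.0, 2.0], [3.0, 0.04]])
# every row and column of B now has max-norm close to 1
```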
This process will converge towards a matrix $R$ whose rows and columns are normalized and which preserves the cross-ratios of $M$: $$\forall (i,i',j,j')\quad \frac{M_{i,j}M_{i',j'}}{M_{i,j'}M_{i',j}}=\frac{R_{i,j}R_{i',j'}}{R_{i,j'}R_{i',j}}.$$
The proof of this conservation is immediate: each step multiplies row $i$ by some scalar $a_i$ and column $j$ by some scalar $b_j$, and these factors cancel in the ratio. I have no proof at hand for the convergence, but in practice we use this algorithm and it has always converged.
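A numerical check of the conserved quantity (a Python sketch; the test matrix and the choice of indices are arbitrary):

```python
import math

def step(M):
    # one pass: divide each row, then each column, by its Euclidean norm
    M = [[x / math.sqrt(sum(v * v for v in r)) for x in r] for r in M]
    cols = [[x / math.sqrt(sum(v * v for v in c)) for x in c] for c in zip(*M)]
    return [list(r) for r in zip(*cols)]

def cross_ratio(M, i, ip, j, jp):
    return (M[i][j] * M[ip][jp]) / (M[i][jp] * M[ip][j])

M = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 10.0]]
before = cross_ratio(M, 0, 1, 0, 2)
R = M
for _ in range(100):
    R = step(R)
after = cross_ratio(R, 0, 1, 0, 2)
# before and after agree: the row/column scalings cancel in the ratio
```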