I am trying to decompose an arbitrary quantum state into a matrix product state (MPS). For this I follow this paper by U. Schollwöck, where section 4.1.3 in particular is relevant.
So far I did the following:
- created a random vector $x \in \mathbb{C}^{2^L}$ and normalized it
- reshaped it into a matrix with row index $\sigma_1$ and combined column index $(\sigma_2,\sigma_3,\ldots,\sigma_L)$
- used the singular value decomposition routine from NumPy to split it into $U_{\sigma_1,a_1}\, S_{a_1,a_1}\, V^\dagger_{a_1,(\sigma_2\ldots\sigma_L)}$
- repeated the same with $V^\dagger_{a_1,(\sigma_2\ldots\sigma_L)}$, but this time with the row/column grouping $(a_1,\sigma_2),(\sigma_3,\ldots,\sigma_L)$ (I'm actually not sure I did exactly this separation).
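To make the grouping of indices explicit, the first split can be sketched as follows (a minimal, self-contained example with a freshly generated random state; the variable names are my own, not from the paper):

```python
import numpy as np

# Minimal sketch of the first split: group sigma_1 as the row index and
# (sigma_2 ... sigma_L) as the combined column index, then SVD.
L = 6
rng = np.random.default_rng(0)
x = rng.normal(size=2**L) + 1j * rng.normal(size=2**L)
x /= np.linalg.norm(x)

psi = x.reshape(2, 2**(L - 1))      # rows: sigma_1, cols: (sigma_2...sigma_L)
u, s, vdag = np.linalg.svd(psi, full_matrices=False)
# u:    (2, 2)  -> U_{sigma_1, a_1}
# s:    (2,)    -> diagonal entries of S_{a_1, a_1}
# vdag: (2, 32) -> V^dagger_{a_1, (sigma_2...sigma_L)}

# for the next step, absorb S into V^dagger and regroup (a_1, sigma_2):
rest = (s[:, None] * vdag).reshape(2 * len(s), 2**(L - 2))
print(u.shape, vdag.shape, rest.shape)
```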
This is the code I implemented:
```python
import numpy as np

L = 6
np.random.seed(1234)
x = np.round(np.random.normal(0, 1, size=2*(2**L)).view(np.complex128), 1)
x /= np.linalg.norm(x)

def convert2mps(x, chi):
    L = int(np.log2(len(x)))
    v = x
    u = {}
    s = {}
    r = 1
    for i in range(1, L):
        v = v.reshape((2*r, 2**(L-i)))
        u[i], s[i], v = np.linalg.svd(v, compute_uv=True, full_matrices=False)
        r = len(s[i])
        print("u", u[i].shape)
        print("v", v.shape)

convert2mps(x, 4)
```
The output (the shapes of $U$ and $V^\dagger$) looks as follows:

```
u (2, 2)
v (2, 32)
u (4, 4)
v (4, 16)
u (8, 8)
v (8, 8)
u (16, 4)
v (4, 4)
u (8, 2)
v (2, 2)
```
I think the dimensions in the individual steps are right (at least the $2\to 4\to 8\to 4\to 2$ pattern of the $V^\dagger$ shapes). The expected outcome of the whole function is that I end up with $L$ rank-3 tensors which I can contract to re-obtain the original vector $x$ ($\chi$ in the code is the maximal bond dimension and, I think, not yet relevant).
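To make the goal concrete, here is a hedged sketch of what I expect such a function to do, with $S$ absorbed into $V^\dagger$ at every step and the resulting rank-3 tensors contracted back and compared against $x$ (no truncation, so $\chi$ is ignored; the names and index labels are my own guess, not from the paper):

```python
import numpy as np

# Sketch: full left-to-right sweep, absorbing S into V^dagger each step,
# then contracting the rank-3 tensors back into the original vector.
L = 6
rng = np.random.default_rng(0)
x = rng.normal(size=2**L) + 1j * rng.normal(size=2**L)
x /= np.linalg.norm(x)

tensors = []
v = x.reshape(1, -1)                     # trivial left bond of dimension 1
r = 1
for i in range(L):
    v = v.reshape(2 * r, -1)             # rows: (a_{i-1}, sigma_i), cols: rest
    u, s, vdag = np.linalg.svd(v, full_matrices=False)
    tensors.append(u.reshape(r, 2, -1))  # indices (a_{i-1}, sigma_i, a_i)
    v = s[:, None] * vdag                # carry S V^dagger into the next step
    r = len(s)

# contract all tensors back together; psi has shape (1, 2, ..., 2, 1)
psi = np.ones((1, 1))
for A in tensors:
    psi = np.tensordot(psi, A, axes=1)   # contract last axis with left bond
psi = np.tensordot(psi, v, axes=1).reshape(-1)  # leftover v is a 1x1 scalar
print(np.allclose(psi, x))               # True
```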
My problem is that after the reshaping and the singular value decomposition, I don't know which of my indices are the physical ones ($\sigma_i$ in the paper) and which are the internal ones ($a_i$ in the paper). I know that I need to contract the singular values with $V^\dagger$ in every step, but without correct knowledge of the indices I'm not able to do that either. Related to this is the problem of transforming the $U$ matrices into $A$ tensors. The paper describes it as follows:
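At least the contraction of the singular values into $V^\dagger$ should be index-safe, since $S$ is diagonal it is just a row-wise scaling (toy matrix of my own choosing):

```python
import numpy as np

# S is diagonal, so S @ V^dagger scales row a_1 of V^dagger by s[a_1]:
M = np.arange(8, dtype=float).reshape(2, 4)
u, s, vdag = np.linalg.svd(M, full_matrices=False)
sv = s[:, None] * vdag              # same as np.diag(s) @ vdag
print(np.allclose(u @ sv, M))       # True
```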
> We now decompose the matrix $U$ into a collection of $d$ row vectors $A^{\sigma_1}$ with entries $A_{a_1}^{\sigma_1}=U_{\sigma_1,a_1}$.
If I understand it correctly, it is just another reshape, so that we get one row for $\sigma_1=\uparrow$ and one row for $\sigma_1=\downarrow$, but I'm not sure how exactly, and again I run into the same confusion about the indices.
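My current reading of the quoted sentence, sketched with a stand-in $2\times 2$ matrix of my own (not the actual SVD output), is that the $d = 2$ row vectors are simply the rows of $U$:

```python
import numpy as np

# Stand-in for the first U, shape (2, 2): row index sigma_1, column a_1.
theta = 0.3
u = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The d = 2 row vectors A^{sigma_1} are just the rows of U:
A_up, A_down = u[0, :], u[1, :]     # A^{sigma_1}_{a_1} = U_{sigma_1, a_1}
print(A_up.shape, A_down.shape)     # (2,) (2,)
```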
Also, I read that the boundary tensors get an additional index which is trivially set to 1 for consistency with the others (e.g. here in figure 3a). I have trouble visualizing this. Is it just that the whole NumPy nd-array is put into another array, so that we need one more index to access a specific value inside? And if so, where does the 1 appear?
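My guess of what the trivial index means, as a sketch (again with a stand-in matrix of my own; the "1" would then be the size of the new axis, not a stored value):

```python
import numpy as np

# Stand-in for the first U, shape (2, 2): indices (sigma_1, a_1).
u1 = np.array([[0.6, 0.8],
               [0.8, -0.6]])

# Wrap it in an extra leading axis of size 1, giving a rank-3 tensor
# with indices (a_0, sigma_1, a_1), where a_0 can only take the value 0:
A1 = u1.reshape(1, 2, 2)
print(A1.shape)                 # (1, 2, 2)
print(np.allclose(A1[0], u1))   # True
```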