
I have a coupled system of ODEs:

$$\cases{ i\frac{\text{d}y_1}{\text{d}t}=A f(t)y_2(t)+E_1 y_1(t)\\ i\frac{\text{d}y_2}{\text{d}t}=A f(t)y_1(t)+E_2 y_2(t) }\tag1$$

Here $f(t)$ is a periodic function with frequency $\omega$.

This is the system of equations governing the populations of the energy levels of a two-level system under the perturbation $A f(t)$. Here $E_1$ and $E_2$ are the energies, and $y_1$ and $y_2$ are the probability amplitudes for levels 1 and 2 ($|y_1|^2+|y_2|^2=1$). For small perturbation amplitudes $A$, $|y_1|^2$ and $|y_2|^2$ oscillate almost periodically with frequency $\nu=\sqrt{A^2+(\lambda-\omega)^2}$, known as the generalized Rabi frequency (here $\lambda=E_2-E_1$). But the oscillation is not exactly periodic: a highly oscillatory component at frequency $\omega$ disturbs it. For larger $A$ the solutions no longer resemble Rabi cycles at all.

I've tried setting $y_1(t)=p(t)e^{-i E_1 t}$, $y_2(t)=q(t)e^{-i E_2 t}$, so that for $A=0$ the result reduces to the stationary states, $p(t)=q(t)=1$. The system then transforms into something more symmetric that explicitly shows its independence of $E_1+E_2$; below, $E_2-E_1$ is denoted by $\lambda$ (and $A$ is absorbed into $f$):

$$\cases{ i\frac{\text{d}p}{\text{d}t}=f(t)e^{-i\lambda t}q(t)\\ i\frac{\text{d}q}{\text{d}t}=f(t)e^{i\lambda t}p(t) }\tag2$$

I've then tried decoupling this system: expressing $q$ from the first equation and substituting it into the second gives an equation for $p$ alone:

$$q(t)=i\frac{e^{i\lambda t}}{f(t)}\frac{\text{d}p}{\text{d}t} \tag3$$ $$-\frac{e^{-i\lambda t}}{f(t)}\frac{\text{d}}{\text{d}t}\left(\frac{e^{i\lambda t}}{f(t)}\frac{\text{d}p}{\text{d}t}\right)=p(t) \tag4$$

I've solved system $(1)$ numerically, and also semi-analytically by approximating $f(t)$ with a piecewise-constant function and letting the pieces become very short; this gave results close to the numerical solution, but it still doesn't provide much insight into the analytical form of the solution.
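For concreteness, here's a minimal sketch of the kind of numerical integration mentioned above, assuming a drive $f(t)=\cos\omega t$ and placeholder parameter values (none of these numbers come from the original problem):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters, chosen only for illustration
A, omega, E1, E2 = 0.05, 1.0, 0.0, 1.1

def f(t):
    return np.cos(omega * t)              # assumed periodic perturbation

def rhs(t, y):
    y1, y2 = y
    # system (1):  i y1' = A f(t) y2 + E1 y1,   i y2' = A f(t) y1 + E2 y2
    return [-1j * (A * f(t) * y2 + E1 * y1),
            -1j * (A * f(t) * y1 + E2 * y2)]

# start in level 1; a complex y0 makes solve_ivp integrate in the complex domain
sol = solve_ivp(rhs, (0.0, 500.0), [1.0 + 0j, 0.0 + 0j],
                max_step=0.05, rtol=1e-9, atol=1e-12)

pop1 = np.abs(sol.y[0])**2                # |y_1|^2: near-Rabi cycles for small A
```

Plotting `pop1` against `sol.t` should reproduce the behaviour described above: near-periodic Rabi-like cycles with a fast ripple at frequency $\omega$ superimposed.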

Here are examples of how the solutions look:

(plots of $|y_1|^2$, of $\Re y_1$ and $\Im y_1$, and of $\Re y_2$ and $\Im y_2$)

My question is: is there any way to further simplify this problem, e.g. to extract that (quasi)periodic Rabi-cycle part? I'm thinking of some analogue of the Bloch theorem, but I don't really see how exactly to apply it here.

Or even better, maybe this could be solved completely analytically? If yes, how?


1 Answer


Systems of equations of the form

$$\mathbf x'(t)=\mathbf A(t)\mathbf x(t),\tag I$$

where $\mathbf A(t)$ is a periodic matrix with period $T$, can be described using Floquet theory. This won't allow us to solve the system analytically, but it will at least give some insight into the structure of the solutions, and it avoids the need to numerically integrate the ODEs for $t>T$.

Any fundamental solution $\mathbf X(t)$ of system $(\mathrm I)$ (i.e. a matrix whose columns form a complete set of linearly independent vector solutions) has the structure

$$\mathbf X(t)=\mathbf P(t)\exp(\mathbf Ct),\tag{II}$$

where $\mathbf P(t)$ is a $T$-periodic matrix function and $\mathbf C$ is a constant matrix defined by

$$\exp(\mathbf CT)=\mathbf X(0)^{-1}\mathbf X(T).\tag{III}$$

The eigenvalues $\mu_i$ of the matrix $\mathbf C$ are called the characteristic exponents of the equation, and the values $\rho_i=e^{\mu_i T}$ are known as the characteristic multipliers.

Any general solution vector $\mathbf x(t)$ can be represented in the form

$$\mathbf x(t)=\sum\limits_{i=1}^N a_i e^{\mu_i t}\mathbf p_i(t),\tag{IV}$$

where $\mathbf p_i(t)$ are $T$-periodic vector functions.


Now to our particular system $(1)$. It can be written in the matrix form $(\mathrm I)$ with

$$\mathbf x(t)=\pmatrix{y_1(t)\\ y_2(t)},\tag{V}$$

$$\mathbf A(t)=\pmatrix{ -iE_1 & -iAf(t)\\ -iAf(t) &-iE_2}. \tag{VI}$$

Given the initial values for our system of equations, we can choose an additional set of initial values, linearly independent of the original one, to form the initial value $\mathbf X(0)$ of the fundamental solution matrix, and then solve the system (numerically) to get $\mathbf X(t)$ for $t\in[0,T]$. From $\mathbf X(0)$ and $\mathbf X(T)$ we can calculate the characteristic exponents $\mu_1$ and $\mu_2$ via $(\mathrm{III})$. They are not unique because of the multivaluedness of the complex logarithm, but that's not a problem: we can choose any branch.
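As a rough illustration of this step, here is a sketch in Python/SciPy. The drive $f(t)=\cos\omega t$ and the parameter values are placeholders, and taking $\mathbf X(0)=\mathbf I$ is just one convenient choice of linearly independent initial values:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import logm

A, omega, E1, E2 = 0.05, 1.0, 0.0, 1.1    # placeholder values
T = 2 * np.pi / omega                     # period of the assumed drive f(t) = cos(omega t)

def rhs(t, y):
    f = np.cos(omega * t)
    # matrix form (I) with A(t) from (VI)
    return [-1j * (E1 * y[0] + A * f * y[1]),
            -1j * (A * f * y[0] + E2 * y[1])]

# Fundamental matrix: take X(0) = I and integrate each column over one period
X0 = np.eye(2, dtype=complex)
XT = np.column_stack([
    solve_ivp(rhs, (0.0, T), X0[:, k], rtol=1e-10, atol=1e-12).y[:, -1]
    for k in range(2)
])

# (III): exp(C T) = X(0)^{-1} X(T); logm picks one branch of the matrix logarithm
C = logm(np.linalg.inv(X0) @ XT) / T
mu1, mu2 = np.linalg.eigvals(C)                  # characteristic exponents
rho1, rho2 = np.exp(mu1 * T), np.exp(mu2 * T)    # characteristic multipliers
```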

Now, since the general solution vector can be represented as

$$\mathbf x(t)=a_1 e^{\mu_1 t}\mathbf p_1(t)+a_2 e^{\mu_2 t}\mathbf p_2(t), \tag{VII}$$

we can find e.g. $\mathbf p_2$ using the following:

$$\begin{align} \mathbf x(t+T)&=a_1 e^{\mu_1 (t+T)}\mathbf p_1(t+T)+a_2 e^{\mu_2 (t+T)}\mathbf p_2(t+T)=\\ &=a_1 e^{\mu_1 (t+T)}\mathbf p_1(t)+a_2 e^{\mu_2 (t+T)}\mathbf p_2(t). \end{align} \tag{VIII}$$

So

$$a_2\mathbf p_2(t)=\frac{\mathbf x(t+T)e^{-\mu_1(t+T)}-\mathbf x(t)e^{-\mu_1 t}}{\exp\left((\mu_2-\mu_1)(t+T)\right)-\exp\left((\mu_2-\mu_1)t\right)}. \tag{IX}$$

Then, knowing $a_2\mathbf p_2(t)$, we can easily find $a_1\mathbf p_1(t)$ from $(\mathrm{VII})$. Using $(\mathrm{VII})$ with the obtained functions, we can then calculate our solution for any $t\in\mathbb R$.
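To make this last step concrete, here is a sketch continuing from the previous one. It assumes the solution $\mathbf x(t)$ of $(1)$ has been tabulated on a uniform grid covering $[0,2T]$ (e.g. by integrating over two periods), so that $\mathbf x(t)$ and $\mathbf x(t+T)$ are available at the same grid points, and that the two characteristic exponents are distinct, so the denominator in $(\mathrm{IX})$ doesn't vanish; the variable names `t_grid` and `x_grid` are hypothetical:

```python
import numpy as np

# Assumed inputs (continuing the previous sketch):
#   mu1, mu2 : characteristic exponents,  T : the period
#   t_grid   : shape (2*n,) uniform grid with t_grid[k + n] == t_grid[k] + T
#   x_grid   : shape (2, 2*n) complex, x_grid[:, k] = x(t_grid[k])
n = t_grid.size // 2
t = t_grid[:n]                                  # one period's worth of points
x_t, x_tT = x_grid[:, :n], x_grid[:, n:2 * n]   # x(t) and x(t + T)

# (IX): isolate a2 * p2(t) from x(t) and x(t + T)
num = x_tT * np.exp(-mu1 * (t + T)) - x_t * np.exp(-mu1 * t)
den = np.exp((mu2 - mu1) * (t + T)) - np.exp((mu2 - mu1) * t)
a2p2 = num / den

# (VII) then gives a1 * p1(t)
a1p1 = (x_t - a2p2 * np.exp(mu2 * t)) * np.exp(-mu1 * t)

def x_at(tau):
    """Evaluate (VII) at an arbitrary time tau, using T-periodicity of p1, p2."""
    k = int(np.argmin(np.abs(t - (tau % T))))   # nearest grid point in [0, T)
    return a1p1[:, k] * np.exp(mu1 * tau) + a2p2[:, k] * np.exp(mu2 * tau)
```

With $a_1\mathbf p_1$ and $a_2\mathbf p_2$ tabulated on a single period, `x_at` extends the solution to arbitrarily large $t$ without further integration, which is exactly the benefit of the Floquet structure mentioned at the start of this answer.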

