I don't think there's a lot of value in the "path start" mechanism. Just pass in a starting path vector, and assume N to be the size of this vector.
Don't capitalise method names in Python.
This form of comment:

```python
#MONTE CARLO SIMULATION
def Metropolis_HO(start,N,eta,delta,ntimes):
```

should actually be a docstring:

```python
def metropolis_ho(path: np.ndarray, eta: float, delta: float, ntimes: int) -> tuple[
    np.ndarray,
    np.ndarray,
]:
    """Monte Carlo Simulation"""
```

Add PEP 484 type hints to your signatures (example above).
This:

```python
obs1=np.zeros(ntimes)
```

should actually use `np.empty()`, since every element is assigned before it is read.
This:

```python
c1=1./eta
c2=(1./eta+eta/2.)
```

is really just

```python
c1 = 1/eta
c2 = c1 + eta/2
```
`repeat` is unused, so name it `_` (the convention for unused loop variables).
This:

```python
p_ratio=c1*y_p*force-c2*(y_p**2)-c1*y*force+c2*(y**2)
```

is clearer as

```python
p_ratio = c1*force*(y_p - y) + c2*(y*y - y_p*y_p)
```
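As a quick sanity check (with arbitrary illustrative values, not taken from your program), the two forms are algebraically identical:

```python
# Verify the refactored expression equals the original form,
# using arbitrary values for the constants and sample points.
c1, c2 = 2.0, 2.3
y, y_p, force = 0.4, 0.55, 0.9

original = c1*y_p*force - c2*(y_p**2) - c1*y*force + c2*(y**2)
refactored = c1*force*(y_p - y) + c2*(y*y - y_p*y_p)
```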
In this expression, the `min` is not necessary:

```python
if np.random.rand()<min(np.exp(p_ratio),1):
```

because `rand()` itself ranges from 0 up to (but not including) 1, so an `exp` producing a value above 1 does not change the behaviour.
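A small demonstration of that claim: for any uniform draw `u` in [0, 1), clamping the acceptance ratio with `min()` never flips the decision.

```python
import numpy as np

# A uniform draw u lies in [0, 1), so comparing u < exp(p_ratio) and
# u < min(exp(p_ratio), 1) always yields the same accept decision.
rng = np.random.default_rng(0)
decisions_match = all(
    (u < min(np.exp(p), 1)) == (u < np.exp(p))
    for p in (-2.0, -0.1, 0.0, 0.5, 3.0)
    for u in rng.random(100)
)
```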
This expression:

```python
obs1[i]=np.average(path**2)
```

can be re-expressed as a self-dot product, which might help marginally with speed; it will look like

```python
obs1[i] = np.dot(path, path)/n
```
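The two expressions compute the same mean of squares; the dot form also avoids allocating the squared temporary array. A quick check on made-up data:

```python
import numpy as np

# np.average(path**2) and np.dot(path, path)/n both give the mean
# of squares; the dot form skips the intermediate path**2 array.
path = np.linspace(-1.0, 1.0, 7)
n = len(path)
mean_sq_avg = np.average(path**2)
mean_sq_dot = np.dot(path, path)/n
```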
Avoid this loop:

```python
temp=0.
for k in range(N):
    temp+=(path[k]-path[(k+1)%N])**2
obs2[i]=temp/N
```

Instead, use the same self-dot product trick, but on a `roll()`ed array:

```python
diff = path - np.roll(path, -1)
obs2[i] = np.dot(diff, diff)/n
```
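To see that the rolled self-dot product matches the explicit wrap-around loop, compare the two on a small random path:

```python
import numpy as np

# The explicit wrap-around loop and the rolled self-dot product give
# the same mean squared difference between neighbouring sites.
rng = np.random.default_rng(0)
path = rng.uniform(-1.0, 1.0, 8)
n = len(path)

loop_result = sum((path[k] - path[(k + 1) % n])**2 for k in range(n))/n
diff = path - np.roll(path, -1)
dot_result = np.dot(diff, diff)/n
```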
Your non-equilibrium filter is trouble. It arbitrarily starts the output at element 10,000 when that should be parametric (especially for callers that use a small value of `ntimes`). Since `n_corr` is always 1, delete it. And since the slice always terminates at the end of the array, remove that, too. That leaves us with

```python
n_term = equilibrium_start
obs1 = obs1[n_term:]
obs2 = obs2[n_term:]
```
Add tests, at least for regression. This ties in with another important concept: even though your algorithm relies on random behaviour, it should be repeatable based on a seed. Best to pass in the newer NumPy random `Generator` interface.
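The property that makes such regression tests possible: two generators built from the same seed produce identical streams.

```python
import numpy as np
from numpy.random import default_rng

# Two Generator instances seeded identically produce identical
# streams, so random output can be asserted against in a test.
a = default_rng(seed=42).random(5)
b = default_rng(seed=42).random(5)
reproducible = np.array_equal(a, b)
```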
If you can turn the loop pair

```python
for j in range(n):
    for _ in range(3):
```

inside out so that `j` becomes the innermost index, then this can be further vectorised. If not, you're stuck.
Suggested:

```python
import numpy as np
from numpy.random import default_rng, Generator


def metropolis_ho(
    path: np.ndarray,
    rand: Generator,
    eta: float,
    delta: float,
    ntimes: int,
    equilibrium_start: int = 10_000,
) -> tuple[
    np.ndarray,  # observables 1
    np.ndarray,  # observables 2
]:
    """Monte Carlo Simulation"""
    n = len(path)

    # Initialize arrays of observables
    obs1 = np.empty(ntimes)
    obs2 = np.empty(ntimes)

    # Useful constants
    c1 = 1/eta
    c2 = c1 + eta/2

    # Iterate loop on all sites
    for i in range(ntimes):
        for j in range(n):
            for _ in range(3):
                # Set y as j-th point on path
                y = path[j]

                # Propose modification
                y_p = rand.uniform(y - delta, y + delta)

                # Calculate accept probability
                force = path[(j + 1) % n] + path[j - 1]
                p_ratio = c1*force*(y_p - y) + c2*(y*y - y_p*y_p)

                # Accept-reject
                if rand.random() < np.exp(p_ratio):
                    path[j] = y_p

        # Average of y^2 on the path
        obs1[i] = np.dot(path, path)/n

        # Average of Delta y^2 on the path
        diff = path - np.roll(path, -1)
        obs2[i] = np.dot(diff, diff)/n

    # Get rid of non-equilibrium states and decorrelate
    n_term = equilibrium_start
    obs1 = obs1[n_term:]
    obs2 = obs2[n_term:]

    return obs1, obs2


def main() -> None:
    hot = True
    n = 400
    rand: Generator = default_rng(seed=0)

    if hot:
        start = rand.uniform(low=-1, high=1, size=n)
    else:
        start = np.zeros(n)

    obs1, obs2 = metropolis_ho(
        path=start,
        rand=rand,
        eta=0.6,
        delta=0.1,
        ntimes=50,
        equilibrium_start=10,
    )
    assert np.allclose(
        obs1,
        np.array([
            0.33481635, 0.33975680, 0.33848696, 0.33565933, 0.34104504,
            0.33577587, 0.34060038, 0.34019111, 0.34048678, 0.34476147,
            0.34417650, 0.34058942, 0.34307716, 0.34851236, 0.33542469,
            0.33176036, 0.32263985, 0.33208625, 0.33240874, 0.32467590,
            0.32252395, 0.32424555, 0.32694504, 0.33374541, 0.32667225,
            0.32566617, 0.31967787, 0.32302223, 0.31925758, 0.32326829,
            0.32998249, 0.33500381, 0.34054321, 0.34033330, 0.33718049,
            0.33962281, 0.33585350, 0.34389458, 0.34816599, 0.34695869,
        ]),
    )
    assert np.allclose(
        obs2,
        np.array([
            0.60471689, 0.60145567, 0.59598397, 0.57394761, 0.58472186,
            0.56454233, 0.56965071, 0.55589532, 0.55379822, 0.54761523,
            0.54478420, 0.54175528, 0.53238427, 0.53223468, 0.51024062,
            0.48941357, 0.47810786, 0.50182631, 0.48789695, 0.47479243,
            0.46272811, 0.45823329, 0.45743116, 0.46647511, 0.45989627,
            0.46653626, 0.45287477, 0.44861666, 0.43814114, 0.44599284,
            0.44905000, 0.46638112, 0.46607791, 0.45834453, 0.44513739,
            0.44357120, 0.43453803, 0.43544248, 0.44287501, 0.42685933,
        ]),
    )


if __name__ == '__main__':
    main()
```
The `force` computed at site `j` depends on the recently-updated value of `path[j - 1]`. Can you accept the results if this is made to depend only on the previous iteration and not the current one? If so, this can be given a large speedup. If not, I think you're basically stuck.
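If stale neighbour values are acceptable, one whole sweep can be vectorised. A sketch (the function name and structure are mine, and note this is a Jacobi-style update, so its output will not match the sequential original):

```python
import numpy as np
from numpy.random import Generator, default_rng


def vectorized_sweep(path: np.ndarray, rand: Generator,
                     eta: float, delta: float) -> np.ndarray:
    """One Jacobi-style sweep: every site sees neighbour values frozen
    at the start of the sweep, unlike the sequential update."""
    n = len(path)
    c1 = 1/eta
    c2 = c1 + eta/2

    # Neighbour sums computed once, from the pre-sweep path.
    force = np.roll(path, -1) + np.roll(path, 1)

    # Propose all modifications at once.
    y_p = rand.uniform(path - delta, path + delta)
    p_ratio = c1*force*(y_p - path) + c2*(path*path - y_p*y_p)

    # Accept-reject, element-wise.
    accept = rand.random(n) < np.exp(p_ratio)
    return np.where(accept, y_p, path)


new_path = vectorized_sweep(
    default_rng(0).uniform(-1, 1, 16), default_rng(1), 0.6, 0.1)
```

Whether that change of update order is physically acceptable is exactly the question above.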