Billiards on surfaces
In the previous post, I showed how to integrate Hamiltonian systems
$$\begin{align} \dot q & = +\frac{\partial H}{\partial p} \\ \dot p & = -\frac{\partial H}{\partial q} \end{align}$$using methods that approximately preserve the energy. Here I'd like to look at what happens when there are non-trivial constraints
$$g(q) = 0$$on the configuration of the system. The simplest example is the pendulum problem, where the position $x$ of the pendulum is constrained to lie on the circle of radius $L$ centered at the origin. These constraints are easy to eliminate by instead working with the angle $\theta$. A more complicated example is a problem with rotational degrees of freedom, where the angular configuration $Q$ is a 3 $\times$ 3 matrix. The constraint comes from the fact that this matrix has to be orthogonal:
$$Q^\top Q = I.$$We could play similar tricks to the pendulum case and use Euler angles, but these introduce other problems when used for numerics. For this and other more complex problems, we'll instead enforce the constraints using a Lagrange multiplier $\lambda$ and work with the constrained Hamiltonian
$$H' = H - \lambda\cdot g(q).$$We're then left with a differential-algebraic equation:
$$\begin{align} \dot q & = +\frac{\partial H}{\partial p} \\ \dot p & = -\frac{\partial H}{\partial q} + \lambda\cdot\nabla g \\ 0 & = g(q). \end{align}$$If you feel like I pulled this multiplier trick out of a hat, you might find it more illuminating to think back to the Lagrangian formulation of mechanics, which corresponds more directly with optimization via the stationary action principle. Alternatively, you can view the Hamiltonian above as the limit of
$$H_\epsilon' = H + \frac{|p_\lambda|^2}{2\epsilon} - \lambda\cdot g(q)$$as $\epsilon \to 0$, where $p_\lambda$ is a momentum variable conjugate to $\lambda$. This zero-mass limit is a singular perturbation, so actually building a practical algorithm from this formulation would be pretty awful, but it can be helpful conceptually.
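To see roughly why, write out the extra pair of Hamilton's equations for $(\lambda, p_\lambda)$ under $H_\epsilon'$:
$$\begin{align} \dot\lambda & = +\frac{\partial H_\epsilon'}{\partial p_\lambda} = \frac{p_\lambda}{\epsilon} \\ \dot p_\lambda & = -\frac{\partial H_\epsilon'}{\partial \lambda} = g(q). \end{align}$$Eliminating $p_\lambda$ gives $\epsilon\ddot\lambda = g(q)$, so formally sending $\epsilon \to 0$ while keeping $\ddot\lambda$ bounded squeezes the system onto the constraint $g(q) = 0$.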
For now we'll assume that the Hamiltonian has the form
$$H = \frac{1}{2}p^*M^{-1}p + U(q)$$for some mass matrix $M$ and potential energy $U$. The 2nd-order splitting scheme to solve Hamilton's equations of motion in the absence of any constraints is
$$\begin{align} p_{n + \frac{1}{2}} & = p_n - \frac{\delta t}{2}\nabla U(q_n) \\ q_{n + 1} & = q_n + \delta t\cdot M^{-1}p_{n + \frac{1}{2}} \\ p_{n + 1} & = p_{n + \frac{1}{2}} - \frac{\delta t}{2}\nabla U(q_{n + 1}). \end{align}$$To enforce the constraints, we'll add some extra steps where we project back onto the surface or, in the case of the momenta, onto its cotangent space. In the first stage, we solve the system
$$\begin{align} p_{n + \frac{1}{2}} & = p_n - \frac{\delta t}{2}\left\{\nabla U(q_n) - \lambda_{n + 1}\cdot \nabla g(q_n)\right\} \\ q_{n + 1} & = q_n + \delta t\cdot M^{-1}p_{n + \frac{1}{2}} \\ 0 & = g(q_{n + 1}). \end{align}$$If we substitute the formula for $p_{n + 1/2}$ into the second equation and then substitute the resulting formula for $q_{n + 1}$ into the constraint $0 = g(q_{n + 1})$, we get a nonlinear system of equations for the new Lagrange multiplier $\lambda_{n + 1}$ purely in terms of the current positions and momenta. Having solved this nonlinear system, we can back-substitute the value of $\lambda_{n + 1}$ to obtain $p_{n + 1/2}$ and $q_{n + 1}$. Next, we compute the momentum at step $n + 1$, but subject to the constraint that it has to lie in the cotangent space of the surface:
$$\begin{align} p_{n + 1} & = p_{n + \frac{1}{2}} - \frac{\delta t}{2}\left\{\nabla U(q_{n + 1}) - \mu_{n + 1}\cdot \nabla g(q_{n + 1})\right\} \\ 0 & = \nabla g(q_{n + 1})\cdot M^{-1}p_{n + 1}. \end{align}$$Once again, we can substitute the first equation into the second to obtain a linear system for the momentum-space multiplier $\mu$. Having solved for $\mu$, we can then back-substitute into the first equation to get $p_{n + 1}$. This is the RATTLE algorithm. (I'm pulling heavily from chapter 7 of Leimkuhler and Reich here if you want to see a comparison with other methods and proofs that it's symplectic.)
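To spell out the substitutions: the position-stage multiplier $\lambda_{n + 1}$ is found by solving the (generally nonlinear) equation
$$g\left(q_n + \delta t\cdot M^{-1}\left[p_n - \frac{\delta t}{2}\left\{\nabla U(q_n) - \lambda_{n + 1}\cdot\nabla g(q_n)\right\}\right]\right) = 0,$$while the momentum-stage multiplier $\mu_{n + 1}$ solves the linear equation
$$\nabla g(q_{n + 1})\cdot M^{-1}\left[p_{n + \frac{1}{2}} - \frac{\delta t}{2}\left\{\nabla U(q_{n + 1}) - \mu_{n + 1}\cdot\nabla g(q_{n + 1})\right\}\right] = 0.$$These are exactly the systems that the code below hands to a nonlinear root-finder and a linear solver, respectively.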
Surfaces¶
Next we have to pick an example problem to work on. To start out, we'll keep the potential as simple as possible -- just a uniform gravitational pull along the $z$-axis -- and focus on the nearly free motion of a single particle on some interesting surface. The simplest surface we could look at is the sphere:
$$g(x, y, z) = x^2 + y^2 + z^2 - R^2$$or the torus:
$$g(x, y, z) = \left(\sqrt{x^2 + y^2} - R\right)^2 + z^2 - r^2.$$Just for kicks, I'd like to instead look at motion on surfaces of genus 2 or higher. There are simple parametric equations for tracing out spheres and tori in terms of the trigonometric functions, so the machinery of explicitly enforcing constraints isn't really necessary. There is no such direct parameterization for higher-genus surfaces, so we'll actually need to be clever in defining the surface and in simulating motion on it. As an added bonus, the ability to trace out curves on the surface will give us a nice way of visualizing it.
To come up with an implicit equation for a higher-genus surface, we'll start with an implicit equation for a 2D curve and inflate it into 3D. For example, the equation for the torus that we defined above is obtained by inflating the implicit equation $\sqrt{x^2 + y^2} - R = 0$ for the circle in 2D. What we want, in order to generate higher-genus surfaces, is a lemniscate. An ellipse is defined as the set of points such that the sum of the distances to two foci is constant; likewise, a lemniscate is defined as the set of points such that the product of the distances to two or more foci is constant. The Bernoulli lemniscate is one such example, which traces out a figure-8 in 2D. It is the zero set of the polynomial
$$f(x, y) = (x^2 + y^2)^2 - a^2(x^2 - y^2)$$and it also has the parametric equation
$$\begin{align} x & = a\frac{\sin t}{1 + \cos^2t} \\ y & = a\frac{\sin t\cdot\cos t}{1 + \cos^2t} \end{align}$$which gives us a simple way to visualize what we're starting with.
import numpy as np
from numpy import pi as π
a = 1
t = np.linspace(0, 2 * π, 256)
xs = a * np.sin(t) / (1 + np.cos(t) ** 2)
ys = a * np.sin(t) * np.cos(t) / (1 + np.cos(t) ** 2)
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.set_aspect("equal")
ax.plot(xs, ys);
We've loosely referred to the idea of inflating the zero-contour of a function $f(x, y)$ into 3D. The 3D function defining the desired implicit surface is
$$g(x, y, z) = f(x, y)^2 + z^2 - r^2,$$where $r$ is a free parameter that we'll have to tune. The most negative value that $f$ takes inside either lobe of the lemniscate is $-a^4/4$, attained at $x = \pm a/\sqrt{2}$, $y = 0$, so keeping the two holes open requires $r < a^4/4$; beyond that we'll figure out a good value of $r$ by trial and error.
The code below uses the sympy software package to create a symbolic representation of the function $g$ defining our surface. Having a symbolic expression for $g$ allows us to evaluate it and its derivatives, but to actually visualize the surface we'll have to sample points on it somehow.
import sympy
x, y, z = sympy.symbols("x y z")
f = (x ** 2 + y ** 2) ** 2 - a ** 2 * (x ** 2 - y ** 2)
r = a / 6
g = f ** 2 + z ** 2 - r ** 2
dg = sympy.derive_by_array(g, [x, y, z])
Symbolically evaluating $g$ every time is expensive, so the code below uses the lambdify
function from sympy to convert our symbolic expression into an ordinary Python function.
I've added some additional wrappers so that we can pass in a numpy array of coordinates rather than the $x$, $y$, and $z$ values as separate arguments.
g_fn = sympy.lambdify([x, y, z], g, modules="numpy")
def G(q):
return np.array([g_fn(*q)])
dg_fn = sympy.lambdify([x, y, z], dg, modules="numpy")
def dG(q):
return np.array(dg_fn(*q)).reshape([1, 3])
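We'll visualize the surface later by tracing out trajectories on it, but if you want a static picture up front, one option is to extract an isosurface mesh with marching cubes. The sketch below assumes scikit-image is installed and isn't used anywhere else in this post; the grid resolution is a guess that may need tuning.
from skimage import measure
# Sample g on a regular grid that comfortably contains the surface.
n_grid = 64
coords = np.linspace(-1.5 * a, 1.5 * a, n_grid)
X, Y, Z = np.meshgrid(coords, coords, coords, indexing="ij")
values = g_fn(X, Y, Z)
# Extract the zero level set; vertices come back in grid units, so rescale and shift.
spacing = (coords[1] - coords[0],) * 3
verts, faces, *_ = measure.marching_cubes(values, level=0.0, spacing=spacing)
verts += coords[0]
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_trisurf(verts[:, 0], verts[:, 1], verts[:, 2], triangles=faces, alpha=0.5);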
One of the first algorithms for constrained mechanical systems was called SHAKE, so naturally some clever bastard had to make one called RATTLE and there's probably a ROLL out there too. The code below implements the RATTLE algorithm. You can view it as analogous to the Störmer-Verlet method, which does a half-step of the momentum solve, a full step of the position solve, and finally another half-step of the momentum solve. In the RATTLE algorithm, we have to exercise a bit of foresight in the initial momentum half-step and position full-step in order to calculate a Lagrange multiplier that projects the position back onto the zero-contour of $g$. Solving for the position multiplier is a genuinely nonlinear equation, whereas the final momentum half-step is just a linear equation for the momentum and its multiplier, which we've written here as $\mu$. Here we only have one constraint, so each multiplier is a scalar, which is a convenient simplification.
import tqdm
import scipy.linalg
import scipy.optimize
def trajectory(q, v, dt, num_steps, f, g, dg, progressbar=False):
qs = np.zeros((num_steps + 1,) + q.shape)
vs = np.zeros((num_steps + 1,) + q.shape)
g_0 = g(q)
λs = np.zeros((num_steps + 1,) + g_0.shape)
μs = np.zeros((num_steps + 1,) + g_0.shape)
def project_position(λ_0, q, v):
def fn(λ, q, v):
v_n = v + 0.5 * dt * (f(q) - λ @ dg(q))
q_n = q + dt * v_n
return g(q_n)
result = scipy.optimize.root(fn, λ_0, args=(q, v))
return result.x
def project_velocity(q, v):
J = dg(q)
# TODO: Don't solve the normal equations, you're making Anne G sad
A = J @ J.T
b = J @ v
return scipy.linalg.solve(A, b, assume_a="pos")
qs[0] = q
μs[0] = project_velocity(q, v)
vs[0] = v - μs[0] @ dg(q)
iterator = (tqdm.trange if progressbar else range)(num_steps)
for t in iterator:
λs[t + 1] = project_position(λs[t], qs[t], vs[t])
v_mid = vs[t] + 0.5 * dt * (f(qs[t]) - λs[t + 1] @ dg(qs[t]))
qs[t + 1] = qs[t] + dt * v_mid
v_prop = v_mid + 0.5 * dt * f(qs[t + 1])
μs[t + 1] = project_velocity(qs[t + 1], v_prop)
vs[t + 1] = v_mid + 0.5 * dt * f(qs[t + 1]) - μs[t + 1] @ dg(qs[t + 1])
return qs, vs, λs, μs
I'll add that this algorithm was exceedingly fiddly to implement and I had to debug it about 5 or 6 times before I got it right. The sanity checking shown below was essential to catching those mistakes.
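The cheapest check of all is a smoke test on the unit sphere, where we know the answer: with no potential, the trajectories should be great circles traversed at constant speed. This is just a sketch of that test, separate from the main computation below.
# Free motion on the unit sphere should stay on the sphere at constant speed.
def g_sphere(q):
    return np.array([q @ q - 1.0])
def dg_sphere(q):
    return 2 * q.reshape([1, 3])
def no_force(q):
    return np.zeros(3)
q_0 = np.array((1.0, 0.0, 0.0))
v_0 = np.array((0.0, 1.0, 0.0))
qs_test, vs_test, *_ = trajectory(q_0, v_0, 1e-2, 1000, no_force, g_sphere, dg_sphere)
print(np.max(np.abs(np.sum(qs_test ** 2, axis=1) - 1.0)))  # distance from the sphere
print(np.max(np.abs(np.sum(vs_test ** 2, axis=1) - 1.0)))  # drift in the speed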
def potential(q):
return q[2]
def force(q):
return np.array((0, 0, -1))
num_trajectories = 25
θs = 2 * π * np.linspace(0, 1, num_trajectories)
num_steps = 2000
Qs = np.zeros((num_steps + 1, 3 * num_trajectories))
Vs = np.zeros((num_steps + 1, 3 * num_trajectories))
Λs = np.zeros((num_steps + 1, num_trajectories))
for i, θ in tqdm.tqdm(enumerate(θs), total=num_trajectories):
q = np.array((0, 0, r))
v = np.array((np.cos(θ), np.sin(θ), 0))
dt = 1e-2
qs, vs, λs, μs = trajectory(q, v, dt, num_steps, force, G, dG)
Qs[:, 3 * i : 3 * (i + 1)] = qs
Vs[:, 3 * i : 3 * (i + 1)] = vs
Λs[:, i] = λs.flatten()
As a sanity check, we'll evaluate the change in energy over the course of the simulation relative to the mean kinetic energy. The relative differences are on the order of 1%, which suggests that the method is doing a pretty good job. I re-ran this notebook with half the timestep and the energy deviation was cut by a factor of four, which is indicative of second-order convergence.
fig, ax = plt.subplots()
for i in range(num_trajectories):
qs, vs = Qs[:, 3 * i : 3 * (i + 1)], Vs[:, 3 * i : 3 * (i + 1)]
K = 0.5 * np.sum(vs ** 2, axis=1)
U = np.array([potential(q) for q in qs])
energies = K + U
ax.plot((energies - energies[0]) / np.mean(K))
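Energy isn't the only thing worth monitoring -- the whole point of the extra machinery is that the trajectories stay on the surface. As a further check, the snippet below samples every 100th step and evaluates the constraint residual, which should sit at roughly the tolerance of the nonlinear solver.
# How far do the computed positions drift off of the surface?
residuals = np.array([
    [g_fn(*Qs[t, 3 * i : 3 * (i + 1)]) for i in range(num_trajectories)]
    for t in range(0, num_steps + 1, 100)
])
print(np.max(np.abs(residuals)))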
Finally, let's make a movie of the results.
from mpl_toolkits import mplot3d
from mpl_toolkits.mplot3d.art3d import Line3DCollection
from matplotlib.animation import FuncAnimation
def make_animation(
Qs, depth=25, duration=30.0, start_width=0.1, end_width=1.5, ax=None
):
num_steps = Qs.shape[0]
num_particles = Qs.shape[1] // 3
widths = np.linspace(start_width, end_width, depth)
collections = []
for i in range(num_particles):
q_i = Qs[:depth, 3 * i : 3 * (i + 1)]
points = q_i.reshape(-1, 1, 3)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
collection = Line3DCollection(segments, linewidths=widths)
collections.append(collection)
ax.add_collection(collection)
def update(step):
        start = max(0, step - depth)
        for i in range(num_particles):
            q_i = Qs[start : step, 3 * i : 3 * (i + 1)]
points = q_i.reshape(-1, 1, 3)
segments = np.concatenate([points[:-1], points[1:]], axis=1)
collections[i].set_segments(segments)
interval = 1e3 * duration / num_steps
frames = list(range(depth, num_steps))
return FuncAnimation(
ax.figure, update, frames=frames, interval=interval, blit=False
)
My Riemannian geometry kung fu is weak, but I think that the geodesic flow on this surface is ergodic (see these notes).
%%capture
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.set_xlim((-a, a))
ax.set_ylim((-a, a))
ax.set_zlim((-a, a))
ax.set_axis_off()
animation = make_animation(Qs, depth=100, ax=ax)
from IPython.display import HTML
HTML(animation.to_html5_video())
It's also interesting to have a look at what the respective Lagrange multipliers for position and velocity are doing.
fig, ax = plt.subplots()
ts = np.linspace(0.0, num_steps * dt, num_steps + 1)
ax.plot(ts, Λs[:, 6].reshape(-1));
Note how the Lagrange multipliers aren't smooth -- they have pretty sharp transitions. If you think of the Lagrange multipliers as fictitious "forces" that push the trajectories back onto the constraint manifold, then their amplitude is probably some kind of indicator of the local curvature of the constraint surface.
More interesting now¶
This all worked well enough for a single particle on the surface. Now let's see what happens if we put several particles on the surface and make them interact. I'd like to find some potential that's repulsive at distances shorter than equilibrium, attractive at longer distances, and falls off to zero at infinity. We could use the Lennard-Jones potential shown in the last demo but the singularity at the origin is going to create more difficulty than necessary. Instead, I'll use a variant of the Ricker wavelet, which is plotted below.
r_e = a / 6
U_e = 0.5
r = sympy.symbols("r")
ρ = r / r_e
potential = U_e / 2 * (1 - 3 * ρ ** 2) * sympy.exp(3 / 2 * (1 - ρ ** 2))
rs = np.linspace(0.0, 3 * r_e, 61)
Us = sympy.lambdify(r, potential, modules="numpy")(rs)
fig, ax = plt.subplots()
ax.set_xlabel("distance / equilibrium")
ax.set_ylabel("potential")
ax.plot(rs / r_e, Us);
I'm using this potential just because it's convenient -- no one thinks there are real particles that act like this.
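Before using it, we can check symbolically that the well really does bottom out at the equilibrium distance $r_e$, with depth $U_e$ and zero slope there.
# The potential should equal -U_e and have zero derivative at r = r_e.
print(float(potential.subs(r, r_e)))
print(float(sympy.diff(potential, r).subs(r, r_e)))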
Now that we're looking at a multi-particle system, we have to evaluate the constraint on every single particle. The derivative matrix has a block structure which a serious implementation would take advantage of.
def G(q):
return np.array([g_fn(*q[3 * i: 3 * (i + 1)]) for i in range(len(q) // 3)])
# TODO: Make it a sparse matrix
def dG(q):
n = len(q) // 3
J = np.zeros((n, 3 * n))
for i in range(n):
q_i = q[3 * i: 3 * (i + 1)]
J[i, 3 * i: 3 * (i + 1)] = dg_fn(*q_i)
return J
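To give an idea of what that TODO would involve, here's a sketch of a sparse version of the constraint derivative using scipy.sparse.block_diag. It isn't used below -- the projection solves in trajectory would also need to be made sparse-aware -- but it shows how directly the block structure maps onto a standard sparse format.
import scipy.sparse
def dG_sparse(q):
    # One 1 x 3 gradient block per particle along the diagonal.
    n = len(q) // 3
    blocks = [
        np.array(dg_fn(*q[3 * i : 3 * (i + 1)])).reshape(1, 3) for i in range(n)
    ]
    return scipy.sparse.block_diag(blocks, format="csr")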
The code below calculates the total forces by summation over all pairs of particles.
I added this silly extra variable force_over_r
to avoid any annoying singularities at zero distance.
force = -sympy.diff(potential, r)  # the force is minus the gradient of the potential
force_over_r = sympy.lambdify(r, sympy.simplify(force / r), modules="numpy")
def F(q):
n = len(q) // 3
f = np.zeros_like(q)
for i in range(n):
q_i = q[3 * i: 3 * (i + 1)]
for j in range(i + 1, n):
q_j = q[3 * j: 3 * (j + 1)]
r_ij = q_i - q_j
r = np.sqrt(np.inner(r_ij, r_ij))
f_ij = force_over_r(r) * r_ij
f[3 * i: 3 * (i + 1)] += f_ij
f[3 * j: 3 * (j + 1)] -= f_ij
return f
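A cheap sanity check on the pairwise summation: by Newton's third law, the internal forces should sum to zero for any configuration. The positions below are arbitrary test values, not ones used in the simulation.
# The pairwise contributions should cancel in the total force.
q_test = np.random.default_rng(0).normal(size=9)
print(np.abs(F(q_test).reshape(-1, 3).sum(axis=0)).max())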
To initialize the system, we'll take every 100th point from one of the trajectories that we calculated above.
skip = 100
particle = 3
q = Qs[::skip, 3 * particle : 3 * (particle + 1)].flatten()
v = np.zeros_like(q)
dt = 1e-2
N = 2000
qs, vs, λs, μs = trajectory(q, v, dt, N, F, G, dG, progressbar=True)
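It's worth repeating the energy sanity check for the interacting system. The sketch below recomputes the total energy -- kinetic plus the pair potential summed over all pairs -- at a subsample of the steps; the drift should stay small, just as in the single-particle case.
# Total energy of the interacting system, sampled every 20 steps.
U_fn = sympy.lambdify(r, potential, modules="numpy")
def total_energy(q, v):
    n = len(q) // 3
    kinetic = 0.5 * np.sum(v ** 2)
    pairs = (
        (q[3 * i : 3 * (i + 1)], q[3 * j : 3 * (j + 1)])
        for i in range(n)
        for j in range(i + 1, n)
    )
    return kinetic + sum(U_fn(np.linalg.norm(q_i - q_j)) for q_i, q_j in pairs)
energies = np.array([total_energy(qs[t], vs[t]) for t in range(0, N + 1, 20)])
print(np.max(np.abs(energies - energies[0])))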
%%capture
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.set_xlim((-a, a))
ax.set_ylim((-a, a))
ax.set_zlim((-a, a))
ax.set_axis_off()
animation = make_animation(qs, depth=100, ax=ax)
HTML(animation.to_html5_video())
Some of the particles fall into each other's potential wells and become bound, developing oscillatory orbits, while others remain free. For two particles to bind, they have to have just the right momenta and relative positions; if they're moving too fast, they just scatter off of each other and ultimately fly apart.
Conclusion¶
Enforcing constraints in solvers for Hamiltonian systems introduces several new difficulties. The basic second-order splitting scheme for unconstrained problems is pretty easy to implement and verify. While the RATTLE algorithm looks to be not much more complicated, it's very easy to introduce subtle off-by-one errors -- for example, accidentally evaluating the constraint derivative at the midpoint instead of the starting position. These mistakes manifest themselves as slightly too large deviations from energy conservation, but these deviations aren't necessarily large in any relative sense. The resulting scheme might still converge to the true solution, in which case the energy deviation will go to zero for any finite time interval. So measuring the reduction in the energy errors asymptotically as $\delta t \to 0$ probably won't catch this type of problem. It may be possible to instead calculate what the next-order term is in the Hamiltonian for the modified system using the Baker-Campbell-Hausdorff formula, but that may be pretty rough in the presence of constraints.
The implementation may be fiddly and annoying, but it is still possible to preserve much of the symplectic structure when constraints are added. The fact that structure-preserving integrators exist at all shouldn't be taken as a given. For example, there don't appear to be any simple structure-preserving adaptive integration schemes; see chapter 9 of Leimkuhler and Reich. The shallow water equations are a Hamiltonian PDE and deriving symplectic schemes that include the necessary upwinding is pretty hard.
There are several respects in which the code I wrote above is sub-optimal. For the multi-particle simulation, the constraints are applied to each particle and consequently the constraint derivative matrix $J$ is very sparse and $J\cdot J^\top$ is diagonal. For expediency's sake, I just used a dense matrix, but this scales very poorly to more particles. A serious implementation would either represent $J$ as a sparse matrix or would go matrix-free by providing routines to calculate the product of $J$ or $J^\top$ with a vector. I also implemented the projection of the momentum back onto the cotangent space by solving the normal equations, which is generally speaking a bad idea. The matrix $J\cdot J^\top$ was diagonal for our problem, so this approach will probably work out fine. For more complex problems, we may be better off solving a least-squares problem with the matrix $J^\top$ using either the QR or singular value decomposition.
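For concreteness, here's a sketch of what the better-behaved momentum projection might look like, using scipy.linalg.lstsq (which factors the matrix with an SVD by default) instead of forming the normal equations. It's a sketch of the alternative, not what the code above actually runs.
def project_velocity_lstsq(q, v, dg):
    # Solve min over μ of ||dg(q)^T μ - v|| directly rather than forming J J^T.
    J = dg(q)
    μ, *_ = scipy.linalg.lstsq(J.T, v)
    return μ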
Finally, I used a simple interaction potential just so we could see something interesting happen. The potential goes to a finite value at zero separation, which is a little unphysical. A much more serious deficiency is that the potential is defined using the particles' coordinates in 3D Cartesian space. Ideally, we would do everything in a way that relies as little as possible on how the surface is embedded into Euclidean space, which would mean using the geodesic distance instead.