
I'm working on a largeish system of differential equations where I encounter the NDSolve::ndsz error ("step size is effectively zero; singularity or stiff system suspected").

Since this error arises from the complexity of the system I'm not able to distill a minimum example, but there might still be some useful general information we can share.

In particular I'd like to know:

  • What's the difference between stiffness and a singularity? How can I tell whether my system is singular or stiff?

  • What quantity is Mathematica calculating to decide if the system is stiff/singular? Can I calculate this quantity myself to see what property of my system is causing problems?

  • Have you seen this? reference.wolfram.com/mathematica/tutorial/… Commented Dec 18, 2013 at 16:00
  • Ad 1) Both stiffness and singularity will limit the step size used by the integration method. Ad 2) "A convenient way of detecting stiffness is to directly estimate the dominant eigenvalue of the Jacobian of the problem ..." (quote from the reference given by belisarius).
    – mmal
    Commented Dec 19, 2013 at 0:07

1 Answer

Stiffness and singularities

Around 1960, things became completely different and everyone became aware that the world was full of stiff problems. (G. Dahlquist in Aiken 1985)

[Figures: Shampine's flame model integrated with an explicit order-5 RK method and with an implicit order-4 RK method.]
Shampine's "flame" model $y'=y^2-y^3$, $y(0)=10^{-3}$, $0 \le x \le 2/y(0)$, integrated with a relative precision goal of $10^{-4}$. The red dots show the steps and highlight the difference between the nonstiff (left) and stiff (right) halves of the interval and how the explicit (order-5 ERK) and implicit (order-4 IRK) solvers handle them.

I've been feeling for a while that I could answer this question, at least for problems in the normal form $$y'=f(x,y), \quad x \in {\Bbb R},\ y(x) \in {\Bbb R}^n \,.\tag{1}$$ I'm pretty sure I know what NDSolve::ndsz indicates (zero step size) and what a singularity is, especially in the case of a pole, where the function blows up and its derivatives grow even faster. I'm less sure about stiffness, although preparing this answer has helped. The picture is clearer to me for one-dimensional ODEs but gets more complicated and less clear as the dimension increases. Another message directly related to stiffness, NDSolve::ndstf, indicates a (repeated) stiffness test failure for methods that implement a stiffness test. A pretty good overview of how this test works is given in the tutorial Stiffness Detection.

Two sources of error relate to stiffness. First, there is the "truncation error" at a given step: the difference between $y_{k+1}$ and the true solution at $x_{k+1}$, given that the solution starts at $(x_k,y_k)$. Truncation error comes from, and is usually estimated by, the integration method. The step size $x_{k+1}-x_k$ is usually adapted to make the truncation error meet the precision and accuracy goals specified by the user. Generally, the faster a solution varies, the smaller the step size has to be. The second source of error is how the accumulated error is propagated: If a solution $y=y(x)$ to (1) tends toward a given solution $y=p(x)$, then numerically the integration should tend toward the same solution. For that to happen the error should not be amplified; rather, it should be damped. Numerical solutions sometimes diverge from, rather than converge to, the given solution. This in turn is connected to the notion of the "linear stability" of a numerical method (which seems to start with Dahlquist in the 1950s; see Wanner). It comes from considering $$y'-p' = f(x,y)-f(x,p) \approx f_y(x,p)(y-p) \approx J\,(y-p) \,,$$ where $J$ is the Jacobian of $f$ with respect to $y$ (an $n\times n$ matrix if $y$ is an $n$-vector). We will say more about this below. For now, let us note only that the linear model implies that if the step size is small enough, then a numerical method will be stable, which means the numerical steps $y_k$ will converge to $p(x_k)$. Therefore, both truncation error and stability can limit the step size.

On a Mathematica note, one limitation to investigating how stiffness works is that the Automatic method of NDSolve, which is LSODA on ODEs and IDA on DAEs, seems inaccessible to inspection and allows only limited monitoring.

What is stability?

Consider the differential equation $y'=\lambda y$. If the real part of $\lambda$ is positive, solutions go to infinity as $x$ goes to infinity: they're unstable. OTOH, if the real part is negative, solutions go to zero: they're stable. Now in the stable case, let's apply a linear numerical method with step size $h>0$ to the differential equation. The update has the form $$y_{k+1} = R(z) \,y_k, \quad z = h\lambda\,.$$ In a nontrivial case ($y_0 \ne 0$), if $|R(z)| < 1$, then $y_k \rightarrow 0$ and the method is stable at $z$ (a method that is stable for every $z$ in the left half-plane is called $A$-stable, Dahlquist 1963); if $|R(z)| > 1$, then $|y_k| \rightarrow \infty$ and the method is unstable. The region in the complex plane for which $|R(z)| < 1$ is called the linear stability region, and $R(z)$ is called the linear stability function. Since $h>0$ and $\mathop{\text{Re}}(\lambda)<0$, $z$ lies in the left half-plane. The intersection of the linear stability region with the negative real axis is called the linear stability boundary (LSB). For a real negative $\lambda$, the method is unstable for $z = h\lambda < \text{LSB}$. In a nonlinear ODE system, the LSB is based on an approximation, and one usually wants to choose a step size $h$ so that $z$ is not close to the LSB.
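As a concrete check of these definitions, consider the forward Euler method applied to $y'=\lambda y$: $$y_{k+1} = y_k + h\,\lambda\,y_k = (1+z)\,y_k \,,$$ so its linear stability function is $R(z)=1+z$. The stability region $|1+z|<1$ is the open unit disk centered at $z=-1$, and its intersection with the negative real axis gives $\text{LSB}=-2$: for real $\lambda<0$, forward Euler is stable only while $h < 2/|\lambda|$, however smooth the solution may be.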

[Figure: the $\lambda \in {\Bbb C}$ plane, showing the behavior of $y'=\lambda y$.]

What is stiffness?

Stiffness is not a clearly defined concept, except that it is connected to efficiency, especially of explicit methods. It can depend on the integration method, the differential equation, and the precision and accuracy goals. Here are some descriptions I have found:

1. A practical definition [Byrne, Hindmarsh]:

Now compare the costs of the two solutions over the same time interval. If the stiff ODE solver was substantially less expensive to use than the non-stiff solver, then the problem was stiff. If the non-stiff solver was the less expensive, then the problem is non-stiff. Between these extremes are mildly stiff problems and, perhaps, other categories.

2. A more technical one [Shampine, Gear]:

By a stiff problem we mean one for which no solution component is unstable (no eigenvalue [of the Jacobian] has a real part which is at all large and positive) and at least some component is very stable (at least one eigenvalue has a real part which is large and negative). Further, we will not call a problem stiff unless its solution is slowly varying with respect to the most negative real part of the eigenvalues. ... Consequently, a problem may be stiff for some intervals of the independent variable and not for others.

3. Another (here or here) [Shampine]:

If the requested truncation error can be achieved with a step size corresponding to being outside the stability region, we say the problem appears stiff because it is then necessary to reduce the step size solely to make the computation stable.

4. Yet another definition [Moler]:

A problem is stiff if the solution being sought varies slowly, but there are nearby solutions that vary rapidly, so the numerical method must take small steps to obtain satisfactory results.

5. Mathematica, at least for explicit methods that have a "StiffnessTest" option, uses the following: $$|h\,\lambda| \le s \, | LSB | \,, \tag{2}$$ where $h$ is the step size, $\lambda$ is an estimate of the dominant eigenvalue of the Jacobian, $s$ is a "safety factor," and $LSB$ is the linear stability boundary. If a step size fails to meet the criterion (2) "a specified number of times" (Stiffness Detection), then the problem is judged to be stiff. This is similar to definition 3 above. The Automatic method, LSODA, which is part of ODEPACK, adjusts the order and step size, monitors stability, and switches between non-stiff (Adams) and stiff (BDF) multi-step methods when there is a sufficient advantage to do so. This is a sort-of on-the-fly version of definition 1 above. LSODA does this internally, and its operation does not seem open to inspection.

What is a singularity?

I think the notion of a singularity is less problematic and more familiar to many. A singularity in the solution means that the function, or one of its derivatives, has a discontinuity, often a pole. If the derivatives exist and are continuous up to the order of the ODE, then a singularity is less likely to pose a problem.

NDSolve::ndsz

The message NDSolve::ndsz usually indicates a singularity, but users on this site usually ask about stiffness. A singularity is especially probable when the Method is Automatic. In this case, the solver used is LSODA, and it switches between a nonstiff and a stiff solver, which handles many stiff problems.

The error NDSolve::ndsz occurs because in order to meet the precision and accuracy goals, the step size h is reduced to the point that t + h == t. How small h has to be depends on the size and precision of t. At machine precision h is about 64 $MachineEpsilon t or less, since Equal compares with tolerance. This is what is meant by the step size being "effectively zero." In my experience, which does not seem to involve a lot of stiff IVPs, a singularity is a much more common cause of NDSolve::ndsz than stiffness. A singularity is fairly easy to check, after the fact, by plotting. Sometimes, though, the problem turns out to be stiffness.
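The "effectively zero" threshold can be illustrated directly (a rough sketch at machine precision; by default Equal ignores roughly the last seven binary digits of machine reals):

t = 1.;
t + 10 $MachineEpsilon t == t    (* True: a step this small no longer advances t *)
t + 1000 $MachineEpsilon t == t  (* False: a step this large still does *)

Once the required step falls below that tolerance, further iteration cannot make progress, and NDSolve::ndsz is issued.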

The reason for the predominance of singularities causing NDSolve::ndsz is, I surmise, the stiffness switching of the default Automatic/LSODA method used by NDSolve. Many stiff problems are solved by LSODA without any error message. The principal errors LSODA emits are NDSolve::ndsz (step size effectively zero), NDSolve::ndcf (the Adams corrector repeatedly failed to converge), and NDSolve::nderr (repeated error test failure). I don't have a lot of insight into the last two.

Even with the methods "DoubleStep", "Extrapolation", and "ExplicitRungeKutta", the error NDSolve::ndsz usually indicates a singularity, because by default the method performs a stiffness test. If the stiffness test (2) fails, an NDSolve::ndstf message is given. The test is of the sort described in 5 in the section "What is stiffness?" above. Usually stiffness will be detected before the step size is effectively zero.

NDSolve::ndstf

An NDSolve::ndstf error is the result of a repeated "StiffnessTest" failure, that is, of the condition $|h\,\lambda| \le s \, | LSB |$ in (2) failing to be satisfied. The number of repetitions is controlled by the suboption "MaxRepetitions". There are tools for examining this condition.

Step size. The step size $h$ is determined by the method from the current state and the precision and accuracy goals. The steps are stored in an InterpolatingFunction solution, call it solIF, and can be obtained from

steps = solIF@"Coordinates" // First;
stepsizes = Differences[steps];
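For example, applied to the singular IVP singIVP discussed under "Examples" below (a sketch; Quiet suppresses the expected NDSolve::ndsz message):

solIF = Quiet@NDSolveValue[{y'[x] == y[x]^2, y[0] == 1}, y, {x, 0, 2}];
steps = First@solIF@"Coordinates";
Take[Differences[steps], -3] (* step sizes collapse near the pole at x == 1 *)
solIF[Last@steps]            (* the solution value has blown up *)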

Linear stability. For the stability region and boundary, there are

Needs["DifferentialEquations`NDSolveUtilities`"];
(* for ERK method with A matrix  amat  and weight vector  bvec *)
lsf = RungeKuttaLinearStabilityFunction[amat, bvec, z] (* think of z = h λ *)
lsf = RungeKuttaLinearStabilityFunction[##, z] & @@
  Take[NDSolve`EmbeddedExplicitRungeKuttaCoefficients[5, Infinity], 2]
lsb = z /. First@NSolve[Abs@lsf == 1 && z < 0, z] (* linear stability boundary *)
(*  -3.98793  *)

Needs["FunctionApproximations`"];
OrderStarPlot[lsf, 1, z, FrameTicks -> True] (* plots stability region for lsf *)

order = 5;
lsb = NDSolve`LinearStabilityBoundary[order] (* default approximation to the LSB *)
lsb == (z /. First@NSolve[Abs@Normal@Series[Exp[z], {z, 0, order}] == 1 && z < 0, z, Reals])
(*  -3.21705,  True  *)

NDSolve methods can define the "LinearStabilityBoundary" themselves instead of using the default approximation. The method object can be accessed through the MethodMonitor option as NDSolve`Self:

NDSolve[{y'[x] == -y[x], y[0] == 1}, y, {x, 0, 50}, 
 Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 5}, 
 MethodMonitor :> Return[NDSolve`Self["LinearStabilityBoundary"], NDSolve]
 ]
(*  -3.98793  *)

Safety factor. The safety factor is an option the user can control:

Options@NDSolve`StiffnessTest
(*  {"MaxRepetitions" -> {3, 5}, "SafetyFactor" -> 4/5}  *)

NDSolve[..., 
  Method -> {"ExplicitRungeKutta", 
    "StiffnessTest" -> {True, "SafetyFactor" -> 1}}]

The dominant eigenvalue $\lambda$. Estimating λ is discussed at length in the tutorial Stiffness Detection. After an NDSolve run is finished, you can compute it directly from the Jacobian, but that won't give you exactly what NDSolve does. The tutorial suggests NDSolve does the following, at least for some explicit methods. If $v$ is an approximation to the eigenvector corresponding to the dominant eigenvalue $\lambda$, then $$\lambda \approx {\| f(t, y+v) - f(t, y)\| \over \| v\|} \,.$$ To get the Jacobian directly, one can adapt the following (see Components and Data Structures):

{state} = NDSolve`ProcessEquations[sys, y, {t, t1, t2}, options];
NDSolve`Iterate[state, t2]; (* not initialized until integration starts *)
solutionData = state@"SolutionData"["Forward"]; (* or "Backward" depending *)
NDSolve`EvaluateJacobianWithSolutionData[
 state@"NumericalFunction", solutionData, "X"]
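The finite-difference estimate of $\lambda$ above can be tried by hand on stiffIVP from the "Examples" section below, whose Jacobian is the constant $\lambda=-10^3$ (a sketch; the perturbation v is an arbitrary small number standing in for the approximate eigenvector):

f[t_, y_] := -10^3 (y - Exp[-t]) - Exp[-t]; (* right-hand side of stiffIVP *)
With[{t = 1., y = Exp[-1.], v = 1.*^-6},
 Abs[f[t, y + v] - f[t, y]]/Abs[v]] (* ≈ 1000., the magnitude of the dominant eigenvalue *)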

OrderStarPlot. OrderStarPlot was used above, in the section "What is stability?", to visualize the linear stability region in the complex plane. The region is typical for an explicit Runge-Kutta method. The linear model $y'=\lambda y$ is stable or unstable according as $\lambda$ is negative or positive. A numerical method might be stable only in a bounded region such as that shown above.

Needs["FunctionApproximations`"];
OrderStarPlot[1 + z + 1/2 z^2 + z^3/6, 1, z,
 ColorFunction -> (GrayLevel[#/2 + 1/2] &), OrderStarZeros -> {False, False}]
(*  See the image under "What is stability?" above *)

MonitorMethod. Aside from the NDSolve option MethodMonitor, there is the plugin MonitorMethod found in the "Invoking Another Method" section of the tutorial NDSolve Method Plugin Framework. It can be adapted to calculate just about anything about each integration step. It can be applied to most or all of the one-step solvers as submethods. Unsurprisingly, LSODA and IDA are not available. [This restriction is undocumented but was observed by trial and error.]

Examples

Below are some examples of ODEs that report either NDSolve::ndsz or NDSolve::ndstf errors, when using an explicit Runge-Kutta method and integrating over a long enough interval. A standard class of examples of stiffness has the form $$y'(x) = \lambda(y(x) - p(x)) + p'(x) \,,\tag{3}$$ where $\lambda$ has a large negative real part. The solution is $$y(x)=(y(0)-p(0))\,e^{\lambda x} + p(x) \,.$$ For a given $y(0) \ne p(0)$, there is a rapidly vanishing transient part $e^{\lambda x} (y(0)-p(0))$ and any solution converges to $p(x)$. If $p(x)$ is slowly varying, then in principle, integration should be able to track $p(x)$ easily with large step sizes. On the other hand, if the Jacobian $\lambda$ is relatively large, then in combination with large step sizes stability becomes an issue and the problem becomes stiff.
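The claimed solution of (3) is easy to verify symbolically (p is a generic function and y0 the initial value):

y[x_] := (y0 - p[0]) Exp[λ x] + p[x];
Simplify[{y'[x] == λ (y[x] - p[x]) + p'[x], y[0] == y0}]
(*  {True, True}  *)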

  • stiffIVP = {y'[x] == -10^3 (y[x] - Exp[-x]) - Exp[-x], y[0] == 0}, {x, 0, 20}
    This example of the form (3) with $p(x)=e^{-x}$ comes from Byrne and Hindmarsh, 1986.

  • expIVP = {y'[x] == -y[x], y[0] == 1}, {x, 0, 50}
    Shampine, 1977 notes that $y' = \lambda y$ where $\lambda$ has a negative real part becomes stiff if absolute error is used to limit the step size, whereas relative error tends not to lead to stiffness. In the first case, the step size grows as the solution $y \approx e^{-x}$ approaches zero until the step size exceeds the linear stability boundary. In the second case, to keep the truncation error within certain relative error bounds, the step size would remain constant. This example corresponds to the standard form (3) with $p(x)=0$.

  • singIVP = {y'[x] == y[x]^2, y[0] == 1}, {x, 0, 2}
    The solution is y[x] = 1/(1-x) and has a singularity at x == 1. Numerical integration will give an NDSolve::ndsz message when x approaches 1 because the truncation error is hard to keep down as the function and derivatives approach infinity. One can plot the solution or inspect the last step to verify that it is probably a singularity.

  • oscNonstiffIVP = {y''[x] == -10^8 y[x]^3, y[0] == 2, y'[0] == 0}, {x, 0, 1}
    This is a highly oscillatory ODE, which NDSolve reports as stiff with an explicit Runge-Kutta method. It is not stiff according to definition 2 above, but my guess is that the estimate of the eigenvalue is thrown off by the oscillations. Since the eigenvalues are pure imaginary, the stiffness test will be unable to estimate the dominant eigenvalue reliably, especially if it assumes that eigenvalue is real. Byrne and Hindmarsh, 1986, p. 6 take this view:

The latter point relates closely to a common misunderstanding of stiffness. Problems which have undamped high frequency oscillations in the solution, whether attributable to forcing functions or to eigenvalues with large imaginary parts, are called stiff by some authors. We (and most authors) do not call such problems stiff. One reason is that highly oscillatory problems require numerical approaches that differ radically from those for stiff problems.
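Each of the IVPs above can be run with an explicit Runge-Kutta method to reproduce the messages; e.g. for the first one (a sketch; on this interval the default stiffness test is typically triggered and the integration stops early with NDSolveValue::ndstf):

NDSolveValue[{y'[x] == -10^3 (y[x] - Exp[-x]) - Exp[-x], y[0] == 0},
 y, {x, 0, 20}, Method -> "ExplicitRungeKutta"]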

Computational example

The default setting for StiffnessTest is "MaxRepetitions" -> {3, 5}. Below we raise it by one to show the next step that leads to the integration being stopped. We can see that the step size for the last couple of steps has exceeded the linear stability boundary. We can also see that the computed solution diverges from the asymptotic solution for those steps.

(* Needs utilities getLSB, getSF from below *)
ClearAll[analyzeStiffness];
SetAttributes[analyzeStiffness, HoldFirst];
analyzeStiffness[call_] := Module[{lsb, s, λ = -1},
   lsb = getLSB@call;
   s = getSF@call;
   Grid[{
     {Style[
       Reap[Hold[call] /. 
          HoldPattern[Method -> m_] /; TrueQ@Sow@m :> {}][[2, 1, 1]], 
       "Label"], SpanFromLeft},
     {{"LSB", "s"}, {lsb, s} -> s*lsb},
     {{"λ", "h"}, {λ, "h"} -> (Style[#,
           Which[λ #/lsb > 1, Darker@Orange,(*Abs[λ#]>
            Abs[s*lsb]*)λ #/lsb > s, Orange, True, Black]
           ] & /@ Take[
          Quiet[Sow[call, "Solution"], NDSolveValue::ndstf]["Coordinates"] //
            Flatten // Differences,
          -4])}
     }, Dividers -> {{False, True, False}, {False, True, False, True}}]
   ];

Reap[
  analyzeStiffness@
   NDSolveValue[expIVP, y, {x, 0, 50}, AccuracyGoal -> 10, 
    Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4, 
      "StiffnessTest" -> {True, "MaxRepetitions" -> {4, 5}}}],
  "Solution",
  (sol4 = #2[[1]]) &] // First

The table shows the safety factor, eigenvalue, LSB and the last four step sizes. The orange step violates the safety factor. The darker orange-brown steps violate the linear stability boundary. When the step after this was attempted, a fourth stiffness test failure occurred and the step was not integrated into the solution.
(* sol4 is defined in the previous Reap[..] *)
With[{x2 = (sol4["Domain"])[[1, 2]]},
 With[{pr = {{x2 - 12 - 0.1, x2 + 0.2}, 
     Max@sol4[{x2, x2 - 12}] {-0.03, 1}}},
  Show[
   Plot[Exp[-x], {x, x2 - 12, x2 + 0.05}, 
    PlotStyle -> {Gray, AbsoluteThickness[1.]}, PlotRange -> pr],
   ListLinePlot[{sol4(*,sol3*)}, PlotRange -> pr, Mesh -> All, 
     MeshStyle -> {PointSize@Medium, Red}] /. 
    Line[{pp___, p0_, p1_, p2_, p3_}] :> {Black, Line[{pp, p0}], 
      Orange, Line[{p0, p1}], Darker@Orange, Line[{p1, p2}], 
      Line[{p2, p3}]},
   Frame -> True, FrameStyle -> Darker@Blue
   ]
  ]]

The last two steps show the instability of the numerical method by the divergence of the solution from y == 0.

Tools and utilities

The OP asks how to compute whether a system is stiff or singular. The ideas for doing so have been given above, but over time I've amassed many utilities for exploring differential equations. Below I share some related to this Q&A. A few need some work as noted in the comments. As mentioned above, these properties are transient and usually require integrating the system until it reaches a region where stiffness or a singularity develops.

Here is a summary:

  • Stability:
    • getSF[]: Returns the "SafetyFactor" of "StiffnessTest".
    • getLSB[], nLSB[]: Returns the linear stability boundary.
    • getLSF[], nLSF[]: Returns the linear stability function.
  • Properties:
    • stiffQ[]: Returns whether a system is stiff at a step.
    • singQ[]: Returns whether a system is singular at a step.
    • oscillatoryQ[]: Returns whether a system is oscillatory at a step.
    • unstableQ[]: Returns whether a system is unstable at a step.

Code:

(* LINEAR STABILITY UTILITIES *)
    
(* Returns the "SafetyFactor" of "StiffnessTest" *)
getSF // ClearAll;
getSF[ndopts : OptionsPattern[NDSolve]] := "SafetyFactor" /.
    DeleteCases[
     Flatten@{"StiffnessTest" /.
         DeleteCases[
          Flatten@{Method /. Flatten@{ndopts}},
          Except[_Rule]
          ] /.
        "StiffnessTest" -> {}},
     Except[_Rule]
     ] /. Options@NDSolve`StiffnessTest;

(* Returns the linear stability boundary
*   Not available for some methods including Automatic
 *)
getLSB // ClearAll;
SetAttributes[getLSB, HoldFirst];
getLSB::noLSB = "Method returns no linear stability boundary.";
getLSB[call : (NDSolve | NDSolveValue)[ivp_, y_, {t_, t1_, t2_}, ndopts___]] :=
    NDSolve[ivp, y, {t, t1, t2},
   MethodMonitor :> Return[NDSolve`Self["LinearStabilityBoundary"], NDSolve],
   StepMonitor :> Return[Automatic, NDSolve],
   ndopts];
getLSB[state_] /; MatchQ[state, _NDSolve`StateData] :=
  With[{meth = DeleteCases[state["MethodData"], None]}, (* returns {backward, forward} methods *)
   Replace[First[meth]["LinearStabilityBoundary"],
     {lsb_?NumericQ :> lsb, _ :> Automatic}] /; meth =!= {}
   ];

(* Returns the linear stability function
*   Not available for some methods including Automatic; see nLSF[] below
 *)
getLSF // ClearAll;
SetAttributes[getLSF, HoldFirst];
getLSF::noLSF = "Method returns no linear stability function.";
getLSF[call : (NDSolve | NDSolveValue)[ivp_, y_, {t_, t1_, t2_}, ndopts___]] :=
    NDSolve[ivp, y, {t, t1, t2},
   MethodMonitor :> Return[getLSF@NDSolve`Self, NDSolve], 
   StepMonitor :> Return[Missing["NotAvailable"], NDSolve],
   ndopts];
getLSF[state_] /; MatchQ[state, _NDSolve`StateData] :=
  With[{meth = DeleteCases[
      Replace[
       state["MethodData"], (* returns {backward, forward} methods *)
       {m : _NDSolve`ExplicitRungeKutta | _NDSolve`ImplicitRungeKutta :> 
         m, _ -> None},
       1],
      None]},
   getLSF @@ meth
   ];
getLSF[rkmeth_, ___] /; 
   MatchQ[rkmeth, _NDSolve`ExplicitRungeKutta | _NDSolve`ImplicitRungeKutta] :=
    Function[z,
   RungeKuttaLinearStabilityFunction[#1, #2, z] & @@ rkmeth@"Coefficients" // 
    Evaluate];
getLSF[___] := Missing["NotAvailable"];


(* NUMERICAL LINEAR STABILITY UTILITIES *)

(* Computes linear stability function for the given method
*   TBD: Make it work on implicit methods (return -Infinity)
 *)
nLSF // ClearAll;
With[{n = 9},(* number of steps averaged should be odd to preserve sign *)
 nLSF[z_?Negative, method_] := Block[{x, y},
    NDSolveValue[{y'[x] == -y[x], y[0] == 1}, 
     Surd[y[-2 n z]/y[-n z], n], {x, 0, -2 n z}, 
     Method -> {"FixedStep", method}, StartingStepSize -> -z, 
     MaxStepFraction -> 1]
    ];
 ]
nLSF[z_?PossibleZeroQ, method_] := 1.;

(* Returns linear stability boundary
*   nLSF[] computes an average
*   Large values of z may lead to instability
 *)
nLSB // ClearAll;
nLSB[method_] := Block[{z},
   With[{z0 = NestWhile[1.5 # &, -2., nLSF[#, method]^2 < 1 &, 1, 18]}, 
    z /. FindRoot[nLSF[z, method]^2 == 1, {z, z0, z0, 0}]
    ]];

(* stateEigenvalues[] caches eigenvalues so that they are not recomputed
*   on multiple calls for the same NDSolve`StateData state   *)
stateEigenvalues // ClearAll;
Clear[foo`state, foo`eigs];
stateEigenvalues[state_ /; MatchQ[state, foo`state]] := foo`eigs;
stateEigenvalues[state_] := Module[{sd, j0},
   foo`state = state;
   sd = state@"SolutionData";
   j0 = NDSolve`EvaluateJacobianWithSolutionData[
       state@"NumericalFunction",
       #,
       "X"] & /@ sd;
   foo`eigs = Eigenvalues /@ j0
   ];


(* PROPERTY UTILITIES *)

(* Returns whether a system has an eigenvalue
*   with positive real part in the "Forward" direction
*   with negative real part in the "Backward" direction
 *)
unstableQ // ClearAll;
unstableQ[state_] := Module[{eigs, unstable},
   eigs = stateEigenvalues@state;
   (* TBD: Check/test sign/Min/Max for backward direction *)
   unstable = Max /@ Re@eigs;(* mult. by {-1,1} before Max ??? *)
   unstable = {-1, 1} unstable // Positive
   ];

(* Compute stiffness ratio h λ / LSB
 *   stiffQ[] returns whether the ratio is greater
 *     than the safety factor (cf. criterion (2))
 *)
ClearAll[stiffQ, stiffnessRatios];
(* ratio = h λ / LSB =  1 is the linear stability boundary *)
stiffnessRatios[state_, lsb_] :=
  Module[{h0, stiff, eigs, lambdas},
   eigs = stateEigenvalues@state;
   lambdas = Min /@ Re@eigs;(* min real component *)
   h0 = state@"TimeStep"; 
   (* Some methods (ERK e.g.) return positive step size for backward direction *)
   If[h0[[1]] > 0, h0[[1]] = -h0[[1]]];
   stiff = MapThread[
     Function[{h, λ},
      Replace[h, hh_?NumericQ :> λ*h/lsb]
      ], {h0, lambdas}];
   stiff
   ];
stiffQ[state_, lsb_, safetyfactor_ : 4/5] := 
   # > safetyfactor & /@ stiffnessRatios[state, lsb];

(* Returns whether system has an eigenvalue with
 *   a large imaginary component
 *)
oscillatoryQ // ClearAll;
oscillatoryQ[state_, threshold_ : 1/100] := 
  Module[{eigs, osc},
   eigs = stateEigenvalues@state;
   (* TBD: Criteria for osc => nonstiff *)
   osc = Select[#, # != 0 && Abs[Abs@Arg[#] - Pi/2] < 0.1 &] & /@ eigs;
   osc = Replace[
     MaximalBy[DeleteCases[#, _Real], Abs] & /@ osc,
     {} -> {0.}, 1];
   (* TBD: Develop validated criteria *)
   osc = First@MaximalBy[#, Abs[Im@#] &] & /@ osc;
   Abs@Im@# >= threshold & /@ osc
   ];

(* Returns whether a system's solution and
 *   its derivative have become very large
 *)
singQ // ClearAll;
singQ[state_, threshold_ : 1.*^6] := 
 Module[{sd, y0, yp0, sing},
  sd = state@"SolutionData";
  {y0, yp0} = 
   NDSolve`SolutionDataComponent[#, {"X", "X'"}] & /@ sd // Transpose;
  sing = With[{ynorm = Max[1, Norm[#]] & /@ y0},
    MapThread[Min,
     {ynorm/Max[1, Norm[First@ynorm]], (* y/y(t0) ->  Infinity? *)
      (Norm /@ yp0)/ynorm}] (* y'/y -> Infinity? *)
    ];
  (* TBD: Make threshold dependent on WorkingPrecision? *)
  # >= threshold & /@ sing
  ];
  • Shampine's "flame propagation" example is a great demonstration, not only because it's a simple stiffness model, but also because there's an exact solution to compare against: Function[x, 1/(1 + ProductLog[(1/C[1] - 1) Exp[1/C[1] - 1 - x]])]. As for working definitions of stiffness, my favorite is the one by Byrne and Hindmarsh. Commented Jan 10, 2021 at 22:55
  • @J.M. I agree. Probably why they're first. I did have a discussion of Shampine's example under "Examples," but I decided to present it as the lead-off figure, since it so clearly illustrates the idea of Byrne and Hindmarsh. (And the answer initially exceeded the character limit on posts. :)
    – Michael E2
    Commented Jan 12, 2021 at 16:16
  • @Michael: Your discussion would, I believe, make a nice addition to the section "Advanced Numerical Differential Equation Solving" in the Mathematica Tutorials. Really unfortunate your presentation has to get buried in the threads here.
    – josh
    Commented Dec 16, 2021 at 12:51
