9
$\begingroup$

So I wondered why there are differences between Wolfram Cloud and Windows 11 when evaluating the result of Evaluate $\int_{0}^{\pi/2} \cos^a (x)\sin(ax) dx$ using Mathematica:

E^(-((4 I \[Pi])/5)) Beta[-1, -(6/5), 11/5] // N

I applied FullSimplify because it gives better results (even though it should not matter, since this is school math; the difference just appears when -0.8090169943749475/0.587785252292473 changes to -0.8090169943749473/0.5877852522924732 for no apparent reason, but that is not what this question is about):

E^(-((4 I \[Pi])/5)) Beta[-1, -(6/5), 11/5] //FullSimplify//N

So, the result on Windows is -6.6974 - 8.88178*10^-16 I (the full machine value is -6.697403197294095 - 8.881784197001252*10^-16 I), versus the result on the Linux cloud, which is -6.6974 - 1.33227*10^-15 I.

Why do you think that happens?

Well, I ran Trace as described here: https://mathematica.stackexchange.com/a/271955/82985

and got the following result after 20 minutes of debugging. Apparently it gets to

1/(-1.)^1.2, and not only does it lower accuracy (why?) and thereby lose one unit in the last digit of the imaginary part (on both Windows and Linux), it also makes a mistake in the real part (on Linux only).

So that is why. You are aware this exists, right? https://core-math.gitlabpages.inria.fr/

Libm comparisons also exist: https://members.loria.fr/PZimmermann/papers/accuracy.pdf

Trace[-(-1)^(1/5) Beta[-1, -(6/5), 11/5] // N, TraceInternal -> True, 
 TraceDepth -> 10]

will give this on my Windows 11: https://pastebin.com/h9espi8G (Wolfram Cloud: https://pastebin.com/DGHQzaQX).

P.S. So there are three bugs here. First, on Linux the rounding at machine precision (which cannot be controlled by N, as in 1/(-1.)^1.2) is horrible; even on Windows the error is off by one in the last digit, maybe because of the AMD/Microsoft libm. Still an error. Second, the general solver somehow works at a variable precision, not equal to the machine default, and yet the precision of the imaginary part comes out one digit lower (or it just hits an error and sees 0 instead of 1). Third, on Windows it can sometimes copy values through Trace incorrectly for no reason (maybe it is some precision autotracking, I don't know).
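To illustrate the first point, a minimal sketch (the machine-precision digits depend on the platform's libm, so no outputs are shown):

1/(-1.)^1.2            (* machine arithmetic; the result depends on the platform's libm *)
N[1/(-1)^(6/5), 20]    (* software arbitrary precision; identical across platforms *)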

P.S.2 I opened a thread on Wolfram Community: https://community.wolfram.com/groups/-/m/t/2851685

$\endgroup$
  • $\begingroup$ You can specify the numerical accuracy when using N. For instance, N[E^(-((4 I \[Pi])/5)) Beta[-1, -(6/5), 11/5], 100] gives it to 100 decimal places. $\endgroup$
    – bill s
    Commented Mar 14, 2023 at 21:05
  • $\begingroup$ And by the way, it is not going to work with 1/(-1.)^1.2, only with N[1/(-1)^(12/10), 50]; yet there is an error on Linux inside the trace of the calculation. $\endgroup$ Commented Mar 14, 2023 at 22:09
  • $\begingroup$ (1) On "13.2.0 for Mac OS X ARM (64-bit) (November 18, 2022)" versus "13.2.0 for Linux x86 (64-bit) (December 12, 2022)" I get a 1 ulp difference in numericizing one factor N[ Beta[-1, -(6/5), 11/5]] and no difference in numericizing the -(-1)^(1/5) factor from the full-simplified expression. This accounts for the results on macOS and Linux. (2) Different rounding errors from different expressions should be expected (it's a chapter in most num. anal. books), so if there is a point to comparing simplified and unsimplified expressions, it needs to be made clearer. $\endgroup$
    – Michael E2
    Commented Mar 15, 2023 at 21:58
  • $\begingroup$ (2) We are not comparing FullSimplified and non-FullSimplified expressions. At least I am not. We compare only FullSimplified expressions, but on Linux (Cloud) versus Windows. $\endgroup$ Commented Mar 15, 2023 at 22:00
  • $\begingroup$ BTW, you can get a perfect result by computing -1. (0.8090169943749475 + 0.587785252292473 I) (5.418313004792032 - 3.936634828025925 I). $\endgroup$ Commented Mar 15, 2023 at 22:55

3 Answers

5
$\begingroup$

I'm not sure I understand the problem, despite attempts to get clarification.

So far I'm not sure there is much more to this than the following, this-is-school-math, behavior:

Sqrt[3. (4./3. - 1.) - 1.]
(*  0. + 1.49012*10^-8 I  *)

But in this instance, the behavior is predictable. In the OP's case, there is the added variability of different systems using different libraries on different CPUs.

About the variability: my Mac M1 Pro and the online Linux compute different values for $1/z$ for some complex values of $z$, and neither produces a correctly rounded result all the time. So far, all I've witnessed is incorrect rounding (< 1 ulp error).

If we consider complicated expressions (that is, ones involving more than one arithmetic operation), such as the OP's 1/(-1.)^1.2, which arises in Trace[N@Beta[-1, -(6/5), 11/5], TraceInternal -> True], we have to consider error propagation and additional rounding errors at each step. Even if operations are correctly rounded, the errors get propagated, and subtractive cancellation may lead to large relative error. One may also have correlated errors that cancel each other out. Different formulas have different numerical properties, even if they are school-math equivalent. For instance, (-1.)^(-1.2) is a better way to compute 1/(-1.)^1.2, if you have a good Power function. (Possibly the Beta[] algorithm should be criticized for this.)
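One can compare the two school-math-equivalent formulas against an arbitrary-precision reference; a minimal sketch (the outputs are platform dependent, so none are shown):

InputForm[1/(-1.)^1.2]                                        (* the formula that shows up in the trace *)
InputForm[(-1.)^(-1.2)]                                       (* the single-operation alternative *)
InputForm[N[1/(SetPrecision[-1., 30]^SetPrecision[1.2, 30])]] (* reference, rounded back to machine precision *)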

Now, interesting things happen in evaluating 1/(-1.)^1.2 on the Mac M1 and on Linux. The Mac computes z1 = (-1.)^1.2 correctly rounded and Linux does not; and neither the Mac nor Linux compute 1/z1 correctly rounded; however, on Linux, the two rounding errors cancel each other out, so that the result of 1/(-1.)^1.2 ends up correctly rounded. One might criticize the complex arithmetic of both systems for division not being correctly rounded; however, if complex division is done in real double-precision, then it is a complicated operation (in the above sense) and entails multiple roundings and error propagation.
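To see which step loses the correct rounding on a given machine, one can check each step against a 30-digit reference rounded back to machine precision; a sketch (outputs are platform dependent):

z1 = (-1.)^1.2;
ref1 = N[SetPrecision[-1., 30]^SetPrecision[1.2, 30]];  (* reference for the Power step *)
z1 - ref1        (* 0. + 0. I if (-1.)^1.2 was correctly rounded here *)
ref2 = N[1/SetPrecision[z1, 30]];                       (* reference for the division step *)
1/z1 - ref2      (* 0. + 0. I if 1/z1 was correctly rounded here *)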

That said, neither system computes 1/(-1)^(6/5) correctly rounded. That should be no surprise, since 6/5 is not the same as 1.2: in binary double precision, 1.2 equals exactly 5404319552844595/4503599627370496. (The same applies to 11/5 and 2.2 in the Beta[] calculation; indeed, the error there is 4 times as great, due to the way the bits line up in the binary expansions of the fractions. This rounding error is then propagated through the computation of Beta[].)
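The exact binary values behind the machine numbers can be checked directly:

SetPrecision[1.2, Infinity]
(*  5404319552844595/4503599627370496  *)
SetPrecision[2.2, Infinity]
(*  2476979795053773/1125899906842624  *)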

It is not made clear in the OP what is so horrible, but I infer it is that the imaginary part of a real result is nonzero in the first expression in the OP, whether simplified or not. I have not looked into how expensive it is to make complex arithmetic correctly rounded, and that seems to be a significant source of error. (The error in the Beta[] cannot be completely traced.) The sources cited seem to consider real arithmetic and functions only. IEEE 754 does not treat complex functions (e.g. "pow(x, y) signals the invalid operation exception for finite x < 0 and finite non-integer y"). I think most applications can tolerate errors of a couple of ulps. This seems inevitable to me in computations that involve several operations, such as 1/(-1.)^1.2 or the Beta[] one (unless it is possible to write a special routine for your problem). Mathematica provides arbitrary precision for applications in which such small errors are too large.
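Along the lines of bill s's comment under the question, a sketch of the arbitrary-precision route (software arithmetic, so platform independent); the spurious imaginary part drops to the precision's noise level, and Chop removes it:

N[E^(-((4 I \[Pi])/5)) Beta[-1, -(6/5), 11/5], 30] // Chop
(*  expected: -6.69740319729409... with no imaginary part left after Chop  *)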

Finally, I will point out that it is easy to write a numerically better routine in Mathematica for the OP's initial problem:

myBeta[x_, a_, b_] = 
  MeijerGReduce[Beta[x, a, b], x] // Activate // FunctionExpand;

Block[{Beta = myBeta},
 E^(-((4 I π)/5)) Beta[-1, -(6/5), 11/5] // FullSimplify // N
 ]

(*  -6.6974  *)

Appendix

What $1.2$ and $2.2$ equal in binary floating-point

[Figure: bit expansions of 1.2 and 2.2. The colored squares indicate a 1, the white squares indicate a 0, and the gray squares indicate a nonexistent or insignificant bit.]
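The bit patterns can be reproduced without the picture, for instance:

RealDigits[1.2, 2]
RealDigits[2.2, 2]
(*  each returns the 53 significand bits together with the binary exponent  *)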

$\endgroup$
  • $\begingroup$ "such as the OP's 1/(-1.)^1.2, which arises in Trace[N@Beta[-1, -(6/5), 11/5], TraceInternal -> True]" Nice! Yes, apparently the issue is that it cannot see that this is just 1: N[1/(-1.)^1.2*E^(-((4 I \[Pi])/5))], even if you increase the second parameter of N. Yet if you FullSimplify the second part, it sees that it is 1. + 0. I. $\endgroup$ Commented Mar 16, 2023 at 23:44
  • $\begingroup$ "It is not made clear in the OP what is so horrible, but I infer it is that the imaginary part of a real result is nonzero in the first expression in the OP, whether simplified or not." That too, but what I mostly dislike is the different behaviour on different OSes. Is MeijerGReduce faster or slower than the out-of-the-box Beta? $\endgroup$ Commented Mar 16, 2023 at 23:52
  • $\begingroup$ (Note to poster, not Michael E2): Different behaviors on different operating systems are generally due to (1) different libraries, (2) different arithmetic processors and (3) different compilers. But you knew this. As indicated in responses, one can use software arithmetic instead. It's slower by far, but will generally not show such system dependencies. As for those dependencies, I really do not see a way to avoid them. $\endgroup$ Commented Mar 17, 2023 at 0:05
  • $\begingroup$ So do you use the Intel libm from oneAPI on Windows? Because it has complex.h. Ideally you would use that on Linux too. intel.com/content/www/us/en/docs/dpcpp-cpp-compiler/… $\endgroup$ Commented Mar 17, 2023 at 0:24
  • $\begingroup$ I have only one computer to use, plus whatever is on WolframCloud. I don't have access to Windows (on either Intel or AMD). (Or maybe you're asking Daniel?) MeijerGReduce converts Beta to ${}_2F_1$, so it probably is of similar speed since the trace shows Beta calls ${}_2F_1$. $\endgroup$
    – Michael E2
    Commented Mar 17, 2023 at 0:47
10
$\begingroup$

This made me curious.

I work under macOS on Apple Silicon. I get these results:

a = E^(-((4 I \[Pi])/5)) Beta[-1, -(6/5), 11/5] // N
b = E^(-((4 I \[Pi])/5)) Beta[-1, -(6/5), 11/5] // FullSimplify // N
Abs[Re[a] - Re[b]]/Abs[Re[b]]
Abs[Im[a] - Im[b]]/Abs[Im[b]]
Abs[a - b]/Abs[b]

-6.697403197294095 - 1.7763568394002505*^-15 I

-6.697403197294095 - 8.881784197001252*^-16 I

0.

1.

1.3261534262398443*^-16

So it looks as if we had a 100% relative error in the imaginary part. But as a complex number, the relative error is almost as low as it can be. I agree that in a perfect world this should not happen, since double-precision complex numbers have separate mantissas and exponents for the real and imaginary parts.

Let's see where this error might come from; let's check the "complicated" functions first:

a1 = E^(-((4 I \[Pi])/5)) // N
b1 = E^(-((4 I \[Pi])/5)) // FullSimplify // N
Abs[Re[a1] - Re[b1]]/Abs[Re[b1]]
Abs[Im[a1] - Im[b1]]/Abs[Im[b1]]
Abs[a1 - b1]/Abs[b1]

-0.8090169943749473 - 0.5877852522924732 I

-0.8090169943749475 - 0.5877852522924731 I

1.3723111286221163*^-16

1.8888242266968722*^-16

1.5700924586837752*^-16

So both results are basically as good as they can get in double precision.

a2 = Beta[-1, -(6/5), 11/5] // N
b2 = Beta[-1, -(6/5), 11/5] // FullSimplify // N
Abs[Re[a2] - Re[b2]]/Abs[Re[b2]]
Abs[Im[a2] - Im[b2]]/Abs[Im[b2]]
Abs[a2 - b2]/Abs[b2]

5.418313004792032 - 3.936634828025925 I

5.418313004792032 - 3.936634828025925 I

Of course, FullSimplify cannot do anything here.

Okay, but we have

a - a1 a2
b - b1 b2

0. + 0. I

0. + 0. I

So where the heck does the error in a - b come from? Just from the multiplication? Indeed! The simple "naive" way to compute the products would be

Re[a] == Re[a1] Re[a2] - Im[a1] Im[a2]
Im[a] == Re[a1] Im[a2] + Im[a1] Re[a2]

And apparently, that's what Mathematica does:

Re[a1] Re[a2] - Im[a1] Im[a2] - Re[a]
Re[a1] Im[a2] + Im[a1] Re[a2] - Im[a]
Re[b1] Re[b2] - Im[b1] Im[b2] - Re[b]
Re[b1] Im[b2] + Im[b1] Re[b2] - Im[b]

The problem with this approach is catastrophic cancellation in the imaginary parts:

Re[a1] Im[a2]
Im[a1] Re[a2]

3.1848044765212715

-3.1848044765212733

Re[b1] Im[b2]
Im[b1] Re[b2]

3.184804476521272

-3.184804476521273

As you can see, the two summands are of almost equal magnitude but opposite sign.

So it seems that this can be blamed solely on Mathematica's product routine for complex numbers. But I am pretty sure this would also happen in C or C++ with the -Ofast or -ffast-math flags...
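A quick way to confirm that the error enters at the product step is to redo only the multiplication in higher precision, treating the factors as exact machine values; a sketch, assuming a1 and a2 from above are still defined in the session:

N[SetPrecision[a1, 30] SetPrecision[a2, 30]]
(*  the imaginary part is now only what a1 and a2 already carried; the
    multiplication itself no longer adds a cancellation error  *)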

$\endgroup$
  • $\begingroup$ "I work under macOS on Apple Silicon. I get these results:" Do you use ARM native compilation? The hidden floats are the same as on Windows 11 up to the last digit. That means the bug comes from the difference "-0.8090169943749475/0.587785252292473 changes to -0.8090169943749473/0.5877852522924732 with no reason whatsoever". You need to use Trace to understand where the error comes from and to see this error. The Trace is very small in this case. $\endgroup$ Commented Mar 15, 2023 at 13:41
  • $\begingroup$ I do use ARM compilation, yes. The problem lies solely in the product; the factors are computed accurately. I come more and more to the conclusion that component-wise precision in complex multiplication cannot be guaranteed, only norm-wise precision. The actual exact imaginary part of your number is 0, so computing the relative error would always entail some division by 0 (or by a very small number). $\endgroup$ Commented Mar 15, 2023 at 15:54
  • $\begingroup$ That is not the main problem I am concerned with here. That error is small and is improved by FullSimplify; we all know how slow FullSimplify is, so it cannot just be applied automatically. It makes the order of operations very different. Again, operations inside Trace have variable precision, because there is autotracking of it... What you showed here is not that interesting, because all the interesting things happen in C code, not even inside Trace; lower level, x86_64 assembly. I can reproduce my problem on Linux vs Windows even without Trace, though. See: community.wolfram.com/groups/-/m/t/2851685 $\endgroup$ Commented Mar 15, 2023 at 16:10
  • $\begingroup$ I didn't see much of a question in the original post. As for the basic example, some of the difference can be seen by comparing In[282]:= InputForm[N[1/(-1)^(6/5)]] Out[282]//InputForm= -0.8090169943749473 + 0.5877852522924732*I, which is computed under the hood as In[281]:= InputForm[N@1/Exp[I*Pi*6/5]] Out[281]//InputForm= -0.8090169943749473 + 0.5877852522924732*I, with In[283]:= InputForm[1/(-1)^N[(6/5)]] Out[283]//InputForm= -0.8090169943749475 + 0.587785252292473*I. Once the approximate value is in that power, the chances for a to-the-last-ULP computation drop considerably. $\endgroup$ Commented Mar 15, 2023 at 17:21
  • $\begingroup$ "Why do you think that happens?" That is the question. Again, you are all jumping on the change after FullSimplify, while the question is about the Linux (Wolfram Cloud) bug versus Windows 11, both after FullSimplify. And again, that difference is not important, because the bug happens in only one of the operations, as you can see in Trace. $\endgroup$ Commented Mar 15, 2023 at 19:14
0
$\begingroup$

Okay, so the answer is that on Windows it is possible to use the Intel C/C++ libm (math.h) library, which is much better than glibc; indeed, the accuracy.pdf paper above reports 0.817 ulp for glibc's pow (and gcc has its own pow and will not use the glibc one), compared to 0.515 ulp for the Intel C++ compiler library version 2023.0. And again, here we have complex numbers, which that paper does not actually check. Both are worse than core-math, which achieves the ideal 0.5 ulp.

I would love to get this clarified and to have https://gitlab.inria.fr/core-math/core-math/-/blob/master/src/binary32/pow/powf.c used by Mathematica on Linux.

$\endgroup$
