
All Questions

4 votes · 2 answers · 110 views

pow and its relative error

Investigating floating-point implementations of $\operatorname{pow}(x,b)=x^b$ with $x,b\in\Bbb R$ in some libraries, I found that some pow ...
asked by emacs drives me nuts
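One way to probe pow's relative error empirically (a sketch, not the asker's method): for an integer exponent, `Fraction` gives an exact rational reference to compare the double result against.

```python
from fractions import Fraction

# Measure the relative error of x**b against exact rational arithmetic.
# Fraction(x)**b is the exact power of the double nearest 1.1.
x, b = 1.1, 50
exact = Fraction(x) ** b
approx = Fraction(x ** b)          # what the libm-backed pow returned
rel_err = abs(approx - exact) / exact
print(float(rel_err))              # typically a small multiple of 2**-53
```

The observed error depends on the platform's libm; most implementations stay within a couple of ulps for arguments like these.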
1 vote · 2 answers · 64 views

How to transform this expression to a numerically stable form?

I have this function $$f(x, t)=\frac{\left(1+x\right)^{1-t}-1}{1-t}$$ where $x \ge 0$ and $t \ge 0$. I want to use it in a neural network, and thus need it to be differentiable. While it has a ...
asked by yuri kilochek
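One standard rewrite (a sketch, assuming `expm1`/`log1p` are acceptable building blocks): since $(1+x)^{1-t} = \exp((1-t)\log(1+x))$, the numerator is `expm1((1-t)*log1p(x))`, which avoids the cancellation when $(1+x)^{1-t}\approx 1$; the $t \to 1$ limit is $\log(1+x)$.

```python
import math

def f(x, t):
    # f(x,t) = ((1+x)**(1-t) - 1) / (1-t), rewritten via expm1/log1p.
    s = 1.0 - t
    u = math.log1p(x)             # log(1+x) without cancellation for small x
    if s == 0.0:                  # limit t -> 1 is log(1+x)
        return u
    return math.expm1(s * u) / s  # ((1+x)**s - 1)/s without cancellation
```

For use inside a neural network one would reproduce the same rewrite with the framework's own `expm1`/`log1p` ops so autodiff sees it.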
1 vote · 0 answers · 49 views

Proof that $\epsilon_{mach} \leq \frac{1}{2} b^{1-n}$

I have a question about the proof of the following statement: For each set of machine numbers $F(b, n, E_{min}, E_{max})$ with $E_{min} < E_{max}$ the following inequality holds: $\epsilon_{mach} \...
asked by Felix Gervasi
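A sketch of the usual argument (under round-to-nearest, for normalized $x$ in range, in the question's notation with base $b$ and $n$ digits): adjacent machine numbers with $b^{e} \le |x| < b^{e+1}$ are spaced $b^{e+1-n}$ apart, and rounding commits at most half that spacing:

```latex
|\operatorname{fl}(x) - x| \le \tfrac{1}{2}\, b^{\,e+1-n}
\quad\Longrightarrow\quad
\frac{|\operatorname{fl}(x) - x|}{|x|}
\le \frac{\tfrac{1}{2}\, b^{\,e+1-n}}{b^{e}}
= \tfrac{1}{2}\, b^{\,1-n}.
```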
2 votes · 1 answer · 73 views

Numerically stable way to compute ugly double fraction

I am looking for a numerically stable version of this (ugly) equation $$ s^2=\frac{1}{\frac{1}{\beta_1}+\frac{1}{\beta_2}W} $$ where $$ \beta_1 = c_1-c_2m+(m-c_2)b\\ \beta_2 = \frac{1}{2}\left((a-m)^2-...
asked by mto_19
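For the outer expression alone (a sketch; the truncated $\beta$ definitions aside), multiplying numerator and denominator by $\beta_1\beta_2$ removes the nested reciprocals, which avoids spurious overflow or underflow when either $\beta$ is tiny:

```python
def s_squared(beta1, beta2, W):
    # 1 / (1/beta1 + W/beta2)  ==  beta1*beta2 / (beta2 + beta1*W)
    return (beta1 * beta2) / (beta2 + beta1 * W)
```

With a subnormal `beta1`, the naive form computes `1/beta1 == inf` and returns 0, while the rewritten form still returns a finite positive value.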
0 votes · 1 answer · 59 views

Proof of `TWOSUM` implementation in "double-double" arithmetic

"Double-double" / "compensated" arithmetic uses unevaluated sums of floating-point numbers to obtain higher precision. One of the basic algorithms is ...
asked by Claude
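For reference, the algorithm in question is presumably Knuth's branch-free TwoSum (a sketch): it returns the rounded sum $s$ and an error term $e$ with $a + b = s + e$ exactly, with no assumption on which operand is larger.

```python
def two_sum(a, b):
    # Knuth's TwoSum: s = fl(a + b), and e recovers the rounding error
    # exactly, so a + b == s + e as real numbers.
    s = a + b
    bv = s - a               # the part of b that actually made it into s
    av = s - bv              # the part of a that actually made it into s
    e = (a - av) + (b - bv)  # the two roundoff pieces sum to the exact error
    return s, e
```

When $|a| \ge |b|$ is known, the cheaper FastTwoSum (`s = a + b; e = b - (s - a)`) suffices; TwoSum pays three extra operations to drop that precondition.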
1 vote · 0 answers · 159 views

Show that $x+1$ is not backward stable

Suppose we use $\oplus$ to compute $x+1$, given $x \in \mathbb{C}$. $\widetilde{f(x)} = \mathop{\text{fl}}(x) \oplus 1$. This algorithm is stable but not backward stable. The reason is that for $x \...
asked by clay
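The heart of the argument can be written out as follows (a sketch in the question's notation, with unit roundoff $u$): the computed value satisfies the forward relation

```latex
\widetilde{f}(x) = (x + 1)(1 + \delta), \qquad |\delta| \le u,
```

but backward stability would require some $\tilde{x}$ with $\tilde{x} + 1 = (x+1)(1+\delta)$, i.e. $\tilde{x} = x + \delta(x+1)$, and for $|x| \ll 1$ the relative backward error $|\tilde{x}-x|/|x| \approx |\delta|/|x|$ is unbounded.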
1 vote · 2 answers · 173 views

Another way to compute the machine epsilon

Why does the following program compute the machine precision? It can be proved that the variable $u$ gives the machine epsilon, but I don't see why. Let $a = \frac{4}{3}$, $b = a −...
asked by xenuti
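Assuming the program is the classic $4/3$ trick (the excerpt's $a$ and $b$ match it; the rest is reconstructed), a sketch with the reasoning as comments:

```python
import sys

a = 4.0 / 3.0     # 4/3 = 1.010101... in binary never terminates, so it rounds:
                  # in binary64, a = 4/3 - (1/3)*2**-52
b = a - 1.0       # exact by Sterbenz's lemma: b = (1 - 2**-52)/3
c = b + b + b     # also exact: c = 1 - 2**-52
u = abs(c - 1.0)  # u = 2**-52, the binary64 machine epsilon

print(u == sys.float_info.epsilon)  # True
```

The trick works because every intermediate step after the first is exact, so the single rounding error made in `4.0 / 3.0` is isolated and magnified up to machine-epsilon size.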
0 votes · 2 answers · 99 views

Tricks in the floating point operations for better numerical results

I'm attempting to comprehend a passage from the book "Computational Modeling and Visualization of Physical Systems with Python" that I may be too mentally fatigued to grasp. Here's the issue: ...
asked by Fitzroy
2 votes · 1 answer · 182 views

Is there a stable algorithm for every well-conditioned problem?

Reading these notes on condition numbers and stability, the summary states: If the problem is well-conditioned then there is a stable way to solve it. If the problem is ill-conditioned then there is ...
asked by Thanks for flying Vim
0 votes · 0 answers · 60 views

Secant method optimization - initial guesses with floating point precision?

Say I want to find the root of $f(x) = e^{-x} - 5$, and assume I start with initial guesses $x_0 = -3$ and $x_1 = 3$. I define my update function as $x_i = x_{i-1} - f(x_{i-1}) * \frac{x_{i-1} - x_{i-...
asked by rb612
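A minimal secant iteration for this $f$ (a sketch; started here from $-3$ and $0$, which converge quickly, rather than the question's $\pm 3$):

```python
import math

def secant(f, x0, x1, tol=1e-12, max_iter=100):
    # Secant update: x_{i+1} = x_i - f(x_i)*(x_i - x_{i-1})/(f(x_i) - f(x_{i-1}))
    f0 = f(x0)
    for _ in range(max_iter):
        f1 = f(x1)
        if f1 == f0:          # flat secant line: cannot divide
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) <= tol * max(1.0, abs(x2)):
            return x2
        x0, f0, x1 = x1, f1, x2
    return x1

root = secant(lambda x: math.exp(-x) - 5.0, -3.0, 0.0)
print(root)   # -ln 5 ≈ -1.6094379...
```

Note the relative stopping criterion: comparing `|x2 - x1|` against `tol * max(1, |x2|)` behaves sensibly for roots of any magnitude, unlike a fixed absolute tolerance.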
1 vote · 1 answer · 172 views

Does using smaller floating-point numbers decrease rounding errors?

I started learning about floating point by reading "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg. On page 4 he presents a proof for the ...
asked by Thanks for flying Vim
0 votes · 1 answer · 60 views

Representation of rounding error in floating point arithmetic. [duplicate]

It is well known that in a Floating point number system: $$ \mathbb{F}:=\{\pm \beta^{e}(\frac{d_1}{\beta}+\dots +\frac{d_t}{\beta^t}): d_i \in \{0,\dots,\beta-1\},d_1\neq 0, e_{\min}\leq e \leq e_{\...
asked by Henry T.
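For this $\mathbb{F}$ (base $\beta$, $t$ digits) the result usually being asked about is the "standard model" of floating-point arithmetic (a sketch, with $u$ the unit roundoff): every real $x$ in range rounds to a machine number with small relative error,

```latex
\operatorname{fl}(x) = x(1 + \delta), \qquad |\delta| \le u = \tfrac{1}{2}\beta^{1-t},
```

and each basic operation $\circ \in \{+,-,\times,\div\}$ likewise satisfies $\operatorname{fl}(a \circ b) = (a \circ b)(1+\delta)$ with the same bound on $\delta$.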
3 votes · 1 answer · 109 views

How to compute this "smooth max operator"?

I was seeking an alternative way to activate each neuron of a neural network non-linearly. Eventually, I came up with the following binary operation: $$ x \lor y = \log (\exp x + \exp y) $$ With $-\...
asked by Dannyu NDos
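This is logsumexp with two arguments; the standard stable evaluation (a sketch) factors out the larger argument so the exponential never overflows:

```python
import math

def smooth_max(x, y):
    # log(exp(x) + exp(y)) = m + log(1 + exp(n - m)) with m = max(x, y):
    # the exp argument is always <= 0, so it can never overflow.
    m, n = (x, y) if x >= y else (y, x)
    if m == float("-inf"):           # both inputs -inf: the sum of exps is 0
        return m
    return m + math.log1p(math.exp(n - m))

print(smooth_max(1000.0, 1000.0))    # ≈ 1000.693; the naive form overflows
```

The result always lies within $[\max(x,y),\ \max(x,y) + \log 2]$, which is the sense in which the operation is a "smooth max".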
0 votes · 0 answers · 44 views

Is converting between roots and coefficients of a polynomial numerically stable?

Assume we're on a computer using 32-bit floats (or something similar), and I'm converting back and forth between the $n$ coefficients of a polynomial and the corresponding $n$ roots of the polynomial. ...
asked by chausies
0 votes · 1 answer · 53 views

Storing a decimal number in a computer with a finite mantissa

I am learning about numerical methods and the following link caught my attention: https://www.iro.umontreal.ca/~mignotte/IFT2425/Disasters.html So from what I understand 0.1 is not exactly ...
asked by neo
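The stdlib can show exactly what gets stored (a sketch): `Decimal(0.1)` prints the exact decimal value of the double nearest $0.1$, and `Fraction(0.1)` its exact rational value, which is not $1/10$.

```python
from decimal import Decimal
from fractions import Fraction

# The double nearest to 0.1 is slightly above 1/10:
print(Decimal(0.1))
# 0.1000000000000000055511151231257827021181583404541015625
print(Fraction(0.1))                      # 3602879701896397/36028797018963968
print(Fraction(0.1) == Fraction(1, 10))   # False
```

The denominator is $2^{55}$: binary floating point can only represent fractions whose denominator is a power of two, and $1/10$ is not one of them.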
