Nice story. An even more powerful way to express numbers is as a continued fraction (https://en.wikipedia.org/wiki/Continued_fraction). You can express both rational and irrational numbers efficiently using a continued fraction representation.
As a fun fact, I have a not-that-old math textbook (from a famous number theorist) that says that it is most likely that algorithms for adding/multiplying continued fractions do not exist. Then in 1972 Bill Gosper came along and proved that (in his own words) "Continued fractions are not only perfectly amenable to arithmetic, they are amenable to perfect arithmetic.", see https://perl.plover.com/yak/cftalk/INFO/gosper.txt.
I have been working on a Python library called reals (https://github.com/rubenvannieuwpoort/reals). The idea is that you should be able to use it as a drop-in replacement for the Decimal or Fraction type, and it should "just work" (it's very much a work-in-progress, though). It works by using the techniques described by Bill Gosper to manipulate continued fractions. I ran into the problems described on this page, and a lot more. Fun times.
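For a flavour of how this works, here is a tiny standalone sketch in plain Python (not the reals library's actual API): every rational number has a finite continued fraction expansion, and computing it is essentially the Euclidean algorithm.

    from fractions import Fraction

    # Finite continued-fraction expansion of a rational number: [a0; a1, a2, ...]
    def continued_fraction(x: Fraction) -> list[int]:
        terms = []
        while True:
            a = x.numerator // x.denominator   # integer part (floor)
            terms.append(a)
            frac = x - a
            if frac == 0:
                return terms
            x = 1 / frac                       # continue with the reciprocal of the fractional part

    # Fold the terms back up: a0 + 1/(a1 + 1/(a2 + ...))
    def from_continued_fraction(terms: list[int]) -> Fraction:
        value = Fraction(terms[-1])
        for a in reversed(terms[:-1]):
            value = a + 1 / value
        return value

    print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16] -- the classic pi approximation
    print(from_continued_fraction([3, 7, 16]))      # 355/113

Gosper's trick is doing arithmetic directly on such streams of terms (including infinite ones) without ever converting back.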
The link at the end is both shortened (for tracking purposes?) and unclickable… so that’s unfortunate. Here is the real link to the paper, in a clickable format: https://dl.acm.org/doi/pdf/10.1145/3385412.3386037
Unrelated to the article, but this reminds me of being an intrepid but naive 12-year-old trying to learn programming. I had already taught myself a bit using books, including following a tutorial to make a simple calculator complete with a GUI in C++. However I wasn't sure how to improve further without help, so my mom found me an IT school.
The sales lady gave us a hard sell on their "complete package", which had basic C programming but also included a bunch of unnecessary topics like Microsoft Excel, etc. When I tried to ask if I could skip all that and jump straight to more advanced programming topics, she was adamant that this wasn't an option; she downplayed my achievements, saying I basically knew nothing and needed to start from the beginning.
Most of all, I recall her saying something like "So what, you made a calculator? That's so simple, anybody could make that!"
In the end I was naive, she was good at sales, and I was desperate for knowledge, so we signed up. Sure enough, the curriculum was mostly focused on learning basic Microsoft Office products, and the programming sections barely scraped the surface of computer science; in retrospect, I doubt there was anybody there qualified to teach it at all. The only real lesson I learned was not to trust salespeople.
Thank god it's a lot easier for kids to just teach themselves programming these days online.
As soon as I read the title, I chuckled, because coming from a computational mathematics background I already knew roughly what it was going to be about. IEEE 754 is like democracy, in the sense that it is the worst, except for all the others. Immediately when I saw the example I thought: it is going to be either Kahan summation or a full-scale computer algebra system. It turned out to be some subset of the latter, and I have to admit I had never heard of Recursive Real Arithmetic (I knew of Real Analysis, though).
If anything that was a great insight about one of my early C++ heroes, and what they did in their professional life outside of the things they are known for. But most importantly it was a reminder how deep seemingly simple things can be.
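Since Kahan summation came up: it's only a few lines, and the idea (carry the rounding error of each addition forward into the next one) is worth seeing once. A rough Python sketch:

    def kahan_sum(values):
        # Compensated (Kahan) summation: track the low-order bits lost at each step.
        total = 0.0
        compensation = 0.0
        for v in values:
            y = v - compensation            # apply the error carried over from the last step
            t = total + y                   # big + small: low-order bits of y can be lost here...
            compensation = (t - total) - y  # ...but this recovers them
            total = t
        return total

    values = [1.0] + [1e-16] * 10_000
    print(sum(values))        # 1.0 -- every tiny term gets rounded away
    print(kahan_sum(values))  # close to the true value 1.000000000001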
The NYC subway fare is $2.90. I was using PCalc on iOS to step through remaining MetroCard values per swipe and discovered that AC, 8.7, m+, 2.9, m-, m-, m- evaluates to -8.881784197E-16 instead of zero. This doesn't happen when using Apple's calculator. I wrote to the developer and he replied, "Apple has now got their own private maths library which isn't available to developers, which they're using in their own calculator. What I need to do is replace the Apple libraries with something else - that's on my list!"
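For what it's worth, that looks like ordinary double-precision rounding rather than anything PCalc-specific; the same sequence of operations in plain floats gives the same kind of result (exact digits may vary):

    balance = 8.7
    for _ in range(3):
        balance -= 2.9
    print(balance)   # a tiny negative number on the order of -8.9e-16, not 0.0

    # an exact decimal type sidesteps it
    from decimal import Decimal
    print(Decimal("8.7") - 3 * Decimal("2.9"))   # 0.0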
> And almost all numbers cannot be expressed in IEEE floating points.
It is a bit stronger than that. Almost all numbers cannot be practically expressed at all, and it may even be that the probability of a random number being theoretically describable is exactly 0, since there are only countably many finite descriptions. It depends on what you mean by "a number", of course.
> Some problems can be avoided if you use bignums.
Or that. My momentary existential angst has been assuaged. Thanks bignums.
That's pretty cool, but the downsides of switching to RRA are not only about user experience. When the result is 0.0000000..., the calculator cannot decide whether it's fine to compute the inverse of that number.
For instance, 1/(4*atan(1/5) - atan(1/239) - pi/4) outputs "Can't calculate".
Well alright, this is a division by zero (Machin's formula). But then you can try 1/(4*atan(1/5) - atan(1/239) - pi/4 + 10^(-100000)), and the output is still "Can't calculate", even though it should really be 10^100000.
I played around with the calculator source code from the Android Open Source Project after a previous submission[1]. I think Google moved it from AOSP to the Google Play Services several years ago, but the old source is still available.
It does solve some real problems that I'd love to have available in a library. The discussion on the previous article links to some libraries, but my recollection is that the calculator code is more accessible to an innumerate person like myself.
Edit: the previous article under discussion doesn't seem to be available, but it's on archive.org[2].

[1] https://news.ycombinator.com/item?id=24700705

[2] https://web.archive.org/web/20250126130328/https://blog.acol...
The way this article talks about using "recursive real arithmetic" (RRA) reminds me of an excellent discussion with Conal Elliott on the Type Theory For All podcast. He talked about moving from representing things discretely to representing things continuously (and therefore more accurately). For instance, people used to represent fonts as blocks of pixels (discrete); these were rough approximations of what the font really was. But then fonts started to be represented as lines/vectors (continuous): no matter the size, they represent exactly what the font is.
Conal gave a beautiful case for how comp sci should be about pursuing truth like that, and not just learning the latest commercial tool. I see the same dogged pursuit of true, accurate representation in this beautiful story.

- https://www.typetheoryforall.com/episodes/the-lost-elegance-...
- https://www.typetheoryforall.com/episodes/denotational-desig...
Some quick research yields a couple of open source CAS, such as OpenAxiom, which uses the Modified BSD license. Granted that Google has strong "NIH" tendencies, but I'm curious why something like this wasn't adapted instead of paying several engineers some undisclosed amount of time to develop a calculation system.
The article mentions that a CAS is an order of magnitude (or more!) more complex than the bifurcated rational + RRA approach, as well as slower, but: the complexity would be solved by adapting an open source solution, and the computation speed wouldn't seem to matter on a device like an Android smartphone. My HP Prime in CAS mode runs at 400MHz and solves every problem the Android calculator solves with no perceptible delay.
Is it a matter of NIH? A legal issue with the 3-clause BSD license I don't understand? Reducing binary size? The available CAS weren't up to snuff for one reason or another? Some other technical issue? Or, if not that, why not use binary-coded decimal?
These are just questions, not criticisms. I have very very little experience in the problem domain and am curious about the answers :)
To make any type of app really good is super hard.
I have yet to see a good to-do list tool.
I'm not kidding. I tried TickTick, Notion, Workflowy ... everything I tried so far feels cumbersome compared to how I would like to handle my to-do list. The way you create, edit, browse, and drag-and-drop items is not at all as fluid as I imagine it.
So if anyone knows a good To-Do list software (must be web based, so I can use it anywhere without installing something) - let me know!
There’s a pleasantly elegant “hey, we’ve solved the practical functional complement to this category of problems over here, so let’s just split the general actual user problem structurally” vibe to this journey.
It often pays off to revisit what the actual “why” is behind the work that you’re doing, and this story is a delightful example.
I wrote an arbitrary precision arithmetic C++ library back in the 90’s. We used it to compute key pairs for our then new elliptic-curve based software authentication/authorization system. I think the full cracks of the software were available in less than two weeks, but it was definitely a fun aside and waaaay too strong of a solution to a specific problem. I was young and stupid… now I’m old and stupid, so I’d just find an existing tool chain to solve the problem.
All the calculators that I just tried for the article's expression give the wrong answer (HP Prime, TI-36X Pro, some casio thing). Even google's own online calculator gives the wrong answer, which is mildly ironic. [https://www.google.com/search?q=1e101%2B1-1e101&oq=1e101%2B1]
I played around with the macOS calculator and discovered that the dividing line seems to be at 1e33. I.e. 1e33+1-1e33 gives the correct answer of 1 but 1e34+1-1e34 gives 0. Not sure what to make of that.
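Just a guess, but that boundary is exactly what you'd get from a decimal format with 34 significant digits (e.g. IEEE 754 decimal128). A sketch with Python's decimal module reproduces the same cliff:

    from decimal import Decimal, getcontext

    getcontext().prec = 34          # decimal128 keeps 34 significant digits

    def probe(exp):
        x = Decimal(10) ** exp
        return (x + 1) - x

    print(probe(33).normalize())    # 1 -- 10^33 + 1 still fits in 34 digits
    print(probe(34).normalize())    # 0 -- 10^34 + 1 rounds back to 10^34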
I enjoyed the article, but it seems Apple has since improved their calculator app slightly. The first example is giving me the correct result today. However, the second example with the “Underflow” result is still occurring.
The real fun begins when you do geometry. Find a finite-memory representation for points which allows exact addition, multiplication and rotation between them (with all the nice standard math properties like associativity and commutativity).
For example your representation should be able to take a 2d point A, aka two coordinates, and rotate it around the origin by an angle theta to obtain the point B. Take the original point and rotate it by pi + theta, then reflect it around the origin to obtain the point C. Now answer the question whether B is coincident with C.
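A quick floating-point sketch of the challenge, just to show why the naive representation can't answer that last question (the coordinates are made up):

    import math

    # B and C are mathematically identical, but naive float rotation can't confirm it.
    def rotate(p, angle):
        x, y = p
        c, s = math.cos(angle), math.sin(angle)
        return (c * x - s * y, s * x + c * y)

    A = (1.0, 2.0)
    theta = 0.7
    B = rotate(A, theta)
    cx, cy = rotate(A, math.pi + theta)
    C = (-cx, -cy)          # reflect through the origin

    print(B == C)           # usually False: the coordinates differ in the last few bits
    print(B, C)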
Years ago The Daily WTF had a challenge for writing the worst calculator app. My submission maintained calculation state by emitting its own source code, recompiling it, and running the new executable.
Interesting article, and kudos to Boehm for going the extra mile(s), but it seems like overkill to me.
I wouldn't expect, or use, a calculator for any calculation requiring more accuracy than the number of digits it can display. I'm OK with the iPhone's 10^100 + 1 = 1e100.
If I really needed something better, I'd try Wolfram Alpha.
One of the first ideas I had for an app was a calculator that represented numbers like shown in the article, but allowed you to write them with variables and toggle between symbolic and actual responses.
A use case would be: in a spreadsheet-like interface you could verify whether the operations produced the final equation you were modeling, to help validate whether the number was correct or not. I had a TI-89 that could do something close, and even in 2006 that was not exactly brand new tech. I figured surely some open source library available on the desktop must get me close. I was wildly wrong. I stuck with programming but abandoned the calculator idea. Even nearly 20 years later, such a task doesn't seem that much easier to me.
At the risk of coming across as being a spoilsport, I think when someone says "anyone can write a calculator app", they just mean an app that simulates a pocket calculator (which is indeed pretty easy) as opposed to one which always gives precisely the right answer (which is indeed impossible). Also, you can avoid the most embarrassing errors just by rearranging the terms to do cancellation where possible, e.g. sqrt(2) * 3 * sqrt(2) is absolutely precisely 6, not 6 within some degree of approximation.
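For example, a symbolic engine does exactly that kind of rewriting. Assuming SymPy is an acceptable stand-in for illustration:

    from sympy import sqrt, pi, simplify

    print(sqrt(2) * 3 * sqrt(2))         # 6, exactly -- the product is rewritten symbolically
    print(simplify(pi + 1 - pi))         # 1, exactly
    print(float(sqrt(2) * 3 * sqrt(2)))  # 6.0, not 5.999999999999999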
> 1 is not equal to 1 - e^(-e^1000). But for Richardson and Fitch's algorithm to detect that, it would require more steps than there are atoms in the universe.
> They needed something faster.
I'm disappointed; after this paragraph I expected a better algorithm, and instead they decided to give up. Fredrik Johansson, in his paper "Calcium: computing in exact real and complex fields", gives a partial algorithm for the problem and writes: "Algorithm 2 is inspired by Richardson’s algorithm, but incomplete: it will find logarithmic and exponential relations, but only if the extension tower is flattened (in other words, we must avoid extensions such as e^log(z) or √z^2), and it does not handle all algebraic functions. Much like the Risch algorithm, Richardson’s algorithm has apparently never been implemented fully. We presume that Mathematica and Maple use similar heuristics to ours, but the details are not documented [6], and we do not know to what extent True/False answers are backed up by a rigorous certification in those systems".
I use the Python REPL as my primary calculator on my computer.
1. I don't have problems like the iOS problem documented here. This requires me to know the difference between an int and a float, but Python's ints have unbounded precision (unless you overflow your entire memory), so that kind of precision loss isn't a big deal.
2. History is a lot better. Being able to scroll back seems like a thing calculators ought to offer you, but they don't.
3. In the 1-in-a-hundred times I need to repeat operations on the calculator, hey, we've already got loops, this is Python.
4. Every math feature in the Windows default calculator is available in the math library.
5. As bad as Python's performance reputation is, it's not at all going to be noticeable for simple math.
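A quick REPL session showing the difference:

    >>> 10**100 + 1 - 10**100        # Python ints are arbitrary precision, so this is exact
    1
    >>> 10.0**100 + 1 - 10.0**100    # the same thing in floats loses the 1
    0.0
    >>> from fractions import Fraction
    >>> Fraction(1, 3) + Fraction(1, 6)   # exact rational arithmetic when you need it
    Fraction(1, 2)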
> Obviously we'll want a symbolic representation for the real number 1
Sorry, why is this obvious? A basic int type can store the value of 1, let alone the more complicated Rational (BigNum/BigNum) type they have. I can absolutely see why you want symbolic representations for pi, e, i, trig functions, etc., but why one?!
Off topic, but I believe naming this specific kind of number "real" is a misnomer. Nothing in reality is an expression of a real number. Real numbers pop up only when we abstract reality into mathematical models.
In Polish, rational numbers are called something more like "measurable" numbers, and in my opinion that's the last kind of number that is expressed in reality in any way. Those should be called "real", and the reals should be called something like "abstract" or "limiting" numbers, because they first pop up as limits of processes that work on rational numbers for an infinite number of steps.
I really hate when people put cat images and memes in a serious article.
Don't get me wrong, the content is good and informative. But I just hate the format.
That reminds me of when SideFX started putting memes into their official tutorial YouTube channel. At least this is just a webpage and we can scroll through them...
I think I understand why, from the article, but wouldn't it be "easy" (probably not, but curious about why) to simplify the first expression to (1-1)π + 1 then 0π + 1 and finally just 1 before calculating a result?
Due to backwards compatibility, modern PC CPUs have some mathematical constants in hardware, one of them being Pi: https://www.felixcloutier.com/x86/fld1:fldl2t:fldl2e:fldpi:f... Moreover, that FLDPI instruction delivers the constant in the 80-bit extended format (64 significand bits), i.e. more precise than FP64.
That's pretty much useless in the modern world because the whole x87 FPU is deprecated. Modern compilers generate SSE1 and SSE2 instructions for floating-point arithmetic instead of x87.
As far as I know, the Windows calculator takes a similar approach: it uses rationals, and switches to Taylor expansions to try to avoid cancellation errors. Microsoft open-sourced it on GitHub some time ago.
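I don't know the details of Microsoft's implementation, but the general idea (exact rationals for the basic operations, a truncated series when a transcendental function shows up) can be sketched in a few lines. This is a toy, not the actual Windows code:

    from fractions import Fraction

    # e^x as an exact rational: sum Taylor terms x^n / n! until they drop below a tolerance
    def exp_rational(x: Fraction, tolerance: Fraction = Fraction(1, 10**40)) -> Fraction:
        term = Fraction(1)
        total = Fraction(1)
        n = 1
        while abs(term) > tolerance:
            term = term * x / n
            total += term
            n += 1
        return total

    approx = exp_rational(Fraction(1))   # a rational approximation of e
    print(float(approx))                 # 2.718281828459045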
lowkey this is why ieee 754 floating point is both a blessing and a curse, like yeah it’s fast n standardized but also introduces unavoidable precision loss, esp w iterative computations where rounding errors stack up in unpredictable ways. ppl act like increasing precision bits solves everything. but u just push the problem further down, still dealing w truncation, cancellation, etc. (and edge cases where numerical stability breaks down.)
… and this is why interval arithmetic and arbitrary precision methods exist, so it gives guaranteed bounds on error instead of just hoping fp rounding doesn’t mess things up too bad. but obv those come w their own overhead: interval methods can be overly conservative, which leads to unnecessary precision loss, and arbitrary precision is computationally expensive, scaling non-linearly w operand size.
wonder if hybrid approaches could be the move, like symbolic preprocessing to maintain exact forms where possible, then constrained numerical evaluation only when necessary. could optimize tradeoffs dynamically. so we’d keep things efficient while minimizing precision loss in critical operations. esp useful in contexts where precision requirements shift in real time. might even be interesting to explore adaptive precision techniques (where computations start at lower precision but refine iteratively based on error estimates).
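rough sketch of the interval idea (toy python, no directed rounding, so only a cartoon of the real guaranteed-bounds machinery):

    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

    # pi lies somewhere in this interval
    pi = Interval(3.14159265358979, 3.14159265358980)
    print(pi - pi)   # roughly [-1e-14, 1e-14]: wide, but it brackets the true answer, 0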
This article was really well written. Usually in such articles I understand about 50%, maybe 70% if I'm lucky, but this one I've understood nearly everything. It's not much of a smartness thing but an absolute refusal on my part to learn the jargon of programming, as well as my severe lack of knowledge of all the big words that are thrown around, lol. But really simply written, love it.
If you accept that Pi and Sqrt(2) will be represented as a terminating series of digits (say, 30), then 99% of the problems stated go away. My HP calculator doesn't represent the square root of 2 as a magic number, it's 1.414213562.
At some point, when I get a spare 5 years (and/or if people start paying for software again), I will start to work on a calculator application. Number system wrangling is quite fun and challenging, and I am hoping to incorporate units as a first-class citizen.
This is really cool, but it does show how Google works. They'll pay this guy ~$3 million a year (assuming stock appreciation) to do this, but almost no end user will appreciate it in the calculator app itself.
Does anyone know if this was the system used by higher end TI calculators like the TI-92? It had a 'rational' mode for exact answers and I suspect that it used RRA for that.
I doubt that most people using the calc app expect it to handle such situations. It's nice that it does of course but IMO it misses the point that the inputs to a lot of real world calculations are inaccurate to start with.
i.e. it's more likely that I've made a few-mm mistake when measuring the radius of my table than that I'm not using a precise enough version of Pi. The area of the table will have more error because one is squaring the radius, obviously.
It would be interesting to have a calculator that let you add in your estimated measurement error (or made a few reasonable guesses about it for you) and told you the error in your result e.g. the standard deviation.
I sometimes want to buy stuff at a hardware shop and I think : "how much paint do I need to buy?" I haven't planned so I'm thinking "it's about 4m by 5m...I think?" I try to do a couple of calculations with worst case numbers so I at least get enough paint and save another trip to the shop but not comically too much so that I have a tin of it for the next 5 years.
I remember having to estimate error in results that were calculated from measured values for physics 101 and it was a pain.
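A sketch of what that could look like for the table example, using simple first-order error propagation (the numbers are made up):

    import math

    # propagate a measurement error through area = pi * r^2
    # using the linear approximation sigma_A ~= |dA/dr| * sigma_r = 2*pi*r*sigma_r
    def circle_area_with_error(r, sigma_r):
        area = math.pi * r ** 2
        sigma_area = 2 * math.pi * r * sigma_r
        return area, sigma_area

    area, err = circle_area_with_error(0.60, 0.002)   # 60 cm radius, +/- 2 mm
    print(f"area = {area:.4f} m^2 +/- {err:.4f} m^2") # area = 1.1310 m^2 +/- 0.0075 m^2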
I really do think we should just use the symbolic systems of math rather than trying to bring natural world numbers into a digital number space. It's this mapping that inherently leads to compensating strategies. I guess this is called an algebraic system like the author mentioned.
But I view math as more of a string manipulation function with position-dependent mapping behavior per character and dependency graphs, combined with several special functions that form the universal constants.
Just because data is stored digitally as 1 and 0, don't forget it's more like charged and not charged. Computers are not numeric systems, they are binary systems. Not the same thing.
I really wonder what the business case for spending so much effort on such precision was. Who are the users who need such accuracy but are using the Android calculator?
Really interesting article. I noticed that my Android calculator app could display irrational numbers like pi to an impressive number of digits if I hold it sideways.
Was given the task to build a simple calculator app as a project for a Java class I took in college.
No parens or anything like that, nothing nearly so fancy. Classic desk calculator where you set the infix operation to apply to the previous value, followed by the second value of the operation.
It was frankly an unexpected challenge. There's a lot more to it than meets the eye.
I only got as far as rational numbers though. PI accurate to the 8 digit display was good enough for me.
Honestly though, I think it was a great exercise for students, showing how seemingly simple tasks can actually be more complex than they seem. I'm still here thinking about it some twenty years later.
I don't care if it gives me "Underflow" for bs like e^-1000, just give me a text field that will be calculated into result that's represented in the way I want (sci notation, hex, binary, ascii etc whatever).
All standard calculator apps are imitations of a desktop calculator. It's insane that we're still dragging this UI onto the desktop. Why don't we use a rotary dial on mobile phones, then?
It's great that at least macOS has Cmd+Space, where I can type an expression and get a quick result.
And yes, I did develop my own calculator, and happily used it for many years.
TLDR: the real problem of calculators is their UI, not arithmetic core.
Slightly disappointing: The calculator embedded in Google's search page also gives the wrong answer (0) for (10^100) + 1 − (10^100). So apparently they don't use the insights they gained from their Android calculator.
And yet Android's calculator is quite bad. Despite being able to correctly calculate stuff that 99.99% of the population don't care about, it lacks many scientific operations that a good chunk of accountants, engineers and coders would make use of regularly. This is a classic situation of engineers solving the fun/challenging problems before the customer's actual problems.
I removed telemetry on my Win10 system and now calc.exe crashes on basic calculations. I've reported this but nobody cares because the next step in troubleshooting is to reinstall Windows. So if telemetry fails, calc.exe will silently explode. Therefore no, anyone cannot make it.
lol I ran into this when making a calculator program because Google's calculator didn't do certain operations (such as adding clock time results like 1:23+1:54) and also because Google occasionally accuses me of being a bot when I search for too many equations.
Maybe I'll get back to the project and finish it this year.
Interesting article but that feels like wasted effort for what is probably the most bare-bones calculator app out there. The Android calc app has the 4 operations, sin cos tan ^ ln log √ ! And that's it. I think most people serious about calculator usage either have a physical one or use another more featureful app and the others don't need such precision.
In the year he did this he easily could have just done some minor interface tweaks to a ruby repl which includes the BigDecimal library. In fact I bet this post to an AI could result in such a numerically accurate calculator app. maybe as a Sinatra single file ruby web app designed to format to phone resolutions natively.
I am using, on Android, an emulator for the TI-89 calculator.
Because no Android app has half the features, and works as well.
I hated reading this buzzfeedy style (or apparently LinkedIn-style?) moron-vomit.
I shouldn't complain, just ask my nearest LLM to rewrite this article^W scribbling to a less obnoxious form of writing..
https://chachatelier.fr/chalk/chalk-home.php
I tried to explain what was going on in https://chachatelier.fr/chalk/article/chalk.html, but it's not a very popular topic :-)
https://qalculate.github.io/
https://thomaspark.co/projects/calc-16/
It is a surprisingly hard problem.
https://recomputer.github.io/
I've taken multiple numerical analysis courses, including at the graduate level.
The only thing I've learnt was: be afraid, very afraid.
HP scientific calculators go back to the '60s and can presumably add 0.6 to 3 without introducing small errors in the 20th significant digit.
But π − π = 0
Because it's such a difficult problem to solve that it required elite coders and Masters/PhD level knowledge to even make an attempt?
[Apple Finally Plans To Release a Calculator App for iPad Later This Year](https://www.macrumors.com/2024/04/23/calculator-app-for-ipad...)
Seems like Apple got lazy with their calculator; they didn't even realize they had so many flaws... Math Notes is pretty cool, though.
The link in the paper to their Java implementation is now broken: does anyone have a current link?
Now it does the running ticker tape thing, which means you can't use the AC button to quickly start over, because there is no AC button anymore!
I know it's supposed to be easier/better for the user, but they didn't even give me a way to go back to the old behavior.
Or you can do what the Windows 11 calculator does and not even get 1+2*3 right.
crag 'say (10*100) + 1 − (10*100)' #1
Raku uses Rats by default (Rational numbers) unless you ask for floating point.
> They realized that it's not the end of the world if they show "0.000000..." in a case where the answer is exactly 0
so... devs self-made a requirement, got into trouble (complexity) - removed the requirement, trouble didn't go anywhere
just keep saying "it's a win" and you'll be winning, I guess
On another note: since a calculator is apparently so complex, are there any open-source cross-platform libraries that make it easier to implement?
Won't fix: https://github.com/microsoft/calculator/issues/148
i) The answer is 1 if you cancel out the two (10^100) expressions.
ii) The answer is 0 if you compute 10^100 first and then add 1, which is insignificant at that magnitude.
How do you even cater for these scenarios? This needs more than arithmetic.