If you type something like this into a Python interpreter,
0.1 + 0.1 + 0.1 == 0.3
surprisingly, it outputs False.
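You can verify this in any Python 3 session:

```python
# The comparison from the text, verbatim
result = 0.1 + 0.1 + 0.1 == 0.3
print(result)  # prints False
```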
Why does this happen? The answer lies in an imperfection, or rather a limitation, in how computers store floating-point numbers.
It turns out computers are not as accurate with numbers as we assume them to be.
Having grown up with calculators, we expect answers to always be exact, but sooner or later every programmer does something like this, discovers what is called binary floating-point arithmetic, and finds it turns their head around.
Before we unpack what is happening here, let's take a moment to look at how a computer interprets and stores (what we call) decimal numbers.
In our world, we use base-10 notation: every place in a number is 10 raised to some power.
But computers (digital systems in particular) interpret numbers in base-2 notation, i.e., every place is a power of 2.
Ex. in decimal representation: 13 = (1 x 10¹) + (3 x 10⁰)
But in binary notation: 13 = (1 x 2³) + (1 x 2²) + (0 x 2¹) + (1 x 2⁰)
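You can check this expansion in Python itself; the built-in bin() shows an integer's base-2 digits:

```python
# 13 in binary is 1101: (1*8) + (1*4) + (0*2) + (1*1)
print(bin(13))         # prints 0b1101

# Converting the binary string back recovers the decimal value
print(int('1101', 2))  # prints 13
```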
So, when you try to represent, say, 1/10: in base 10 we get a clean 0.1, but base 2 has no exact finite expansion for 1/10, so what we get is 0.0001100110011… repeating indefinitely.
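A small sketch (the helper function is mine, not a standard library call) that expands 1/10 in binary by repeated doubling, confirming the repeating 0011 pattern:

```python
from fractions import Fraction

def binary_expansion(frac, n_bits):
    """Return the first n_bits of frac's binary expansion (illustrative helper)."""
    bits = []
    for _ in range(n_bits):
        frac *= 2        # shift one binary place left
        bit = int(frac)  # the digit that crossed the binary point
        bits.append(str(bit))
        frac -= bit      # keep only the fractional part
    return ''.join(bits)

print('0.' + binary_expansion(Fraction(1, 10), 12))  # prints 0.000110011001
```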
This happens with decimal representation as well: when you try to represent 1/3, you get 0.333333… repeating indefinitely, which in jargon is called a recurring (repeating) decimal.
When you try adding 1/3 + 1/3 + 1/3 by hand, you know the answer is 1 because you are equipped with tools for rounding. But without knowing how to round off, you would only ever reach 0.99999….
When you give 0.1 as input, the computer interprets it as the binary fraction 0.000110011…. On most machines today, a float is stored in IEEE 754 double precision, which keeps only the first 53 significant bits, so the repeating sequence is cut off from there onward, simply because the hardware runs out of space.
Note: if Python were to print the true decimal value of the binary approximation stored for 0.1, it would have to display 0.1000000000000000055511151231257827021181583404541015625.
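You can see that exact stored value yourself: Decimal, when constructed from a float, converts the underlying binary fraction exactly rather than rounding it:

```python
from decimal import Decimal

# Decimal(0.1) reveals the exact binary fraction the float actually holds
print(Decimal(0.1))
# prints 0.1000000000000000055511151231257827021181583404541015625
```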
What happens is that the computer has limited resources, and we were trying to fit an infinitely repeating number into them.
So, you typed in 0.1 + 0.1 + 0.1, but the computer took it as 0.000110011… + 0.000110011… + 0.000110011… (in binary) and computed the answer as 0.30000000000000004 (in decimal).
Notice the 4 at the end of the answer. This is because the computer ran out of space to store any more digits of 0.1, cut off the indefinite sequence at that point, and performed binary addition on the truncated values.
This is what is known as a floating-point rounding error.
This is why every programmer and computer scientist should be aware of the limits of finite precision in computer systems.
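In practice, Python offers a few standard remedies; a quick sketch of the usual ones:

```python
import math
from decimal import Decimal

# 1. Compare with a tolerance instead of exact equality
print(math.isclose(0.1 + 0.1 + 0.1, 0.3))             # prints True

# 2. Round both sides before comparing
print(round(0.1 + 0.1 + 0.1, 10) == round(0.3, 10))   # prints True

# 3. Use Decimal (built from strings) for exact decimal arithmetic
print(Decimal('0.1') + Decimal('0.1') + Decimal('0.1') == Decimal('0.3'))  # prints True
```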
You can read about this in more detail (particularly in the context of Python) here: https://docs.python.org/3/tutorial/floatingpoint.html
Interestingly enough, there is a famous disaster that stemmed from a similar resource problem.
On June 4, 1996, an unmanned Ariane 5 rocket launched by the European Space Agency exploded just forty seconds after its lift-off from Kourou, French Guiana.
The rocket was on its first voyage, after a decade of development costing $7 billion. The destroyed rocket and its cargo were valued at $500 million.
The fault was identified as a software bug in the rocket’s Inertial Reference System (IRS). The rocket used this system to determine whether it was pointing up or down, via a value called the horizontal bias, represented as a 64-bit floating-point variable.
Problems began when the software attempted to convert this 64-bit variable (which can represent billions of potential values) into a 16-bit signed integer (which can represent only 65,536 potential values).
For the first few seconds of flight, the rocket’s acceleration was low, so the conversion between these two values was successful.
However, as the rocket’s velocity increased, the 64-bit value grew too large to fit in a 16-bit variable. At that point the processor encountered an operand error and populated the horizontal bias variable with a diagnostic error value.
This caused the Inertial Reference System to fail, meaning that from T+37 seconds the horizontal bias variable contained a diagnostic error value from the processor, intended for debugging purposes only.
This was mistakenly interpreted as actual flight data and caused the engines to immediately over-correct by thrusting in the wrong direction, destroying the rocket seconds later.
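The failure mode can be sketched in Python with the struct module, which enforces the 16-bit range when packing; the function name is a hypothetical stand-in, not the actual flight code (which was written in Ada):

```python
import struct

def convert_to_int16(horizontal_bias: float) -> bytes:
    # Pack as a signed 16-bit integer ('>h'); struct raises an error
    # if the value does not fit in 16 bits, analogous to the operand
    # error the Ariane 5 processor encountered
    return struct.pack('>h', int(horizontal_bias))

print(convert_to_int16(1000.0))   # small values convert fine

try:
    convert_to_int16(40000.0)     # too large for a signed 16-bit integer
except struct.error as exc:
    print('operand error:', exc)
```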
Here is a more detailed YouTube video explaining the Ariane 5 explosion.