Here's something odd I noticed while playing around with the Haskell programming language. Sometimes, a==b does not imply f(a)==f(b). Look:
    > 1/0
    Infinity
    > 1/(-0)
    -Infinity
    > 0==(-0)
    True
    > (1/0)==(1/(-0))
    False
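For what it's worth, show breaks the implication in the same way (assuming the literals default to Double):

    > show (0 :: Double)
    "0.0"
    > show (-0 :: Double)
    "-0.0"

Same a and b, equal under ==, yet show tells them apart.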
I don't see what's odd about this.
Positive and negative infinity are two very different things. Positive and negative zero are by definition the same thing (it has to do with the definition of real numbers as limits of sequences); only the sign is different.
This is the same in every other language; it has nothing to do with Haskell. Things break down as soon as you have numbers that are infinitely large or small.
-Darkstar
Perhaps the odd thing is allowing division by zero and giving it a special "Infinity" value instead of raising an error, as many computer languages would in this situation (or, from the mathematical point of view, saying "the result is undefined; zero is not in the domain of the reciprocal function").
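Incidentally, Haskell does raise an error for integral division by zero - it's only the IEEE floating-point types that hand back Infinity. A quick GHCi check (assuming the usual defaulting to Integer and Double):

    > 1 `div` 0
    *** Exception: divide by zero
    > 1/0 :: Double
    Infinity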
Sometimes in mathematics both +infinity and -infinity are treated as a single "point at infinity" - i.e. the number line is curled into a (very large) circle (or the complex plane is curled into a very large sphere). So they're not always different. I've never heard of any area of mathematics where a==b does not imply f(a)==f(b), though - that's just plain weird. You're right that it's nothing to do with Haskell, though - it's a behaviour of IEEE 754 floating point (one of many ways in which such values behave differently from numbers as we normally think of them).
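A small GHCi sketch of that IEEE 754 behaviour (assuming Double): the two zeros compare equal, yet isNegativeZero and recip can tell them apart:

    > 0 == (-0 :: Double)
    True
    > isNegativeZero (0 :: Double)
    False
    > isNegativeZero (-0 :: Double)
    True
    > recip (-0 :: Double)
    -Infinity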