Thou shalt not compare numeric values (except when it works)

By Jonathan Carroll

(This article was first published on R – Irregularly Scheduled Programming, and kindly contributed to R-bloggers)

This was just going to be a few Tweets but it ended up being a bit of a rollercoaster of learning for me, and I haven’t blogged in far too long, so I’m writing it up quickly as a ‘hey look at that’ example for newcomers.

I’ve been working on the ‘merging data’ part of my book and, as I do when I’m writing this stuff, I had a play around with some examples to see if there was anything funky going on if a reader was to try something slightly different. I’ve been using dplyr for the examples after being thoroughly convinced on Twitter to do so. It’s going well. Mostly.

## if you haven't already done so, load dplyr
suppressPackageStartupMessages(library(dplyr))

My example involved joining together two tibbles containing text values. Nothing too surprising. I wondered, though: do numbers behave the way I expect?

Now, a big rule in programming is ‘thou shalt not compare numbers’, and it holds especially true when numbers aren’t exactly integers. This is because representing non-integers is hard, and what you see on the screen isn’t always what the computer sees internally.

Thou shalt not compare numbers.
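The classic demonstration: neither 0.1 nor 0.2 has an exact binary representation, so their sum isn't quite the 0.3 you see printed.

```r
0.1 + 0.2 == 0.3
## [1] FALSE

# The default 7-digit printing hides the error; ask for more digits
print(0.1 + 0.2, digits = 17)
## [1] 0.30000000000000004
```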

If I had a tibble where the column I would use to join had integers

dataA 
## # A tibble: 4 x 2
##       X     Y
##   <int> <int>
## 1     0   100
## 2     1   101
## 3     2   102
## 4     3   103

and another tibble with numeric in that column

dataB 
## # A tibble: 4 x 2
##       X     Z
##   <dbl> <int>
## 1     0  1000
## 2     1  1001
## 3     2  1002
## 4     3  1003

would they still join?

full_join(dataA, dataB)
## Joining, by = "X"
## # A tibble: 4 x 3
##       X     Y     Z
##   <dbl> <int> <int>
## 1     0   100  1000
## 2     1   101  1001
## 3     2   102  1002
## 4     3   103  1003

Okay, sure. R treats these as close enough to join. I mean, maybe it shouldn’t, but we’ll work with what we have. R doesn’t always think these are equal

identical(0L, 0)
## [1] FALSE
identical(2L, 2)
## [1] FALSE

though sometimes it does

0L == 0
## [1] TRUE
2L == 2
## [1] TRUE

(== coerces types before comparing.) Well, what if one of these just ‘looks like’ the other value (i.e. can be coerced to the same type)?

dataC 
## # A tibble: 4 x 2
##       X     Z
##   <chr> <int>
## 1     0   100
## 2     1   101
## 3     2   102
## 4     3   103
full_join(dataA, dataC) 
## Joining, by = "X"
## Error in full_join_impl(x, y, by$x, by$y, suffix$x, suffix$y, check_na_matches(na_matches)): Can't join on 'X' x 'X' because of incompatible types (character / integer)

That’s probably wise. Of course, R is perfectly happy with things like

"2":"5"
## [1] 2 3 4 5

and == thinks that’s fine

"0" == 0L
## [1] TRUE
"2" == 2L
## [1] TRUE

but who am I to argue?
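That's R's standard coercion hierarchy at work: when == sees mixed types, the lower type is promoted (logical, then integer, then double, then character) before comparing, so "2" == 2L is really a comparison of two strings. A quick sketch, using c(), which follows the same promotion rules:

```r
# mixed integer and double: promoted to double
typeof(c(2L, 2))
## [1] "double"

# anything mixed with character: promoted to character
typeof(c(2L, "2"))
## [1] "character"

# which is why "2" == 2L succeeds: it's really "2" == "2"
as.character(2L) == "2"
## [1] TRUE
```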

Anyway, how far apart can those integers and numerics be before they can no longer be joined? What if we shift the ‘numeric in name only’ values away from the integers just a teensy bit? .Machine$double.eps is the built-in value for the smallest positive number that, added to 1, gives a value distinguishable from 1. On this system it’s 2.220446e-16.
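To pin down what that value actually means (it's about resolution near 1, not the smallest number a double can hold):

```r
# .Machine$double.eps is the smallest x for which 1 + x != 1
1 + .Machine$double.eps > 1
## [1] TRUE

# half of it is rounded away entirely
1 + .Machine$double.eps / 2 > 1
## [1] FALSE

# the smallest positive (normalised) double is far, far smaller
.Machine$double.xmin
## [1] 2.225074e-308
```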

dataBeps 
## # A tibble: 4 x 2
##              X     Z
##          <dbl> <int>
## 1 2.220446e-16  1000
## 2 1.000000e+00  1001
## 3 2.000000e+00  1002
## 4 3.000000e+00  1003
full_join(dataA, dataBeps) 
## Joining, by = "X"
## # A tibble: 6 x 3
##              X     Y     Z
##          <dbl> <int>
## 1 0.000000e+00   100    NA
## 2 1.000000e+00   101    NA
## 3 2.000000e+00   102  1002
## 4 3.000000e+00   103  1003
## 5 2.220446e-16    NA  1000
## 6 1.000000e+00    NA  1001

Well, that’s… weirder. The values offset from 2 and 3 joined fine, but the rows at 0 and 1 didn’t match, so each appears twice (once from each table, with NA filling the column the other table supplies), since R considers 0 and 0 + eps, and 1 and 1 + eps, to be different values. What if we offset a little further?

dataB2eps 
## # A tibble: 4 x 2
##              X     Z
##          <dbl> <int>
## 1 4.440892e-16  1000
## 2 1.000000e+00  1001
## 3 2.000000e+00  1002
## 4 3.000000e+00  1003
full_join(dataA, dataB2eps)
## Joining, by = "X"
## # A tibble: 8 x 3
##              X     Y     Z
##          <dbl> <int> <int>
## 1 0.000000e+00   100    NA
## 2 1.000000e+00   101    NA
## 3 2.000000e+00   102    NA
## 4 3.000000e+00   103    NA
## 5 4.440892e-16    NA  1000
## 6 1.000000e+00    NA  1001
## 7 2.000000e+00    NA  1002
## 8 3.000000e+00    NA  1003

That’s what I’d expect. So, what’s going on? Why does R think those numbers are the same? Let’s check with a minimal example: for each of the values 0:4, compare that value with the same value offset by .Machine$double.eps

suppressPackageStartupMessages(library(purrr)) ## for the 'thou shalt not for-loop' crowd
map_lgl(0:4, ~ as.integer(.x) == as.integer(.x) + .Machine$double.eps)
## [1] FALSE FALSE  TRUE  TRUE  TRUE

And there we have it. It’s not a tolerance at all: the spacing between adjacent doubles grows with the magnitude of the number (roughly x times eps), so from 2 upwards an offset of a single eps is simply rounded away, leaving a value bit-for-bit identical to the integer, while at 0 and 1 the offset survives. In any case, the general rule to live by is to never compare floats for exact equality. Add this to the list of reasons why.
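If you do need to compare doubles, the usual remedy is to compare within a tolerance rather than exactly; base R’s all.equal() and dplyr’s near() both do this. A quick sketch of the spacing at work, and the remedy:

```r
library(dplyr)

eps <- .Machine$double.eps

# Near 1 the gap between adjacent doubles is eps, so the offset survives;
# near 2 the gap is 2 * eps, so adding eps is rounded away entirely
(1 + eps) - 1
## [1] 2.220446e-16
(2 + eps) - 2
## [1] 0

# Compare within a tolerance instead of exactly
near(1, 1 + eps)               # default tolerance is sqrt(eps)
## [1] TRUE
isTRUE(all.equal(1, 1 + eps))  # default tolerance is about 1.5e-8
## [1] TRUE
```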

For what it’s worth, I’m sure this is hardly a surprising detail to the dplyr team. They’ve dealt with things like this for a long time and I’m sure it was much worse before those changes.
