Streamgraphs in R

By David Smith

(This article was first published on Revolutions, and kindly contributed to R-bloggers)

It’s not easy to visualize a quantity that varies over time and is composed of more than two subsegments. Take, for example, this stacked bar chart of religious affiliation of the Australian population over time:

While it’s easy to see how the share of Anglicans (at the bottom of the chart) has changed over time, it’s much more difficult to assess the change in the “No religion” category: the separated bars, coupled with the (necessarily) uneven positioning, make it hard to judge changes from year to year.

There’s no easy solution, but one increasingly popular idea is to use streamgraphs, which connect categories together into continuous polygons, and can even be aligned to the middle so that no one category gets favoured status (like the Anglican category above). Here’s a pioneering example of streamgraphs from the New York Times in 2008:

If you click the image above, you’ll see that one nice feature is that you can hover your mouse over a segment and see it highlighted, which makes it a little easier to observe changes over time within a segment.

You can easily make interactive streamgraphs like this in R with the streamgraph package, available on GitHub. The streamgraph function makes use of htmlwidgets and has a ggplot2-style object interface which makes it easy to create and customize your chart. There’s a nice demo on RPubs, from which I took this example:

# `stocks` and `stock_colors` come from the RPubs demo linked above
stocks %>% 
  mutate(date=as.Date(quarter, format="%m/%d/%y")) %>%  # parse the quarter strings into Dates
  streamgraph(key="ticker", value="nominal", offset="expand") %>% 
  sg_fill_manual(stock_colors) %>% 
  sg_axis_x(tick_interval=10, tick_units="year") %>% 
  sg_legend(TRUE, "Ticker: ")

The resulting streamgraph is shown below. (Update: Thanks to Bob Rudis in the comments for the tip on embedding htmlwidgets into blog posts.)
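
If you want to embed a widget like this in a post yourself, one possible approach (a sketch, not necessarily the tip from the comments) is to save it as a standalone HTML file with the htmlwidgets package and embed that file, for example in an iframe. This assumes the pipeline above is assigned to an object, say sg:

library(htmlwidgets)
# `sg` is assumed to hold the streamgraph created by the pipeline above,
# e.g. sg <- stocks %>% ... %>% sg_legend(TRUE, "Ticker: ")
saveWidget(sg, "streamgraph.html", selfcontained = TRUE)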

To learn more about streamgraphs in R, check out the blog post linked below.

rud.is: Introducing the streamgraph htmlwidget R Package


Rendering LaTeX Math Equations in GitHub Markdown

By strictlystat

(This article was first published on A HopStat and Jump Away » Rbloggers, and kindly contributed to R-bloggers)

The Problem: GitHub README.md won’t render LaTeX

I have many times wondered about getting LaTeX math to render in a README file on GitHub. Apparently, many others (1, 2, 3) have asked the same question.

The common answers are:

  1. It cannot (and in some cases, shouldn’t) be done. GitHub parsing is done by Sundown and is kept secure, and therefore won’t render LaTeX.
  2. Use http://latex.codecogs.com/ or iTex2Img. These are good options, but 1) they may go away at any time, and 2) require you to rewrite your md file.
  3. Use unicode if possible.
  4. Use LaTeXIt (for Mac OS) or other converter to make your equations and embed them.

A hacky, but working solution

I opted to try a more generic solution for (4.) using some very hacky text parsing. I have done a bit of parsing in the past, but I was either too lazy to think about the right regex to use, couldn’t think of it easily, or thought my solution was sufficient even if not elegant.

Caveat

A few caveats abound:

  1. This only works for inline equations marked with dollar signs ($) or display equations marked by double dollar signs ($$). I could incorporate other delimiters such as \[, but I did not. I only had a bit of time on Wednesday.
  2. I assume any code that involves dollar signs is demarcated in chunks starting with three backticks (```). I wrote this for R code, which can use dollar signs for referencing and never has double dollar signs. If your code does, no guarantees.
  3. This generally assumes you have a GitHub repository (I have no idea what others use), and that you’re OK with the figures being located in that GitHub repository. I didn’t allow options for putting them in a sub-folder, but may incorporate that.
  4. Some text won’t be sized correctly.

How do I do it already

I wrote an R package that would parse a README.md (or README.rmd if it’s RMarkdown). The package is located at https://github.com/muschellij2/latexreadme.

You can install the package using:

library(devtools)
install_github("muschellij2/latexreadme")

You would then load the package:

library(latexreadme)

The main function is parse_latex. It’s not the best function name for what it does, but I don’t really care. Let’s see its arguments:

args(parse_latex)

You must put in a README file as the rmd argument. If the README has an rmd or Rmd extension, the README is first knitted using knit(rmd) and then the resultant md file is used. This md is located in a temporary directory and won’t be written to the directory of the README. The new_md is the filename for the output md file that you wish to create. One example would be rmd = "README_with_latex.md" and new_md = "README.md". The git_username and git_reponame must be specified with your username and repository name, respectively. The git_branch allows you to specify which branch you are on, if necessary. If you don’t know what that means, just leave it as master.

The rest of the arguments are for inserting the LaTeX into the document. The text_height is how large the LaTeX should be (this may be bad for your document); the insert_string is the HTML that the LaTeX is subbed for; and raw_git_site uses https://rawgit.com to reference the figures directly with proper content-type headers (so that they show up). The bad_string is something I’m using in the code. You only need to change bad_string if you happen to have text in your README that matches it (which should be rare, as it is a bunch of Z’s, unless you write like someone sleeping). I’ll get to the ... in a minute.

I still don’t get it – show me an example

I thought you’d never ask. The parse_latex command has an example from one of my other repos and you can run it as follows (need curl):

# Download an example Rmd that contains LaTeX (requires curl)
rmd = file.path(tempdir(), "README_unparse.rmd")
download.file(
"https://raw.githubusercontent.com/muschellij2/Github_Markdown_LaTeX/master/README_unparse.rmd",
destfile = rmd, method = "curl")
# Parse the LaTeX and write a GitHub-friendly markdown file
new_md = file.path(tempdir(), "README.md")
parse_latex(rmd,
            new_md,
            git_username = "muschellij2",
            git_reponame = "Github_Markdown_LaTeX")
# Convert the result to HTML for a quick preview
library(knitr)
new_html = pandoc(new_md, format = "html")

And you can view the html using browseURL:

browseURL(new_html)

You can see the output of the example (only a little bit of LaTeX) at this repo: https://github.com/muschellij2/Github_Markdown_LaTeX or at Kristin Linn's README, which was used as an example here: https://github.com/kalinn/IPW-SVM/blob/master/README_2.md

What is the function actually doing

So what is the function actually doing? Something convoluted I can assure you. The process is as follows:

  1. Find the equations (marked by $$ and $), parse them out, and throw out any code demarcated with backticks (```).
  2. Put this LaTeX into a simple LaTeX document with \begin{document}. Note, the ... argument can be a character vector of other packages to load in that document. See the png_latex documentation.
  3. Run pdflatex on the document. Note, this must be in your path. This creates a PDF.
  4. Run knitr::plot_crop on this document. This will crop out anything that’s not the LaTeX equation you wanted.
  5. Convert the PDF to a PNG using animation::im.convert. This is so that they will render in the README. The file will be something like eq_no_01.png in the same folder as the rmd argument.
  6. Replace all the LaTeX with the insert_string, which is raw HTML now.
  7. Write out the parsed md file, which was named using new_md.

Wow – that IS convoluted

My best shot in one day. If you have better solutions, please post in the comments.

Nothing shows up! Read this

NB: The replacement looks for equations (noted by eq_noSOMETHING.png) in your online GitHub repository. If you run this command and don’t push these png files, then nothing will show up.
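
For example, after running parse_latex() you could stage and push the generated images from R (a hypothetical sketch using system(); the equivalent git commands can just as well be run in a shell):

# Hypothetical: commit and push the generated equation PNGs so they resolve on GitHub
system("git add eq_no_*.png README.md")
system('git commit -m "Add rendered LaTeX equation images"')
system("git push origin master")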

Conclusions

You can have LaTeX “rendered” in a GitHub README file! The sizes of the text may be weird. This is due to the cropping. I could probably use a bounding box or a better way to get only the equations, but I didn’t. If you want to help, please submit a Pull Request to my repository and I’d gladly merge it if it works.

NB: GitHub may override a README.md if a README.rmd (or README.Rmd) exists. I’m not 100% sure on that, but if that’s the case, rename the Rmd and just have README.md.

Happy parsing!


Building a user interface for spatstat

By spatialRecology – r

(This article was first published on spatialRecology – r, and kindly contributed to R-bloggers)

Contents

Introduction

RStudio developed shiny, an R package that, quoting from their website, “makes it super simple for R users like you to turn analyses into interactive web applications that anyone can use”.
It leverages the power of R and its vast collection of packages to allow users to efficiently perform predesigned data tasks, such as visualization and/or statistics.
Plus, shiny seems to be the new hypetrain of R, which is why I will jump right on it.

This post will introduce “shiny_spatstat”, a user interface for spatstat.
It is meant to be accessible for people that want to perform basic point pattern analyses.

In general, there are two main areas for integrating shiny into my work:

  • Building a user interface for ecological models to control parameters and observe simulation runs (comparable to NetLogo)
  • Writing an application for an audience that is not that familiar with R or the method you want to present, e.g. as a standalone analysis tool for tailored questions or a way to let students explore a dataset interactively

The aim of this post is not to take you step-by-step through the construction of a reasonably full-featured simulation app.
Rather, it publishes the first, rough version (v0.1) of an app I have had in mind for quite some time.
At this time, I view this app (with different data) as a companion to a poster I will present at an ecology conference in September.
However, while coding the app I had a lot of ideas on how to develop it further, and I hope to be able to produce something comparable to Programita in the end.

Enough of words, see shiny_spatstat here:

You can view the running app in fullscreen here (recommended).

Get and run it!

A Shiny app basically consists of two files: a ui.r file and a server.r file. The ui.r file, as the name says, provides the user interface, and the server.r file provides the server logic. These are the two main files you would edit to customize shiny_spatstat.
RStudio has a great tutorial covering the basics.
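
For orientation, a minimal (hypothetical) pair of files, not taken from shiny_spatstat itself, could look roughly like this:

## ui.r -- the user interface (a minimal sketch)
library(shiny)
shinyUI(fluidPage(
  titlePanel("Point pattern demo"),
  sidebarLayout(
    sidebarPanel(sliderInput("lambda", "Intensity of the point process:",
                             min = 10, max = 500, value = 100)),
    mainPanel(plotOutput("ppplot"))
  )
))

## server.r -- the server logic (a minimal sketch)
library(shiny)
library(spatstat)
shinyServer(function(input, output) {
  output$ppplot <- renderPlot({
    # simulate and plot a homogeneous Poisson point pattern in the unit square
    plot(rpoispp(input$lambda))
  })
})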

To run the user interface locally:

shiny::runGitHub('marcosci/shiny_spatstat')

… or download it from:

https://github.com/marcosci/shiny_spatstat

Roadmap

In conclusion, the idea is to develop a shiny app as a user interface for spatstat.
shiny_spatstat (v1) should therefore be able to load datasets dynamically and allow the user to carry out the appropriate statistics.

Further things to do:

  • Find a hipper name
  • Load input files dynamically
  • More statistics
  • Progress bar for simulation runs
  • Exchange base R plots with interactive plots
  • Own CSS, for a distinct look

In subsequent posts I’ll outline the updates being made to this app as development continues.
If you have feedback or suggestions for improvements, comment here or post an Issue on the GitHub site.


15 Questions All R Users Have About Plots

By DataCamp


(This article was first published on The DataCamp Blog » R, and kindly contributed to R-bloggers)

R allows you to create different plot types, ranging from the basic graph types like density plots, dot plots, bar charts, line charts, pie charts, boxplots and scatter plots, to the more statistically complex types of graphs such as probability plots, mosaic plots and correlograms.

In addition, R is pretty well known for its data visualization capabilities: it allows you to go from producing basic graphs with little customization to plotting advanced graphs with full-blown customization, in combination with interactive graphics. Nevertheless, we don’t always get the results we want for our R plots: Stack Overflow is flooded with questions on plots, many of them recurring.

This is why DataCamp decided to put all the frequently asked questions and their top-rated answers together in a blog post, complemented with additional short explanations that could be of use to R beginners.

If you are rather interested in learning how to plot with R, you might consider reading our tutorial on histograms, which covers basic R, ggplot2 and ggvis, or this shorter tutorial which offers an overview of simple graphs in R. However, if you’re looking to learn everything on creating stunning and informative graphical visualizations, our interactive course on (interactive) data visualization with ggvis will definitely interest you!

1. How To Draw An Empty R Plot?

How To Open A New Plot Frame

You can open an empty plot frame and activate the graphics device in R as follows:

plot.new() # or frame()

Note that the plot.new() and frame() functions define a new plot frame without it having any axes, labels, or outlining. It indicates that a new plot is to be made: a new graphics window will open if you don’t have one open yet, otherwise the existing window is prepared to hold the new plot. You can read up on these functions here.

  • x11() can also open a new window device in R for the X Window System (version 11)!
  • quartz() starts a graphics device driver for the OS X System.
  • windows() starts a new graphics device for Windows.

How To Set Up The Measurements Of The Graphics Window

You can also use the plot.window() function to set the horizontal and vertical dimensions of the empty plot you just made:

pWidth = 3
pHeight = 2
plot.window(c(0,pWidth),
            c(0,pHeight))

How To Draw An Actual Empty Plot

You can draw an empty plot with the plot() function:

plot(5, 
     5, 
     type="n", 
     axes=FALSE, 
     ann=FALSE, 
     xlim=c(0, 10), 
     ylim = c(0,10))

You give the coordinates (5,5) for the plot, but you don’t show any plotting because you set the type argument to "n".

(Tip: try to put this argument to "p" or "b" and see what happens!)

What’s more, you don’t annotate the plot, but you do put limits from 0 to 10 on the x- and y-axis. Next, you can fill up your empty plot with axes, axes labels and a title with the following commands:

mtext("x-axis", 
      side=1) #Add text to the x-axis
mtext("y-axis",
      side=2) 
title("An R Plot") #Add a title

Note that if you want to know more about the side argument, you can just keep on reading! It will be discussed in more detail below, in question 3 about R plots.

Lastly, you may choose to draw a box around your plot by using the box() function and add some points to it with the points() function:

box() #Draw a box
points(5,  #Put (red) point in the plot at (5,5)
       5, 
       col="red") 
points(5, 
       7, 
       col="orange", 
       pch=3, 
       cex=2)
points(c(0, 0, 1), 
       c(2, 4, 6), 
       col="green", 
       pch=4)

Note that you can put your x-coordinates and y-coordinates in vectors to plot multiple points at once. The pch argument allows you to select a symbol, while the cex argument has a value assigned to it that indicates how much the plotted text and symbols should be scaled with respect to the default.

Tip: if you want to see what number links to what symbol, click here.

2. How To Set The Axis Labels And Title Of The R Plots?

The axes of the R plots make up one of the most popular topics of Stack Overflow questions; the questions related to this topic are very diverse. Keep on reading to find out what type of questions DataCamp has found to be quite common!

How To Name Axes (With Up- Or Subscripts) And Put A Title To An R Plot?

You can easily name the axes and put a title in place to make your R plot more specific and understandable for your audience.

This can be easily done by adding the arguments main for the main title, sub for the subtitle, xlab for the label of the x-axis and ylab for the label of the y-axis:

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y,
     main="main title", 
     sub="sub-title", 
     xlab="x-axis label", 
     ylab="y-axis label")

Note that if you want to have a title with up-or subscripts, you can easily add these with the following commands:

plot(1,
     1, 
     main=expression("title"^2)) #Upscript
plot(1,
     1, 
     main=expression("title"[2])) #Subscript

This all combined gives you the following plot:

Good to know: for those who want to have Greek letters in your axis labels, the following code can be executed:

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y,
     xlab = expression(paste("Greek letter ", phi)),
     ylab = expression(paste("Greek letter ",mu)))

How To Adjust The Appearance Of The Axes’ Labels

To adjust the appearance of the x-and y-axis labels, you can use the arguments col.lab and cex.lab. The first argument is used to change the color of the x-and y-axis labels, while the second argument is used to determine the size of the x-and y-axis labels, relative to the (default) setting of cex.

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y,
     main=expression("main title"^2), 
     sub=expression("sub-title"[2]), 
     xlab="x-axis label", 
     ylab="y-axis label",
     col.lab="blue",
     cex.lab=0.75)

Rplot2

For more information on these arguments, go to this page.

How To Remove A Plot’s Axis Labels And Annotations

If you want to get rid of the axis values of a plot, you can first add the arguments xaxt and yaxt, set as "n". These arguments take a character which specifies the axis type. If you put in an "n", like in the command below, you suppress the plotting of the axis.

Note that by giving any other character to the xaxt and yaxt arguments, the x-and y-axes are plotted.

Next, you can add the annotation argument ann and set it to FALSE to make sure that any axis labels are removed.

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y,
     xaxt="n",
     yaxt="n",
     ann=FALSE)

Rplot3

Tip: not the information you are looking for? Go to this page.

How To Rotate A Plot’s Axis Labels

You can add the las argument to the axis() function to rotate the numbers that correspond to each axis:

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y, 
     axes=FALSE)
box()
axis(2, 
     las=2)
axis(1, 
     las=0)

Rplot4

Note how this actually requires you to add an argument to the plot() function that basically says that there are no axes to be plotted (yet): this is the task of the two axis() functions that come next.

The las argument can have four values attributed to it. According to whichever option you choose, the placement of the label will differ: if you choose 0, the label will always be parallel to the axis (which is the default); if you choose 1, the label will be put horizontally. Pick 2 if you want it to be perpendicular to the axis and 3 if you want it to be placed vertically.

But there is more. If you want to know more about the possibilities of the axis() function, keep on reading!

How To Move The Axis Labels Of Your R Plot

So, you want to move your axes’ labels around?

No worries, you can do this with the axis() function. As you may have noticed before in this tutorial, this function allows you to first specify where you want to draw the axes. Let’s say you want to draw the x-axis above the plot area and the y-axis to the right of it.

Remember that if you pass 1 or 2 to the axis() function, your axis will be drawn on the bottom and on the left of the plot area. Scroll a bit up to see an example of this in the previous piece of code!

This means that you will want to pass 3 and 4 to the axis() function:

x<-seq(0,2*pi,0.1)
y<-sin(x)
plot(x,
     y,
     axes=FALSE, # Do not plot any axes
     ann=FALSE) # Do not plot any annotations
axis(3)   # Draw the x-axis above the plot area
axis(4)   # Draw the y-axis to the right of the plot area
box()

Rplot5

As you can see, at first, you basically plot x and y, but you leave out the axes and annotations. Then, you add the axes that you want to see and specify their location with respect to the plot.

The flexibility that the axis() function creates for you keeps on growing! Check out the next frequently asked question to see what else you can solve by using this basic R function.

Tip: go to the last question to see more on how to move around text in the axis labels with hjust and vjust!

3. How To Add And Change The Spacing Of The Tick Marks Of Your R Plot

How To Change The Spacing Of The Tick Marks Of Your R Plot

Letting R determine the tick marks of your plot can be quite annoying and there might come a time when you will want to adjust these.

1. Using The axis() Function To Determine The Tick Marks Of Your Plot

Consider the following piece of code:

v1 <- c(0,pi/2,pi,3*pi/2,2*pi) # -> defines position of tick marks.
v2 <- c("0","Pi/2","Pi","3*Pi/2","2*Pi") # defines labels of tick marks.
x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y,
     xaxt = "n")
axis(side = 1, 
     at = v1, 
     labels = v2,
     tck=-.05)

Rplot6

As you can see, you first define the position of the tick marks and their labels. Then, you draw your plot, specifying the x axis type as "n", suppressing the plotting of the axis.

Then, the real work starts:

  • The axis() function allows you to specify the side of the plot on which the axis is to be drawn. In this case, the argument is completed with 1, which means that the axis is to be drawn below. If the value was 2, the axis would be drawn on the left and if the value was 3 or 4, the axis would be drawn above or to the right, respectively;
  • The at argument allows you to indicate the points at which tick marks are to be drawn. In this case, you use the positions that were defined in v1;
  • Likewise, the labels that you want to use are the ones that were specified in v2;
  • You adjust the direction of the ticks through tck: by giving this argument a negative value, you specify that the ticks should appear below the axis.

Tip: try passing a positive value to the tck argument and see what happens!

You can further specify the size of the ticks through tcl and the appearance of the tick labels is controlled with cex.axis, col.axis and font.axis.

v1 <- c(0,pi/2,pi,3*pi/2,2*pi) # -> defines position of tick marks.
v2 <- c("0","Pi/2","Pi","3*Pi/2","2*Pi") # defines labels of tick marks.
x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y,
     xaxt = "n")
axis(side = 1, 
     at = v1, 
     labels = v2,
     tck=-.1,
     tcl = -0.5,
     cex.axis=1.05,
     col.axis="blue",
     font.axis=5)

Rplot7

2. Using Other Functions To Determine The Tick Marks Of Your R Plot

You can also use the par() and plot() functions to define the positions of tickmarks and the number of intervals between them.

Note that you then use the argument xaxp, to which you pass the positions of the tick marks located at the extremes of the x-axis, followed by the number of intervals between the tick marks:

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y,
     xaxp = c(0, 2*pi, 5))

Rplot8

# Or
x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x, 
     y, 
     xaxt="n")
par(xaxp= c(0, 2*pi, 2)) 
axis(1)

Rplot8b

Note that this works only if you use no logarithmic scale. If the log coordinates are set to true, or, in other words, if par(xlog=TRUE), the three values that you pass to xaxp have a different meaning: for a small range, n is negative. Otherwise, n is in 1:3, specifying a case number, and x1 and x2 are the lowest and highest power of 10 inside the user coordinates, 10 ^ par("usr")[1:2].

An example:

n <- 1
x <- seq(0, 10000, 1)
y <- exp(n)/(exp(n)+x)
par(xlog=TRUE, 
    xaxp= c(1, 4, 3))
plot(x, 
     y, 
     log="x")

Rplot9

In this example, you use the par() function: you set xlog to TRUE and add the xaxp argument to give the coordinates of the extreme tick marks and the number of intervals between them. In this case, you set the minimal value to 1, the maximal value to 4 and you add that the number of intervals between each tick mark should be 3.

Then, you plot x and y, adding the log argument to specify whether to plot the x-or y-axis or both on a log scale. You can pass "x", "y", and "xy" as values to the log arguments to do this.

An example with both axes in logarithmic scale is:

n <- 1
x <- seq(0, 20, 1)
y <- exp(x)/(x)
par(xlog=TRUE, 
    xaxp= c(1, 4, 3))
par(ylog=TRUE, 
    yaxp= c(1, 11, 2)) 
plot(x, 
     y, 
     log="xy")

Rplot10

How To Add Minor Tick Marks To An R Plot

You can quickly add minor tick marks to your plot with the minor.tick() function from the Hmisc package:

plot.new()
library(Hmisc)
minor.tick(nx = 1.5, 
           ny = 2, 
           tick.ratio=0.75)
  • The nx argument allows you to specify the number of intervals in which you want to divide the area between the major tick marks on the axis. If you pass the value 1 to it, the minor tick marks will be suppressed;
  • ny allows you to do the same as nx, but then for the y-axis;
  • The tick.ratio indicates the ratio of the lengths of minor tick marks to major tick marks. The length of the latter is retrieved from par("tck").

4. How To Create Two Different X- or Y-axes

The first option is to create a first plot and then execute the par() function with the new argument set to TRUE to prevent R from clearing the graphics device:

set.seed(101)
x <- 1:10
y <- rnorm(10)
z <- runif(10, min=1000, max=10000)
plot(x, y) 
par(new = TRUE)

Then, you create the second plot with plot(). This time you make one of type "l" (a line plot), without axes, box or labels:

plot(x, 
     z, 
     type = "l", #Plot with lines
     axes = FALSE, #No axes
     bty = "n", #Box about plot is suppressed
     xlab = "",  #No labels on x-and y-axis
     ylab = "")

Note that the axes argument has been set to FALSE, while you also leave the x- and y-labels blank.

You also add a new axis on the right-hand side by adding the argument side and assigning it the value 4.

Next, you specify the at argument to indicate the points at which tick marks need to be drawn. In this case, you compute a sequence of n+1 equally spaced “round” values which cover the range of the values in z with the pretty() function. This ensures that you actually have the numbers from the y-axis of your second plot, which you named z.

Lastly, you add an axis label on the right-hand side:

axis(side=4, 
     at = pretty(range(z)))
mtext("z", 
      side=4, 
      line=3)

Note that the side argument can have four values assigned to it: 1 to place the text at the bottom, 2 for a left placement, 3 for a top placement and 4 to put the text to the right. The line argument indicates on which margin line the text starts.

The end result will be like this:

Rplot11

Tip: Try constructing an R plot with two different x-axes! You can find the solution below:

plot.new()
set.seed(101)
x <- 1:10
y <- rnorm(10)
z <- runif(10, min=1000, max=10000) 
par(mar = c(5, 4, 4, 4) + 0.3)
plot(x, y)
par(new = TRUE)
plot(z, y, type = "l", axes = FALSE, bty = "n", xlab = "", ylab = "")
axis(side=3, at = pretty(range(z)))
mtext("z", side=3, line=3)

Rplot12

Note that twoord.plot() of the plotrix package and doubleYScale() of the latticeExtra package automate this process:

library(latticeExtra)
chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
Age <- chol$AGE
Chol <- chol$CHOL
Smoke <- chol$SMOKE
State <- chol$MORT
a <- xyplot(Chol ~ Age|State)
b <- xyplot(Smoke ~ Age|State)
doubleYScale(a, b, style1 = 0, style2 = 3, add.ylab2 = TRUE, columns=3)

Rplot13

Note that the example above is made with this dataset. If you’re not sure how you can import your data, check out our tutorial on importing data in R.
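
For comparison, here is a minimal twoord.plot() sketch (using plotrix and the same x, y and z vectors as in the two-axes example above; argument names follow the plotrix documentation):

library(plotrix)
set.seed(101)
x <- 1:10
y <- rnorm(10)
z <- runif(10, min=1000, max=10000)
twoord.plot(lx = x, ly = y,      # left axis: y
            rx = x, ry = z,      # right axis: z
            xlab = "x", ylab = "y", rylab = "z",
            type = c("p", "l"))  # points for y, a line for z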

5. How To Add Or Change The R Plot’s Legend?

Adding And Changing An R Plot’s Legend With Basic R

You can easily add a legend to your R plot with the legend() function:

x <- seq(0,pi,0.1)
y1 <- cos(x)
y2 <- sin(x)
plot(c(0,3), c(0,3), type="n", xlab="x", ylab="y")
lines(x, y1, col="red", lwd=2)
lines(x, y2, col="blue", lwd=2)
legend("topright", 
       inset=.05, 
       cex = 1, 
       title="Legend", 
       c("Cosinus","Sinus"), 
       horiz=TRUE, 
       lty=c(1,1), 
       lwd=c(2,2), 
       col=c("red","blue"), 
       bg="grey96")

Rplot14

Note that the arguments pt.cex and title.cex that are described in the documentation of legend() don’t really work. There are some workarounds:

1. Put the title or the labels of the legend in a different font with text.font:

x <- seq(0,pi,0.1)
y1 <- cos(x)
y2 <- sin(x)
plot(c(0,3), c(0,3), type="n", xlab="x", ylab="y")
lines(x, y1, col="red", lwd=2)
lines(x, y2, col="blue", lwd=2)
legend("topright", 
       inset=.05, 
       cex = 1, 
       title="Legend", 
       c("Cosinus","Sinus"), 
       horiz=TRUE, 
       lty=c(1,1), 
       lwd=c(2,2), 
       col=c("red","blue"), 
       bg="grey96",
       text.font=3)

Rplot15

2. Draw the legend twice with different cex values

x <- seq(0,pi,0.1)
y1 <- cos(x)
y2 <- sin(x)
plot(c(0,3), c(0,3), type="n", xlab="x", ylab="y")
lines(x, y1, col="red", lwd=2)
lines(x, y2, col="blue", lwd=2)
legend("topright",
       inset=.05,
       c("Cosinus","Sinus"),
       title="",
       horiz=TRUE,
       lty=c(1,1), 
       lwd=c(2,2), 
       col=c("red","blue"))
legend(2.05, 2.97, 
       inset=.05,
       c("",""),
       title="Legend",
       cex=1.15, 
       bty="n")

Rplot16

Tip: if you’re interested in knowing more about the colors that you can use in R, check out this very helpful PDF document.

How To Add And Change An R Plot’s Legend And Labels In ggplot2

Adding a legend to your ggplot2 plot is fairly easy. You can just execute the following:

chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
library(ggplot2)
ggplot(chol, aes(x=chol$WEIGHT, y=chol$HEIGHT)) + 
  geom_point(aes(colour = factor(chol$MORT), shape=chol$SMOKE)) + 
  xlab("Weight") + 
  ylab("Height") 

Rplot17

And it gives you a default legend. But, in most cases, you will want to adjust the appearance of the legend some more.

There are two ways of changing the legend title and labels in ggplot2:

1. If you have specified arguments such as colour or shape, or other aesthetics, you need to change the names and labels through scale_color_discrete and scale_shape_discrete, respectively:

chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
library(ggplot2)
ggplot(chol, aes(x=chol$WEIGHT, y=chol$HEIGHT)) + 
  geom_point(aes(colour = factor(chol$MORT), 
                 shape = chol$SMOKE)) + 
  xlab("Weight") + 
  ylab("Height") + 
  theme(legend.position=c(1,0.5),
        legend.justification=c(1,1)) + 
  scale_color_discrete(name ="Condition", 
                       labels=c("Alive", "Dead")) +
  scale_shape_discrete(name="Smoker", 
                       labels=c("Non-smoker", "Sigare", "Pipe" ))

Rplot18

Note that you can create two legends if you add the shape argument inside the geom_point() function and supply labels for it as well!

If you want to move the legend to the bottom of the plot, you can specify the legend.position as "bottom". The legend.justification argument, on the other hand, allows you to position the legend inside the plot.
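
As an illustration, a minimal sketch of that variant (reusing the chol data loaded above) could be:

ggplot(chol, aes(x=WEIGHT, y=HEIGHT)) + 
  geom_point(aes(colour = factor(MORT), shape = SMOKE)) + 
  xlab("Weight") + 
  ylab("Height") + 
  theme(legend.position="bottom")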

Tip: check out all kinds of scales that could be used to let ggplot know that other names and labels should be used here.

2. Change the data frame so that the factor has the desired form. For example:

chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
levels(chol$SMOKE)[levels(chol$SMOKE)=="Non-smoker"] <- "Non-smoker"
levels(chol$SMOKE)[levels(chol$SMOKE)=="Sigare"] <- "Sigare"
levels(chol$SMOKE)[levels(chol$SMOKE)=="Pipe"] <- "Pipe"
names(chol)[names(chol)=="SMOKE"]  <- "Smoker"

You can then use the new factor names to make your plot in ggplot2, avoiding the “hassle” of changing the names and labels with extra lines of code in your plotting.

Tip: for a complete cheat sheet on ggplot2, you can go here.

6. How To Draw A Grid In Your R Plot?

Drawing A Grid In Your R Plot With Basic R

For some purposes, you might find it necessary to include a grid in your plot. You can easily add a grid to your plot by using the grid() function:

x <- c(1,2,3,4,5)
y <- 2*x
plot(x,y)
grid(10,10)

Rplot19

Drawing A Grid In An R Plot With ggplot2

chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
library(ggplot2)
ggplot(chol, aes(x=chol$WEIGHT, y=chol$HEIGHT)) + 
  geom_point(aes(colour = factor(chol$MORT), shape = chol$SMOKE)) + 
  xlab("Weight") + 
  ylab("Height") + 
  scale_color_discrete(name ="Condition", labels=c("Alive", "Dead")) +
  scale_shape_discrete(name="Smoker", labels=c("Non-smoker", "Sigare", "Pipe" )) +
  theme(legend.position=c(1,0.5),
        legend.justification=c(1,1),
        panel.grid.major = element_line(colour = "grey40"),
        panel.grid.minor = element_line(colour = "grey40"))

Rplot20

Tip: if you don’t want to have the minor grid lines, just pass element_blank() to panel.grid.minor. If you want to fill the background up with a color, add the panel.background = element_rect(fill = "navy") to your code, just like this:

library(ggplot2)
ggplot(chol, aes(x=chol$WEIGHT, y=chol$HEIGHT)) + 
  geom_point(aes(colour = factor(chol$MORT), shape = chol$SMOKE)) + 
  xlab("Weight") + 
  ylab("Height") + 
  scale_color_discrete(name ="Condition", labels=c("Alive", "Dead")) +
  scale_shape_discrete(name="Smoker", labels=c("Non-smoker", "Sigare", "Pipe" )) +
  theme(legend.position=c(1,0.5),
        legend.justification=c(1,1),
        panel.grid.major = element_line(colour = "grey40"),
        panel.grid.minor = element_line(colour = "grey40"),
        panel.background = element_rect(fill = "navy")
        )

7. How To Draw A Plot With A PNG As Background?

You can quickly draw a plot with a .png as a background with the help of the png package. You install the package if you need to, activate it for use in your workspace with the library() function, and you can start plotting!

install.packages("png")
library(png)

First, you want to load in the image. Use the readPNG() function to specify the path to the picture!

image <- readPNG("<path to your picture>")

Tip: you can check where your working directory is set at and change it by executing the following commands:

getwd()
setwd("<path to a folder>")

If your picture is saved in your working directory, you can just specify readPNG("picture.png") instead of passing the whole path.

Next, you want to set up the plot area:

plot(1:2, type='n', main="Plotting Over an Image", xlab="x", ylab="y")

And you want to call the par() function:

lim <- par()

You can use the values that par() returns to set the graphical parameters of rasterImage(). Its usr component gives the extremes of the user coordinates of the plotting region. In this case, you pass elements 1, 3, 2 and 4 of lim$usr as the left, bottom, right and top coordinates of the image:

rasterImage(image, lim$usr[1], lim$usr[3], lim$usr[2], lim$usr[4])

Next, you draw a grid and add some lines:

grid()
lines(c(1, 1.2, 1.4, 1.6, 1.8, 2.0), c(1, 1.3, 1.7, 1.6, 1.7, 1.0), type="b", lwd=5, col="red")

This can give you the following result if you use the DataCamp logo:

library(png)
image <- readPNG("datacamp.png")
plot(1:2, type="n", main="Plotting Over an Image", xlab="x", ylab="y", asp=1)
lim <- par()
rasterImage(image, lim$usr[1], lim$usr[3], lim$usr[2], lim$usr[4])
lines(c(1, 1.2, 1.4, 1.6, 1.8, 2.0), c(1.5, 1.3, 1.7, 1.6, 1.7, 1.0), type="b", lwd=5, col="red")

Rplot21

Note that you need to give a .png file as input to readPNG()!

8. How To Adjust The Size Of Points In An R Plot?

Adjusting The Size Of Points In An R Plot With Basic R

To adjust the size of the points with basic R, you might just simply use the cex argument:

x <- c(1,2,3,4,5)
y <- c(6,7,8,9,10)
plot(x,y,cex=2,col="red")

Remember, however, that R allows you to have much more control over your symbols through the function symbols():

df <- data.frame(x1=1:10, 
                 x2=sample(10:99, 10), 
                 x3=10:1)
symbols(x=df$x1, 
        y=df$x2, 
        circles=df$x3, 
        inches=1/3, 
        ann=F, 
        bg="steelblue2", 
        fg=NULL)

The circles of this plot receive the values of df$x3 as their radii, while the argument inches controls the size of the symbols. When this argument receives a positive number as input, the symbols are scaled to make the largest dimension this size in inches.

Adjusting The Size Of Points In Your R Plot With ggplot2

In this case, you will want to adjust the size of the points in your scatterplot. You can do this with the size argument:

chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
ggplot(chol, 
       aes(x=chol$WEIGHT, y=chol$HEIGHT), 
       size = 2) + 
  geom_point()
#or
ggplot(chol, 
       aes(x=chol$WEIGHT, y=chol$HEIGHT)) + 
  geom_point(size = 2)

9. How To Fit A Smooth Curve To Your R Data

The loess() function is probably every R programmer’s favorite solution for this kind of question. It actually “fits a polynomial surface determined by one or more numerical predictors, using local fitting”.

In short, you have your data:

x <- 1:10
y <- c(2,4,6,8,7,12,14,16,18,20)

And you use the loess() function, in which you relate y to x. Through this, you specify the numeric response and one to four numeric predictors:

lo <- loess(y~x) ### estimations between data

You plot x and y:

plot(x,y)

And you plot lines in the original plot where you predict the values of lo:

lines(predict(lo))

Which gives you the following plot:

Rplot22

10. How To Add Error Bars In An R Plot

Drawing Error Bars With Basic R

The bad news: R can’t draw error bars just like that. The good news: you can still draw the error bars without needing to install extra packages!

#Load the data
chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
#Calculate some statistics for the chol dataset
library(Rmisc)
cholc <- summarySE(chol, 
                   measurevar="CHOL", 
                   groupvars=c("MORT","SMOKE"))
#Plot the data
plot(cholc$N, 
     cholc$CHOL,
     ylim=range(c(cholc$CHOL-cholc$sd, cholc$CHOL+cholc$sd)),
     pch=19, 
     xlab="Cholesterol Measurements", 
     ylab="Cholesterol Mean +/- SD",
     main="Scatterplot With sd Error Bars"
)

#Draw arrows of a "special" type
arrows(cholc$N, 
       cholc$CHOL-cholc$sd, 
       cholc$N, 
       cholc$CHOL+cholc$sd, 
       length=0.05, 
       angle=90, 
       code=3)

If you want to read up on all the arguments that arrows() can take, go here.

Drawing Error Bars With ggplot2

Error Bars Representing Standard Error Of Mean

First summarize your data with the summarySE() function from the Rmisc package:

#Load in the data
chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
#Calculate some statistics for the chol dataset
library(Rmisc)
cholc <- summarySE(chol, 
                   measurevar="CHOL", 
                   groupvars=c("MORT","SMOKE"))

Then, you can use the resulting dataframe to plot some of the variables, drawing error bars for them at the same time, with, for example, the standard error of mean:
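
The plotting code itself is not shown in the original post; a minimal sketch using the se column that summarySE() adds to cholc could be:

library(ggplot2)
ggplot(cholc, aes(x=SMOKE, y=CHOL, colour=MORT)) + 
    geom_errorbar(aes(ymin=CHOL-se, ymax=CHOL+se, group=MORT), 
                  width=.1) +
    geom_line(aes(group=MORT)) +
    geom_point()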

Rplot23

If you want to change the position of the error bars, for example, when they overlap, you might consider using the position_dodge() function:

#Load in the data
chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
#Calculate some statistics for the chol dataset
library(Rmisc)
cholc <- summarySE(chol, 
                   measurevar="CHOL", 
                   groupvars=c("MORT","SMOKE"))
#Plot the cholc dataset
library(ggplot2)
pd <- position_dodge(0.1)
ggplot(cholc, aes(x=SMOKE, y=CHOL, colour=MORT)) + 
    geom_errorbar(aes(ymin=CHOL-se, ymax=CHOL+se, group=MORT), 
                  width=.1, 
                  position=pd) +
    geom_line(aes(group=MORT)) +
    geom_point()

Rplot24

Tip: if you get an error like “geom_path: Each group consists of only one observation. Do you need to adjust the group aesthetic?”, it usually means exactly that: you need to adjust the group aesthetic, as is done with group=MORT above.

Error Bars Representing Confidence Intervals

Continuing from the summary of your data that you made with the summarySE() function, you can also draw error bars that represent confidence intervals. In this case, error bars for a 95% confidence interval are plotted.

#Load in the data
chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
#Calculate some statistics for the chol dataset
library(Rmisc)
cholc <- summarySE(chol, 
                   measurevar="CHOL", 
                   groupvars=c("MORT","SMOKE"))
#Plot the cholc dataset
library(ggplot2)
pd <- position_dodge(0.1)
ggplot(cholc, aes(x=SMOKE, y=CHOL, colour=MORT)) + 
    geom_errorbar(aes(ymin=CHOL-ci, ymax=CHOL+ci, group=MORT), 
                  width=.1, 
                  colour="black",
                  position=pd) +
    geom_line(aes(group=MORT)) +
    geom_point()

Rplot25

Note how the color of the error bars is now set to black with the colour argument.

Error Bars Representing The Standard Deviation

Lastly, you can also use the results of the summarySE() function to plot error bars that represent the standard deviation. Specifically, you would just have to adjust the ymin and ymax arguments that you pass to geom_errorbar():

#Load in the data
chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
#Calculate some statistics for the chol dataset
library(Rmisc)
cholc <- summarySE(chol, 
                   measurevar="CHOL", 
                   groupvars=c("MORT","SMOKE"))
#Plot the cholc dataset
library(ggplot2)
pd <- position_dodge(0.1)
ggplot(cholc, aes(x=SMOKE, y=CHOL, colour=MORT)) + 
    geom_errorbar(aes(ymin=CHOL-sd, ymax=CHOL+sd, group=MORT), 
                  width=.1,
                  position=pd) +
    geom_line(aes(group=MORT)) +
    geom_point()

Big tip: also take a look at this for more detailed examples on how to plot means and error bars.

11. How To Save A Plot As An Image On Disc

You can use dev.copy() to copy the graph in the current graphics device to another device, such as a jpeg file, that you specify yourself.

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,y)
dev.copy(jpeg,
         filename="<path to your file/name.jpg>") # copy the current plot to a jpeg device
dev.off() # close the jpeg device so the file is written to disk
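
Alternatively (not part of the original answer, but a common pattern), you can open a file-based graphics device before plotting and close it once you're done:

png(filename="<path to your file/name.png>", width=480, height=480)
plot(x, y)
dev.off()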

12. How To Plot Two R Plots Next To Each Other?

How To Plot Two Plots Side By Side Using Basic R

You can do this with basic R commands:

d0 <- matrix(rnorm(15), ncol=3)
d1 <- matrix(rnorm(15), ncol=3)

limits <- range(d0,d1) #Set limits 

par(mfrow = c(1, 2)) 
boxplot(d0,
        ylim=limits)
boxplot(d1,
        ylim=limits)

By adding the par() function with the mfrow argument, you specify a vector, which in this case contains 1 and 2: all figures will then be drawn in a 1-by-2 array on the device by rows (mfrow). In other words, the boxplots from above will be printed in one row inside two columns.

How To Plot Two Plots Next To Each Other Using ggplot2

If you want to put plots side by side without having to specify limits, you can consider using the ggplot2 package to draw your plots next to each other:

#Load in the data
chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
#Calculate some statistics for the chol dataset
library(Rmisc)
cholc <- summarySE(chol, 
                   measurevar="CHOL", 
                   groupvars=c("MORT","SMOKE"))
#Plot the cholc dataset
library(ggplot2)
pd <- position_dodge(0.1)
ggplot(cholc, aes(x=SMOKE, y=CHOL, colour=MORT)) + 
  geom_errorbar(aes(ymin=CHOL-sd, ymax=CHOL+sd, group=MORT), 
                width=.1, 
                position=pd) + 
  geom_line(aes(group=MORT)) + 
  geom_point() + 
  facet_grid(. ~ MORT)

Rplot26

Note how you just add the facet_grid() function to indicate that you want two plots next to each other. The element that is used to determine how the plots are drawn is MORT, as you can well see above!

How To Plot More Plots Side By Side Using gridExtra

To get plots printed side by side, you can use the gridExtra package; make sure you have the package installed and activated in your workspace and then execute something like this:

library(gridExtra)
plot1 <- qplot(1)
plot2 <- qplot(1)
grid.arrange(plot1, 
             plot2, 
             ncol=2)

Note how here again you determine how the two plots will appear to you thanks to the ncol argument.

How To Plot More Plots Side By Side Using lattice

Just like the solution with the ggplot2 package, the lattice package also doesn’t require you to specify limits or the way you want your plots printed next to each other.

Instead, you use bwplot() to make trellis graphs with the graph type of a box plot. Trellis graphs display a variable or the relationship between variables, conditioned on one or more other variables.

In this case, if you’re using the chol data set (which you can find here or load in with the read.table() function given below), you display the variable CHOL separately for every combination of factor SMOKE and MORT levels:

#Load in the data
chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
#Plot two plots side by side
library(lattice)
bwplot(~ CHOL|SMOKE+MORT,
       chol)

Rplot27

Plotting Plots Next To Each Other With gridBase

Yet another way to put two plots next to each other is by using the gridBase package, which takes care of the “integration of base and grid graphics”. This could be handy when you want to put a basic R plot and a ggplot next to each other.

You work as follows: first, you activate the necessary packages in your workspace. In this case, you want to have gridBase ready to put the two plots next to each other and grid and ggplot2 to actually make your plots:

library(grid)
library(gridBase)
library(ggplot2)
plot.new()
gl <- grid.layout(nrow=1, 
                  ncol=2)
vp.1 <- viewport(layout.pos.col=1, 
                 layout.pos.row=1) 
vp.2 <- viewport(layout.pos.col=2, 
                 layout.pos.row=1)
pushViewport(viewport(layout=gl))
pushViewport(vp.1)
par(new=TRUE, 
    fig=gridFIG())
plot(x = 1:10, 
     y = 10:1)
popViewport()
pushViewport(vp.2)
ggplotted <- qplot(x=1:10, y=10:1, geom='point')
print(ggplotted, newpage = FALSE)
popViewport(1)

Rplot28

If you want and need it, you can start an empty plot:

plot.new()

To then set up the layout:

gl <- grid.layout(nrow=1, 
                  ncol=2)

Note that since you want the two plots to be generated next to each other, this requires you to make a grid layout consisting of one row and two columns.

Now, you want to fill up the cells of the grid with viewports. These define rectangular regions on your graphics device with the help of coordinates within those regions. In this case, it’s much more handy to use the specifications of the grid that have just been described above rather than real x-or y-coordinates. That is why you should use the layout.pos.col and layout.pos.row arguments:

vp.1 <- viewport(layout.pos.col=1, 
                 layout.pos.row=1) 
vp.2 <- viewport(layout.pos.col=2, 
                 layout.pos.row=1)

Note again that since you want the two plots to be generated next to each other, you want to put one plot in the first column and the other in the second column, both located on the first row.

Since the viewports are only descriptions or definitions, these kinds of objects need to be pushed onto the viewport tree before you can see any effect on the drawing. You want to use the pushViewport() function to accomplish this:

pushViewport(viewport(layout=gl))

Note the pushViewport() function takes the viewport() function, which in itself contains a layout argument. This last argument indicates “a grid layout object which splits the viewport into subregions”.

Remember that you started out making one of those objects.

Now you can proceed to adding the first rectangular region vp.1 to the ViewPort tree:

pushViewport(vp.1)

After which you tell R with gridFIG() to draw a base plot within a grid viewport (vp.1, that is). The fig argument normally takes the coordinates of the figure region in the display region of the device. In this case, you use the fig argument to start a new plot, adding it to the existing plot by also setting new = TRUE in the par() function. You plot the base graphic and remove the viewport from the tree:

par(new=TRUE, 
    fig=gridFIG())
plot(x = 1:10, 
     y = 10:1)
popViewport()

Note that you can specify in the popViewport() function an argument to indicate how many viewports you want to remove from the tree. If this value is 0, this indicates that you want to remove the viewports right up to the root viewport. The default value of this argument is 1.

Go on to add the second rectangular region vp.2 to the viewport tree. You can then make the ggplot and remove the viewport from the tree.

pushViewport(vp.2)
ggplotted <- qplot(x=1:10,
                   y=10:1, 
                   geom='point')
print(ggplotted, 
      newpage = FALSE)
popViewport(1)

Note that you need to use print() on the graphics object made by qplot() in order to actually draw it and get it displayed. At the same time, you also want to specify newpage = FALSE; otherwise you’ll only see the qplot() output on a new page.

Also remember that the default value of viewports to remove in the function popViewport() is set at 1. This makes it kind of redundant to put popViewport(1) in the code.

13. How To Plot Multiple Lines Or Points?

Using Basic R To Plot Multiple Lines Or Points In The Same R Plot

To plot two or more graphs in the same plot, you basically start by making a usual basic plot in R. An example of this could be:

x <- seq(0,pi,0.1)
y1 <- cos(x)
plot(x,
     y1,
     type="l",
     col = "red")

Then, you start adding more lines or points to the plot. In this case, you add more lines to the plot, so you’ll define more y vectors:

y2 <- sin(x)
y3 <- tan(x)
y4 <- log(x)

Then, you plot these y vectors with the lines() function:

lines(x,y2,col="green")
lines(x,y3,col="black")
lines(x,y4,col="blue")

This gives the following result:

Rplot29

Note that the lines() function takes in three arguments: the x-axis and the y-axis that you want to plot and the color (represented with the argument col) in which you want to plot them. You can also include the following features:

Feature            Argument   Input
Line type          lty        Integer or character string
Line width         lwd        Integer
Plot type          pch        Integer or single character
Line end style     lend       Integer or string
Line join style    ljoin      Integer or string
Line mitre limit   lmitre     Numeric greater than 1

Here are some examples:

lines(x,y2,col="green", lty = 2, lwd = 3)
lines(x,y2,col="green", lty = 5, lwd = 2, pch = 2)
lines(x,y3,col="black", lty = 3, lwd = 5, pch = 3, lend = 0, ljoin = 2)
lines(x,y4,col="blue", lty = 1, lwd = 2, pch = 3, lend = 2, ljoin = 1, lmitre = 2)

Note that the pch argument does not function all that well with the lines() function and that it’s best to use it only with points().

Tip: if you want to plot points in the same graph, you can use the points() function:

y5 <- x^3
points(x,
       y5,
       col="yellow")

You can add the same arguments to the points() function as you did with the lines() function and that are listed above. There are some additions, though:

Feature                           Argument   Input
Background (fill) color           bg         Only if pch = 21:25
Character (or symbol) expansion   cex        Integer

Code examples of these arguments are the following:

points(x,y4,col="blue", pch=21, bg = "red") 
points(x, y5, col="yellow", pch = 5, bg = "blue") 

If you incorporate these changes into the plot that you see above, you will get the following result:

x <- seq(0,pi,0.1)
y1 <- cos(x)
plot(x,y1,type="l" ,col = "red") #basic graphical object
y2 <- sin(x)
y3 <- tan(x)
y4 <- log(x)
y5 <- x^3
lines(x,y2,col="green", lty = 1, lwd = 3) #first layer
lines(x,y2,col="green", lty = 3, lwd = 2, pch = 2) #second layer
lines(x,y3,col="black", lty = 2, lwd = 1, pch = 3, lend = 0, ljoin = 2) #third layer
points(x,y4,col="blue", pch=21, bg = "red") #fourth layer
points(x, y5, col="yellow", pch = 24, bg = "blue") #fifth layer

Rplot30

Using ggplot2 To Plot Multiple Lines Or Points In One R Plot

The ggplot2 package conveniently allows you to also create layers, which allows you to basically plot two or more graphs into the same R plot without any difficulties and pretty easily:

library(ggplot2) 
x <- 1:10
y1 <- c(2,4,6,8,7,12,14,16,18,20)
y2 <- rnorm(10, mean = 5)
df <- data.frame(x, y1, y2)
ggplot(df, aes(x)) +  # basic graphical object
  geom_line(aes(y=y1), 
            colour="red") +  # first layer
  geom_line(aes(y=y2),      # second layer
            colour="green")  

14. How To Fix The Aspect Ratio For Your R Plots

If you want to save your R plot as an image in which the axes are proportional to their size, it’s a sign that you want to fix the aspect ratio.

Adjusting The Aspect Ratio With Basic R

When you’re working with basic R commands to produce your plots, you can add the asp argument to the plot() function, set to a numeric value, to control your aspect ratio. Look at this first example without a defined aspect ratio:

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,y)

Rplot31

And compare this now to the plot where the aspect ratio is defined with the argument asp:

x <- seq(0,2*pi,0.1)
y <- sin(x)
plot(x,
     y, 
     asp=2)

Rplot32

Adjusting The Aspect Ratio For Your Plots With ggplot2

To fix the aspect ratio for ggplot2 plots, you just add the function coord_fixed(), which provides a “fixed scale coordinate system [that] forces a specified ratio between the physical representation of data units on the axes”.

In other words, this function allows you to specify a number of units on the y-axis which is equivalent to one unit on the x-axis. The default is always set at 1, which means that one unit on the x-axis has the same length as one unit on the y-axis. If your ratio is set at a higher value, the units on the y-axis are longer than units on the x-axis and vice versa.

Compare the following examples:

library(ggplot2)
df <- data.frame(
  x = runif(100, 0, 5),
  y = runif(100, 0, 5))

ggplot(df, aes(x=x, y=y)) + geom_point()

Rplot33

versus

library(ggplot2)
df <- data.frame(
  x = runif(100, 0, 5),
  y = runif(100, 0, 5))

ggplot(df, aes(x=x, y=y)) + 
  geom_point() + 
  coord_fixed(ratio=1)

Rplot34

Adjusting The Aspect Ratio For Your Plots With MASS

You can also consider using the MASS package, which encompasses the eqscplot() function: it produces plots with geometrically equal scales. It does this for scatterplots:

chol <- read.table(url("http://assets.datacamp.com/blog_assets/chol.txt"), header = TRUE)
library(MASS)
x = chol$HEIGHT
y = chol$WEIGHT
z = as.numeric(chol$MORT)

eqscplot(x, 
         y, 
         ratio = 1, 
         col=c("red", "green"), 
         pch=c(1,2))

Rplot35

Tip: you might do well starting a new plot frame before executing the code above!

Note that you can give additional arguments to the eqscplot() function to customize the scatterplot’s look!

15. What Is The Function Of hjust And vjust In ggplot2?

Well, you basically use these arguments when you want to set the position of text in your ggplot. hjust allows you to define the horizontal justification, while vjust is meant to control the vertical justification. See the documentation on geom_text() for more information.

To demonstrate what exactly happens, you can create a data frame from all combinations of factors with the expand.grid() function:

hjustvjust <- expand.grid(hjust=c(0, 0.5, 1),
                          vjust=c(0, 0.5, 1),
                          angle=c(0, 45, 90),
                          text="Text"
                          )

Note that hjust and vjust can only take values between 0 and 1.

  • 0 means that the text is left-justified; In other words, all text is aligned to the left margin. This is usually what you see when working with text editors such as Word.
  • 1 means that the text is right-justified: all text is aligned to the right margin.

Then, you can plot the data frame that you have just made above with the ggplot() function, defining the x-and y-axis as “hjust” and “vjust” respectively:

library(ggplot2)
ggplot(hjustvjust, aes(x=hjust, y=vjust)) + 
    geom_point() +
    geom_text(aes(label=text, 
                  angle=angle, 
                  hjust=hjust, 
                  vjust=vjust)) + 
    facet_grid(~angle) +
    scale_x_continuous(breaks=c(0, 0.5, 1), 
                       expand=c(0, 0.2)) +
    scale_y_continuous(breaks=c(0, 0.5, 1), 
                       expand=c(0, 0.2))

Rplot36

Also note how the hjust and vjust arguments are added to geom_text(), which takes care of the textual annotations to the plot.

In the plot above you see that the text at the point (0,0) is left-aligned, horizontally as well as vertically. On the other hand, the text at point (1,1) is right-aligned in both the horizontal and the vertical direction. The text at (0.5,0.5) sits right in the middle: it is centered both horizontally and vertically.

Note that when these arguments are defined to change the axis text, the horizontal alignment for axis text is defined in relation to the entire plot, not to the x-axis!

DF <- data.frame(x=LETTERS[1:3],
                 y=1:3)
p <- ggplot(DF, aes(x,y)) + 
  geom_point() + 
  ylab("Very long label for y") + 
  theme(axis.title.y=element_text(angle=0))


p1 <- p + theme(axis.title.x=element_text(hjust=0)) + xlab("X-axis at hjust=0")
p2 <- p + theme(axis.title.x=element_text(hjust=0.5)) + xlab("X-axis at hjust=0.5")
p3 <- p + theme(axis.title.x=element_text(hjust=1)) + xlab("X-axis at hjust=1")

library(gridExtra)
grid.arrange(p1, p2, p3)

Rplot37

Also try for yourself what defining the vjust argument for the axis text does to the representation of your plot:

DF <- data.frame(x=c("ana","b","cdefghijk","l"),
                 y=1:4)
p <- ggplot(DF, aes(x,y)) + geom_point()

p1 <- p + theme(axis.text.x=element_text(vjust=0, colour="red")) + 
        xlab("X-axis labels aligned with vjust=0")
p2 <- p + theme(axis.text.x=element_text(vjust=0.5, colour="red")) + 
        xlab("X-axis labels aligned with vjust=0.5")
p3 <- p + theme(axis.text.x=element_text(vjust=1, colour="red")) + 
        xlab("X-axis labels aligned with vjust=1")


library(gridExtra)
grid.arrange(p1,p2,p3)

To go to the original excellent discussion, from which the code above was adopted, click here.

As A Last Note…

It’s really worth checking out this article, which lists 10 tips for making your R graphics look their best!

Also, if you want to know more about data visualization, you might consider checking out DataCamp’s interactive course on data visualization with ggvis, given by Garrett Grolemund, author of Hands on Programming with R, as well as Data Science with R.

Or maybe our course on reporting with R Markdown can interest you!


The post 15 Questions All R Users Have About Plots appeared first on The DataCamp Blog .

To leave a comment for the author, please follow the link and comment on his blog: The DataCamp Blog » R.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…

Source:: R News

MRAN’s Packages Spotlight

By Joseph Rickert

(This article was first published on Revolutions, and kindly contributed to R-bloggers)

by Joseph Rickert

New R packages just keep coming. The following plot, constructed with information from the monthly files on Dirk Eddelbuettel’s CRANberries site, shows a plot of the number of new packages released to CRAN between January 1, 2013 and July 27, 2015 by month (not quite 31 months).

This is amazing growth! The mean rate is about 125 new packages a month. How can anyone keep up? The direct approach, of course, would be to become an avid, frequent reader of CRANberries. Every day the CRAN:New link presents the relentless roll call of new arrivals. However, dealing with this extreme level of tediousness is not for everyone.

At MRAN we are attempting to provide some help with the problem of keeping up with what’s new through the old fashioned (pre-machine learning) practice of making some idiosyncratic, but not completely capricious, human generated recommendations. With every new release of RRO we publish on the Package Spotlight page brief descriptions of packages in three categories: New Packages, Updated Packages and GitHub packages. None of these lists are intended to be either comprehensive or complete in any sense.

The New Packages list includes new packages that have been released to CRAN since the previous release of RRO. My general rules for selecting packages for this list are: (1) that they should either be tools or infrastructure packages that may prove to be useful to a wide audience or (2) they should involve a new algorithm or statistical technique that I think will be of interest to statisticians and data scientists working in many different areas. The following two packages respectively illustrate these two selection rules:

metricsgraphics V0.8.5: provides an htmlwidgets interface to the MetricsGraphics.js D3 JavaScript library for plotting time series data. The vignette shows what it can do.

rotationForest V0.1: provides an implementation of the new Rotation Forest binary ensemble classifier described in the paper by Rodriguez et al.

I also tend to favor packages that are backed by a vignette, paper or url that provides additional explanatory material.

Of course, any scheme like this is limited by the knowledge and biases of the curator. I am particularly worried about missing packages targeted towards biotech applications that may indeed have broader appeal. The way to mitigate the shortcomings of this approach is to involve more people. So if you come across a new package that you think may have broad appeal send us a note and let us know why (open@revolutionanalytics.com).

The Updated Package list is constructed with the single criterion that the fact that the package was updated should convey news of some sort. Most of the very popular and useful packages are updated frequently, some approaching monthly updates. So, even though they are important packages, the fact that they have been updated is generally no news at all. It is also the case that package authors generally do not put much effort into describing the updates. In my experience poking around CRAN I have found that the NEWS directories for packages go mostly unused. (An exemplary exception is the NEWS for ggplot2.)

Finally, the GitHub list is mostly built from repositories that are trending on GitHub with a few serendipitous finds included.

We would be very interested in learning how you keep up with new R packages. Please leave us a comment.

Post Script:

The code for generating the plot may be found here: Download New_packages

Also, we have written quite a few posts over the last year or so about the difficulties of searching for relevant packages on CRAN. Here are links to three recent posts:

How many packages are there really on CRAN?
Fishing for packages in CRAN
Working with R Studio CRAN Logs

To leave a comment for the author, please follow the link and comment on his blog: Revolutions.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…

Source:: R News

The little mixed model that could, but shouldn’t be used to score surgical performance

By Christos Argyropoulos


(This article was first published on Statistical Reflections of a Medical Doctor » R, and kindly contributed to R-bloggers)

The Surgeon Scorecard

Two weeks ago, the world of medical journalism was rocked by the public release of ProPublica’s Surgeon Scorecard. In this project ProPublica “calculated death and complication rates for surgeons performing one of eight elective procedures in Medicare, carefully adjusting for differences in patient health, age and hospital quality.” By making the dataset available through a user friendly portal, the intended use of this public resource was to “use this database to know more about a surgeon before your operation“.

The Scorecard was met with great enthusiasm and coverage by non-medical media. A TODAY.com headline “nutshelled” the Scorecard as a resource that “aims to help you find doctors with lowest complication rates“. A (tongue in cheek?) NBC headline tells us that with the scorecard “It’s complicated“. On the other hand the project was not well received by my medical colleagues. John Mandrola gave it a failing grade in Medscape. Writing at KevinMD.com, Jeffrey Parks called it a journalistic low point for ProPublica. Jha Shaurabh pointed out a potential paradox in a statistically informed, easy to read and highly entertaining piece. In this paradox, the surgeon with the higher complication rate, who takes high-risk patients from a disadvantaged socio-economic environment, may actually be the surgeon one wants to perform one’s surgery!

The criticism to date has largely focused on the potential for selection effects (as the Scorecard is based on Medicare data, and does not include data from private insurers), the incomplete adjustment for confounders, the paucity of data for individual surgeons, the counting of complications and re-admission rates, decisions about risk category classification boundaries and even data errors (ProPublica’s response arguing that the Scorecard matters may be found here). With a few exceptions (e.g. see Schloss’s blogpost in which the complexity of the statistical approach is mentioned) the criticism of the statistical approach (including my own comments in twitter) has largely focused on these issues.

@RogueRad @JohnTuckerPhD @daviesbj @charlesornstein Simpson paradox and selection effects all in one chart

— ChristosArgyropoulos (@ChristosArgyrop) July 14, 2015

@daviesbj @RogueRad @JohnTuckerPhD @charlesornstein More simply not considering the case mix can lead to results that are reverse of truth

— ChristosArgyropoulos (@ChristosArgyrop) July 14, 2015

On the other hand, the underlying statistical methodology (here and there) that powers the Scorecard has not received much attention. Therefore I undertook a series of simulation experiments to explore the impact of the statistical procedures on the inferences afforded by the Scorecard.

The mixed model that could – a short non-technical summary of ProPublica’s approach

ProPublica’s approach to the scorecard is based on a logistic regression model, in which individual surgeon (and hospital) performance (probability of suffering a complication) is modelled using Gaussian random effects, while patient-level characteristics that may act as confounders are adjusted for using fixed effects. In a nutshell, this approach implies fitting a model of the average complication rate that is a function of the fixed effects (e.g. patient age) for the entire universe of surgeries performed in the USA. Individual surgeon and hospital factors modify this complication rate, so that a given surgeon and hospital will have an individual rate that varies around the population average. These individual surgeon and hospital factors are constrained to follow a Gaussian, bell-shaped distribution when analyzing complication data. After model fitting, these predicted random effects are used to quantify and compare surgical performance. A feature of mixed modeling approaches is the unavoidable shrinkage of the raw complication rate towards the population mean. Shrinkage implies that the dynamic range of the actually observed complication rates is compressed. This is readily appreciated in the figures generated by the ProPublica analytical team:

In their methodological white paper the ProPublica team notes:

“While raw rates ranged from 0.0 to 29%, the Adjusted Complication Rate goes from 1.1 to 5.7%. …. shrinkage is largely a result of modeling in the first place, not due to adjusting for case mix. This shrinkage is another piece of the measured approach we are taking: we are taking care not to unfairly characterize surgeons and hospitals.”

These features should alert us that something is going on. For if a model can distort the data to such a large extent, then the model should be closely scrutinized before being accepted. In fact, given these observations, it is possible that one mistakes the noise from the model for the information hidden in the empirical data. Or, even more likely, that one is not using the model in the most productive manner.

Note that these comments should not be interpreted as a criticism against the use of mixed models in general, or even for the particular aspects of the Scorecard project. They are rather a call for re-examining the modeling assumptions and for gaining a better understanding of the model “mechanics of prediction” before releasing the Kraken to the world.
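
For concreteness, here is a rough sketch of the type of model described above, as one might fit it in R with the lme4 package. The data frame and column names below are hypothetical, not ProPublica’s actual variables:

library(lme4)

# surgeries: one row per operation; complication is 0/1, age is a patient-level
# confounder, surgeon and hospital are identifiers (hypothetical column names)
fit <- glmer(complication ~ age + (1 | surgeon) + (1 | hospital),
             data = surgeries, family = binomial)

# predicted (shrunken) surgeon-level random effects, on the log-odds scale
surgeon_re <- ranef(fit)$surgeon[, 1]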

The little mixed model that shouldn’t

There are many technical aspects of a Generalized Linear Mixed Model for complication rates that one could potentially get wrong. Getting the wrong shape of the random effects distribution is of concern (e.g. assuming it is bell shaped when it is not). Getting the underlying model wrong, e.g. assuming the binomial model for complication rates while a model with many more zeros (a zero inflated model) may be more appropriate, is yet another potential problem area. However, even if these factors are not operational, one may still be misled when using the results of the model. In particular, the major area of concern for such models is the cluster size: the number of observations per individual random effect (e.g. surgeon) in the dataset. It is this factor, rather than the actual size of the dataset, that determines the precision of the individual random effects. Using a toy example, we show that the number of observations per surgeon typical of the Scorecard dataset leads to predicted random effects that may be far from their true value. This seems to stem from the non-linear nature of the logistic regression model. As we conclude in our first technical post:

  • Random Effect modeling of binomial outcomes require a substantial number of observations per individual (in the order of thousands) for the procedure to yield estimates of individual effects that are numerically indistinguishable from the true values.

Contrast this conclusion to the cluster size in the actual scorecard:

Procedure Code N (procedures) N(surgeons) Procedures per surgeon
51.23 201,351 21,479 9.37
60.5 78,763 5,093 15.46
60.29 73,752 7,898 9.34
81.02 52,972 5,624 9.42
81.07 106,689 6,214 17.17
81.08 102,716 6,136 16.74
81.51 494,576 13,414 36.87
81.54 1,190,631 18,029 66.04
Total 2,301,450 83,887 27.44

In a follow-up simulation study we demonstrate that this feature results in predicted individual effects that are non-uniformly shrunk towards their average value. This compromises the ability of mixed model predictions to separate the good from the bad “apples”.

In the second technical post, we undertook a simulation study to understand the implications of over-shrinkage for the Scorecard project. These are best understood through a numerical example from one of the simulated datasets. To understand this example one should note that the individual random effects have the interpretation of (log-)odds ratios. Hence, the difference in these random effects, when exponentiated, yields the odds ratio of suffering a complication in the hands of a good relative to a bad surgeon. By comparing these random effects for good and bad surgeons who are equally bad (or good) relative to the mean (symmetric quantiles around the median), one can get an idea of the impact of using the predicted random effects to carry out individual comparisons.

Good Bad Quantile (Good) Quantile (Bad) True OR Pred OR Shrinkage Factor
-0.050 0.050 48.0 52.0 0.905 0.959 1.06
-0.100 0.100 46.0 54.0 0.819 0.920 1.12
-0.150 0.150 44.0 56.0 0.741 0.883 1.19
-0.200 0.200 42.1 57.9 0.670 0.847 1.26
-0.250 0.250 40.1 59.9 0.607 0.813 1.34
-0.300 0.300 38.2 61.8 0.549 0.780 1.42
-0.350 0.350 36.3 63.7 0.497 0.749 1.51
-0.400 0.400 34.5 65.5 0.449 0.719 1.60
-0.450 0.450 32.6 67.4 0.407 0.690 1.70
-0.500 0.500 30.9 69.1 0.368 0.662 1.80
-0.550 0.550 29.1 70.9 0.333 0.635 1.91
-0.600 0.600 27.4 72.6 0.301 0.609 2.02
-0.650 0.650 25.8 74.2 0.273 0.583 2.14
-0.700 0.700 24.2 75.8 0.247 0.558 2.26
-0.750 0.750 22.7 77.3 0.223 0.534 2.39
-0.800 0.800 21.2 78.8 0.202 0.511 2.53
-0.850 0.850 19.8 80.2 0.183 0.489 2.68
-0.900 0.900 18.4 81.6 0.165 0.467 2.83
-0.950 0.950 17.1 82.9 0.150 0.447 2.99
-1.000 1.000 15.9 84.1 0.135 0.427 3.15
-1.050 1.050 14.7 85.3 0.122 0.408 3.33
-1.100 1.100 13.6 86.4 0.111 0.390 3.52
-1.150 1.150 12.5 87.5 0.100 0.372 3.71
-1.200 1.200 11.5 88.5 0.091 0.356 3.92
-1.250 1.250 10.6 89.4 0.082 0.340 4.14
-1.300 1.300 9.7 90.3 0.074 0.325 4.37
-1.350 1.350 8.9 91.1 0.067 0.310 4.62
-1.400 1.400 8.1 91.9 0.061 0.297 4.88
-1.450 1.450 7.4 92.6 0.055 0.283 5.15
-1.500 1.500 6.7 93.3 0.050 0.271 5.44
-1.550 1.550 6.1 93.9 0.045 0.259 5.74
-1.600 1.600 5.5 94.5 0.041 0.247 6.07
-1.650 1.650 4.9 95.1 0.037 0.236 6.41
-1.700 1.700 4.5 95.5 0.033 0.226 6.77
-1.750 1.750 4.0 96.0 0.030 0.216 7.14
-1.800 1.800 3.6 96.4 0.027 0.206 7.55
-1.850 1.850 3.2 96.8 0.025 0.197 7.97
-1.900 1.900 2.9 97.1 0.022 0.188 8.42
-1.950 1.950 2.6 97.4 0.020 0.180 8.89
-2.000 2.000 2.3 97.7 0.018 0.172 9.39
-2.050 2.050 2.0 98.0 0.017 0.164 9.91

From this table it can be seen that predicted odds ratios are always larger than the true ones. The ratio of these odds ratios (the shrinkage factor) is larger, the more extreme comparisons are contemplated.
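
The arithmetic behind any row of the table is easy to reproduce. Using the first row (the predicted odds ratio of 0.959 is taken from the table itself):

good <- -0.05; bad <- 0.05      # true random effects, on the log-odds scale
true_or <- exp(good - bad)      # 0.905: true odds ratio, good vs bad surgeon
pred_or <- 0.959                # predicted odds ratio reported in the first row
pred_or / true_or               # about 1.06: the shrinkage factor
log(pred_or) / 2                # about -0.021: the implied (shrunken) prediction for the good surgeon, vs -0.05 simulated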

In summary, the use of random effects models for the small cluster sizes (number of observations per surgeon) is likely to lead to estimates (or rather predictions) of individual effects that are smaller than their true values. Even though one should expect the differences to decrease with larger cluster sizes, this is unlikely to happen in real world datasets (how often does one come across a surgeon who has performed thousands of operations of the same type before they retire?). Hence, the comparison of surgeon performance based on these random effect predictions is likely to be misleading due to over-shrinkage.

Where to go from here?

ProPublica should be congratulated for taking up such an ambitious, and ultimately useful, project. However, the limitations of the adopted approach should make one very skeptical about accepting the inferences from their modeling tool. In particular, the small number of observations per surgeon limits the utility of the predicted random effects to directly compare surgeons due to over-shrinkage. Further studies are required before one could use the results of mixed effects modeling for this application. Based on some limited simulation experiments (that we do not present here), it seems that relative rankings of surgeons may be robust measures of surgical performance, at least compared to the absolute rates used by the Scorecard. Adding my voice to that of Dr Schloss, I think it is time for an open and transparent dialogue (and possibly a “crowdsourced” research project) to better define the best measure of surgical performance given the limitations of the available data. Such a project could also explore other directions, e.g. the explicit handling of zero inflation, and even go beyond the ubiquitous bell-shaped curve. By making the R code available, I hope that someone (possibly ProPublica) who can access more powerful computational resources can perform more extensive simulations. These may better define other aspects of the modeling approach and suggest improvements in the scorecard methodology. In the meantime, it is probably a good idea not to rely exclusively on the numerical measures of the scorecard when picking the surgeon who will perform your next surgery.

To leave a comment for the author, please follow the link and comment on his blog: Statistical Reflections of a Medical Doctor » R.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…

Source:: R News

Empirical bias analysis of random effects predictions in linear and logistic mixed model regression

By Christos Argyropoulos

Predicted v.s. simulated random effects for logistic and linear mixed regression as a function of the number of observations per random effect (cluster size)

(This article was first published on Statistical Reflections of a Medical Doctor » R, and kindly contributed to R-bloggers)

For the first technical post in this series, a numerical investigation of the biasedness of random effect predictions in generalized linear mixed models (GLMM) such as the ones used in the Surgeon Scorecard, I decided to undertake two explorations: firstly, the behavior of these estimates as more and more data are gathered for each individual surgeon and secondly, whether the limiting behavior of these estimators critically depends on the underlying GLMM family. Note that the first question directly assesses whether the random effect estimators reflect the underlying (but unobserved) “true” value of the individual practitioner effect in logistic regression models for surgical complications. On the other hand, the second simulation examines a separate issue, namely whether the non-linearity of the logistic regression model affects the convergence rate of the random effect predictions towards their true value.

For these simulations we will examine three different ranges of dataset sizes for each surgeon:

  • small data (complication data from between 20-100 cases/surgeon are available)
  • large data (complications from between 200-1000 cases/surgeon)
  • extra large data (complications from between 1000-2000 cases/surgeon)

We simulated 200 surgeons (“random effects”) from a normal distribution with a mean of zero and a standard deviation of 0.26, while the population average complication rate was set to 5%. These numbers were chosen to reflect the range of values (average and population standard deviation) of the random effects in the Score Card dataset, while the use of 200 random effects was a realistic compromise with the computational capabilities of the Asus Transformer T100 2 in 1 laptop/tablet that I used for these analyses.

The following code was used to simulate the logistic case for small data (the large and extra large cases were simulated by changing the values of the Nmin and Nmax variables).

library(lme4)
library(mgcv)
## helper functions
logit<-function(x) log(x/(1-x))
invlogit<-function(x) exp(x)/(1+exp(x))

## simulate cases
simcase<-function(N,p) rbinom(N,1,p)
## simulation scenario
pall<-0.05; # global average
Nsurgeon<-200; # number of surgeons
Nmin<-20; # min number of surgeries per surgeon
Nmax<-100; # max number of surgeries per surgeon

## simulate individual surgical performance
## how many simulations of each scenario
set.seed(123465); # reproducibility
ind<-rnorm(Nsurgeon,0,.26) ; # surgical random effects
logitind<-logit(pall)+ind ; # convert to logits
pind<-invlogit(logitind); # convert to probabilities
Nsim<-sample(Nmin:Nmax,Nsurgeon,replace=TRUE); # number of cases per surgeon

complications<-data.frame(ev=do.call(c,mapply(simcase,Nsim,pind,SIMPLIFY=TRUE)),
id=do.call(c,mapply(function(i,N) rep(i,N),1:Nsurgeon,Nsim)))
complications$id<-factor(complications$id)
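
A quick sanity check of the simulated dataset could look like this (the values in the comments are what one would expect, up to sampling variability):

nlevels(complications$id)        # 200 surgeons
mean(complications$ev)           # close to the 5% population average complication rate
range(table(complications$id))   # cluster sizes between Nmin (20) and Nmax (100)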

A random effect and fixed effect model were fit to these data (the fixed effect model is simply a series of independent fits to the data for each random effect):


## Random Effects

fit2<-glmer(ev~1+(1|id),data=complications,family=binomial,nAGQ=2)
ran2<-ranef(fit2)[["id"]][,1]
c2<-cor(ran2,ind)
int2<-fixef(fit2)
ranind2<-ran2+int2

## Fixed Effects

fixfit<-vector("numeric",Nsurgeon)
for(i in 1:Nsurgeon) {
fixfit[i]<-glm(ev~1,data=subset(complications,id==i),family="binomial")$coef[1]
}

The corresponding Gaussian GLMM cases were simulated by making minor changes to these codes. These are shown below:


simcase<-function(N,p) rnorm(N,p,1)

fit2<-glmer(ev~1+(1|id),data=complications,nAGQ=2)

fixfit[i]<-glm(ev~1,data=subset(complications,id==i),family="gaussian")$coef[1]

The predicted random effects were assessed against the simulated truth by smoothing regression splines. In these regressions, the intercept yields the bias of the average of the predicted random effects vis-a-vis the truth, while the slope of the regression quantifies the amount of shrinkage effected by the mixed model formulation. For unbiased estimation not only would we like the intercept to be zero, but also the slope to be equal to one. In this case, the predicted random effect would be equal to its true (simulated) value. Excessive shrinkage would result in a slope that is substantially different from one. Assuming that the bias (intercept) is not different from zero, the relaxation of the slope towards one quantifies the consistency and the bias (or rather its rate of convergence) of these estimators using simulation techniques (or so it seems to me).

The use of smoothing (flexible), rather than simple linear regression, to quantify these relationships does away with a restrictive assumption: that the amount of shrinkage is the same throughout the range of the random effects:

## smoothing spline (flexible) fit
fitg<-gam(ran2~s(ind))
## linear regression
fitl<-lm(ran2~ind)
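
The bias and shrinkage discussed above can then be read off these fits directly, for instance:

coef(fitl)   # intercept ~ bias of the predictions; a slope close to 1 means little shrinkage
plot(fitg)   # the fitted smooth, as a visual check of how (non-)linear the relation is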

The following figure shows the results of the flexible regression (black with 95% CI, dashed black) v.s. the linear regression (red) and the expected (blue) line (intercept of zero, slope of one).

Predicted v.s. simulated random effects for logistic and linear mixed regression as a function of the number of observations per random effect (cluster size)

Several observations are worth noting in this figure.

First, the flexible regression was indistinguishable from a linear regression in all cases; hence the red and black lines overlap. Stated in other terms, the amount of shrinkage was the same across the range of the random effect values.

Second, the intercept in all flexible models was (within machine precision) equal to zero. Consequently, when estimating a group of random effects their overall mean is (unbiasedly) estimated.

Third, the amount of shrinkage of individual random effects appears to be excessive for small sample sizes (i.e. few cases per surgeon). Increasing the number of cases decreases the shrinkage, i.e. the black and red lines come closer to the blue line as N is increased from 20-100 to 1000-2000. Conversely, for small cluster sizes the amount of shrinkage is so excessive that one may lose the ability to distinguish between individuals with very different complication rates. This is reflected by a regression line between the predicted and the simulated random effect value that is nearly horizontal.

Fourth, the rate of convergence of the predicted random effects to their true value critically depends upon the linearity of the regression model. In particular, the shrinkage of the logistic regression model with 1000-2000 observations per case is almost the same as that of a linear model with 20-100 for the parameter values considered in this simulation.

An interesting question is whether these observations (overshrinkage of random effects from small sample sizes in logistic mixed regression) reflects the use of random effects in modeling, or whether they are simply due to the interplay between sample size and the non-linearity of the statistical model. Hence, I turned to fixed effects modeling of the same datasets. The results of these analyses are summarized in the following figure:

Difference between fixed effect estimates of random effects(black histograms) v.s. random effects predictions (density estimators: red lines) relative to their simulated (true) values

One notes that the distribution of the differences between the random and fixed effects relative to the true (simulated) values is nearly identical for the linear case (second row). In other words, the use of the implicit constraint of the mixed model offers no additional advantage when estimating individual performance in this model. On the other hand, there is value in applying mixed modeling techniques for the logistic regression case. In particular, outliers (such as those arising for small samples) are eliminated by the use of random effect modeling. The difference between the fixed and the random effect approach progressively decreases for large sample sizes, implying that the benefit of the latter approach is lost for “extra large” cluster sizes.

One way to put these differences into perspective is to realize that the random effects for the logistic model correspond to log-odds ratios, relative to the population mean. Hence the difference between the predicted random effect and its true value, when exponentiated, corresponds to an Odds Ratio (OR). A summary of the odds ratios over the population of the random effects as a function of cluster size is shown below.


Metric 20-100  200-1000 1000-2000
Min    0.5082   0.6665    0.7938
Q25    0.8901   0.9323    0.9536
Median 1.0330   1.0420    1.0190 
Mean   1.0530   1.0410    1.0300  
Q75    1.1740   1.1340    1.1000   
Max    1.8390   1.5910    1.3160 

Even though the average Odds Ratio is close to 1, a substantial number of predicted random effects are far from the true value, yielding ORs that are off by more than 10% in either direction for small cluster sizes. These observations have implications for the Score Card (or similar projects): if one were to use Random Effects modeling to focus on individuals, then unless the cluster sizes (observations per individual) are substantial, one would run a substantial risk of misclassifying individuals, even though one would be right on average!
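
For a single scenario, this kind of summary can be computed directly from the simulation objects above, since ran2 holds the lme4 predictions and ind the simulated truth:

summary(exp(ran2 - ind))   # odds ratios comparing predicted to true individual effects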

One could wonder whether these differences between the simulated truth and the predicted random effects arise as a result of the numerical algorithms of the lme4 package. The latter was used by both the Surgeon Score Card project and our simulations so far and thus it would be important to verify that it performs up to specs. The major tuning variable for the algorithm is the order of the Adaptive Gaussian Quadrature (argument nAGQ). We did not find any substantial departures when the order of the quadrature was varied from 0 to 1 and 2. However, there is a possibility that the algorithm fails for all AGQ orders as it has to calculate probabilities that are numerically close to the boundary of the parameter space. We thus decided to fit the same model from a Bayesian perspective using Markov Chain Monte Carlo (MCMC) methods. The following code will fit the Bayesian model and graphs the true values of the effects used in the simulated dataset against the Bayesian estimates (the posterior mean) and also the lme4 predictions. The latter tend to be numerically close to the posterior mode of the random effects when a Bayesian perspective is adopted.


## Fit the mixed effects logistic model from R using openbugs

library("glmmBUGS")
library(nlme)
fitBUGS = glmmBUGS(ev ~ 1, data=complications, effects="id", family="bernoulli")
startingValues = fitBUGS$startingValues
source("getInits.R")
require(R2WinBUGS)
fitBUGSResult = bugs(fitBUGS$ragged, getInits, parameters.to.save = names(getInits()),
model.file="model.txt", n.chain=3, n.iter=6000, n.burnin=2000, n.thin=10,
program="openbugs", working.directory=getwd())

fitBUGSParams = restoreParams(fitBUGSResult , fitBUGS$ragged)
sumBUGS<-summaryChain(fitBUGSParams )
checkChain(fitBUGSParams )

## extract random effects

cnames<-as.character(sort(as.numeric(row.names(sumBUGS$FittedRid))))
fitBUGSRE<-sumBUGS$Rid[cnames,1]

## plot against the simulated (true) effects and the lme4 estimates

hist(ind,xlab="RE",ylim=c(0,3.8),freq=FALSE,main="")
lines(density(fitBUGSRE),main="Bayesian",xlab="RE",col="blue")
lines(density(ran2),col="red")
legend(legend=c("Truth","lme4","MCMC"),col=c("black","red","blue"),
bty="n",x=0.2,y=3,lwd=1)

The following figure shows the histogram of the true values of the random effects (black), the frequentist(lme4) estimates (red) and the Bayesian posterior means (blue).

MCMClme4Truth

It can be appreciated that both the Bayesian estimates and the lme4 predictions demonstrate considerable shrinkage relative to the true values for small cluster sizes (20-100). Hence, an lme4 numerical quirk seems an unlikely explanation for the shrinkage observed in the simulation.

Summing up:

  • Random Effect modeling of binomial outcomes require a substantial number of observations per individual (cluster size) for the procedure to yield estimates of individual effects that are numerically indistinguishable from the true values
  • Fixed effect modeling is even worse an approach for this problem
  • Bayesian fitting procedures do not appear to yield numerically different effects from their frequentist counterparts

These features should raise the barrier for accepting a random effects logistic modeling approach when the focus is on individual rather than population average effects. Even though the procedure is certainly preferable to fixed effects regression, the direct use of the value of the predicted individual random effects as an effect measure will be problematic for small cluster sizes (e.g. a small number of procedures per surgeon). In particular, a substantial proportion of these estimated effects is likely to be far from the truth even if the model is unbiased on the average. These observations are of direct relevance to the Surgical Score Card, in which the number of observations per surgeon was far lower than the average values in our simulations: 60 (small), 600 (large) and 1500 (extra large):

Procedure Code N (procedures) N(surgeons) Procedures per surgeon
51.23 201,351 21,479 9.37
60.5 78,763 5,093 15.46
60.29 73,752 7,898 9.34
81.02 52,972 5,624 9.42
81.07 106,689 6,214 17.17
81.08 102,716 6,136 16.74
81.51 494,576 13,414 36.87
81.54 1,190,631 18,029 66.04
Total 2,301,450 83,887 27.44

To leave a comment for the author, please follow the link and comment on his blog: Statistical Reflections of a Medical Doctor » R.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…

Source:: R News

#rstats Make arrays into vectors before running table

By strictlystat

(This article was first published on A HopStat and Jump Away » Rbloggers, and kindly contributed to R-bloggers)

Setup of Problem

While working with nifti objects from the oro.nifti package, I tried to table the values of the image. The table took a long time to compute. I thought this was due to the added information about a medical image, but I found that the same sluggishness happened when coercing the nifti object to an array as well.

Quick, illustrative simulation

But, if I coerced the data to a vector using the c function, things were much faster. Here’s a simple example of the problem.

library(microbenchmark)
dim1 = 30
n = dim1 ^ 3
vec = rbinom(n = n, size = 15, prob = 0.5)
arr = array(vec, dim = c(dim1, dim1, dim1))
microbenchmark(table(vec), table(arr), table(c(arr)), times = 100)
Unit: milliseconds
          expr       min        lq      mean    median        uq      max
    table(vec)  5.767608  5.977569  8.052919  6.404160  7.574409 51.13589
    table(arr) 21.780273 23.515651 25.050044 24.367534 25.753732 68.91016
 table(c(arr))  5.803281  6.070403  6.829207  6.786833  7.374568  9.69886
 neval cld
   100  a 
   100   b
   100  a 

As you can see, it’s much faster to run table on the vector than on the array; coercing the array to a vector first adds little overhead, so table(c(arr)) is comparable in speed to table(vec).
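
To confirm that the coercion itself is cheap, you can also time it in isolation, reusing the objects defined above:

microbenchmark(c(arr), times = 100)   # coercing the array to a vector is fast on its own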

Explanation of simulation

If the code above is clear, you can skip this section. I created a 30 × 30 × 30 array of random binomial variables with success probability 0.5 and 15 Bernoulli trials each. To keep things on the same playing field, the array (arr) and the vector (vec) have the same values in them. The microbenchmark function (and package of the same name) runs each command 100 times and displays summary statistics of the timings.

Why, oh why?

I’ve looked into the table function, but cannot seem to find where the bottleneck occurs. Now, for an array of 30 × 30 × 30, it takes less than a tenth of a second to compute. The problem is that when the data are 512 × 512 × 30 (such as CT data), the tabulation using the array form can be very time consuming.

I reduced the number of replicates, but let’s see this in a more realistic image-dimension example:

library(microbenchmark)
dims = c(512, 512, 30)
n = prod(dims)
vec = rbinom(n = n, size = 15, prob = 0.5)
arr = array(vec, dim = dims)
microbenchmark(table(vec), table(arr), table(c(arr)), times = 10)
Unit: seconds
          expr      min       lq     mean    median        uq       max
    table(vec) 1.871762 1.898383 1.990402  1.950302  1.990898  2.299721
    table(arr) 8.935822 9.355209 9.990732 10.078947 10.449311 11.594772
 table(c(arr)) 1.925444 1.981403 2.127866  2.018741  2.222639  2.612065
 neval cld
    10  a 
    10   b
    10  a 

Conclusion

I can’t figure out why right now, but it seems that coercing an array (or nifti image) to a vector before running table can significantly speed up the procedure. If anyone has any intuition why this is, I’d love to hear it. Hope that helps your array tabulations!
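
If you find yourself doing this often, a small wrapper (the function name is just a suggestion) keeps the workaround in one place:

fast_table <- function(x, ...) table(c(x), ...)   # coerce to a vector before tabulating
fast_table(arr)   # same counts as table(arr), without the slowdown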

To leave a comment for the author, please follow the link and comment on his blog: A HopStat and Jump Away » Rbloggers.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…

Source:: R News

R Oddities: Strings in DataFrames

By C

(This article was first published on R-Chart, and kindly contributed to R-bloggers)

Have you ever read a file into R and then encountered strange problems filtering and sorting because the strings were converted to factors? For instance, you might think that the two data frames below, df and df2, contain the same data:

> df <- data.frame(V1 = "A", stringsAsFactors = FALSE)
> write.csv(df, "df.csv")
> df2 <- read.csv("df.csv")

But look, the dimensions are different:

> dim(df)
[1] 1 1
> dim(df2)
[1] 1 2

And on further analysis – they are indeed different classes:

> class(df$V1)
[1] "character"
> class(df2$V1)
[1] "factor"

This behavior is very confusing to many when first introduced to R. Over time, I accepted it but didn’t really understand how or why it originated. Roger Peng over at the Simply Statistics blog has a great write up on why R operates this way:

http://simplystatistics.org/2015/07/24/stringsasfactors-an-unauthorized-biography/
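
If you want the re-imported column to stay character, the standard workaround is to pass stringsAsFactors = FALSE to read.csv() (and row.names = FALSE to write.csv(), which also avoids the extra row-name column seen above):

> write.csv(df, "df.csv", row.names = FALSE)
> df2 <- read.csv("df.csv", stringsAsFactors = FALSE)
> class(df2$V1)   # "character" this time
> dim(df2)        # now matches dim(df)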

To leave a comment for the author, please follow the link and comment on his blog: R-Chart.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…

Source:: R News

But I Don’t Want to Be a Statistician!

By Joel Cadwell

(This article was first published on Engaging Market Research, and kindly contributed to R-bloggers)

“For a long time I have thought I was a statistician…. But as I have watched mathematical statistics evolve, I have had cause to wonder and to doubt…. All in all, I have come to feel that my central interest is in data analysis….”

Opening paragraph from John Tukey, “The Future of Data Analysis” (1962)

To begin, we must acknowledge that these labels are largely administrative based on who signs your paycheck. Still, I prefer the name “data analysis” with its active connotation. I understand the desire to rebrand data analysis as “data science” given the availability of so much digital information. As data has become big, it has become the star and the center of attention.

We can borrow from Breiman’s two cultures of statistical modeling to clarify the changing focus. If our data collection is directed by a generative model, we are members of an established data modeling community and might call ourselves statisticians. On the other hand, the algorithmic modeler (originally considered a deviant, but now rich and sexy) took whatever data was available and made black box predictions. If you need a guide to applied predictive modeling in R, Max Kuhn might be a good place to start.

Nevertheless, causation keeps sneaking in through the back door in the form of causal networks. As an example, choice modeling can be justified as “as if” predictive modeling, but then it cannot be used for product design or pricing. As Judea Pearl notes, most data analysis is “not associational but causal in nature.”

Does an inductive bias or schema predispose us to see the world as divided into causes and effects, with features creating preference and preference impacting choice? Technically, the hierarchical Bayes choice model does not require the experimental manipulation of feature levels, for example, reporting the likelihood of bus ridership for individuals with differing demographics. Even here, it is difficult not to see causation at work, with demographics becoming stereotypes. We want to be able to turn the dial, or at least select different individuals, and watch choices change. Are such cognitive tendencies part of statistics?

Moreover, data visualization has always been an integral component in the R statistical programming language. Is data visualization statistics? And what of presentations like Hans Rosling’s Let My Dataset Change Your Mindset? Does statistics include argumentation and persuasion?

Hadley Wickham and the Cognitive Interpretation of Data Analysis

You have seen all of his data manipulation packages in R, but you may have missed the theoretical foundations in the paper “A Cognitive Interpretation of Data Analysis” by Grolemund and Wickham. Sensemaking is offered as an organizing force with data analysis as an external tool to aid understanding. We can make sensemaking less vague with an illustration.

Perceptual maps are graphical displays of a data matrix, such as the one below from an earlier post showing the association between 14 European car models and 27 attributes. Our familiarity with Euclidean spaces aids in the interpretation of the 14 x 27 association table. It summarizes the data using a picture and enables us to speak of repositioning car models. The joint plot can be seen as the competitive landscape, and soon the language of marketing warfare brings this simple 14 x 27 table to life. Where is the high ground or an opening for a new entry? How can we guard against an attack from below? This is sensemaking, but is it statistics?

I consider myself to be a marketing researcher, though with a PhD, I get more work calling myself a marketing scientist. I am a data analyst and not a statistician, yet in casual conversation I might say that I am a statistician in the hope that the label provides some information. It seldom does.

I deal in sensemaking. First, I attempt to understand how consumers make sense of products and decide what to buy. Then, I try to represent what I have learned in a form that assists in strategic marketing. My audience has no training in research or mathematics. Statistics plays a role and R helps, but I never wanted to be a statistician. Not that there is anything wrong with that.

To leave a comment for the author, please follow the link and comment on his blog: Engaging Market Research.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…

Source:: R News