Are the Kids Alright? Trends in Teen Risk Behavior in NYC

By Brandy Freitas


(This article was first published on R – NYC Data Science Academy Blog, and kindly contributed to R-bloggers)
Contributed by Brandy Freitas as part of the Spring 2017 NYC Data Science Academy 12-Week Data Science Bootcamp. This post is based on the first class project, due Week 3 – Exploratory Visualization & Shiny. A link to the Shiny App is here: Youth Risk Behavior Survey App.

Introduction:

Take a second to think about a teenager in your community, your borough, your neighborhood. Think about who they are, what they look like, what is happening in their life.
Take another second to think about how much power we, as voting adults, advertisers, policy makers, have over an entire population with its own culture, norms, issues and struggles. We make the policies and decisions that allocate resources like time, money, and education for them. Often, adults have a notion that “we know best” how to help our youth, but I wonder: do we really know what’s going on with them? Are we in touch with the reality of the youth experience in our community?
I was drawn to a particular study, the Youth Risk Behavior Survey (YRBS), which has been conducted by the Centers for Disease Control and Prevention (CDC) since the 1990s. The survey itself looks at eating habits, exposure to violence, sex, drugs, physical activity, and many more variables related to the physical and emotional health of youth. It offers a longitudinal view of the well-being of teens from across the country, in both rural and urban settings, and was designed to determine the prevalence of health behaviors, assess whether they increase, decrease, or stay the same over time, and examine their co-occurrence. The CDC also adds questions over the years in response to technological advancements and trends, such as questions about texting while driving and cyberbullying.

The Data:

I downloaded the combined YRBS dataset in the available ASCII format from the CDC’s website and converted it to a .csv file using an SPSS script and IBM’s SPSS software package. Since my questions were framed around understanding my local youth, for whom my voting patterns and preconceptions probably matter most, I wanted to work with data on teens from the five New York City boroughs.
Using R, I filtered the data by location and by the variables I was interested in studying (based on question type), and cleaned them to remove missing values in key columns (grade, age, sex, etc.) that were required for my analysis. Originally, the file contained 320K rows and 210 variables; it was reduced to 64K rows and 27 variables. I also recoded the data to make it easier to understand, as most of the variables were coded for simplicity (e.g., 1 = Female, 2 = Male), which made them difficult to manipulate and interpret. I used ggplot2, Leaflet, plotly, and the Shiny Dashboard to visualize the data. A sketch of this kind of data manipulation in R is below:
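(This is an illustrative reconstruction rather than the original script; the input data frame yrbs_raw and column names such as sitename, marijuana, and weight are assumptions, not the actual YRBS coding.)

library(dplyr)

yrbs_nyc <- yrbs_raw %>%
  filter(sitename %in% c("Bronx, NY", "Brooklyn, NY", "Manhattan, NY",
                         "Queens, NY", "Staten Island, NY")) %>%      # keep the five boroughs
  select(year, sitename, age, sex, grade, race, bmi, weight,
         marijuana, suicide_ideation) %>%                             # variables of interest
  filter(!is.na(age), !is.na(sex), !is.na(grade)) %>%                 # drop rows missing key fields
  mutate(sex   = factor(sex, levels = c(1, 2), labels = c("Female", "Male")),
         grade = factor(grade, levels = 1:4, labels = c("9th", "10th", "11th", "12th")))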
The CDC weights the data to make them representative of the population of students from which the sample was drawn. Generally, these adjustments are made by applying a weight based on student sex, grade, and race, so that researchers accessing the data are able to use it as is without worrying about selection bias. Borough data were from seven surveys over twelve years (the odd years from 2003-2015), and there were about two- to three-thousand respondents from each borough per year.
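As an illustration of how such weights enter a prevalence calculation (continuing with the assumed column names from the sketch above, where marijuana is coded 1 = yes):

yrbs_nyc %>%
  group_by(year, sitename) %>%
  summarise(pct_marijuana = 100 * weighted.mean(marijuana == 1, w = weight, na.rm = TRUE),
            .groups = "drop")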
I narrowed my focus down to obesity—which the CDC determines based on height, weight, age and gender—illegal drug use, and suicidal ideation. I was interested in studying differences in gender, borough, grade level, and the effects over time.

The App:

My app is designed to offer a visual representation of some of the data from the YRBS, and to encourage the user to interact with the findings.

I. Teen Demographics:

The landing page of the app is a Leaflet map of the NYC region, with markers for each borough showing the racial breakdown of the teens going to school there. I wanted the user to be able to explore the demographics of the region and draw their own inferences about variations between the teen population and the Census-reported overall population.
(Figure: demographic map of the five boroughs)
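A minimal sketch of how a borough-marker map like this can be set up with Leaflet (the coordinates are approximate and the popup text is a placeholder, not the app's actual demographic data):

library(leaflet)

# Approximate borough centroids, for illustration only
boroughs <- data.frame(
  name = c("Manhattan", "Brooklyn", "Queens", "Bronx", "Staten Island"),
  lat  = c(40.78, 40.65, 40.74, 40.85, 40.58),
  lng  = c(-73.97, -73.95, -73.77, -73.86, -74.15)
)

leaflet(boroughs) %>%
  addTiles() %>%
  addMarkers(lng = ~lng, lat = ~lat,
             popup = ~paste0(name, ": demographic breakdown goes here"))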
For instance, the racial makeup of the students in public and private schools in Manhattan is not representative of the resident population.
(Figures: overall resident population vs. teen population demographics, Manhattan)
Does this distribution of teen demographics reflect a changing trend in the resident population of the five boroughs? What are the social and political implications of this kind of population disparity?

II. Drug Use:

The purportedly worsening drug habits of urban youth have often been the subject of crime dramas, early-morning news shows, and impassioned political speeches. We have also heard claims that legalization of marijuana in the US will increase marijuana use among youths by shifting perceptions of its morality. I wondered, though, whether teens really are consuming more illegal drugs.
In this tab of the app, we can see the drug use over time of particular illegal drugs, grouped by borough. Below we see the trends in marijuana use over time:
(Figure: marijuana use over time by borough)
For the drugs that I looked at (Cocaine, Methamphetamines, Heroin, Ecstasy, Marijuana), you can see that there is a downward trend in many of the boroughs, and that the overall percent of students reporting using Ecstasy, Methamphetamines, and Heroin is very low. Digging deeper, I wonder: could we elucidate whether there is a correlation between drug use and sexual promiscuity (which is surveyed by the CDC as well), or between drug use and measures of depression?
As a side note (or, in the ever-popular research science vernacular: 'Data Not Shown'), during an initial look at racial differences in the data, I found that Asian teens were very unlikely to report having used drugs, while white teens typically reported the highest levels of trying illegal drugs. This is particularly interesting given that Staten Island, which has the highest percentage of white teens of all the boroughs, consistently has the highest reported drug use. I am, however, hesitant to place any significance on these findings until I understand more about reporting differences between populations.

III. Depression:

The National Suicide Prevention Resource Center estimates that 3.7% of the US adult population have suicidal thoughts, and 0.5% attempt suicide, each year. If we look at the corresponding percentages among NYC students (the percentage reporting either suicidal thoughts or actual attempts), though, you can see that they are significantly higher than those of the adult population.
The user can explore many questions in this tab: Is depression in males increasing over time? Which boroughs tend to have higher depression rates? How has mental health trended over the years, and what might cause this?
(Figure: suicidal ideation and attempts over time)
For perspective on the mental health aid available to teens in NYC: there are 231K high school students in the city, with over 200 high schools in the Bronx alone. However, according to the NYC Department of Education, there are only 200 mental health clinics available in schools across all five boroughs.
A lot of questions came up for me from this, particularly from a policy standpoint. The CDC has been mainly focused, both programmatically and in its funding, on HIV, STD, and pregnancy prevention efforts in the nation’s schools. Based on this depression and mental health data, I wonder: Is this focus justified? Are mental health issues, which still appear to be under-funded and stigmatized, the basis of some of these risk behaviors?
Further questions that I would like to study from this dataset: Are teens from boroughs with lower median income or higher reported violence more prone to depression? Do students who are overweight or obese, or students who identify as LGBTQ, show more signs of suicidal ideation? Does bullying contribute?

IV. Obesity:

For the US teen population, the 5th to 85th percentiles are considered normal weight and lie roughly in the BMI range 17-25. Under 17 is underweight, and over 30 is obese. I wanted to focus first on this normal range, and designed my chart to present the user with the biologically more interesting section of the data: the mass of the distribution and the median of each group.
Take Queens in 2015, for example:
(Figure: BMI box plots by grade, Queens, 2015)
A quick guide to reading this box plot using the highlighted BMI:
  • 42.14 = Greatest BMI value
  • 31.54 = Greatest value, excluding outliers
  • 24.75 = 25% of students have a BMI over this value
  • 22.19 = 50% of the students in the sample have a BMI over this value (median)
  • 19.72 = 25% of the students have a BMI less than this
  • 13.15 = Minimum value, excluding outliers
This means that about 25% of 11th-grade students in Queens are overweight. If you look across the grade levels, you can see that the median increases consistently throughout high school. Are students getting heavier as they advance in grade level? What is causing this?
I also wanted to make sure that the user could interact with the data by viewing the full range and whiskers. To get a perspective on the data’s distribution, we can zoom out to see which way it sways. Unfortunately, you can see that it leans toward obese:
(Figure: BMI box plots with full whiskers and outliers)
This portion of the study brought up many avenues for further study. Which populations are most at risk (by race, gender, or some other factor), and can we identify them using the data here? For boroughs with better BMI stats (like Manhattan and Staten Island), what are they doing well, and could this be replicated in other areas? Regarding current policy trends, now that large urban centers are shifting away from the ‘free and reduced lunch’ model to the ‘universal free lunch’ model, will we see a shift in this, either in a positive or negative way? Could we make provided school lunches more nutritious? What could be done to improve education on weight and exercise in schools?

Final Notes:

This study offers an interesting snapshot of the mental and physical health of a very vulnerable portion of our society, and I am looking forward to digging deeper into the data to find more coincident variables and health outcomes. It is my strong suggestion, though, that the CDC offer zipped .csv files on their website, so that data enthusiasts would be more likely to access and analyze this study.

The post Are the Kids Alright? Trends in Teen Risk Behavior in NYC appeared first on NYC Data Science Academy Blog.

To leave a comment for the author, please follow the link and comment on their blog: R – NYC Data Science Academy Blog.


Source:: R News

Data science for Doctors: Variable importance Exercises

By Vasileios Tsakalos

(This article was first published on R-exercises, and kindly contributed to R-bloggers)

Data science enhances people’s decision making. Doctors and researchers are making critical decisions every day, so it is absolutely necessary for them to have some basic knowledge of data science. This series aims to help people around the medical field enhance their data science skills.

We will work with a health-related database, the famous “Pima Indians Diabetes Database”. It was generously donated by Vincent Sigillito from Johns Hopkins University. Please find further information regarding the dataset here.

This is the tenth part of the series, and it aims to cover the very basics of the correlation coefficient and principal components analysis; those two methods illustrate how variables are related.
In my opinion, it is necessary for researchers to have a notion of the relationships between variables, in order to be able to find potential cause-and-effect relations (bearing in mind that such a relation remains hypothetical: you cannot claim a cause-effect relation only because the correlation between two variables is high), to remove unnecessary variables, and so on. In particular, we will go through the Pearson correlation coefficient, confidence intervals by the bootstrap, and principal component analysis.

Before proceeding, it might be helpful to look over the help pages for ggplot, cor, cor.test, boot.cor, quantile, eigen, princomp, summary, plot, and autoplot.
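As a quick, self-contained illustration of some of these functions (run on the built-in mtcars data rather than the Pima data, so as not to give away the exercise answers):

# Toy example on mtcars, not the Pima data used in the exercises
data(mtcars)

cor(mtcars$mpg, mtcars$wt)        # Pearson correlation coefficient
cor.test(mtcars$mpg, mtcars$wt)   # test of H0: correlation equals zero

pca <- princomp(scale(mtcars))    # principal components on standardized data
summary(pca)                      # variance explained by each component
plot(pca, type = "lines")         # elbow (scree) plot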

Moreover please load the following libraries.
install.packages("ggplot2")
library(ggplot2)
install.packages("ggfortify")
library(ggfortify)

Please run the code below in order to load the data set and transform it into a proper data frame format:

url <- "https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data"
data <- read.table(url, fileEncoding="UTF-8", sep=",")
names <- c('preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class')
colnames(data) <- names
data <- data[-which(data$mass ==0),]

Answers to the exercises are available here.

If you obtained a different (correct) answer than those listed on the solutions page, please feel free to post your answer as a comment on that page.

Exercise 1

Compute the value of the correlation coefficient for the variables age and preg.

Exercise 2

Construct the scatterplot for the variables age and preg.

Exercise 3

Apply a correlation test for the variables age and preg, with the null hypothesis that the correlation is zero and the alternative that it is different from zero.
hint: cor.test

Exercise 4

Construct a 95% confidence interval by the bootstrap. First, find the correlation by bootstrap.
hint: mean

Exercise 5

Now that you have found the correlation, find the 95% confidence interval.

Exercise 6

Find the eigenvalues and eigenvectors for the data set (excluding the class.fac variable).

Exercise 7

Compute the principal components for the dataset used above.

Exercise 8

Show the importance of each principal component.

Exercise 9

Plot the principal components using an elbow graph.

Exercise 10

Construct a scatterplot with the first principal component on the x-axis and the second on the y-axis. Moreover, if possible, draw the eigenvectors on the plot.
hint: autoplot

To leave a comment for the author, please follow the link and comment on their blog: R-exercises.


Source:: R News

Machine Learning Classification Using Naive Bayes

By Data Scientist PakinJa

(This article was first published on Data R Value, and kindly contributed to R-bloggers)

We will develop a classification exercise using the Naive Bayes algorithm. The exercise was originally published in “Machine Learning with R” by Brett Lantz, Packt Publishing 2015 (open source: community experience distilled).

Naive Bayes is a probabilistic classification algorithm that can be applied to problems such as text classification (spam filtering, for example), detection of intrusions or anomalies in networks, and diagnosis of medical conditions given a set of symptoms, among others.

The exercise we will develop is about filtering spam and ham sms messages.

We will carry out the exercise verbatim as published in the aforementioned reference.

To develop the Naive Bayes classifier, we will use data adapted from the SMS Spam Collection at http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/ .

### install required packages

install.packages("tm")
install.packages("NLP")
install.packages("SnowballC")
install.packages("wordcloud")
install.packages("e1071")
install.packages("RColorBrewer")
install.packages("gmodels")

library(tm)
library(NLP)
library(SnowballC)
library(RColorBrewer)
library(wordcloud)
library(e1071)
library(gmodels)

### read the data
sms_raw <- read.csv("sms_spam.csv", stringsAsFactors = FALSE)

### transform type into factor (ham/spam)
sms_raw$type <- factor(sms_raw$type)

### see frequencies
table(sms_raw$type)

### create volatile (stored in memory) corpus
### VCorpus creates a complex list, so we can use list manipulation
### commands to manage it

sms_corpus <- VCorpus(VectorSource(sms_raw$text))

### inspect the two first elements
inspect(sms_corpus[1:2])

### see the first sms (as text)
as.character(sms_corpus[[1]])

### to view multiple documents
lapply(sms_corpus[1:2], as.character)

### cleaning the text in documents (corpus)

### tm_map to map all over the corpus
### content_transformer to access the corpus
### tolower to lowercase all strings
sms_corpus_clean <- tm_map(sms_corpus,
content_transformer(tolower))
### check and compare the result of the cleaning
as.character(sms_corpus[[1]])
as.character(sms_corpus_clean[[1]])

### remove numbers
sms_corpus_clean <- tm_map(sms_corpus_clean, removeNumbers)

### remove “stop words”
### see the “stop words” list

stopwords()

sms_corpus_clean <- tm_map(sms_corpus_clean,
removeWords, stopwords())
### remove punctuation
sms_corpus_clean <- tm_map(sms_corpus_clean, removePunctuation)

### stemming (transform words into their root form)
sms_corpus_clean <- tm_map(sms_corpus_clean, stemDocument)

### remove additional whitespace
sms_corpus_clean <- tm_map(sms_corpus_clean, stripWhitespace)

### check and compare result of cleaning
as.character(sms_corpus[[1]])
as.character(sms_corpus_clean[[1]])

### tokenization
### document term matrix (DTM)
### rows are sms and columns are words

sms_dtm <- DocumentTermMatrix(sms_corpus_clean)

### data preparation (training and test sets)
### 75% train 25% test

sms_dtm_train <- sms_dtm[1:4169, ]
sms_dtm_test <- sms_dtm[4170:5559, ]

### labels for train and test sets
### feature to be classified

sms_train_labels <- sms_raw[1:4169, ]$type
sms_test_labels <- sms_raw[4170:5559, ]$type

### confirm that the subsets are representative of the
### complete set of SMS data

prop.table(table(sms_train_labels))
prop.table(table(sms_test_labels))

### wordcloud
wordcloud(sms_corpus_clean, min.freq = 50, random.order = FALSE)


### separated wordclouds
spam <- subset(sms_raw, type == "spam")
ham <- subset(sms_raw, type == "ham")

wordcloud(spam$text, max.words = 40, scale = c(3, 0.5))
wordcloud(ham$text, max.words = 40, scale = c(3, 0.5))

### creating indicator features for frequent words
findFreqTerms(sms_dtm_train, 5)
sms_freq_words <- findFreqTerms(sms_dtm_train, 5)
str(sms_freq_words)

### filter DTM by frequent terms
sms_dtm_freq_train <- sms_dtm_train[ , sms_freq_words]
sms_dtm_freq_test <- sms_dtm_test[ , sms_freq_words]

### change DTM frequency to factor (categorical)
convert_counts <- function(x) {
  x <- ifelse(x > 0, "Yes", "No")
}

sms_train <- apply(sms_dtm_freq_train, MARGIN = 2,
convert_counts)
sms_test <- apply(sms_dtm_freq_test, MARGIN = 2,
convert_counts)

### training a model on the data
### alternative package for Naive Bayes ("klaR")

sms_classifier <- naiveBayes(sms_train, sms_train_labels)

### evaluating model performance
### make predictions on test set

sms_test_pred <- predict(sms_classifier, sms_test)

### compare predictions (classifications) with the
### true values

CrossTable(sms_test_pred, sms_test_labels,
prop.chisq = FALSE, prop.t = FALSE,
dnn = c('predicted', 'actual'))



### improving model performance?
### set Laplace estimator to 1

sms_classifier2 <- naiveBayes(sms_train, sms_train_labels,
laplace = 1)

sms_test_pred2 <- predict(sms_classifier2, sms_test)

CrossTable(sms_test_pred2, sms_test_labels,
prop.chisq = FALSE, prop.t = FALSE, prop.r = FALSE,
dnn = c('predicted', 'actual'))

You can get the exercise and the data set in:
https://github.com/pakinja/Data-R-Value

To leave a comment for the author, please follow the link and comment on their blog: Data R Value.


Source:: R News

Retrieving Reading Levels with R

By jlebeau

(This article was first published on More or Less Numbers, and kindly contributed to R-bloggers)
For those who don’t work in education or aren’t aware, there is a measurement of a child’s reading level called a Lexile® Level. This level can be obtained from several different reading assessments, and it can be used to match a child’s reading level to books at the same level; the company behind the measure maintains a database of books to which it has assigned levels. There are other assessments that measure how well a child is reading, but I’m not aware of any system that assigns levels to books as comprehensively. The library at the school where I work lacked Lexile Levels for its books, which is unfortunate because it means teachers cannot match students with books at their respective Lexile Level. Fortunately, though, the library had a list of ISBNs for the books.

On the Lexile website there is a way to search for books using ISBN numbers to retrieve their Lexile ® Level if the book is available in their database. Entering every ISBN number available is a task fit for something not human.

rvest to the rescue.

Below is the script to retrieve the Lexile Levels of books when a list of ISBNs is available. This was an incredible time saver provided by some R code, and hopefully someone else out there can use it.

library(rvest)
library(httr)
library(htmltools)
library(dplyr)

## Prep for things used later
url <- "https://www.lexile.com/fab/results/?keyword="
url2 <- "https://lexile.com/book/details/"

## CSV file with ISBN numbers
dat1 <- read.csv("~/isbns.csv", header = FALSE)
## dat1 <- data.frame(dat1[203:634, ])
dat <- as.character(dat1[, 1]) %>% trimws()
## dat <- dat[41:51]

blank <- as.character(NA)
blank1 <- as.character(NA)
## blank2 <- as.character("NA")
## blank3 <- as.character("NA")

## results frame with a placeholder row; real rows are appended in the loop
all <- data.frame("A", "B", "C")
colnames(all) <- c("name", "lexiledat", "num")
all <- data.frame(all[1, ])

for (i in dat) {
  sites <- paste(url, i, sep = "")
  x <- GET(sites, add_headers("user-agent" = "r"))
  webpath <- x$url %>% includeHTML() %>% read_html()
  ## Book name
  name <- webpath %>% html_nodes(xpath = "///div[2]/div/div[2]/h4/a") %>%
    html_text() %>% trimws()
  ## Lexile range
  lexile <- webpath %>% html_nodes(xpath = "///div[2]/div/div[3]/div[1]") %>%
    html_text() %>% trimws() %>% as.character()
  ## CSS changes sometimes
  lexiledat <- ifelse(is.na(lexile[2]), lexile, lexile[2])
  test1 <- data.frame(lexiledat, NA)
  ## Breaks every now and then when adding Author/Pages
  ## Author name
  ## author <- webpath %>% html_nodes(xpath = '///div[2]/div/div[2]/span') %>%
  ##   html_text() %>% as.character() %>% trimws()
  ## author <- sub("by: ", "", author)
  ## Pages
  ## pages <- webpath %>% html_nodes(xpath = '///div[2]/div/div[2]/div/div[1]') %>%
  ##   html_text() %>% as.character() %>% trimws()
  ## pages <- sub("Pages: ", "", pages)
  ## Some books are not found; this excludes them and replaces with NA values
  df <- if (is.na(test1[1, 1])) data.frame(blank, blank1) else
    data.frame(name, lexiledat, stringsAsFactors = FALSE)
  colnames(df) <- c("name", "lexiledat")
  df$num <- i
  all <- bind_rows(all, df)
}
master <- rbind(all1, all)  ## all1 holds results from an earlier batch of ISBNs

Link to code

To leave a comment for the author, please follow the link and comment on their blog: More or Less Numbers.


Source:: R News

R Quick Tip: Upload multiple files in shiny and consolidate into a dataset

By Steph

In shiny, you can use the fileInput with the parameter multiple = TRUE to enable you to upload multiple files at once. But how do you process those multiple files in shiny and consolidate into a single dataset?

The bit we need from shiny is the uploaded file paths: with multiple = TRUE, the value of the fileInput is a data frame whose datapath column gives a temporary path for each uploaded file (e.g. input$files$datapath).

We can use lapply() with data.table's fread() to read each of the CSVs from the fileInput(), and then data.table's rbindlist() to consolidate them into a single dataset.
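A minimal sketch of that pattern (the input id files and the single-table preview are illustrative choices, not necessarily what the gist linked below uses):

library(shiny)
library(data.table)

ui <- fluidPage(
  fileInput("files", "Upload CSV files", multiple = TRUE, accept = ".csv"),
  tableOutput("preview")
)

server <- function(input, output) {
  combined <- reactive({
    req(input$files)
    # input$files is a data frame; its datapath column holds the temp file paths
    rbindlist(lapply(input$files$datapath, fread), fill = TRUE)
  })
  output$preview <- renderTable(head(combined()))
}

shinyApp(ui, server)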

For more info on using data.table for consolidating CSVs, read my post on rbindlist()

If you wanted to process file types other than CSVs, then you might consider alternative libraries, and of course, you don’t just need to put them all into one big table.

View the code on Gist.

The post R Quick Tip: Upload multiple files in shiny and consolidate into a dataset appeared first on Locke Data. Locke Data are a data science consultancy aimed at helping organisations get ready and get started with data science.

Source:: R News

Beautiful boxplots in base R

By biomickwatson


(This article was first published on R – Opiniomics, and kindly contributed to R-bloggers)

As many of you will be aware, I like to post some R code, and I especially like to post base R versions of ggplot2 things!

Well these amazing boxplots turned up on github – go and check them out!

So I did my own version in base R – check out the code here and the result below. Enjoy!

To leave a comment for the author, please follow the link and comment on their blog: R – Opiniomics.


Source:: R News

Meetup: Machine Learning in Production with Szilard Pafka

By Szilard Pafka

Machine Learning in Production
by Szilard Pafka

In this talk I will discuss the main steps involved in having machine learning models in production. No hype (“artificial intelligence”), no bullshitting and no marketing of expensive products (the best tools are all open source and free, you just need to know what you are doing).

Bio: Szilard has trained, deployed, and maintained machine learning models in production (starting with random forests) since 2007. He is a Chief Scientist and a Physics PhD with 20+ years of experience in data, computing, and modeling. He founded the first data meetup in LA in 2009 (by chance) and has organized several data science events since.

We’ll also have a lightning talk (10-mins) before the main talk:

Opening the black box: Attempts to understand the results of machine learning models
by Michael Tiernay, Data Scientist at Edmunds

Timeline:

– 6:30pm arrival, food/drinks and networking
– 7:30pm talks
– 9 pm more networking

You must have a confirmed RSVP, and please arrive by 7:25pm at the latest. Please RSVP here on Eventbrite.

Venue: Edmunds, 2401 Colorado Avenue (this is Edmunds’ NEW LOCATION, don’t go to the old one)
Park underneath the building (Colorado Center), Edmunds will validate.

Source:: R News

Salaries by alma mater – an interactive visualization with R and plotly

By Alexej's blog

Visualization of starting salaries by college

(This article was first published on Alexej’s blog, and kindly contributed to R-bloggers)

Based on an interesting dataset from the Wall Street Journal, I made the above visualization of the median starting salary for US college graduates from different undergraduate institutions (I have also looked at mid-career salaries and the salary increase, but more on that later). However, I thought that it would be a lot more informative if it were interactive. At the very least I wanted to be able to see the school names when hovering over or clicking on the points with the mouse.

Luckily, this kind of interactivity can be easily achieved in R with the library plotly, especially due to its excellent integration with ggplot2, which I used to produce the above figure. In the following I describe how exactly this can be done.

Before I show you the interactive visualizations, a few words on the data preprocessing, and on how the map and the points are plotted with ggplot2:

  • I generally use functions from the tidyverse R packages.
I save the data in the data frame salaries, and transform the given amounts to proper floating point numbers, stripping the dollar signs and extra whitespace (a sketch of this step appears just after this list).
  • The data provide school names. However, I need to find out the exact geographical coordinates of each school to put it on the map. This can be done in a very convenient way, by using the geocode function from the ggmap R package:
    school_longlat <- geocode(salaries$school)
    school_longlat$school <- salaries$school
    salaries <- left_join(salaries, school_longlat)
    
  • For the visualization I want to disregard the colleges in Alaska and Hawaii to avoid shrinking the rest of the map. The respective rows of salaries can be easily determined with a grep search:
    grep("alaska", salaries$school, ignore.case = 1)
    # [1] 206
    grep("hawaii", salaries$school, ignore.case = 1)
    # [1] 226
    
  • A data frame containing geographical data that can be used to plot the outline of all US states can be loaded using the function map_data from the ggplot2 package:
    states <- map_data("state")
    
  • And I load a yellow-orange-red palette with the function brewer.pal from the RColorBrewer library, to use as a scale for the salary amounts:
    yor_col <- brewer.pal(6, "YlOrRd")
    
  • Finally the (yet non-interactive) visualization is created with ggplot2:
    p <- ggplot(salaries[-c(206, 226), ]) +
        geom_polygon(aes(x = long, y = lat, group = group),
                     data = states, fill = "black",
                     color = "white") +
        geom_point(aes(x = lon, y = lat,
                       color = starting, text = school)) +
        coord_fixed(1.3) +
        scale_color_gradientn(name = "Starting\nSalary",
                              colors = rev(yor_col),
                              labels = comma) +
        guides(size = FALSE) +
        theme_bw() +
        theme(axis.text = element_blank(),
              axis.line = element_blank(),
              axis.ticks = element_blank(),
              panel.border = element_blank(),
              panel.grid = element_blank(),
              axis.title = element_blank())
    
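One step glossed over above is the dollar-amount cleanup mentioned in the second bullet; a possible version, assuming the raw column is a character vector named starting with entries like "$45,000.00":

salaries <- salaries %>%
  mutate(starting = as.numeric(gsub("[$,[:space:]]", "", starting)))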

Now, entering p into the R console will generate the figure shown at the top of this post.

However, we want to…

…make it interactive

The function ggplotly immediately generates a plotly interactive visualization from a ggplot object. It’s that simple! :smiley: (Though I must admit that, more often than I would be okay with, some elements of the ggplot visualization disappear or don’t look as expected. :fearful:)

The function argument tooltip can be used to specify which aesthetic mappings from the ggplot call should be shown in the tooltip. So, the code

ggplotly(p, tooltip = c("text", "starting"),
         width = 800, height = 500)

generates the following interactive visualization.

Now, if you want to publish a plotly visualization to https://plot.ly/, you first need to communicate your account info to the plotly R package:

Sys.setenv("plotly_username" = "??????")
Sys.setenv("plotly_api_key" = "????????????")

and after that, posting the visualization to your account at https://plot.ly/ is as simple as:

plotly_POST(filename = "Starting", sharing = "public")

More visualizations

Finally, based on the same dataset I have generated an interactive visualization of the median mid-career salaries by undergraduate alma mater (the R script is almost identical to the one described above).
The resulting interactive visualization is embedded below.

Additionally, it is quite informative to look at a visualization of the salary increase from starting to mid-career.

To leave a comment for the author, please follow the link and comment on their blog: Alexej’s blog.


Source:: R News

a secretary problem with maximum ability

By xi’an

(This article was first published on R – Xi’an’s Og, and kindly contributed to R-bloggers)

The Riddler of today has a secretary problem, where one measures sequentially N random variables until one deems the current variable to be the largest of the whole sample. The classical secretary problem has a counter-intuitive solution where one first measures N/e random variables without taking any decision, and then, and only then, picks the first subsequent outcome larger than the largest of that first group. The added information in the current riddle is that the distribution of those iid random variables is set to be uniform on {1,…,M}, which begs for a modification of the algorithm: for instance, when the current draw equals M, one should obviously stop there.

The approach I devised is clearly suboptimal: I decided to pick the currently observed value if the (conditional) probability that it is the largest of all N draws is larger than the probability that a later draw exceeds it, i.e. larger than one half. This translates into the following R code:

M=100 #maximum value
N=10  #total number of draws
hyprob=function(m){
# m is sequence of draws so far
n=length(m);mmax=max(m)
if ((m[n]<mmax)||(mmax-n<N-n)){prob=0
  }else{
  prob=prod(sort((1:mmax)[-m],dec=TRUE)
   [1:(N-n)]/((M-n):(M-N+1)))}
return(prob)}

decision=function(draz=sample(1:M,N)){
  i=0
  keepgoin=TRUE
  while ((keepgoin)&(i<N)){
   i=i+1
   keepgoin=(hyprob(draz[1:i])<0.5)}
  return(c(i,draz[i],(draz[i]<max(draz))))}

which produces a winning rate of around 62% when N=10 and M=100, hence much better than the expected performance of the classical secretary algorithm, with its winning frequency of 1/e.
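A quick way to double-check that rate by simulation (a sketch of mine, not part of the original code):

set.seed(1)
#third entry of decision() is 1 when the pick misses the sample maximum
wins=replicate(1e4,decision()[3]==0)
mean(wins) #should come out close to the 62% quoted above for N=10, M=100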


To leave a comment for the author, please follow the link and comment on their blog: R – Xi’an’s Og.


Source:: R News

Where Europe lives, in 14 lines of R Code

By David Smith

(This article was first published on Revolutions, and kindly contributed to R-bloggers)

Via Max Galka, always a great source of interesting data visualizations, we have this lovely visualization of population density in Europe in 2011, created by Henrik Lindberg:

Impressively, the chart was created with just 14 lines of R code:

(To recreate it yourself, download the GEOSTAT-grid-POP-1K-2011-V2-0-1.zip file from eurostat, and move the two .csv files inside within reach of your R script.) The code parses the latitude/longitude of the population centers listed in the CSV file, arranges them into a 0.01 by 0.01 degree grid, and plots each row as a horizontal line with population as the vertical axis. Grid cells with zero population cause breaks in the line and leave white gaps in the map. It’s quite an elegant effect!
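The 14 lines themselves are embedded in the original post; as a rough sketch of the idea (not Lindberg's actual code; the data frame cells with columns lng, lat, and pop is an assumed reshaping of the GEOSTAT CSV):

library(tidyverse)

## Sketch only: `cells` is assumed to hold one row per inhabited grid square,
## with columns lng, lat (degrees) and pop (residents). The 0.05 degree
## banding is coarser than the 0.01 degree grid described above, to keep
## the sketch light.
ridge_map <- function(cells, step = 0.05, scale = 1e-5) {
  cells %>%
    mutate(lat_band = round(lat / step) * step,
           lng_bin  = round(lng / step) * step) %>%
    group_by(lat_band, lng_bin) %>%
    summarise(pop = sum(pop), .groups = "drop") %>%
    complete(lat_band, lng_bin) %>%      # empty cells become NA ...
    ggplot(aes(lng_bin, lat_band + scale * pop, group = lat_band)) +
    geom_line() +                        # ... which leaves gaps in each line
    coord_fixed(1.4) +
    theme_void()
}
## ridge_map(cells)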

To leave a comment for the author, please follow the link and comment on their blog: Revolutions.


Source:: R News