(This article was first published on Thinking inside the box , and kindly contributed to R-bloggers)

RcppArmadillo release 0.8.500.0, originally prepared and uploaded on April 21, has hit CRAN today (after having already been available via the RcppCore drat repo). A corresponding Debian release will be prepared as well. This RcppArmadillo release contains Armadillo release 8.500.0 with a number of rather nice changes (see below for details), and continues our normal bi-monthly CRAN release cycle.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 472 other packages on CRAN.
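As a quick illustration of that integration (a minimal sketch; the inline-compiled `matmul` function is ours for illustration, not something the package exports):

```
library(Rcpp)

# compile a tiny Armadillo-backed function callable from R
cppFunction(depends = "RcppArmadillo",
            code = '
arma::mat matmul(const arma::mat& A, const arma::mat& B) {
  return A * B;
}')

matmul(diag(2), matrix(1:4, 2, 2))
```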

A high-level summary of changes follows.

#### Changes in RcppArmadillo version 0.8.500.0 (2018-04-21)

• faster handling of sparse matrices by `kron()` and `repmat()`

• faster transpose of sparse matrices

• faster element access in sparse matrices

• faster row iterators for sparse matrices

• faster handling of compound expressions by `trace()`

• more efficient handling of aliasing in submatrix views

• expanded `normalise()` to handle sparse matrices

• expanded `.transform()` and `.for_each()` to handle sparse matrices

• added `reverse()` for reversing order of elements

• added `repelem()` for replicating elements

• added `roots()` for finding the roots of a polynomial

• Fewer LAPACK compile-time guards are used, new unit tests for underlying features have been added (Keith O’Hara in #211 addressing #207).

• The configure check for LAPACK features has been updated accordingly (Keith O’Hara in #214 addressing #213).

• The compile-time check for `g++` is now more robust to minimal shell versions (#217 addressing #216).

• Compiler tests were added for macOS (Keith O’Hara in #219).

Courtesy of CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more…

Source:: R News

## 17 Jobs for R users from around the world (2018-04-30)

### To post your R job on the next post

Just visit this link and post a new R job to the R community.

You can post a job for free (and there are also “featured job” options available for extra exposure).

### Featured Jobs

1. Freelance
Freelance Project: R Libraries Install on Amazon AWS EC2 WS
Anywhere
27 Apr 2018
2. Full-Time
Associate, Data Analytics National Network for Safe Communities (NNSC) – Posted by nnsc
New York New York, United States
18 Apr 2018
3. Full-Time
Senior Research Associate Kellogg Research Support – Posted by lorbos
Evanston Illinois, United States
17 Apr 2018
4. Full-Time
Empirical Research Fellow Kellogg School of Management (Northwestern University) – Posted by lorbos
Evanston Illinois, United States
17 Apr 2018
5. Part-Time
Data Scientist Alchemy – Posted by Andreas Voniatis
Anywhere
7 Apr 2018
6. Full-Time
Solutions Engineer RStudio – Posted by nwstephens
Anywhere
3 Apr 2018

### All New R Jobs

1. Full-Time
Lead Data Scientist @ Washington, District of Columbia, U.S. AFL-CIO – Posted by carterkalchik
Washington District of Columbia, United States
27 Apr 2018
2. Full-Time
Data Scientist R @ Medellín, Antioquia, Colombia IDATA S.A.S – Posted by vmhoyos
Medellín Antioquia, Colombia
27 Apr 2018
3. Full-Time
Data Scientist @ Annapolis, Maryland, United States The National Socio-Environmental Synthesis Center – Posted by sesync
Annapolis Maryland, United States
27 Apr 2018
4. Freelance
Freelance Project: R Libraries Install on Amazon AWS EC2 WS
Anywhere
27 Apr 2018
5. Full-Time
Associate, Data Analytics National Network for Safe Communities (NNSC) – Posted by nnsc
New York New York, United States
18 Apr 2018
6. Full-Time
Senior Research Associate Kellogg Research Support – Posted by lorbos
Evanston Illinois, United States
17 Apr 2018
7. Full-Time
Empirical Research Fellow Kellogg School of Management (Northwestern University) – Posted by lorbos
Evanston Illinois, United States
17 Apr 2018
8. Full-Time
Discrete Choice Modeler @ Chicago, Illinois, U.S. Resource Systems Group – Posted by patricia.holland@rsginc.com
Chicago Illinois, United States
13 Apr 2018
9. Full-Time
R&D Database Developer @ Toronto, Canada Crescendo Technology Ltd – Posted by Crescendo
12 Apr 2018
10. Full-Time
Applied Statistics Position @ Florida U.S. New College of Florida – Posted by Jobs@New College
Sarasota Florida, United States
9 Apr 2018
11. Full-Time
PhD Fellowship @ Wallace University, US Ubydul Haque – Posted by Ubydul
Khon Kaen Chang Wat Khon Kaen, Thailand
9 Apr 2018
12. Part-Time
Data Scientist Alchemy – Posted by Andreas Voniatis
Anywhere
7 Apr 2018
13. Full-Time
Product Owner (Data Science) (m/f) Civey GmbH – Posted by Civey
Berlin Berlin, Germany
6 Apr 2018
14. Full-Time
Solutions Engineer RStudio – Posted by nwstephens
Anywhere
3 Apr 2018
15. Full-Time
Post-Doctoral Fellow – Data Scientist @ CIMMYT (eastern Africa) International Maize and Wheat Improvement Center – Posted by cimmyt-jobs
Marsabit County Kenya
27 Mar 2018
16. Freelance
Statistician / R Developer – for Academic Statistical Research Academic Research – Posted by empiricus
Anywhere
27 Mar 2018
17. Full-Time
Graduate Data Scientist JamieAi – Posted by JamieAi
Anywhere
27 Mar 2018

On R-users.com you can see all the R jobs that are currently available.

R-users Resumes

R-users also has a resume section which features CVs from over 300 R users. You can submit your resume (as a “job seeker”) or browse the resumes for free.

(You may also look at previous R jobs posts.)

Source:: R News

## Microsoft R Open 3.4.4 now available

(This article was first published on Revolutions, and kindly contributed to R-bloggers)

An update to Microsoft R Open (MRO) is now available for download on Windows, Mac and Linux. This release upgrades the R language engine to version 3.4.4, which addresses minor issues with timezone detection and edge cases in some statistics functions. As a maintenance release, it’s backwards-compatible with scripts and packages from the prior release of MRO.

MRO 3.4.4 points to a fixed CRAN snapshot taken on April 1 2018, and you can see some highlights of new packages released since the prior version of MRO on the Spotlights page. As always, you can use the built-in checkpoint package to access packages from an earlier date (for reproducibility) or a later date (to access new and updated packages).
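For example, a script can pin its package versions to the snapshot this release uses (a minimal illustration of the checkpoint call; any valid snapshot date works):

```
library(checkpoint)
# use the CRAN snapshot from April 1, 2018 (the one MRO 3.4.4 points to)
checkpoint("2018-04-01")
```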

Looking ahead, the next update based on R 3.5.0 has started the build and test process. Microsoft R Open 3.5.0 is scheduled for release on May 31.

We hope you find Microsoft R Open useful, and if you have any comments or questions please visit the Microsoft R Open forum. You can follow the development of Microsoft R Open at the MRO Github repository. To download Microsoft R Open, simply follow the link below.


Source:: R News

## Make a sculpture in LEGO from a photo, with R

(This article was first published on Revolutions, and kindly contributed to R-bloggers)

The entrance to our office in Redmond is adorned with this sculpture of our department logo, rendered in LEGO:

We had fun with LEGO bricks at work this week. APEX is our internal team name, this was fun. Oh and we’re hiring for all roles in Azure! pic.twitter.com/VlNNaTexA5

— Jeff Sandquist (@jeffsand) March 30, 2017

Our team, the Cloud Developer Advocates, has a logo as well, created by the multitalented Ashley Macnamara. (The mascot’s name is Bit: he’s a raccoon because, like developers, he’s into everything.) It would be nice to have a LEGO rendition of Bit for the wall as well, but converting an image into LEGO bricks isn’t easy … until now.

This R script by Ryan Timpe provides everything you need to render an image in LEGO. It will downscale the image to a size that meets your brick budget, convert the colors to those available as LEGO bricks, and divide the image up into LEGO-sized pieces, ready to lay out on a flat tray. The script is super easy to use: just source a file of utility functions and then:
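The pipeline looks something like this (a sketch: the helper names below come from Ryan Timpe's scripts at the time of writing, and the file names are illustrative):

```
library(png)
library(tidyverse)       # the utility functions assume the tidyverse is loaded
source("0_Functions.R")  # Ryan Timpe's utility functions (file name may differ)

readPNG("Bit.png") %>%
  scale_image(48) %>%    # downscale to a 48x48 brick budget
  legoize() %>%          # snap colors to the available LEGO brick palette
  collect_bricks() %>%   # merge pixels into larger LEGO-sized bricks
  display_set()          # plot the final mosaic
```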

(You can also use readJPEG to read in JPG images; I just loaded the png package and used readPNG, which works just as well.) Here’s what the output looks like. (Click to see the original, for comparison.)

The script also provides a shopping list of the bricks you need by color and size: this particular project will require 1842 LEGO bricks in 19 different colors to create the 48×48 image. It will even provide a series of step-by-step instructions showing how the project will look in various stages of completion:

The R script is available on GitHub, here, and works with any recent version of R and an up-to-date tidyverse installation. (I used R 3.5.0.) You can find a complete walkthrough of using the scripts in the blog post at the link below.


Source:: R News

## Z is for Z-Scores and Standardizing

(This article was first published on Deeply Trivial, and kindly contributed to R-bloggers)

Last April, I wrapped up the A to Z of Statistics with a post about Z-scores. It seems only fitting that I’m wrapping up this April’s A to Z with the same topic. Z-scores are frequently used, sometimes when you don’t even realize it. When you take your child to the doctor and they say he’s at the xth percentile for height, or when you take a standardized test and are told you scored in the yth percentile, those values are being derived from Z-scores. You create a Z-score when you subtract the population mean from a value and divide that result by the population standard deviation.
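In code form (with made-up population values):

```
# z-score: (value - population mean) / population standard deviation
x <- 29; mu <- 25; sigma <- 2
z <- (x - mu) / sigma   # 2, i.e., two standard deviations above the mean
```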

Of course, we often will standardize variables in statistics, and the results are similar to Z-scores (though technically not the same if the mean and standard deviation aren’t population values). In fact, when I demonstrated the GLM function earlier this month, I skipped a very important step when conducting an analysis with interactions. I should have standardized my continuous predictors first, which means subtracting the variable mean and dividing by the variable standard deviation, creating a new variable with a mean of 0 and a standard deviation of 1 (just like the standard normal distribution).

Let’s revisit that GLM analysis. I was predicting verdict (guilty, not guilty) with binomial regression. I did one analysis where I used a handful of attitude items and the participant’s guilt rating, and a second analysis where I created interactions between each attitude item and the guilt rating. The purpose was to see if an attitude impacts the threshold – how high the guilt rating needed to be before a participant selected “guilty”. When working with interactions, the individual variables are highly correlated with the interaction variables based on them, so we solve that problem, and make our analysis and output a bit cleaner, by centering our variables and using those centered values to create interactions.

I’ll go ahead and load my data. Also, since I know I have some missing values, which caused an error when I tried to work with predicted values and residuals yesterday, I’ll also go ahead and identify that case/those cases.

```
dissertation <- read.delim("dissertation_data.txt", header = TRUE)
predictors <- c("obguilt", "reasdoubt", "bettertolet", "libertyvorder",
                "jurevidence", "guilt")
library(psych)
```

`## Warning: package 'psych' was built under R version 3.4.4`

```
describe(dissertation[predictors])
```

```
##               vars   n mean   sd median trimmed  mad min max range  skew
## obguilt          1 356 3.50 0.89      4    3.52 0.00   1   5     4 -0.50
## reasdoubt        2 356 2.59 1.51      2    2.68 1.48  -9   5    14 -3.63
## bettertolet      3 356 2.36 1.50      2    2.38 1.48  -9   5    14 -3.41
## libertyvorder    4 355 2.74 1.31      3    2.77 1.48  -9   5    14 -3.89
## jurevidence      5 356 2.54 1.63      2    2.66 1.48  -9   5    14 -3.76
## guilt            6 356 4.80 1.21      5    4.90 1.48   2   7     5 -0.59
##               kurtosis   se
## obguilt          -0.55 0.05
## reasdoubt        26.92 0.08
## bettertolet      25.47 0.08
## libertyvorder    34.58 0.07
## jurevidence      25.39 0.09
## guilt            -0.54 0.06
```

```
dissertation <- subset(dissertation, !is.na(libertyvorder))
```

R has a built-in function that will do this for you: scale. The scale function has 3 main arguments – the variable or variables to be scaled, and whether you want those variables centered (subtract the mean) and/or scaled (divide by the standard deviation). For regression with interactions, we want to both center and scale. For instance, to center and scale the guilt rating:

`dissertation$guilt_c <- scale(dissertation$guilt, center=TRUE, scale=TRUE)`

I have a set of predictors I want to do this to, so I want to apply a function across those specific columns:

`dissertation[46:51] <- lapply(dissertation[predictors], function(x) {y <- scale(x, center=TRUE, scale=TRUE)})`

Now, let’s rerun that binomial regression, this time using the centered variables in the model.

```
pred_int <- 'verdict ~ obguilt.1 + reasdoubt.1 + bettertolet.1 + libertyvorder.1 +
             jurevidence.1 + guilt.1 + obguilt.1*guilt.1 + reasdoubt.1*guilt.1 +
             bettertolet.1*guilt.1 + libertyvorder.1*guilt.1 + jurevidence.1*guilt.1'
model2 <- glm(pred_int, family = "binomial", data = dissertation)
summary(model2)
```

```
## 
## Call:
## glm(formula = pred_int, family = "binomial", data = dissertation)
## 
## Deviance Residuals: 
##     Min       1Q   Median       3Q      Max  
## -2.6101  -0.5432  -0.1289   0.6422   2.2805  
## 
## Coefficients:
##                         Estimate Std. Error z value Pr(>|z|)    
## (Intercept)             -0.47994    0.16264  -2.951  0.00317 ** 
## obguilt.1                0.25161    0.16158   1.557  0.11942    
## reasdoubt.1             -0.09230    0.20037  -0.461  0.64507    
## bettertolet.1           -0.22484    0.20340  -1.105  0.26899    
## libertyvorder.1          0.05825    0.21517   0.271  0.78660    
## jurevidence.1            0.07252    0.19376   0.374  0.70819    
## guilt.1                  2.31003    0.26867   8.598   < 2e-16 ***
## obguilt.1:guilt.1        0.14058    0.23411   0.600  0.54818    
## reasdoubt.1:guilt.1     -0.61724    0.29693  -2.079  0.03764 *  
## bettertolet.1:guilt.1    0.02579    0.30123   0.086  0.93178    
## libertyvorder.1:guilt.1 -0.27492    0.29355  -0.937  0.34899    
## jurevidence.1:guilt.1    0.27601    0.36181   0.763  0.44555    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 490.08  on 354  degrees of freedom
## Residual deviance: 300.66  on 343  degrees of freedom
## AIC: 324.66
## 
## Number of Fisher Scoring iterations: 6
```

The results are essentially the same: the constant and the slopes of the predictor variables differ, but the variables that were significant before still are. So standardizing doesn’t change the results, but it is generally recommended because it makes them easier to interpret: the variables are centered around the mean, so negative values are below the mean and positive values are above it.

Hard to believe A to Z is over! Don’t worry, I’m going to keep blogging about statistics, R, and whatever strikes my fancy. I almost kept this post going by applying the prediction work from yesterday to the binomial model, but decided that would make for a fun future post. And I’ll probably sprinkle in posts in the near future on topics I didn’t have room for this month or that I promised to write a future post on. Thanks for reading and hope you keep stopping by, even though April A to Z is officially over!


Source:: R News

## Using Shiny Dashboards for Financial Analysis

(This article was first published on R-posts.com, and kindly contributed to R-bloggers)

For some time now, I have been trading traditional assets—mostly U.S. equities. About a year ago, I jumped into the cryptocurrency markets to try my hand there as well. In my time in investor Telegram chats and subreddits, I often saw people arguing over which investments had performed better over time, but the reality was that most such statements were anecdotal, and thus unfalsifiable.

Given the paucity of cryptocurrency data available in an easily accessible format, it was quite difficult to say for certain that a particular investment was a good one relative to some alternative, unless you were very familiar with a handful of APIs. Even then, assuming you knew how to get daily OHLC data for a crypto-asset like Bitcoin, in order to compare it to some other asset—say Amazon stock—you would have to eyeball trends from a website like Yahoo finance or scrape that data separately and build your own visualizations and metrics. In short, historical asset performance comparisons in the crypto space were difficult to conduct for all but the most technically savvy individuals, so I set out to build a tool that would remedy this, and the Financial Asset Comparison Tool was born.

In this post, I aim to describe a few key components of the dashboard, and also call out lessons learned from the process of iterating on the tool along the way. Prior to proceeding, I highly recommend that you read the app’s README and take a look at the UI and code base itself, as this will provide the context necessary to understand the rest of the commentary below.

I’ll start by delving into a few principles that I find to be key when designing analytic dashboards, drawing on the asset comparison dashboard as my exemplar, and will end with some discussion of the relative utility of a few packages integral to the app. Overall, my goal is not to focus on the tool that I built alone, but to highlight a few main best practices when it comes to building dashboards for any analysis.

Build the app around the story, not the other way around.

Before ever writing a single line of code for an analytic app, I find that it is absolutely imperative to have a clear vision of the story that the tool must tell. I do not mean by this that you should already have conclusions about your data that you will then force the app into telling, but rather, that you must know how you want your user to interact with the app in order to glean useful information.

In the case of my asset comparison tool, I wanted to serve multiple audiences—everyone from a casual trader who just wanted to see which investment produced the greatest net profit over a period of time, to a more experienced trader, who had more nuanced questions about risk-adjusted return on investment given varying discount rates. The trick is thus building the app in such a way that serves all possible audiences without hindering any one type of user in particular.

The way I designed my app to meet this need was to build the UI such that as you descend the various sections vertically, the metrics displayed scale in complexity. My reasoning for this becomes apparent when you consider the two extremes in terms of users—the most basic vs. the most advanced trader.

The most basic user will care only about the assets of interest, the time period they want to examine, and how their initial investment performed over time. As such, they will start with the sidebar, input their assets and time frame of choice, and then use the top right-most input box to modulate their initial investment amount (although some may choose to stick with the default value here). They will then see the first chart change to reflect their choices, and they will see, both visually, and via the summary table below, which asset performed better.

The experienced trader, on the other hand, will start off exactly as the novice did, by choosing assets of interest, a time frame of reference, and an initial investment amount. They may then choose to modulate the LOESS parameters as they see fit, descending the page, looking over the simple returns section, perhaps stopping to make changes to the corresponding inputs there, and finally ending at the bottom of the page—at the Sharpe Ratio visualizations. Here they will likely spend more time—playing around with the time period over which to measure returns and changing the risk-free rate to align with their own personal macroeconomic assumptions.

The point of these two examples is to illustrate that the app by dint of its structure alone guides the user through the analytic story in a waterfall-like manner—building from simple portfolio performance, to relative performance, to the most complicated metrics for risk-adjusted returns. This keeps the novice trader from being overwhelmed or confused, and also allows the most experienced user to follow the same line of thought that they would anyway when comparing assets, while following a logical progression of complexity, as shown via the screenshot below.

Once you think you have a structure that guides all users through the story you want them to experience, test it by asking yourself if the app flows in such a way that you could pose and answer a logical series of questions as you navigate the app without any gaps in cohesion. In the case of this app, the questions that the UI answers as you descend are as follows:

• How do these assets compare in terms of absolute profit?
• How do these assets compare in terms of simple return on investment?
• How do these assets compare in terms of variance-adjusted and/or risk-adjusted return on investment?

Thus, when you string these questions together, you can make statements of the type: “Asset X seemed to outperform Asset Y in terms of absolute profit, and this trend held true as well when it comes to simple return on investment, over varying time frames. That said, when you take into account the variance inherent to Asset X, it seems that Asset Y may have been the best choice, as the excess downside risk associated with Asset X outweighs its excess net profitability.”

Too many cooks in the kitchen—the case for a functional approach to app-building.

While the design of the UI of any analytic app is of great importance, it’s important to not forget that the code base itself should also be well-designed; a fully-functional app from the user’s perspective can still be a terrible app to work with if the code is a jumbled, incomprehensible mess. A poorly designed code base makes QC a tiresome, aggravating process, and knowledge sharing all but impossible.

For this reason, I find that sourcing a separate R script file containing all analytic functions necessitated by the app is the best way to go, as done below (you can see Functions.R at my repo here).

```
# source the Functions.R file, where all analytic functions for the app are stored
source("Functions.R")
```

Not only does this allow for a more comprehensible and less-cluttered App.R, but it also drastically improves testability and reusability of the code. Consider the example function below, used to create the portfolio performance chart in the app (first box displayed in the UI, center middle).

```
build_portfolio_perf_chart <- function(port_tbl, asset_name1, asset_name2,
                                       loess_span_parameter) {
  # (signature and the opening plot_ly/add_markers calls were lost in extraction
  #  and are sketched here; see Functions.R in the repo for the exact version)
  # dates as numbers, for the loess fits
  date_in_numeric_form <- as.numeric(port_tbl[, 1])

  port_perf_plot <- plot_ly(x = port_tbl[, 1]) %>%
    # asset 1 data plotted
    add_markers(y = port_tbl[, 2],
                marker = list(color = '#FC9C01'),
                name = asset_name1,
                showlegend = FALSE) %>%
    add_lines(y = ~fitted(loess(port_tbl[, 2] ~ date_in_numeric_form, span = loess_span_parameter)),
              line = list(color = '#FC9C01'),
              name = asset_name1,
              showlegend = TRUE) %>%
    # asset 2 data plotted
    add_markers(y = port_tbl[, 3],
                marker = list(color = '#3498DB'),
                name = asset_name2,
                showlegend = FALSE) %>%
    add_lines(y = ~fitted(loess(port_tbl[, 3] ~ date_in_numeric_form, span = loess_span_parameter)),
              line = list(color = '#3498DB'),
              name = asset_name2,
              showlegend = TRUE) %>%
    layout(
      title = FALSE,
      xaxis = list(type = "date",
                   title = "Date"),
      yaxis = list(title = "Portfolio Value ($)"),
      legend = list(orientation = 'h',
                    x = 0,
                    y = 1.15)) %>%
    add_annotations(
      x = 1,
      y = 1.133,
      xref = "paper",
      yref = "paper",
      text = "",
      showarrow = F
    )

  return(port_perf_plot)
}
```

Writing this function in the sourced Functions.R file instead of directly within the App.R allows for the developer to first test the function itself with fake data—i.e. data not gleaned from the reactive inputs. Once it has been tested in this way, it can be integrated in the app.R on the server side as shown below, with very little code.

```
output$portfolio_perf_chart <- renderPlotly({
  # (reactive ids sketched; input$port_loess_param is the slider defined in the
  #  UI box shown later, the rest are as in the repo's app.R)
  build_portfolio_perf_chart(port_tbl(), input$asset_1, input$asset_2,
                             input$port_loess_param)
})
```

This process allows for better error identification and troubleshooting. If, for example, you want to change the work accomplished by the analytic function in some way, you can make the changes necessary to the code, and if the app fails to produce the desired outcome, you simply restart the chain: first you test the function in a vacuum outside of the app, and if it runs fine there, then you know that you have a problem with the way the reactive inputs are integrating with the function itself. This is a huge time saver when debugging.

Lastly, this allows for ease of reproducibility and hand-offs. If, say, one of your functions simply takes in a dataset and produces a chart of some sort, then it can be easily copied from the Functions.R and reused elsewhere. I have done this too many times to count, ripping code from one project and, with a few alterations, instantly applying it in other contexts. This is easy to do if the functions are written in a manner not dependent on a particular Shiny reactive structure. For all these reasons, it makes sense in most cases to keep the code for the app UI and inputs cleanly separated from the analytic functions via a sourced R script.
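Echoing the "fake data first" point above: because the function has no Shiny dependencies, it can be smoke-tested in a plain R session (the data and asset names below are made up for the test):

```
library(plotly)
source("Functions.R")

# a fake portfolio table: a date column plus one value column per asset
fake_tbl <- data.frame(
  date   = seq(as.Date("2018-01-01"), by = "day", length.out = 100),
  asset1 = cumsum(rnorm(100, mean = 1)),
  asset2 = cumsum(rnorm(100, mean = 1))
)
build_portfolio_perf_chart(fake_tbl, "Asset X", "Asset Y", 0.33)
```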

Dashboard documentation—both a story and a manual, not one or the other.
When building an app for a customer at work, I never simply send an email with a link and a note saying “here you go!” That will result in, at best, a steep learning curve, and at worst, an app used in an unintended way, resulting in user frustration or incorrect results. I always meet with the customer, explain the purpose and functionalities of the tool, walk through the app live, take feedback, and integrate any key takeaways into further iterations.
Even if you are just planning on writing some code to put up on GitHub, you should still consider all of these steps when working on the documentation for your app. In most cases, the README is the epicenter of your documentation—the README is your meeting with the customer. As you saw when reading the README for the Asset Comparison Tool, I always start my READMEs with a high-level introduction to the purpose of the app—hopefully written or supplemented with visuals (as seen below) that are easy to understand and will capture the attention of browsing passers-by.

After this introduction, the rest of the potential sections to include can vary greatly from app to app. In some cases apps are meant to answer one particular question, and might have a variety of filters or supplemental functionalities—one such example can be found here. As can be seen in that README, I spend a great deal of time on the methodology after making the overall purpose clear, calling out additional options along the way. In the case of the README for the Asset Comparison Tool, however, the story is a bit different. Given that there are many questions that the app seeks to answer, it makes sense to answer each in turn, writing the README in such a way that its progression mirrors the logical progression intended for the app's user.
One should of course not neglect to cover necessary technical information in the README as well. Anything that is not immediately clear from using the app should be clarified in the README—from calculation details to the source of your data, etc. Finally, don't neglect the iterative component! Mention how you want to interact with prospective users and collaborators in your documentation. For example, I normally call out how I would like people to use the Issues tab on GitHub to propose any changes or additions to the documentation, or the app in general. In short, your documentation must include both the story you want to tell, and a manual for your audience to follow.

Why Shiny Dashboard?
One of the first things you will notice about the app.R code is that the entire thing is built using Shiny Dashboard as its skeleton. There are two main reasons for this, which I will touch on in turn.

Shiny Dashboard provides the biggest bang for your buck in terms of how much UI complexity and customizability you get out of just a small amount of code.

I can think of few cases where any analyst or developer would prefer longer, more verbose code to a shorter, succinct solution. That said, Shiny Dashboard's simplicity when it comes to UI manipulation and customization is not just helpful because it saves you time as a coder, but because it is intuitive from the perspective of your audience.
Most of the folks that use the tools I have built to shed insight into economic questions don't know how to code in R or Python, but they can, with a little help from extensive commenting and detailed documentation, understand the broad structure of an app coded in Shiny Dashboard format. This is, I believe, largely a function of two features of Shiny Dashboard: the colloquial-English-like syntax of the code for UI elements, and the lack of the necessity for in-line or external CSS.
As you can see from the example below, Shiny Dashboard's system of “boxes” for UI building is easy to follow. Users can see a box in the app and easily tie that back to a particular box in the UI code.
Here is the box as visible to the user:

And here is the code that produces the box:
```
box(
title = "Portfolio Performance Inputs",
status= "primary",
h5("This box focuses on portfolio value, i.e., how much an initial investment of the amount specified below (in USD) would be worth over time, given price fluctuations."),

textInput(
inputId = "initial_investment",
label = "Enter your initial investment amount (\$):",
value = "1000"),

hr(),

h5("The slider below modifies the", a(href = "https://stats.stackexchange.com/questions/2002/how-do-i-decide-what-span-to-use-in-loess-regression-in-r", "smoothing parameter"), "used in the", a(href = "https://en.wikipedia.org/wiki/Local_regression", "LOESS function"), "that produces the lines on the scatterplot."),

sliderInput(
inputId = "port_loess_param",
label = "Smoothing parameter for portfolio chart:",
min = 0.1,
max = 2,
value = .33,
step = 0.01,
animate = FALSE
),

hr(),
h5("The table below provides metrics by which we can compare the portfolios. For each column, the asset that performed best by that metric is colored green."),

height = 500,
width = 4
)
```

Secondly, and somewhat related to the first point, with Shiny Dashboard, much of the coloring and overall UI design comes pre-made via dashboard-wide “skins”, and box-specific “statuses.”

This is great if you are okay sacrificing a bit of control for a significant reduction in code complexity. In my experience dealing with non-coding-proficient audiences, I find that in-line CSS or complicated external CSS makes folks far more uncomfortable with the code in general. Anything you can do to reduce this anxiety and make those using your tools feel as though they understand them better is a good thing, and Shiny Dashboard makes that easier.

Shiny Dashboard’s combination of sidebar and boxes makes for easy and efficient data processing when your app has a waterfall-like analytic structure.

Having written versions of this app both in base Shiny and using Shiny Dashboard, the number one reason I chose Shiny Dashboard was the fact that the analytic questions I sought to solve followed a waterfall-like structure, as explained in the previous section. This works perfectly well with Shiny Dashboard’s combination of sidebar input controls and inputs within UI boxes themselves.

The inputs of primordial importance to all users are included in the sidebar UI: the two assets to analyze, and the date range over which to compare their performance. These are the only inputs that all users, regardless of experience or intent, must absolutely use, and when they are changed, all views in the dashboard will be affected. All other inputs are stored in the UI Boxes adjacent to the views that they modulate. This makes for a much more intuitive and fluid user experience, as once the initial sidebar inputs have been modulated, the sidebar can be hidden, as all other non-hidden inputs affect only the visualizations to which they are adjacent.

This waterfall-like structure also makes for more efficient reactive processes on the Shiny back-end. The inputs on the sidebar are parameters that, when changed, force the main reactive function that creates the primary dataset to fire, thus recreating the base dataset (as can be seen in the code for that base dataset's creation below).

```
# utility functions to be used within the server; this enables us to use a
# textInput for our portfolio values
# (body sketched; see the repo's app.R for the exact version)
exists_as_number <- function(item) {
  !is.null(item) && !is.na(suppressWarnings(as.numeric(item)))
}
```

Each of the visualizations is then produced via its own separate reactive function, each of which takes the main reactive as an input (as shown below). This makes it so that whenever the sidebar inputs are changed, all reactives fire and all visualizations are updated; however, if all that is changed is a single loess smoothing parameter input, only the reactive used in the creation of that particular parameter-dependent visualization fires, which makes for great computational efficiency.

```
# Now the reactives for the actual visualizations
output$portfolio_perf_chart <- renderPlotly({
  build_portfolio_perf_chart(port_tbl(), input$asset_1, input$asset_2,
                             input$port_loess_param)
})
```

Why Plotly?

Plotly vs. ggplot is always a fun subject for discussion among folks who build visualizations in R. Sometimes I feel like such discussions just devolve into the same type of argument as R vs. Python for data science (my answer to this question being just pick one and learn it well), but over time I have found that there are actually some circumstances where the plotly vs. ggplot debate can yield cleaner answers.
In particular, I have found in working on this particular type of analytic app that there are two areas where plotly has a bit of an advantage: clickable interactivity, and wide data.
Those familiar with ggplot will know that every good ggplot begins with long data. It is possible, via some functions, to transform wide data into a long format, but that transformation can sometimes be problematic. While there are essentially no circumstances in which it is impossible to transform wide data into long format, there are a handful of cases where it is excessively cumbersome: namely, when dealing with indexed xts objects (as shown below) or time series / OHLC-styled data.

In these cases—either due to the sometimes-awkward way in which you have to handle rowname indexes in xts, or the time and code complexity saved by not having to transform every dataset into long format—plotly offers efficiency gains relative to ggplot.
The aforementioned efficiency gains are a reason to choose plotly in some cases because they make the life of the coder easier, but there are also reasons why it sometimes makes the life of the user easier as well.
If one of the primary utilities of a visualization is to allow the user the ability to seamlessly and intuitively zoom in on, select, or filter the data displayed, particularly in the context of a Shiny App, then plotly should be strongly considered. Sure, ggplotly wrappers can be used to make a ggplot interactive, but with an added layer of abstraction comes an added layer of possible errors. While in most cases a ggplotly wrapper should work seamlessly, I have found that, particularly in cases where auto-sizing and margin size specification is key, ggplotly can require a great deal of added code in order to work correctly in a Shiny context.
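To make the wide-data point concrete, here is a small sketch (with made-up prices) of handing plotly a wide xts object directly, with no melt to long format required:

```
library(xts)
library(plotly)

# wide xts object: one column per asset (made-up random-walk prices)
dates  <- seq(as.Date("2018-01-01"), by = "day", length.out = 100)
prices <- xts(cbind(btc  = cumsum(rnorm(100)),
                    amzn = cumsum(rnorm(100))),
              order.by = dates)

# plotly takes the wide columns as-is; ggplot would want long data first
plot_ly(x = index(prices)) %>%
  add_lines(y = as.numeric(prices$btc),  name = "btc") %>%
  add_lines(y = as.numeric(prices$amzn), name = "amzn")
```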
In summary, when considering when to start with plotly vs. when to start with ggplot, I find one question to be particularly helpful: what do I value most—visual complexity and/or customization, or interactive versatility and/or preserving wide data?

If I choose the former, then ggplot is what I need; otherwise, I go with plotly. More often than not I find that ggplot emerges victorious, but even if you disagree with me in my decision-making calculus, I think it is helpful to at least think through what your personal calculus is. This will save you time when coding, as instead of playing around with various types of viz, you can simply pose the question(s) behind your calculus and know quickly what solution best fits your problem.

Why Formattable?

The case for formattable is, in my opinion, a much easier case to make than arguing for plotly vs. ggplot. The only question worth asking when deciding on whether or not to use formattable in your app is: do I want my table to tell a quick story via added visual complexity within the same cell that contains my data, or is a reference table all I am looking for? If you choose the former, formattable is probably a good way to go. You'll notice as well that the case for formattable is very specific – in most cases there is likely a simpler solution via the DT or kableExtra packages.
The one downside that I have encountered in dealing with formattable code is the amount of code necessary to generate even moderately complicated tables. That said, this problem is easily remedied via a quick function that we can use to kill most of the duplicative coding, as seen in the example below.
First, in the long-form version, every column of the summary table gets its own fully written-out formatter definition inside the `formattable()` call (the original `react_formattable` listing spells this out column by column, and quickly gets repetitive). This code can easily be shortened via the integration of a custom function, as shown in the sketch below.
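A minimal sketch of that pattern (the table, column names, and the best-is-largest rule here are illustrative assumptions, not the app's actual ones):

```
library(formattable)

# reusable formatter: color the best (largest) value in a column green
simple_formatter <- function() {
  formatter("span",
            style = x ~ style(color = ifelse(x == max(x), "green", "black")))
}

# illustrative summary table
summary_tbl <- data.frame(`Net Profit` = c(1250, 980),
                          `Sharpe Ratio` = c(1.1, 0.7),
                          check.names = FALSE,
                          row.names = c("Asset X", "Asset Y"))

formattable(summary_tbl, list(`Net Profit` = simple_formatter(),
                              `Sharpe Ratio` = simple_formatter()))
```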
As can be seen, formattable allows for a great deal of added complexity in crafting your table—complexity that may not be suited for all apps. That said, if you do want to quickly draw a user's attention to something in a table, formattable is a great solution, and most of the details of the code can be greatly simplified via a function, as shown.

Conclusions:
That was a lot—I know—but I hope that this commentary, and my exemplar of the Asset Comparison Tool more generally, has helped to inform your understanding of how dashboards can serve as a helpful analytic tool. Furthermore, I hope to have prompted some thoughts as to the best practices to be followed when building such a tool. I'll end with a quick tl;dr:

• Shave complexity wherever possible, and make code as simple as possible by keeping the code for the app’s UI and inner mechanism (inputs, reactives, etc.) separate from the code for the analytic functions and visualizations.
• Build with the most extreme cases in mind (think of how your most edge-case user might use the app, and ensure that behavior won’t break the app).
• Document, document, and then document some more. Make your README both a story and a manual.
• Give Shiny Dashboard a shot if you want an easy-to-construct UI over which you don’t need complete control when it comes to visual design.
• Pick your visualization packages based on what you want to prioritize for your user, not the other way around (this applies to ggplot, plotly, formattable, etc.).


Source:: R News

## Statistics Sunday: Conducting Meta-Analysis in R

(This article was first published on Deeply Trivial, and kindly contributed to R-bloggers)

Here it is, everyone! The promised 4th post on meta-analysis, and my second video for Deeply Trivial! In this video, I walk through conducting a basic meta-analysis, both fixed and random effects, in the metafor package:

You can access the code I used in the video here, as well as code to do a similar analysis with the or_meta dataset here.
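For readers who just want the shape of the calls, a minimal metafor sketch (using the package's built-in BCG dataset, not the data from the video) looks like this:

```
library(metafor)

# compute effect sizes (log odds ratios) and variances from the BCG dataset
dat <- escalc(measure = "OR", ai = tpos, bi = tneg, ci = cpos, di = cneg,
              data = dat.bcg)

res_re <- rma(yi, vi, data = dat, method = "REML")  # random-effects model
res_fe <- rma(yi, vi, data = dat, method = "FE")    # fixed-effects model
summary(res_re)
forest(res_re)  # forest plot of individual and pooled estimates
```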


Source:: R News

## Deep Learning from first principles in Python, R and Octave – Part 7

(This article was first published on R – Giga thoughts …, and kindly contributed to R-bloggers)

Artificial Intelligence is the new electricity. – Prof Andrew Ng

Most of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don’t know how to make the cake. We need to solve the unsupervised learning problem before we can even think of getting to true AI. – Yann LeCun, March 14, 2016 (Facebook)

# Introduction

In this post, ‘Deep Learning from first principles in Python, R and Octave – Part 7’, I implement optimization methods used in Stochastic Gradient Descent (SGD) to speed up convergence. Specifically I discuss and implement the following gradient descent optimization techniques:

a. Vanilla SGD
b. Learning rate decay
c. Momentum method
d. RMSProp
e. Adam

This post further enhances my generic L-Layer Deep Learning Network implementations in vectorized Python, R and Octave to also include these Stochastic Gradient Descent optimization techniques. You can clone/download the code from Github at DeepLearning-Part7.

Incidentally, a good discussion of the various optimization methods used with Stochastic Gradient Descent can be seen at Sebastian Ruder’s blog.

Note: In the vectorized Python, R and Octave implementations below, only 1024 random training samples were used, to reduce the computation time. You are free to use the entire data set (60,000 training samples) for the computation.

This post is largely based on Prof Andrew Ng’s Deep Learning Specialization. All the above optimization techniques for Stochastic Gradient Descent are based on the technique of exponentially weighted averages. For example, if we had some time series data, then we can represent the exponentially weighted average at time ‘t’ in terms of the previous value $v_{t-1}$ and the current observation $\theta_{t}$ as shown below
$v_{t} = \beta v_{t-1} + (1-\beta)\theta_{t}$

Here $v_{t}$ represents an average over roughly the previous $\frac{1}{1-\beta}$ data points. By choosing different values of $\beta$, we can average over a larger or smaller number of the data points.
We can write the equations as follows
$v_{t} = \beta v_{t-1} + (1-\beta)\theta_{t}$
$v_{t-1} = \beta v_{t-2} + (1-\beta)\theta_{t-1}$
$v_{t-2} = \beta v_{t-3} + (1-\beta)\theta_{t-2}$
and
$v_{t-k} = \beta v_{t-(k+1)} + (1-\beta)\theta_{t-k}$
By substitution we have
$v_{t} = (1-\beta)\theta_{t} + \beta v_{t-1}$
$v_{t} = (1-\beta)\theta_{t} + \beta(1-\beta)\theta_{t-1} + \beta^{2} v_{t-2}$
$v_{t} = (1-\beta)\theta_{t} + \beta(1-\beta)\theta_{t-1} + \beta^{2}(1-\beta)\theta_{t-2} + \beta^{3} v_{t-3}$

Hence it can be seen that $v_{t}$ is a weighted sum of the previous values $\theta_{k}$, with weights that decay exponentially.
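To see this averaging in action, here is a small R sketch (added for illustration; not part of the original implementations) that smooths a noisy series:

```
# exponentially weighted average: v_t = beta * v_{t-1} + (1 - beta) * theta_t
ewa <- function(theta, beta = 0.9) {
  v <- numeric(length(theta))
  v[1] <- (1 - beta) * theta[1]
  for (t in 2:length(theta)) {
    v[t] <- beta * v[t - 1] + (1 - beta) * theta[t]
  }
  v
}

# larger beta averages over roughly 1/(1 - beta) points, giving a smoother curve
noisy    <- sin(seq(0, 6, length.out = 100)) + rnorm(100, sd = 0.3)
smoothed <- ewa(noisy, beta = 0.9)
```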

By the way, also take a look at my compact, minimal book “Practical Machine Learning with R and Python- Machine Learning in stereo”, available on Amazon in paperback (\$9.99) and Kindle (\$6.99) versions. This book is ideal as a quick reference for the various ML functions and associated measurements in both R and Python, which are essential to delve deep into Deep Learning.

## 1.1a. Stochastic Gradient Descent (Vanilla) – Python

``````import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets

# Load the author's helper functions and the MNIST training data
# (DLfunctions7.py is in the DeepLearning-Part7 repo; 'training' is assumed to
#  be the list of (label, pixel-matrix) pairs produced by its MNIST loader)
exec(open("DLfunctions7.py").read())

lbls=[]
pxls=[]
for i in range(60000):
l,p=training[i]
lbls.append(l)
pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a list of 1024 random indices (a permutation of 0..1023)
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
# Set the layer dimensions
layersDimensions=[784, 15,9,10]
# Perform SGD with regular gradient descent
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu',
outputActivationFunc="softmax",learningRate = 0.01 ,
optimizer="gd",
mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig1.png")
``````

## 1.1b. Stochastic Gradient Descent (Vanilla) – R

``````source("mnist.R")
source("DLfunctions7.R")
# load MNIST; load_mnist() from mnist.R is assumed to create train$x / train$y
load_mnist()
x <- t(train$x)
X <- x[, 1:60000]
y <- train$y
Y <- y[1:60000]
# Subset 1024 random samples from MNIST
permutation <- c(sample(2^10))
# Randomly shuffle the training data
X1 <- X[, permutation]
Y1 <- Y[permutation]
# Set the layer dimensions
layersDimensions <- c(784, 15, 9, 10)
# Perform SGD with regular gradient descent
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
hiddenActivationFunc='tanh',
outputActivationFunc="softmax",
learningRate = 0.05,
optimizer="gd",
mini_batch_size = 512,
num_epochs = 5000,
print_cost = TRUE)``````
```
# Plot the cost vs epochs
iterations <- seq(0, 5000, 1000)
costs <- retvalsSGD$costs
df <- data.frame(iterations, costs)
ggplot(df, aes(x = iterations, y = costs)) + geom_point() + geom_line(color = "blue") +
  ggtitle("Costs vs no of epochs") + xlab("No of epochs") + ylab("Cost")
```

## 1.1c. Stochastic Gradient Descent (Vanilla) – Octave

```source("DL7functions.m")
#Create a random permutatation from 1024
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with regular gradient descent
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
hiddenActivationFunc='relu',
outputActivationFunc="softmax",
learningRate = 0.005,
lrDecay=true,
decayRate=1,
lambd=0,
keep_prob=1,
optimizer="gd",
beta=0.9,
beta1=0.9,
beta2=0.999,
epsilon=10^-8,
mini_batch_size = 512,
num_epochs = 5000);

plotCostVsEpochs(5000,costs);
```

## 2.1. Stochastic Gradient Descent with Learning rate decay

Since Stochastic Gradient Descent uses slightly different samples in each epoch, the algorithm oscillates across the ravines and wanders around the minima when a fixed learning rate is used. In the technique of ‘learning rate decay’, the learning rate is slowly decreased with the number of epochs and becomes smaller and smaller, so that gradient descent can take smaller steps towards the minima.

There are several techniques employed in learning rate decay:

a) Exponential decay: $\alpha = decayRate^{epochNum} \cdot \alpha_{0}$
b) 1/t decay: $\alpha = \frac{\alpha_{0}}{1 + decayRate \cdot epochNum}$
c) $\alpha = \frac{decayRate}{\sqrt{epochNum}} \cdot \alpha_{0}$

In my implementation I have used the ‘exponential decay’. The code snippet for Python is shown below

```if lrDecay == True:
    learningRate = np.power(decayRate, (num_epochs / 1000)) * learningRate
```

## 2.1a. Stochastic Gradient Descent with Learning rate decay – Python

``````import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets

lbls=[]
pxls=[]
for i in range(60000):
l,p=training[i]
lbls.append(l)
pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a list of 1024 random indices (a permutation of 0..1023)
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
# Set layer dimensions
layersDimensions=[784, 15,9,10]
# Perform SGD with learning rate decay
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu',
outputActivationFunc="softmax",
learningRate = 0.01 , lrDecay=True, decayRate=0.9999,
optimizer="gd",
mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig2.png")``````

## 2.1b. Stochastic Gradient Descent with Learning rate decay – R

``````source("mnist.R")
source("DLfunctions7.R")
# load MNIST; load_mnist() from mnist.R is assumed to create train$x / train$y
load_mnist()
x <- t(train$x)
X <- x[, 1:60000]
y <- train$y
Y <- y[1:60000]
# Subset 1024 random samples from MNIST
permutation <- c(sample(2^10))
# Randomly shuffle the training data
X1 <- X[, permutation]
Y1 <- Y[permutation]
# Set the layer dimensions
layersDimensions <- c(784, 15, 9, 10)
# Perform SGD with Learning rate decay
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
hiddenActivationFunc='tanh',
outputActivationFunc="softmax",
learningRate = 0.05,
lrDecay=TRUE,
decayRate=0.9999,
optimizer="gd",
mini_batch_size = 512,
num_epochs = 5000,
print_cost = TRUE)``````
```
# Plot the cost vs epochs
iterations <- seq(0, 5000, 1000)
costs <- retvalsSGD$costs
df <- data.frame(iterations, costs)
ggplot(df, aes(x = iterations, y = costs)) + geom_point() + geom_line(color = "blue") +
  ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")
```

## 2.1c. Stochastic Gradient Descent with Learning rate decay – Octave

```source("DL7functions.m")
# Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with regular Learning rate decay
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
hiddenActivationFunc='relu',
outputActivationFunc="softmax",
learningRate = 0.01,
lrDecay=true,
decayRate=0.999,
lambd=0,
keep_prob=1,
optimizer="gd",
beta=0.9,
beta1=0.9,
beta2=0.999,
epsilon=10^-8,
mini_batch_size = 512,
num_epochs = 5000);
plotCostVsEpochs(5000,costs)
```

## 3.1. Stochastic Gradient Descent with Momentum

Stochastic Gradient Descent with Momentum uses the exponentially weighted average method discussed above and, broadly speaking, moves faster along the ravine than across it. The equations are
$v_{dW}^{l} = \beta v_{dW}^{l} + (1-\beta)dW^{l}$
$v_{db}^{l} = \beta v_{db}^{l} + (1-\beta)db^{l}$
$W^{l} = W^{l} - \alpha v_{dW}^{l}$
$b^{l} = b^{l} - \alpha v_{db}^{l}$
where $v_{dW}$ and $v_{db}$ are the momentum terms, which are exponentially weighted averages of the corresponding gradients ‘dW’ and ‘db’ at the corresponding layer ‘l’. The code snippet for Stochastic Gradient Descent with momentum in R is shown below.

```# Perform Gradient Descent with momentum
# Input : Weights and biases
#       : beta
#       : learning rate
#       : outputActivationFunc - Activation function at output layer sigmoid/softmax
# output : Updated weights after 1 iteration
# (per-layer update, sketched here; see DLfunctions7.R for the full function)
for (l in 1:L) {
  vdW[[l]]     <- beta * vdW[[l]] + (1 - beta) * gradsDW[[l]]
  vdB[[l]]     <- beta * vdB[[l]] + (1 - beta) * gradsDB[[l]]
  weights[[l]] <- weights[[l]] - learningRate * vdW[[l]]
  biases[[l]]  <- biases[[l]]  - learningRate * vdB[[l]]
}
```

## 3.1a. Stochastic Gradient Descent with Momentum- Python

``````import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets
lbls=[]
pxls=[]
for i in range(60000):
l,p=training[i]
lbls.append(l)
pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a list of 1024 random indices (a permutation of 0..1023)
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
layersDimensions=[784, 15,9,10]
# Perform SGD with momentum
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu',
outputActivationFunc="softmax",learningRate = 0.01 ,
optimizer="momentum", beta=0.9,
mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig3.png")``````

## 3.1b. Stochastic Gradient Descent with Momentum- R

``````source("mnist.R")
source("DLfunctions7.R")
# load MNIST; load_mnist() from mnist.R is assumed to create train$x / train$y
load_mnist()
x <- t(train$x)
X <- x[, 1:60000]
y <- train$y
Y <- y[1:60000]
# Subset 1024 random samples from MNIST
permutation <- c(sample(2^10))
# Randomly shuffle the training data
X1 <- X[, permutation]
Y1 <- Y[permutation]
# Set the layer dimensions
layersDimensions <- c(784, 15, 9, 10)
# Perform SGD with momentum
retvalsSGD= L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
hiddenActivationFunc='tanh',
outputActivationFunc="softmax",
learningRate = 0.05,
optimizer="momentum",
beta=0.9,
mini_batch_size = 512,
num_epochs = 5000,
print_cost = TRUE)``````

```
# Plot the cost vs epochs
iterations <- seq(0, 5000, 1000)
costs <- retvalsSGD$costs
df <- data.frame(iterations, costs)
ggplot(df, aes(x = iterations, y = costs)) + geom_point() + geom_line(color = "blue") +
  ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")
```

## 3.1c. Stochastic Gradient Descent with Momentum- Octave

```source("DL7functions.m")
# Create a random permutation of 1024 indices
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
# Perform SGD with Momentum
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
hiddenActivationFunc='relu',
outputActivationFunc="softmax",
learningRate = 0.01,
lrDecay=false,
decayRate=1,
lambd=0,
keep_prob=1,
optimizer="momentum",
beta=0.9,
beta1=0.9,
beta2=0.999,
epsilon=10^-8,
mini_batch_size = 512,
num_epochs = 5000);

plotCostVsEpochs(5000,costs)
```

## 4.1. Stochastic Gradient Descent with RMSProp

Stochastic Gradient Descent with RMSProp tries to move faster towards the minima while dampening the oscillations across the ravine.
The equations are

$s_{dW}^{l} = \beta_{1} s_{dW}^{l} + (1-\beta_{1})(dW^{l})^{2}$
$s_{db}^{l} = \beta_{1} s_{db}^{l} + (1-\beta_{1})(db^{l})^{2}$
$W^{l} = W^{l} - \frac{\alpha\, dW^{l}}{\sqrt{s_{dW}^{l} + \epsilon}}$
$b^{l} = b^{l} - \frac{\alpha\, db^{l}}{\sqrt{s_{db}^{l} + \epsilon}}$
where $s_{dW}$ and $s_{db}$ are the RMSProp terms, which are exponentially weighted averages of the squared gradients $(dW)^{2}$ and $(db)^{2}$ at the corresponding layer ‘l’ (note that, as in the code below, the raw gradient, not the moving average, appears in the numerator of the update).

The code snippet in Octave is shown below

```# Update parameters with RMSProp
# Input : weights, biases
#       : gradients gradsDW, gradsDB
#       : moving averages sdW, sdB
#       : learningRate, epsilon
# output : Updated parameters after the RMSProp step
L = size(weights)(2); # number of layers in the neural network
# Update rule for each parameter.
for l=1:(L-1)
weights{l} = weights{l} - learningRate* gradsDW{l} ./ sqrt(sdW{l} + epsilon);
biases{l} = biases{l} -  learningRate* gradsDB{l} ./ sqrt(sdB{l} + epsilon);
endfor

if (strcmp(outputActivationFunc,"sigmoid"))
weights{L} = weights{L} -learningRate* gradsDW{L} ./ sqrt(sdW{L} +epsilon);
biases{L} = biases{L} -learningRate* gradsDB{L} ./ sqrt(sdB{L} + epsilon);
elseif (strcmp(outputActivationFunc,"softmax"))
weights{L} = weights{L} -learningRate* gradsDW{L}' ./ sqrt(sdW{L} +epsilon);
biases{L} = biases{L} -learningRate* gradsDB{L}' ./ sqrt(sdB{L} + epsilon);
endif
end
```

## 4.1a. Stochastic Gradient Descent with RMSProp – Python

``````import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets

lbls=[]
pxls=[]
for i in range(60000):
l,p=training[i]
lbls.append(l)
pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

print("X1=",X1.shape)
print("y1=",Y1.shape)

# Create a list of 1024 random indices (a permutation of 0..1023)
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))

layersDimensions=[784, 15,9,10]
# Use SGD with RMSProp
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu',
outputActivationFunc="softmax",learningRate = 0.01 ,
optimizer="rmsprop", beta1=0.7, epsilon=1e-8,
mini_batch_size =512, num_epochs = 1000, print_cost = True,figure="fig4.png")``````

## 4.1b. Stochastic Gradient Descent with RMSProp – R

``````source("mnist.R")
source("DLfunctions7.R")
# Load MNIST; mnist.R provides load_mnist(), which creates the train/test lists
load_mnist()
x <- t(train$x)
X <- x[, 1:60000]
y <- train$y
Y <- matrix(y[1:60000], nrow = 1)
# Subset 1024 random samples from MNIST
permutation <- c(sample(2^10))
# Randomly shuffle the training data
X1 <- X[, permutation]
Y1 <- Y[1, permutation]
layersDimensions <- c(784, 15, 9, 10)
# Perform SGD with RMSProp
retvalsSGD <- L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
hiddenActivationFunc='tanh',
outputActivationFunc="softmax",
learningRate = 0.001,
optimizer="rmsprop",
beta1=0.9,
epsilon=10^-8,
mini_batch_size = 512,
num_epochs = 5000,
print_cost = TRUE)``````
``````# Plot the cost vs iterations
iterations <- seq(0, 5000, 1000)
costs <- retvalsSGD$costs
df <- data.frame(iterations, costs)
ggplot(df, aes(x=iterations, y=costs)) + geom_point() + geom_line(color="blue") +
  ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")
``````

## 4.1c. Stochastic Gradient Descent with RMSProp – Octave

```source("DL7functions.m")
#Create a random permutation of 1024
permutation = randperm(1024);

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);

# Set layer dimensions
layersDimensions=[784, 15, 9, 10];
#Perform SGD with RMSProp
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
hiddenActivationFunc='relu',
outputActivationFunc="softmax",
learningRate = 0.005,
lrDecay=false,
decayRate=1,
lambd=0,
keep_prob=1,
optimizer="rmsprop",
beta=0.9,
beta1=0.9,
beta2=0.999,
epsilon=1,
mini_batch_size = 512,
num_epochs = 5000);
plotCostVsEpochs(5000,costs)
```

## 5.1. Stochastic Gradient Descent with Adam

Adaptive Moment Estimation (Adam) combines momentum (the 1st moment) and RMSProp (the 2nd moment). The equations for Adam are below
$v_{dW}^l = \beta_{1} v_{dW}^l + (1-\beta_{1})dW^{l}$
$v_{db}^l = \beta_{1} v_{db}^l + (1-\beta_{1})db^{l}$
The bias corrections for the 1st moment are
$vCorrected_{dW}^l = \frac{v_{dW}^l}{1 - \beta_{1}^{t}}$
$vCorrected_{db}^l = \frac{v_{db}^l}{1 - \beta_{1}^{t}}$

Similarly, the moving average for the 2nd moment (the RMSProp term) is
$s_{dW}^l = \beta_{2} s_{dW}^l + (1-\beta_{2})(dW^{l})^{2}$
$s_{db}^l = \beta_{2} s_{db}^l + (1-\beta_{2})(db^{l})^{2}$
The bias corrections for the 2nd moment are
$sCorrected_{dW}^l = \frac{s_{dW}^l}{1 - \beta_{2}^{t}}$
$sCorrected_{db}^l = \frac{s_{db}^l}{1 - \beta_{2}^{t}}$

The parameters are then updated with both corrected moments, where ‘t’ is the iteration count
$W^{l} = W^{l} - \frac{\alpha \, vCorrected_{dW}^l}{\sqrt{sCorrected_{dW}^l + \epsilon}}$
$b^{l} = b^{l} - \frac{\alpha \, vCorrected_{db}^l}{\sqrt{sCorrected_{db}^l + \epsilon}}$
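To see why the bias correction is needed, consider the very first iteration (t = 1): the moving average has only one term and badly underestimates the gradient's scale. A quick R illustration:

```
# At t = 1 the raw moment is (1 - beta1) * g, far smaller than g itself;
# dividing by (1 - beta1^t) restores the scale.
beta1 <- 0.9
g <- 2.0                          # an example gradient value
v <- (1 - beta1) * g              # raw first moment: 0.2
vCorrected <- v / (1 - beta1^1)   # corrected: 2.0
print(c(v, vCorrected))
```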
The comment header of the Adam update in R is shown below, followed by a sketch of the update it describes.

```# Perform Gradient Descent with Adam
# Input : Weights and biases
#       : beta1, beta2
#       : epsilon
#       : learning rate
#       : outputActivationFunc - Activation function at output layer (sigmoid/softmax)
# output : Updated weights after 1 iteration
```
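A minimal sketch of what such an update could look like in R, assuming per-layer lists of parameters, gradients, and running moments; `updateParametersAdam` is a hypothetical helper, not the actual function in DLfunctions7.R, and for simplicity it skips the transposed softmax output layer handled in the Octave snippet earlier:

```
# Hypothetical Adam update (illustrative only).
# weights/biases: per-layer parameter lists; vdW/vdB, sdW/sdB: running moments;
# gradsDW/gradsDB: current gradients; t: iteration count for bias correction.
updateParametersAdam <- function(weights, biases, vdW, vdB, sdW, sdB,
                                 gradsDW, gradsDB, t,
                                 learningRate = 0.01, beta1 = 0.9,
                                 beta2 = 0.999, epsilon = 1e-8) {
  for (l in seq_along(weights)) {
    # 1st moment (momentum) and 2nd moment (RMSProp) accumulators
    vdW[[l]] <- beta1 * vdW[[l]] + (1 - beta1) * gradsDW[[l]]
    vdB[[l]] <- beta1 * vdB[[l]] + (1 - beta1) * gradsDB[[l]]
    sdW[[l]] <- beta2 * sdW[[l]] + (1 - beta2) * gradsDW[[l]]^2
    sdB[[l]] <- beta2 * sdB[[l]] + (1 - beta2) * gradsDB[[l]]^2
    # bias-corrected moments
    vW <- vdW[[l]] / (1 - beta1^t); vB <- vdB[[l]] / (1 - beta1^t)
    sW <- sdW[[l]] / (1 - beta2^t); sB <- sdB[[l]] / (1 - beta2^t)
    # parameter update
    weights[[l]] <- weights[[l]] - learningRate * vW / sqrt(sW + epsilon)
    biases[[l]]  <- biases[[l]]  - learningRate * vB / sqrt(sB + epsilon)
  }
  list(weights = weights, biases = biases,
       vdW = vdW, vdB = vdB, sdW = sdW, sdB = sdB)
}
```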

## 5.1a. Stochastic Gradient Descent with Adam – Python

``````import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import sklearn.linear_model
import pandas as pd
import sklearn
import sklearn.datasets

# 'training' is the MNIST training set loaded earlier in this series
lbls=[]
pxls=[]
print(len(training))
#for i in range(len(training)):
for i in range(60000):
    l,p=training[i]
    lbls.append(l)
    pxls.append(p)
labels= np.array(lbls)
pixels=np.array(pxls)
y=labels.reshape(-1,1)
X=pixels.reshape(pixels.shape[0],-1)
X1=X.T
Y1=y.T

# Create a random permutation of 1024 indices
permutation = list(np.random.permutation(2**10))
# Subset 1024 samples from the data
X2 = X1[:, permutation]
Y2 = Y1[:, permutation].reshape((1,2**10))
layersDimensions=[784, 15,9,10]
# Perform SGD with Adam
parameters = L_Layer_DeepModel_SGD(X2, Y2, layersDimensions, hiddenActivationFunc='relu',
outputActivationFunc="softmax",learningRate = 0.01,
optimizer="adam", beta1=0.9, beta2=0.9, epsilon = 1e-8,
mini_batch_size =512, num_epochs = 1000, print_cost = True, figure="fig5.png")``````

## 5.1b. Stochastic Gradient Descent with Adam – R

``````source("mnist.R")
source("DLfunctions7.R")
# Load MNIST; mnist.R provides load_mnist(), which creates the train/test lists
load_mnist()
x <- t(train$x)
X <- x[, 1:60000]
y <- train$y
Y <- matrix(y[1:60000], nrow = 1)
# Subset 1024 random samples from MNIST
permutation <- c(sample(2^10))
# Randomly shuffle the training data
X1 <- X[, permutation]
Y1 <- Y[1, permutation]
layersDimensions <- c(784, 15, 9, 10)
# Perform SGD with Adam
retvalsSGD <- L_Layer_DeepModel_SGD(X1, Y1, layersDimensions,
hiddenActivationFunc='tanh',
outputActivationFunc="softmax",
learningRate = 0.005,
optimizer="adam",
beta1=0.7,
beta2=0.9,
epsilon=10^-8,
mini_batch_size = 512,
num_epochs = 5000,
print_cost = TRUE)``````
``````# Plot the cost vs iterations
iterations <- seq(0, 5000, 1000)
costs <- retvalsSGD$costs
df <- data.frame(iterations, costs)
ggplot(df, aes(x=iterations, y=costs)) + geom_point() + geom_line(color="blue") +
  ggtitle("Costs vs number of epochs") + xlab("No of epochs") + ylab("Cost")``````

## 5.1c. Stochastic Gradient Descent with Adam – Octave

```source("DL7functions.m")
#Create a random permutation of 1024
permutation = randperm(1024);
disp(length(permutation));

# Use this 1024 as the batch
X=trainX(permutation,:);
Y=trainY(permutation,:);
# Set layer dimensions
layersDimensions=[784, 15, 9, 10];

# Note the high value for epsilon.
#Otherwise GD with Adam does not seem to converge
[weights biases costs]=L_Layer_DeepModel_SGD(X', Y', layersDimensions,
hiddenActivationFunc='relu',
outputActivationFunc="softmax",
learningRate = 0.1,
lrDecay=false,
decayRate=1,
lambd=0,
keep_prob=1,
optimizer="adam",
beta=0.9,
beta1=0.9,
beta2=0.9,
epsilon=100,
mini_batch_size = 512,
num_epochs = 5000);
plotCostVsEpochs(5000,costs)
```

Conclusion: In this post I discuss and implement several Stochastic Gradient Descent optimization methods. These methods enhance my existing generic L-Layer Deep Learning Network implementation in vectorized Python, R and Octave, which I discussed in the previous post in this series on Deep Learning from first principles in Python, R and Octave. Check it out, if you haven’t already. As mentioned, the code for this post can be cloned/forked from Github at DeepLearning-Part7.

Watch this space! I’ll be back!

To see all posts, click Index of posts


Source:: R News

## Read Random Rows from A Huge CSV File

When working with large csv files, it is sometimes beneficial to sample and examine a handful of records before importing the whole file into an R data frame. To the best of my knowledge, there is no off-the-shelf R function performing such data sampling at a relatively low computing cost. Therefore, I drafted two utility functions serving this particular purpose, one with the LaF library and the other with the reticulate library, leveraging the power of Python. While the first function is more efficient and samples 3 records out of 336,776 in about 100 milliseconds, the second one is more for fun and a showcase of the reticulate package.

```
library(LaF)

# sample1: read n randomly chosen rows from a csv file via LaF
# (body reconstructed from the description above; illustrative use of
# LaF's detect_dm_csv/laf_open/read_lines)
sample1 <- function(file, n) {
  lf <- laf_open(detect_dm_csv(file, sep = ",", header = TRUE))
  return(read_lines(lf, sample(1:nrow(lf), n)))
}
```
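The reticulate-based counterpart did not survive intact here, so the sketch below is a speculative stand-in rather than the original function: it borrows Python's random.sample() via reticulate to pick row numbers and then subsets the lines in R (`sample2` is a hypothetical name).

```
library(reticulate)

# Hypothetical reticulate-based sampler: Python's random module draws the
# row numbers; base R reads just those lines plus the header.
py_random <- import("random")

sample2 <- function(file, n) {
  all_lines <- readLines(file)
  # sample n data rows, skipping the header on line 1
  idx <- unlist(py_random$sample(seq(2L, length(all_lines)), as.integer(n)))
  read.csv(text = c(all_lines[1], all_lines[idx]))
}
```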

Source:: R News

## Y is for Ys, Y-hats, and Residuals

(This article was first published on Deeply Trivial, and kindly contributed to R-bloggers)

Y is for Ys, Y-hats, and Residuals

When working with a prediction model, like a linear regression, there are a few Ys you need to concern yourself with: the ys (the observed values of the outcome variable), the y-hats (the outcome values predicted by the equation), and the residuals (y minus y-hat). Today, I’ll dig into the different flavors of y and how you might work with them when conducting linear regression.

As my data example, I’ll use my dissertation dataset and the linear model I introduced in the GLM post.

```
dissertation <- read.delim("dissertation_data.txt", header=TRUE)
guilt_lm_full <- lm(guilt ~ illev + circumst + deftest + policepower +
                      suspicious + overzealous + upstanding,
                    data=dissertation)
options(scipen = 999)
summary(guilt_lm_full)
```
```
## 
## Call:
## lm(formula = guilt ~ illev + circumst + deftest + policepower + 
##     suspicious + overzealous + upstanding, data = dissertation)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.0357 -0.7452  0.1828  0.9706  2.5013 
## 
## Coefficients:
##             Estimate Std. Error t value             Pr(>|t|)    
## (Intercept)  4.16081    0.38966  10.678 <0.0000000000000002 ***
## illev        0.11111    0.05816   1.911              0.05689 .  
## circumst    -0.08779    0.06708  -1.309              0.19147    
## deftest     -0.02020    0.05834  -0.346              0.72942    
## policepower  0.02828    0.06058   0.467              0.64090    
## suspicious   0.17286    0.06072   2.847              0.00468 ** 
## overzealous -0.03298    0.04792  -0.688              0.49176    
## upstanding   0.08941    0.05374   1.664              0.09706 .  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.169 on 347 degrees of freedom
##   (1 observation deleted due to missingness)
## Multiple R-squared:  0.07647, Adjusted R-squared:  0.05784 
## F-statistic: 4.105 on 7 and 347 DF,  p-value: 0.0002387
```

In this model, the outcome variable is a guilt rating, ranging from 1 to 7. This is our y, which our regression model is trying to recreate through the linear relationship between our x’s and our y. Using the coefficients in the output, we could compute the predicted y (y-hat) – what a person’s score would be if the linear model fit the data perfectly. Fortunately, R has a built-in function that will compute y-hat for a dataset: predict. This function requires two arguments: a regression model and the dataset to use to predict values. Let’s have R predict values for the dissertation dataset, and add them on as a new variable.

`dissertation$predicted <- predict(guilt_lm_full, dissertation)`

In this application, we don’t care as much about the predicted values – we will later in this post – but we probably do care about the residuals: the difference between the observed value and the predicted value. This gives us an idea of how our model is doing and whether it fits reasonably well. It can also tell us if the model falls apart at certain values or ranges of values.

In the residuals post, I showed that you can easily request residuals from the model. As we did with predicted, let’s create a new variable in the dataset that contains our residuals.

`dissertation$residual <- resid(guilt_lm_full)`
```
## Error in `$<-.data.frame`(`*tmp*`, residual, value = ...) : 
##   replacement has 355 rows, data has 356
```

Ruh-roh, we got an error. Our dataset contains 356 observations, but we only have 355 residuals. This is because someone has a missing value on one of the variables in the regression model and was dropped from the analysis. There are a variety of ways we could find out which case is missing a value, but since I’m only working with a handful of variables, I’ll just run descriptives and look for the variable with only 355 values.
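One quick alternative, as an aside, is base R's complete.cases(), which flags the offending row directly:

```
# Which rows have a missing value on any of the model variables?
which(!complete.cases(dissertation[c(13,15,18,21,27,29,31,44)]))
```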

`library(psych)`
`## Warning: package 'psych' was built under R version 3.4.4`
`describe(dissertation[c(13,15,18,21,27,29,31,44)])`
```
##             vars   n mean   sd median trimmed  mad min max range  skew
## illev          1 356 2.98 1.13      3    3.02 1.48   1   5     4 -0.17
## circumst       2 356 2.99 0.95      3    2.97 1.48   1   5     4  0.13
## deftest        3 356 3.39 1.46      4    3.57 0.00  -9   5    14 -5.25
## policepower    4 355 3.86 1.41      4    4.02 0.00  -9   5    14 -6.40
## suspicious     5 356 2.09 1.14      2    2.01 1.48  -9   5    14 -1.97
## overzealous    6 356 3.34 1.34      4    3.41 1.48  -9   5    14 -4.49
## upstanding     7 356 3.09 1.29      3    3.11 1.48  -9   5    14 -2.31
## guilt          8 356 4.80 1.21      5    4.90 1.48   2   7     5 -0.59
##             kurtosis   se
## illev          -1.04 0.06
## circumst       -0.51 0.05
## deftest        40.74 0.08
## policepower    55.05 0.08
## suspicious     23.52 0.06
## overzealous    38.44 0.07
## upstanding     19.66 0.07
## guilt          -0.54 0.06
```

The variable policepower is the culprit. I can drop that missing value then rerun the residual code.

```
dissertation <- subset(dissertation, !is.na(policepower))
dissertation$residual <- resid(guilt_lm_full)
```

Now I can plot my observed values and residuals.

`library(ggplot2)`
```
## 
## Attaching package: 'ggplot2'
## 
## The following objects are masked from 'package:psych':
## 
##     %+%, alpha
```
`qplot(guilt,residual,data=dissertation)`

We want our residuals to fall around 0, which is only happening for guilt ratings near the midpoint of the scale. This suggests that ordinary least squares regression may not be the best fit for the data, as the model shows a tendency to overpredict (negative residual) guilt rating for people with lower observed ratings and underpredict (positive residual) for people with higher observed ratings.

But, as I often do on this blog for the sake of demonstration, let’s pretend the model is doing well. One way we could use a regression model is to predict scores in a new sample. For instance, there are multiple rumors that different graduate schools have prediction equations they use to predict a candidate’s anticipated graduate school GPA, based on a combination of factors asked about in the application packet, to determine if a person is grad school-ready (and ultimately, to decide if they should be admitted). Schools generally won’t confirm they do this, nor would they ever share the prediction equation, should such an equation exist. But this is one way regression results from a training sample could be used to make decisions on a testing sample. So let’s do that.

Unfortunately, I don’t have a second dissertation dataset lying around that I could apply this equation to, but I can take a note from the data science playbook and randomly divide my sample into training and testing datasets. I use the training dataset to generate my equation, and I use the testing dataset to apply my equation and predict values. Since I have outcome variable data in the testing dataset too, I can see how well my model did. Once I have a well-performing model, I could then apply it to new data, maybe to predict how highly you’ll rate a book or movie, or to generate recommendations, or even to determine if I should let you into the super-elite Monstersori school I want to create.

First, I’ll split my dataset in half.

```
smp_size <- floor(0.50 * nrow(dissertation))
set.seed(42)
train_ind <- sample(seq_len(nrow(dissertation)), size = smp_size)
train <- dissertation[train_ind, ]
test <- dissertation[-train_ind, ]
```

Now I have a train dataset, with 177 observations, and a test dataset with 178. I can rerun the same linear model as before, this time only using the training data.

```
guilt_lm_train <- lm(guilt ~ illev + circumst + deftest + policepower +
                       suspicious + overzealous + upstanding,
                     data=train)
summary(guilt_lm_train)
```

```
## 
## Call:
## lm(formula = guilt ~ illev + circumst + deftest + policepower + 
##     suspicious + overzealous + upstanding, data = train)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.9420 -0.8359  0.1641  0.9371  2.3151 
## 
## Coefficients:
##             Estimate Std. Error t value       Pr(>|t|)    
## (Intercept)  5.28874    0.77150   6.855 0.000000000128 ***
## illev        0.08866    0.08485   1.045        0.29759    
## circumst    -0.13018    0.09917  -1.313        0.19109    
## deftest     -0.25726    0.10699  -2.405        0.01727 *  
## policepower  0.01758    0.12316   0.143        0.88665    
## suspicious   0.25716    0.08857   2.903        0.00419 ** 
## overzealous -0.11683    0.08240  -1.418        0.15807    
## upstanding   0.10371    0.07574   1.369        0.17273    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.194 on 169 degrees of freedom
## Multiple R-squared:  0.1265, Adjusted R-squared:  0.09027 
## F-statistic: 3.495 on 7 and 169 DF,  p-value: 0.001586
```

I can use my predict function to predict scores for the testing dataset. Remember, all this function needs is the linear model name and a dataset to use for the prediction function – and it can be any dataset, as long as it contains the same variables from the model.

`test$predicted2 <- predict(guilt_lm_train, test)`

The original predicted value (from when I was working with the full dataset) is still in this set. I could have replaced values by using the same variable name, but just for fun, decided to keep those values and create a second prediction variable.

Because we have both observed values (guilt) and predicted values (predicted2) for our testing dataset, let’s see how well our model did by creating a new residual variable, residual2. We can’t use the resid function, because we didn’t have R perform a linear regression on the testing dataset, but we can easily generate this variable by subtracting the predicted score from the observed score. Then we can once again plot our observed and residual values.

```
test$residual2 <- test$guilt - test$predicted2
qplot(guilt, residual2, data=test)
```

We’re still seeing similar issues with the residuals as we did for the full dataset. If we wanted to actually apply our linear model, we’d want to do more research and pilot work to get the best equation we can. As with many things in statistics, the process is heavily determined by what you plan to do with the results. If we want to report variables that have a strong linear relationship with an outcome, we might be fine with these regression results. If we want to build an equation that predicts an outcome with a minimum of error, we’d want to do more exploratory work, pulling in new variables and dropping low-performing ones. Many of the predictors in the model were not significant, so we could start model-building from those results. We may need to build multiple training datasets, to ensure we aren’t only picking up chance relationships (see the sketch below). And for much larger applications, such as recommendation systems on services like Amazon and Netflix, machine learning may be a better, more powerful method.
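To make the multiple-training-datasets idea concrete, here is a bare-bones sketch of repeated random splits, illustrative only, reusing the model formula from above:

```
# Refit the model on ten random halves and track out-of-sample RMSE;
# stable RMSE across splits suggests we are not capitalizing on chance.
rmse <- sapply(1:10, function(i) {
  set.seed(i)
  idx  <- sample(seq_len(nrow(dissertation)), size = floor(0.5 * nrow(dissertation)))
  fit  <- lm(guilt ~ illev + circumst + deftest + policepower +
               suspicious + overzealous + upstanding, data = dissertation[idx, ])
  pred <- predict(fit, dissertation[-idx, ])
  sqrt(mean((dissertation$guilt[-idx] - pred)^2, na.rm = TRUE))
})
summary(rmse)
```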

One more A to Z post left!