Create your Machine Learning library from scratch with R! (2/5) – PCA

By Antoine Guillot


(This article was first published on Enhance Data Science, and kindly contributed to R-bloggers)

This is the second post of the “Create your Machine Learning library from scratch with R!” series. Today, we will see how to implement principal component analysis (PCA) using only the linear algebra available in R. Previously, we implemented linear regression and logistic regression from scratch, and next time we will deal with k-nearest neighbors (KNN).

Principal components analysis

PCA is a dimensionality reduction method that seeks the directions which explain most of the variance in the dataset. From a mathematical standpoint, PCA is just a change of coordinates that represents the points in a more appropriate basis. Keeping only a few of these coordinates is enough to explain an important part of the variance in the dataset.

The mathematics of PCA

Let \(\mathbf{x}_1, \ldots, \mathbf{x}_n\) be the observations of our dataset; the points are in \(\mathbb{R}^p\). We assume that the variables are centered and of unit variance. We denote \(\mathbf{X}=(\mathbf{x}_1, \ldots, \mathbf{x}_n)^T\) the matrix of observations.
Then, \(\mathbf{X}^T \mathbf{X}\) can be diagonalized and has real, non-negative eigenvalues (it is a symmetric positive semi-definite matrix).
We denote \(\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_p\) its ordered eigenvalues and \(u_1, \ldots, u_p\) the associated eigenvectors. It can be shown that \(\frac{\sum_{1 \leq i \leq k} \lambda_i}{\sum_{1 \leq i \leq p} \lambda_i}\) is the cumulative share of the variance explained by \(u_1, \ldots, u_k\).
It can also be shown that \(u_1, \ldots, u_k\) is the orthonormal basis of size \(k\) which explains the most variance.

This is exactly what we wanted! We have a smaller basis which explains as much variance as possible!
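Before writing the full implementation, this claim can be checked numerically: the cumulative eigenvalue ratios of \(\mathbf{X}^T \mathbf{X}\) should match the explained-variance ratios reported by base R's prcomp. A quick sketch, using the iris measurements as an example:

```r
# Cumulative variance from the eigenvalues of t(X) %*% X
X <- scale(as.matrix(iris[, 1:4]))     # centered, unit-variance data
ev <- eigen(crossprod(X))$values       # lambda_1 >= ... >= lambda_p
cumsum(ev) / sum(ev)                   # cumulative explained variance

# Same ratios from base R's prcomp (sdev^2 are the eigenvalues / (n-1))
pr <- prcomp(X)
cumsum(pr$sdev^2) / sum(pr$sdev^2)
```

The two vectors of ratios agree because prcomp's squared standard deviations are exactly these eigenvalues divided by \(n-1\), and the constant cancels in the ratio.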

PCA in R

The implementation in R has three steps:

  1. We center the data and divide each variable by its standard deviation. Our data now comply with the PCA assumptions.
  2. We diagonalize \(\mathbf{X}^T \mathbf{X}\) and store the eigenvectors and eigenvalues.
  3. The cumulative variance is computed and the number of eigenvectors \(k\) required to reach the variance threshold is stored. We keep only the first \(k\) eigenvectors.
###PCA
my_pca<-function(x,variance_explained=0.9,center=T,scale=T)
{
  my_pca=list()
  ##Compute the mean of each variable
  if (center)
  {
    my_pca[['center']]=colMeans(x)
  }
  ## Otherwise, we set the mean to 0 
  else
    my_pca[['center']]=rep(0,dim(x)[2])
  ####Compute the standard dev of each variable
  if (scale)
  {
    my_pca[['std']]=apply(x,2,sd)
  }
  ## Otherwise, we set the sd to 1 
  else
    my_pca[['std']]=rep(1,dim(x)[2])

  ##Normalization
  ##Centering
  x_std=sweep(x,2,my_pca[['center']])
  ##Standardization (sweep preserves column names, unlike diag())
  x_std=sweep(x_std,2,my_pca[['std']],'/')
  
  ##Eigendecomposition of t(X) %*% X (proportional to the covariance matrix)
  eigen_cov=eigen(crossprod(x_std))
  ##Computing the cumulative variance
  my_pca[['cumulative_variance']] =cumsum(eigen_cov[['values']])
  ##Number of required components
  my_pca[['n_components']] =sum((my_pca[['cumulative_variance']]/sum(eigen_cov[['values']]))<variance_explained)+1
  ##Selection of the principal components
  my_pca[['transform']] =eigen_cov[['vectors']][,1:my_pca[['n_components']]]
  attr(my_pca, "class") <- "my_pca"
  return(my_pca)
}

Now that we have the transformation matrix, we can perform the projection on the new basis.

predict.my_pca<-function(pca,x,...)
{
  ##Centering
  x_std=sweep(x,2,pca[['center']])
  ##Standardization
  x_std=sweep(x_std,2,pca[['std']],'/')
  return(x_std%*%pca[['transform']])
}

The function applies the change of basis formula and a projection on the k principal components.
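This same change of basis is what base R's prcomp performs: its predict method centers and scales the new data, then multiplies by the rotation matrix. A minimal sketch for comparison:

```r
# Change of basis with base R: center, scale, then project on the
# first two principal axes (prcomp's rotation plays the role of our
# 'transform' matrix).
X <- as.matrix(iris[, 1:4])
pr <- prcomp(X, center = TRUE, scale. = TRUE)
proj <- scale(X, center = pr$center, scale = pr$scale) %*% pr$rotation[, 1:2]

# predict.prcomp applies exactly the same formula
all.equal(unname(proj), unname(predict(pr, X)[, 1:2]))   # TRUE
```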

Plot the PCA projection

Using the predict function, we can now plot the projection of the observations on the two main components. As in part 1, we use the Iris dataset.

library(ggplot2)
pca1=my_pca(as.matrix(iris[,1:4]),1,scale=TRUE,center = TRUE)
projected=predict(pca1,as.matrix(iris[,1:4]))
ggplot()+geom_point(aes(x=projected[,1],y=projected[,2],color=iris[,5]))
Projection of the Iris dataset on the two main principal components

Comparison with the FactoMineR implementation

We can now compare our implementation with the standard FactoMineR implementation of Principal Component Analysis.

library(FactoMineR)
pca_stats= PCA(as.matrix(iris[,1:4]))
projected_stats=predict(pca_stats,as.matrix(iris[,1:4]))$coord[,1:2]
ggplot(data=iris)+geom_point(aes(x=projected_stats[,1],y=-projected_stats[,2],color=Species))+xlab('PC1')+ylab('PC2')+ggtitle('Iris dataset projected on the two main PCs (FactoMineR)')

When running this, you should get a plot very similar to the previous one. This ensures the sanity of our implementation.
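Note that eigenvectors are only defined up to sign (hence the -projected_stats[,2] flip in the plotting code above), so a more robust sanity check is to compare absolute correlations between the two projections. A sketch using base R's prcomp as the reference, since it ships with every R installation:

```r
# Sign-agnostic comparison: the per-component |correlation| between
# our eigen-based projection and prcomp's scores should be ~1.
X <- scale(as.matrix(iris[, 1:4]))
our_proj <- X %*% eigen(crossprod(X))$vectors[, 1:2]
ref_proj <- prcomp(X)$x[, 1:2]
abs(diag(cor(our_proj, ref_proj)))   # both components ~ 1
```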

Projection of the Iris dataset using the FactoMineR implementation

Thanks for reading! To find more posts on Machine Learning, Python and R, you can follow us on Facebook or Twitter.

