--- title: "Decision Trees, Random Forests, and Nearest-Neighbor classifiers" author: "Shane T. Mueller shanem@mtu.edu" date: "`r Sys.Date()`" output: rmdformats::readthedown: gallery: yes highlight: kate self_contained: no pdf_document: default html_document: df_print: paged word_document: reference_docx: ../template.docx always_allow_html: yes --- ```{r knitr_init, echo=FALSE, cache=FALSE} library(knitr) library(rmdformats) ## Global options options(max.print="75") opts_chunk$set(echo=TRUE, cache=TRUE, prompt=FALSE, tidy=TRUE, comment=NA, message=FALSE, warning=FALSE) opts_knit$set(width=75) ``` ```{r setup, include=FALSE} knitr::opts_chunk$set(echo = TRUE) ``` # Decision Trees, Forests, and Nearest-Neighbors classifiers The classic statistical decision theory on which LDA and QDA and logistic regression are highly model-based. We assume the features are fit by some model, we fit that model, and use inferences from that model to make a decision. Using the model means we make assumptions, and if those assumptions are correct, we can have a lot of success. Not all classifiers make such strong assumptions, and three of these will be covered in this section: Decision trees, random forests, and K-Nearest Neighbor classifiers. # Decision trees Decision trees assume that the different predictors are independent and combine together to form a an overall likelihood of one class over another. However, this may not be true. Many times, we might want to make a classification rule based on a few simple measures. The notion is that you may have several measures, and by asking a few decisions about individual dimensions, end up with a classification decision. For example, such trees are frequently used in medical contexts. If you want to know if you are at risk for some disease, the decision process might be a series of yes/no questions, at the end of which a 'high-risk' or 'low-risk' label would be given: 1. Decision 1: Are you above 35? + Yes: Use Decision 2 + No: Use Decision 3 2. Decision 2: Do you have testing scores above 1000? + Yes: High risk + No: Low risk 3. Decision 3: Do you a family history? + Yes: etc. + No: etc. Notice that if we were trying to determine a class via LDA, we'd create a single composite score based on all the questions and make a decision at the end. But if we make a decision about individual variables consecutively, it allows us to incorporate interactions between variables, which can be very powerful. Tree-based decision tools can be useful in many cases: * When we want simple decision rules that can be applied by novices or people under time stress. * When the structure of our classes are dependent or nested, or somehow hierarchical. Many natural categories have a hierarchical structure, so that the way you split one variable may depend on the value of another. For example, if you want to know if someone is 'tall', you first might determine their gender, and then use a different cutoff for each gender. * When many of your observables are binary states. To classify lawmakers we can look at their voting patterns--which are always binary. We may be able to identify just a couple issues we care about that will tell us everything we need to know. * When you have multiple categories. There is no need to restrict the classes to binary, unlike for the previous methods we examined. * When you expect some classifications will require complex interactions between feature values. 
To make a decision tree, we essentially have to use heuristic processes to determine the best order and cutoffs to use in making decisions, while identifying our error rates. Even with a single feature, we can make a decision tree that correctly classifies all the elements in a set (as long as all feature values are unique). So we also need to understand how many branches/rules to use, in order to minimize over-fitting.

There are a number of software packages available for classification trees. One commonly-used package in R is called ```rpart```. In fact, rpart implements a more general concept called 'Classification and Regression Trees' (CART). So, instead of giving rpart a categorical output, you can give it a continuous output and specify the "anova" method, and each bottom leaf computes the average value of the items in it, producing a sort of tree-based regression/ANOVA model.

Let's start with a simple tree made to classify elements on which we have only one measure:

```{r}
library(rpart)
library(rattle)      # for graphing
library(rpart.plot)  # also for graphing
library(DAAG)

classes <- sample(c("A","B"),100,replace=T)
predictor <- rnorm(100)

r1 <- rpart(classes~predictor,method="class")
plot(r1)
text(r1)
prp(r1)
fancyRpartPlot(r1)   # better visualization
confusion(classes,predict(r1,type="class"))
plot(predictor,col=factor(classes))
```

Notice that even though the predictor had no true relationship to the categories, we were able to get 71% accuracy. We could in fact do better, just by adding rules.

Various aspects of the tree algorithm are controlled via the control argument, using ```rpart.control```. Here, we set 'minsplit' to 1, which says we can split a group as long as it has at least 1 item. 'minbucket' specifies the smallest number of observations that can appear in a bottom (leaf) node, and cp is a complexity parameter. It defaults to .01, and is a criterion for how much better each model must be before splitting a leaf node. A negative value means that any additional split counts as an improvement, and so it will accept any split.

```{r}
r2 <- rpart(classes~predictor,method="class",
            control=rpart.control(minsplit=1,minbucket=1, cp=-1))
prp(r2)
# this tree is a bit too large for this graphics method:
# fancyRpartPlot(r2)
confusion(classes,predict(r2,type="class"))
```

Now, we have completely predicted every item by successively dividing the line. Notice that by default, rpart must choose the best number of nodes to use; the default control parameters are often reasonable, but there is no guarantee they are the right ones for your data. This is a critical decision for a decision tree, as it impacts the complexity of the model, and how much it overfits the data.

How will this work for a real data set? Let's re-load the engineering data set. In this data, we asked both Engineering and Psychology students to determine whether pairs of words from Psychology and Engineering go together, and measured their time and accuracy.

```{r}
joint <- read.csv("eng-joint.csv")
joint$eng <- as.factor(c("psych","eng")[joint$eng+1])

## This is the partitioning tree:
r1 <- rpart(eng~.,data=joint,method="class")
r1
prp(r1,cex=.75)
# text(r1,use.n=TRUE)
fancyRpartPlot(r1)
confusion(joint$eng,predict(r1,type="class"))
```

With the default full partitioning, we get 73% accuracy. But the decision tree is fairly complicated. For example, notice that nodes 2 and 4 consecutively select ppr.1 twice.
First, if ppr.1 is faster than 2900 it then checks whether it is slower than 2656, and makes different decisions based on these narrow ranges of response time. This seems unlikely to be a real or meaningful decision rule, and we might want something simpler. Let's only allow it to go to a depth of 2, by controlling maxdepth:

```{r,fig.width=8,fig.height=6}
library(rattle)
r2 <- rpart(eng~.,data=joint,method="class",control=rpart.control(maxdepth=2))
r2
fancyRpartPlot(r2)
# text(r2,use.n=T)
confusion(joint$eng,predict(r2,type="class"))
```

Here, we are down to 65% 'correct' classifications.

## Looking deeper

If we look at the summary of the tree object, it gives us a lot of details about goodness of fit and decision points.

```{r}
summary(r2)
```

This model seems to fit a lot better than our earlier LDA models, which suggests that it is probably overfitting. Cross-validation can be requested via the control parameter ```xval```:

```{r}
r3 <- rpart(eng~.,data=joint,method="class",control=c(maxdepth=3,xval=10))
r3
confusion(joint$eng,predict(r1,type="class"))
confusion(joint$eng,predict(r3,type="class"))
```

Accuracy goes down a bit, but the 74% accuracy is about what we achieved in the simple LDA models. Clearly, for partitioning trees, we have to be careful about overfitting, because we can always easily get perfect classification.

## Regression Trees

Instead of using a tree to divide and predict group memberships, rpart can also use a tree as a sort of regression. It tries to model all of the data within a group as a single intercept value, and then tries to divide groups to improve fit. There are some pdf help files available for more detail, but the regression options (including poisson and anova) are a bit poorly documented. But basically, we can use the same ideas to partition the data, and then fit either a single value within each group or some small linear model. In the tree below, the top value shown in each node is the mean of the outcome for that branch.

```{r}
cw <- rpart(weight~Time+ Diet, data=ChickWeight, control=rpart.control(maxdepth=5))
summary(cw)
fancyRpartPlot(cw)

## this shows the yvalue estimate for each leaf node:
cw$frame
```

```{r}
library(ggplot2)
ChickWeight$predicted <- predict(cw)
ggplot(ChickWeight,aes(x=Time,y=predicted,group=Diet,color=Diet,size=Diet)) +
  geom_point(shape=1) + theme_minimal()
```

# Random Forests

One advantage of partitioning/decision trees is that they are fast and easy to make. They are also considered interpretable and easy to understand. A tree like this can be used by a doctor or a medical counselor to help understand the risk for a disease, by asking a few simple questions. The downside of decision trees is that they are often not very good, especially once they have been trimmed to avoid over-fitting.

Recently, researchers have been interested in combining the results of many small (and often not-very-good) classifiers to make one better one. This is often described as 'boosting' or 'ensemble' methods, and there are a number of ways to achieve this. Doing this in a particular way with decision trees is referred to as a 'random forest' (see Breiman and Cutler). Random forests can be used for both regression and classification (trees can be used in either way as well), and the classification and regression trees (CART) approach is a method that supports both.

A random forest works as follows:

* Build $N$ trees (where $N$ may be hundreds; Breiman says 'Don't be stingy'), where each tree is built from a random subset of features/variables.
That is, on each step, choose the best variable to divide the branch based on a random subset of variables. For each tree:
    1. Pick a random sample of data. This is by default 'bootstrapped', meaning it has the same size as your data but samples it with replacement.
    2. Pick $k$, the number of variables considered at each split (typically the square root of the number of variables).
    3. Select $k$ variables at random.
    4. Find the best split among those $k$ variables.
    5. Repeat from step 3 until you reach the stopping criteria (determined by maxnodes and nodesize).
* Then, to classify your data, have each tree determine its best guess, and then take the most frequent outcome (or give a probabilistic answer based on the balance of evidence).

This can both provide a robust classifier and give a general sense of variable importance. You can look at the trees that are more accurate and see which variables were used, and which were used earlier, and this can give an indication of the most robust classification. This is in contrast to a normal CART, where the first cut may end up not being as important as later cuts.

The randomForest package in R supports building these models. Because these can be a bit more difficult to comprehend, there is a companion package randomForestExplainer that is handy for digging into the kinds of forests that are derived.

```{r}
library(randomForest)
library(randomForestExplainer)

rf <- randomForest(x=joint[,-1],y=joint$eng, proximity=T, ntree=5000)
rf
# printRandomForests(rf)  ## look at all the RFs
confusion(joint$eng,predict(rf))
```

If you run this repeatedly, you get a different answer each time. For this data, quite often the accuracy is below chance. The confusion matrix produced is "OOB": out-of-bag, which is like leave-one-out cross-validation.

Looking at the trees, they are quite complex as well, with tens of rules. We can get a printout of individual subtrees with getTree:

```{r}
(getTree(rf,1))
(getTree(rf,5))
```

There are many arguments that control the sampling/evaluation process. The feasibility of changing these will often depend on the size of the data set. With a large data set and many variables, fitting one of these models may take a long time if they are not constrained. Here we look at:

* sampsize: the size of the bootstrapped sample drawn for each tree.
* replace: whether bootstrapped cases are sampled with or without replacement.
* mtry: how many variables to try at each split.
* ntree: how many trees to grow.
* keep.inbag: keep track of which samples are 'in the bag'.

```{r}
rf <- randomForest(eng~.,data=joint,proximity=T,replace=F,mtry=3,maxnodes=5,
                   ntree=500,keep.inbag=T,sampsize=10,localImp=TRUE)
confusion(joint$eng,predict(rf))
```

The forest lets us understand which variables are more important, by identifying how often variables appear closer to the root of the tree. Depending on the run, these figures change, but a variable that is frequently selected as the root, or one whose mean depth is low, is likely to be more important.

```{r}
plot_min_depth_distribution(rf)
plot_multi_way_importance(rf)
```

The next plot shows the prediction of the forest for different values of a pair of variables. If the variables are useful, we will tend to see blocks of red/purple indicating the prediction in different regions:

```{r}
plot_predict_interaction(rf,joint,"eer","ppr")
```

Note that the random forest rarely produces a good classification for the engineer/psychologist data. How does it do for the iPhone data?
```{r}
phone <- read.csv("data_study1.csv")
phone.dt <- rpart(Smartphone~.,data=phone)
prp(phone.dt,cex=.5)
confusion(phone$Smartphone,predict(phone.dt)[,1]<.5)

## this doesn't like the predicted value to be a factor
# phone$Smartphone <- phone$Smartphone=="iPhone"

## newer versions require this to be a factor:
phone$Smartphone <- as.factor(phone$Smartphone)
```

```{r}
phone.rf <- randomForest(Smartphone ~ ., data=phone)
phone.rf
confusion(phone$Smartphone,predict(phone.rf))
```

This does not seem to be better than the other models for the iPhone data, but at least it is comparable. It does not seem to do as well as rpart, though! Note that the ```ranger``` library's random forest gives roughly equivalent results.

```{r}
library(ranger)
r2 <- ranger(Smartphone ~ ., data=phone)
r2
treeInfo(r2)
confusion(phone$Smartphone,predictions(r2))
```

For the examples we looked at, random forests did not perform that well. However, for large complex classifications they can both be more sensitive and provide more convincing explanations than single trees, because they can give probabilistic and importance weightings to each variable. However, they do lose some of the simplicity, and as we saw, they don't always improve performance of the classifier.

# K Nearest-Neighbor classifiers

Another family of non-model-based classifiers is the nearest-neighbor methods. These methods require you to develop a similarity or distance space--usually on the basis of a set of predictors or features. Once this distance space is defined, we determine the class by finding the nearest neighbor in that space, and using that label. Because this could be susceptible to noisy data, we usually find a set of $k$ neighbors and have them vote on the outcome. These are K-nearest neighbor classifiers, or KNN.

## Choosing K

The choice of $k$ can have a big impact on the results. If $k$ is too small, classification will be highly impacted by the local neighborhood; if it is too big (in the extreme, if $k = N$), the classifier will always just respond with the most likely response in the entire population.

## Distance Metric

The distance metric used in KNN can easily be chosen poorly. For example, if you want to decide on political party based on age, income level, and gender, you need to figure out how to weigh and combine these. By default, you might add differences along each dimension, which would combine something that differs on the scale of dozens with something that differs on the scale of thousands, and so the KNN model would essentially ignore gender and age.

## Normalization selection

A typical approach would be to normalize all variables first. Then each variable has equal weight and range.

## Using KNN

Unlike previous methods, where we train a system based on data and then make classifications, for KNN the trained system is just the data.

## Libraries and functions

There are several options for KNN classifiers.

### Library: class

* Function: knn
* Function: knn.cv (leave-one-out cross-validation)

### Library: kknn

* Function: kknn. This provides a 'weighted k-nearest neighbor classifier'. Here, the weighting is a weighting kernel, which gives greater weights to the nearer neighbors.

### Library: klaR

* Function: sknn. A 'simple' knn function. Permits using a weighting kernel as well.
* Function: nm. This provides a 'nearest mean' classifier.

## Example

In general, we need to normalize the variables first when using KNN, because it tries to create a distance metric between cases--we don't want distance to be dominated by a variable that happens to have a larger scale.

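A common choice is min-max normalization, which rescales each variable to the $[0,1]$ range:

$$x' = \frac{x - \min(x)}{\max(x) - \min(x)}$$

This is exactly what the ```normal()``` function defined in the next chunk computes; the alternative shown there, ```scale()```, instead standardizes each variable to a mean of 0 and a standard deviation of 1.
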
```{r,fig.width=7,fig.height=8}
phone <- read.csv("data_study1.csv")

# Normalizing the values by creating a new function
normal = function(x){
  xn = (x - min(x))/(max(x)-min(x))
  return(xn)
}

phone2 <- phone
phone2$Gender <- as.numeric(as.factor(phone2$Gender))
for(i in 2:ncol(phone2)){
  phone2[,i] = normal(as.numeric(phone2[,i]))
}

# we might also use scale:
phone3 <- scale(phone[,-(1:2)])

# checking the range of values after normalization
par(mfrow=c(3,1))
boxplot(phone[,3:12])
boxplot(phone2[,3:12])
boxplot(phone3)
```

## Fitting a simple KNN model

Here, we use the class::knn model. We specify the training and testing sets separately, but they can be the same. 'Training' a KNN is a bit deceptive; the training pool is just the set of points used to classify the test cases. We will use a k of 11, which means each case will be compared to the 11 closest cases and a decision made based on the vote of those. Using an odd number means that there won't be a tie.

```{r}
library(class)
m1 = knn(train = phone2[,-1], test = phone2[,-1], cl = phone2$Smartphone, k = 11)
confusion(phone2$Smartphone,m1)
```

That is pretty good, but what if we expand k?

```{r}
m1 = knn(train = phone2[,-1], test = phone2[,-1], cl = phone2$Smartphone, k = 25)
confusion(phone2$Smartphone,m1)
```

It looks like the choice of k is not going to matter much. What we could do is play with cross-validation, though--right now we are testing on the same set we are 'training' with, which will boost our accuracy (since the correct answer will always be one of the elements); a sketch using ```knn.cv``` appears at the end of this document. We can also try to do variable selection to change the similarity space to something that works better.

```{r}
phone2 <- phone2[order(runif(nrow(phone2))),]  # random sort
phone.train = phone2[1:450, 2:12]
phone.test = phone2[451:nrow(phone2), 2:12]
phone.train.target = phone2[1:450, 1]
phone.test.target = phone2[451:nrow(phone2), 1]
train <- sample(1:nrow(phone2),300)

m2 <- knn(train = phone.train, test = phone.test, cl = phone.train.target, k = 10)
confusion(phone.test.target,m2)
```

# Additional resources

* K-Nearest Neighbor Classification:
    - MASS: Chapter 12.3, p. 341
    - https://rstudio-pubs-static.s3.amazonaws.com/123438_3b9052ed40ec4cd2854b72d1aa154df9.html
    - Relevant library/functions in R: class (knn, knn.cv)
* Decision Trees:
    - Faraway, Chapter 13
    - Library: rpart (function rpart)
* Random Forest Classification:
    - https://www.r-bloggers.com/predicting-wine-quality-using-random-forests/
    - https://www.stat.berkeley.edu/~breiman/RandomForests/
    - Library: randomForest (function randomForest)
    - ranger (a fast random forest implementation)

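Following up on the cross-validation point raised in the KNN example above, here is a minimal sketch of tuning $k$ with leave-one-out cross-validation using ```knn.cv``` from the ```class``` library. It assumes the normalized ```phone2``` data frame constructed earlier (first column ```Smartphone```, remaining columns normalized predictors); the range of $k$ values is arbitrary and purely illustrative.

```{r}
library(class)

## Leave-one-out cross-validated accuracy for several (illustrative) values of k.
## Assumes phone2 from the normalization chunk above (column 1 = Smartphone).
ks <- seq(1, 25, by = 2)
acc <- sapply(ks, function(k) {
  pred <- knn.cv(train = phone2[, -1], cl = phone2$Smartphone, k = k)
  mean(pred == phone2$Smartphone)
})
plot(ks, acc, type = "b", xlab = "k (number of neighbors)",
     ylab = "Leave-one-out accuracy")
```

Because each case is classified by its neighbors excluding itself, this avoids the optimistic bias of testing on the same cases used as the training pool.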