---
title: 'Classification: LDA and QDA Approaches'
author: "Shane T. Mueller shanem@mtu.edu"
date: "`r Sys.Date()`"
output:
  pdf_document: default
  rmdformats::readthedown:
    gallery: yes
    highlight: kate
    self_contained: no
  html_document:
    df_print: paged
  word_document:
    reference_docx: ../template.docx
always_allow_html: yes
---

```{r knitr_init, echo=FALSE, cache=FALSE}
library(knitr)
library(rmdformats)

## Global options
options(max.print="75")
opts_chunk$set(echo=TRUE,
               cache=TRUE,
               prompt=FALSE,
               tidy=TRUE,
               comment=NA,
               message=FALSE,
               warning=FALSE)
opts_knit$set(width=75)
```

# Classification and Categorization

The regression approaches we have used so far have typically had the goal of modeling how a dependent variable (usually continuous, but binary in the case of logistic regression, or with multiple levels for multinomial regression) is predicted by a set of independent or predictor variables. The goal of building a regression model is often prediction and hypothesis testing. Generally, we are seeking to create an understandable model of the process that we can interpret and make inferences from.

Many times, we might care less about making a numerical prediction and more about classifying observations into categories so that we can take action, based on the same measurable properties. This is very similar to the goal of logistic regression, but in logistic regression we are modeling the probability of a binary category, whereas in classification we just care about which category is most likely, and may not really care how probable that category is. Furthermore, logistic regression models the conditional probability of a category given the predictor variables, so it does not really matter whether your two categories occur equally often. But if you know that 90% of cases are of one type, you might use a different criterion for making your classification if you want to maximize the number of correct responses. Classification methods typically want to consider the overall base rate as well as the relative influence of different factors, in order to maximize the probability of getting the right prediction. Thus, for classification approaches, we need to select a criterion or cut-off of some sort to make a decision--hopefully one that will give the greatest chance of getting a new classification right.

There are many applications for classification, and recently the work on this has migrated from statisticians to computer scientists and specialists in machine classification, machine learning, and artificial intelligence. The goals of classification differ somewhat from approaches like logistic regression and MANOVA. For example, either approach could be used as a tool to help diagnose a disease. A logistic regression would produce the odds that a person has a disease, whereas a machine classification might be used to determine whether the disease is present--perhaps with some margin of error. A regression might be used to determine the likelihood that a company will declare bankruptcy, while a classification might be used to identify a set of 'at-risk' companies. Overall, these are very similar goals, and although there are many different approaches to achieving them, the solutions can end up looking very similar. Some of the terminology differs across the two traditions: a regression approach might discuss variables, fitting, and inference; a classification approach might call these features, learning, and cross-validation.
There are some aspects of applying regression models and classification models that have similar goals but different methods. For example, in regression modeling, we often use ANOVA, analysis of deviance, or a criterion such as AIC to compare models and determine whether a predictor should be used. In classification, the ultimate criterion is often classification accuracy, and a huge aspect of 'machine learning' is selecting and removing features to use in a model. In general, the attitude for regression models favors interpretability, leading to small models with comprehensible measures. Oftentimes, classification approaches might start with hundreds or thousands of features, and the combination rules and decision rules might be very abstract and difficult to comprehend. Furthermore, classification is more likely to use separate data sets or cross-validation to do variable selection (feature learning), while regression is more likely to use AIC or BIC methods to select variables on the whole data set.

# Classification as Logistic Regression

Logistic regression provides a method for making a prediction about the binary classification of something based on a set of predictors. In the following data set, we asked participants to judge whether concepts were related. Some of the concepts were engineering terms; others were psychology terms. We tested both engineering and psychology students, and we wanted to know if we could tell them apart based on the speed and accuracy of their responses. Terms were classified into different bins, with e=engineering, p=psychology, r=related, and u=unrelated.

```{r}
## The following just builds the data set. ee are eng-eng terms; pp are psych-psych terms;
## ep are eng-psych pairs. Columns are the accuracy and mean time to make a decision about each kind of word pair.
library(dplyr)

data.raw <- read.csv("samediff-pooled.csv")
data <- dplyr::filter(data.raw, cond %in% c("eerc","eer","eeu","ep","ppr","ppu"))
stim <- c(as.character(data$word1), as.character(data$word2))
acc <- c(data$corr, data$corr)
rt <- c(data$rt, data$rt)
sub <- c(data$subnum, data$subnum)
data$pairs <- paste(data$word1, data$word2, sep="-")

dat.corr <- as.data.frame(tapply(data$corr, list(sub=data$sub, type=factor(data$cond)), mean))
dat.rt <- as.data.frame(tapply(data$rt, list(sub=data$sub, type=factor(data$cond)),
                               function(x){exp(mean(log(x), na.rm=T))}))
dat.joint <- cbind(dat.corr, dat.rt)

survey <- read.csv("survey.csv")
surv2 <- data.frame(sub=survey$subnum, eng=survey$engineering)
joint <- data.frame(eng=surv2$eng, dat.joint)
joint[1:5,]
```

Let's start with a logistic regression.

```{r}
model1 <- glm(eng~., data=joint, family=binomial)
summary(model1)
```

The residual deviance is a bit large, so we might consider a quasi-binomial model. We can see that none of the predictors are statistically significant, but nevertheless we can make a prediction about the log-odds of each person being an engineer, and compare it to their actual major:

```{r}
table(predict(model1, joint) > 0, joint$eng)
```

This is not bad--about 2/3 correct in each category. Maybe there is a better place to put the criterion. This could be true if we had mostly engineers or mostly psychologists.
```{r}
sum(diag(table(predict(model1,joint) > -.5, joint$eng)))
sum(diag(table(predict(model1,joint) > -.25, joint$eng)))
sum(diag(table(predict(model1,joint) > -.1, joint$eng)))
sum(diag(table(predict(model1,joint) > 0, joint$eng)))
sum(diag(table(predict(model1,joint) > .25, joint$eng)))
sum(diag(table(predict(model1,joint) > .5, joint$eng)))
```

If we looked at just the first 50 participants, the best criterion is different:

```{r}
sum(diag(table(predict(model1,joint[1:50,]) > -.5, joint$eng[1:50])))
sum(diag(table(predict(model1,joint[1:50,]) > -.25, joint$eng[1:50])))
sum(diag(table(predict(model1,joint[1:50,]) > -.1, joint$eng[1:50])))
sum(diag(table(predict(model1,joint[1:50,]) > 0, joint$eng[1:50])))
sum(diag(table(predict(model1,joint[1:50,]) > .25, joint$eng[1:50])))
sum(diag(table(predict(model1,joint[1:50,]) > .5, joint$eng[1:50])))
```

Here, a criterion of -.25 gives us better classification performance. This is just a consequence of base rate, because more engineers were stored in the first half of the data file. This is OK, but a bit worrisome. None of the predictors were significant, so we might have gotten here just by chance--we might be over-fitting the data. In regression, we'd use inferential tests or information criteria to identify which variables to keep. Later, we'll see how this is done for machine classification.

## Basics of Machine Classification

We can see how logistic regression can be used as a classifier, as long as we can determine a reasonable criterion. Prior to the widespread use of logistic regression (which requires maximum-likelihood estimation and computerized approaches), various approaches to classification were developed that use simplified models. One traditional model assumes that the two groups have multivariate normal distributions with equal variance. In the figure below, we see two such 2-dimensional distributions, with a line drawn between the centers of each, and contours showing the basic data.

```{r,fig.width=4,fig.height=4}
library(ggplot2)
set.seed(100)
n <- 500
class <- rep(c("A","B"), each=n)
x <- rnorm(n*2, mean=rep(c(3,4), each=n))
y <- rnorm(n*2, mean=rep(c(2.5,1.5), each=n))

ggplot(data.frame(class,y,x), aes(x=x, y=y, colour=class)) +
  geom_point(size=1.5) +
  geom_density_2d() +
  geom_segment(x=-1, y=6.5, xend=7, yend=-1.5, col="grey60", lwd=1.2) +
  geom_segment(x=3, y=2.5, xend=4, yend=1.5, col="black", lwd=1.2) +
  geom_segment(x=1.5, y=0, xend=5.5, yend=4, col="darkgreen") +
  coord_fixed(ratio = 1, xlim=c(0,6), ylim=c(0,6)) +
  theme_bw() +
  scale_color_manual(values=c("orange","navy"))
```

Under our assumptions, if we draw a line between the centers and extend it out (the grey line), we can project any point onto this line (i.e., find the closest point on the line to each observation). We could use position along this line to predict category membership in a regression or logistic regression--and this is essentially what regression does. This mapping from input variables to a single function is called the discriminant function, and is equivalent to the weighted sum of values by coefficients in regular regression. Now, if we want to classify any observation, we just need to determine which category is more likely. Given the assumptions of equal variance and normality, the optimal decision rule can be shown to be a single criterion along this line. In this case, that corresponds to where the green line intersects the black line, and if we move back to the original data, the entire green line is a good rule discriminating the two groups.
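To make the projection idea concrete, here is a minimal by-hand sketch (not part of any package) that projects the simulated points onto the direction connecting the two group centers and classifies using the midpoint between the projected group means. With the equal, spherical variances used in this simulation, that direction is proportional to the LDA discriminant; the names `centers`, `w`, `proj`, and `cutoff` are just illustrative.

```{r}
## Minimal sketch: project points onto the direction between the two group centers.
## Assumes x, y, and class from the simulation above are still in the workspace.
centers <- aggregate(cbind(x, y), by = list(class = class), FUN = mean)
w <- as.numeric(centers[2, c("x","y")] - centers[1, c("x","y")])  # direction from A's center to B's center
proj <- as.vector(cbind(x, y) %*% w)       # position of each point along that direction
cutoff <- mean(tapply(proj, class, mean))  # midpoint between the projected group means
table(predicted = ifelse(proj > cutoff, "B", "A"), actual = class)
```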
Under these assumptions, there are a number of approaches that can be used. In fact, MANOVA is somewhat similar, but frames the model backwards. The most common approach is referred to as linear discriminant analysis (LDA), or sometimes multivariate discriminant analysis (MDA). The assumptions of this approach are a bit stronger than regression (requiring normally-distributed predictors and equal variances). If these assumptions hold, or we can transform the predictors so that they do, we can get improved classification results over other methods. In practice, the methods are likely to be almost equivalent to logistic regression.

# Linear discriminant analysis

Using the fake data from the figure, we can fit an lda model from the MASS library, using syntax that looks a lot like lm:

```{r}
library(MASS)
model0 <- lda(as.factor(class)~x+y)
model0
```

We can see that the simple LDA finds the means of groups A and B along the two measured dimensions, and then reports 'coefficients of linear discriminants'. This is the direction in XY space that best discriminates the two groups. If we map each observation onto this line, we can easily make a decision that optimally classifies the two groups. We simply multiply each observed variable by its coefficient and sum, which gives the value used to discriminate the two classes once an optimal threshold is chosen. You can see that if you call predict() on the model, \$class tells you the predicted class, and \$x gives the discriminant values--the same values (up to an overall centering constant) that we calculate by hand below for LD1.

```{r,fig.width=6,fig.height=6}
ldout <- x*.71989 + y*(-.69406)
p <- predict(model0)
plot(p$x, ldout, col=p$class)
abline(0,1)
grid()
```

This is a bit clearer if we visualize it, which we can do via the klaR library, which has a number of classification schemes available, including a number of visualization methods. Let's look at a 'partimat' plot:

```{r,fig.width=6,fig.height=6}
library(klaR)
partimat(as.factor(class)~x+y, method = "lda")
partimat(as.factor(class)~x+y, method = "lda",
         plot.matrix = TRUE, imageplot = FALSE)  # takes some time ...
```

If you think about the line connecting the centers of the two groups, it goes from the upper left to the lower right. This direction can be defined by a vector (.71, -.69), which is a slope of around -1. The red line in the figure shows the decision criterion that best separates the two groups.

Let's look at the engineering data set, which has more than two predictors:

```{r,fig.width=8,fig.height=8}
partimat(as.factor(eng)~., data=joint, method="lda", plot.matrix=TRUE, imageplot=FALSE)
```

This shows the classification along each pair of dimensions.

```{r}
library(MASS)
library(DAAG)
ll <- lda(eng~., data=joint)
ll
```

If we look at the group means and the coefficients, we can see that a few of the measures differ substantially between the two groups, but not all. The largest coefficients typically map onto the dimensions with the greatest difference between groups.

## Predicting class from an LDA model

We can use ```predict``` to predict the values of the fitted data, and then compare these to the true values. The function ```confusion``` in ```DAAG``` provides nicely-formatted output. If we just want to know how many we got correct, we can look at the diagonal of the table.

```{r}
predict(ll)
table(predict(ll)$class, joint$eng)
confusion(joint$eng, predict(ll)$class)
sum(diag(table(predict(ll)$class, joint$eng)))
```

Let's examine the different aspects of the results. First, the model reports the prior probabilities of groups.
By default, these are just the proportions of each category in the training data. Note that if the true base rate differs from that of the training set (something that might be very likely), we might want to set the prior explicitly when we predict new data. Next, we see the mean values of each of the measured variables--these are the centers of the two normal distributions. Then we see the coefficients of linear discriminants; these are equivalent to the beta weights used to create the discriminant function. The prediction gives the best-guess classification for each case, along with the posterior probability of each class--these should be very similar to the estimated probabilities from the logistic model. Finally, \$x shows the discriminant function value, which we could use to choose a different decision criterion. There are several methods for determining the best decision rule.

```{r}
library(GGally)
ggplot(data.frame(Logistic=model1$coefficients[-1],
                  LDA=(ll$scaling)[,1]),
       aes(x=Logistic, y=LDA)) + geom_point() + theme_bw()
```

Here, the coefficients from the logistic and lda models are not identical, but their correlation is 1.0! If we compare the predicted probability vs. the lda likelihood, we see that they are highly correlated.

```{r}
logit <- function(lo) {1/(1+exp(-lo))}  ## This is the inverse of the log-odds function.
df <- data.frame(logistic=predict(model1),
                 lda=(predict(ll)$x)[,1],
                 logisticprob=logit(predict(model1)),
                 ldaprob = predict(ll)$posterior[,2])

ggplot(df, aes(x=logistic, y=lda)) + geom_point() + theme_bw() +
  ggtitle("Logistic model vs LDA")
ggplot(df, aes(x=logisticprob, y=ldaprob)) + geom_point() + theme_bw() +
  ggtitle("Logistic probability vs LDA probability")
```

Notice how the discriminant value is almost the same as the log-odds predicted by the logistic regression, and transforming these to probabilities also produces almost identical values. In terms of classification performance, LDA made one fewer error than logistic regression, but the two models are essentially identical, with the main differences being how the parameters are fit and the assumptions being made.

## Cross-validation

It is easy to overfit classification data, and so we must be careful to avoid this. Generally, just as with variable selection in regression models, we are worried about determining the best subset of predictors to use. Since using more variables will never make the model worse at fitting its own data, it is useful to hold some data out and test the model on the held-out data. A common approach is leave-one-out cross-validation, which fits the model N times for N observations, predicting each left-out case from the model fit to the remaining data. The lda function allows you to do this automatically using the CV=TRUE option. This does the classification automatically, instead of embedding it within the predict function and doing it manually.

## Results of original (overfit) model

```{r}
## Original model
confusion(joint$eng, predict(ll)$class)
sum(diag(table(joint$eng, predict(ll)$class)))
```

## Results of cross-validated model

```{r}
## Cross-validation:
ll2 <- lda(eng~., data=joint, CV=TRUE)
confusion(ll2$class, joint$eng)
sum(diag(table(ll2$class, joint$eng)))
```

## Comparison of the two models' predictions

```{r}
## Correspondence between predictions
confusion(ll2$class, predict(ll)$class)
sum(diag(table(ll2$class, predict(ll)$class)))
```

In contrast to the 51 cases we got correct before, the cross-validation gets just 35 correct (out of 76)--this is actually worse than chance! This is in spite of the fact that there is a lot of agreement between the two models.
This poorer performance is telling us that the full model is overfitting the data. Notice that we no longer have a single lda model to look at--the output of the cross-validation is simpler. We can't look at the linear discriminant coefficients or means, because there is no longer one model; we fit N models.

```{r}
ll2
```

The best practice in a situation like this might be to use cross-validation accuracy to help guide variable selection. You might use a stepwise procedure, and only include a variable if it improves cross-validation accuracy. You might use the single best model at the end, but still report its cross-validation performance. In this case, results such as these led our research lab to conclude that there was no substantial difference between the groups, and we developed new behavioral methods that were more powerful.

## LDA with multiple categories

Another advantage of LDA over regression is that it handles multiple categories directly. Just as the multinom() model estimates $N-1$ sets of coefficients for $N$ classes, LDA creates up to $N-1$ discriminant functions. Let's look at the iris data, which we examined previously under the multinomial model.

```{r}
m.iris <- lda(Species~., data=iris)
m.iris

m.irisCV <- lda(Species~., data=iris, CV=TRUE)
table(iris$Species, m.irisCV$class)
```

Now, the classification is very good, even with cross-validation. We can see two sets of coefficients--the two discriminant functions that jointly separate the three species.

# Quadratic Discriminant Analysis

One of the assumptions of LDA is that the two distributions have equal variance. If we relax this assumption, the best classification boundary no longer has to be a line separating the space. We can get curved boundaries, or even a small region within a larger region. For example:

```{r}
n <- 500
class <- rep(c("A","B"), each=n)
x <- rnorm(n*2, mean=rep(c(3,4), each=n), sd=rep(c(.25,1), each=n))
y <- rnorm(n*2, mean=rep(c(2.5,1.5), each=n), sd=rep(c(.25,1), each=n))

qda1 <- qda(class~x+y, CV=TRUE)
table(class, qda1$class)

ggplot(data.frame(class,y,x), aes(x=x, y=y, colour=class)) +
  geom_point(size=1.5) +
  geom_density_2d() +
  geom_segment(x=-1, y=6.5, xend=7, yend=-1.5, col="grey60", lwd=1.2) +
  geom_segment(x=3, y=2.5, xend=4, yend=1.5, col="black", lwd=1.2) +
  coord_fixed(ratio = 1, xlim=c(0,6), ylim=c(0,6)) +
  ggtitle("Colored by true class") +
  theme_bw() +
  scale_color_manual(values=c("orange","navy"))

ggplot(data.frame(class,y,x), aes(x=x, y=y, colour=qda1$class)) +
  geom_point(size=1.5) +
  geom_density_2d() +
  geom_segment(x=-1, y=6.5, xend=7, yend=-1.5, col="grey60", lwd=1.2) +
  geom_segment(x=3, y=2.5, xend=4, yend=1.5, col="black", lwd=1.2) +
  coord_fixed(ratio = 1, xlim=c(0,6), ylim=c(0,6)) +
  ggtitle("Colored by QDA classification") +
  theme_bw() +
  scale_color_manual(values=c("orange","navy"))
```

We can see how an LDA model will suffer. If all points are projected onto the discriminant line, a single boundary on that line will not be ideal. If we were able to make a curved boundary in this xy space, we could capture more correct classifications. Quadratic Discriminant Analysis (QDA) permits this: it provides a more powerful classifier that can capture non-linear boundaries in the feature space. Because it is less constrained, it also requires more careful analysis to ensure we don't overfit the model. How does it work with our real data set?
```{r}
library(MASS)
q <- qda(eng~., data=joint, CV=TRUE)
q
confusion(q$class, joint$eng)
sum(diag(table(q$class, joint$eng)))
```

Now, the qda model is a reasonable improvement over the LDA model--even with cross-validation. We were at 46% accuracy with cross-validation, and now we are at 57%; the number of correct cross-validated classifications increased from 35 to 43.

## Variable Selection in LDA

We now have a good measure of how well this model is doing. But we suspect that--at least for LDA--the model might be over-fitting. We'd like to try removing variables to see if we get better cross-validation performance. We could do this by hand, or use some tools built for this. The stepclass function within the klaR package will do this:

```{r}
library(klaR)
modelstepL <- stepclass(eng~., "lda", direction="both", data=joint)
modelstepL

modelstepQ <- stepclass(eng~., "qda", direction="both", data=joint)
modelstepQ
```

If you run this several times, you will find that you get a slightly different model each time. The best models have one or two predictors, and vary in accuracy from 55% to 65%. This is happening because the cross-validation the method uses is somewhat random, so the best model will depend on how the cross-validation is initialized. Perhaps if we reduce the improvement required, and use a higher fold value, we will end up with a more stable result. Using fold=76 should be similar to doing leave-one-out cross-validation, and using a smaller improvement criterion will avoid stopping early.

```{r}
modelstepL <- stepclass(eng~., "lda", direction="both", data=joint, improvement=.001, fold=76)
modelstepL

modelstepQ <- stepclass(eng~., "qda", direction="both", data=joint, improvement=.001, fold=76)
modelstepQ
```

Now, each model tends to converge on the same result each time. Recognize that the 'best' model is out there and does not change; we just don't always find it because the search involves randomness. However, oftentimes we get a model that is basically as good as the best model. In small variable sets like this, we could examine every possible sub-model, but if you start moving to large models with thousands of variables, you will probably never find the exact best model. However, there is likely to be so much redundancy in those predictors that it does not really matter. In our example, the variables selected are different for the two models, but that is probably fine. We can refit the best models using lda and qda to get more details about the fit:

```{r}
l.final <- lda(eng ~ ppr + ppu + ppu.1, data=joint)
l.final

q.final <- qda(eng ~ ep + ppr, data=joint)
q.final
```

# Example: LDA on the iphone data set

The following works through all the steps of LDA and QDA again with the iphone data set.

## Data Preprocessing

```{r data prep}
phone_ds <- read.csv("data_study1.csv")
```

Get rid of the gender variable, because it is categorical and LDA does not handle that. It is also highly predictive, so removing it lets us see how well we can do with the remaining measures.

```{r data acc to phone type}
phone_type <- phone_ds[,c(1,3:13)]
```

Rescale all the numeric variables.
```{r Feature scaling1}
phone_type[,2:12] <- scale(phone_type[,2:12], center=TRUE, scale=TRUE)
```

## Loading library for LDA

```{r Lib}
library(MASS)
```

## Compute LDA without cross-validation

```{r LDA without Cross validation}
lda_mod1 <- lda(Smartphone ~ ., data=phone_type)
lda_mod1
plot(lda_mod1)
```

## Predict using phone_type

```{r }
library(DAAG)
plda1 <- predict(object = lda_mod1)
confusion(phone_type$Smartphone, plda1$class)
```

This looks reasonably good: 67% accuracy. Notably, it gets MOST Android users wrong, but by calling 80.6% of iPhone users correctly, it makes up for it. This is mostly because of the large overlap between the discriminant distributions.

## Compute LDA with cross-validation

One way to avoid over-fitting is to use leave-one-out cross-validation (LOOC). In this CV method, we repeatedly leave one observation out, fit the model with the remaining data, and test the model on the held-out observation; accuracy is then assessed on these held-out cases. This is bound to give poorer performance than the non-cross-validated version, but it will help us remove and select variables to make a model that does better when applied to new data. For LDA and QDA, CV=TRUE uses LOOC to fit the model.

```{r}
lda_mod2 <- lda(Smartphone ~ ., data=phone_type, CV=TRUE)
lda_mod2
```

## confusion() on lda_mod2

We can examine the overall accuracy here via a confusion matrix as follows:

```{r}
confusion(phone_type$Smartphone, lda_mod2$class)
```

We can see that some of the accuracy came from overfitting. Without CV, accuracy is 67%, and it goes down to 65%. This new model STILL gets most Android users wrong (it gets worse, in fact), and misses a few more iPhone users. This seems like a bias in the decision, but let's think about how well the model could do if we didn't use ANY predictors. Suppose we just called all users iPhone users in that case (note that ```confusion``` breaks because it wants both categories in both columns).

```{r}
mean(phone_type$Smartphone == "iPhone")
```

So, we can get 59% correct by calling everyone an iPhone user, 65% with LDA and CV, and 67% with the overfit LDA.

## Compute QDA without CV

Just like LDA, applying qda alone does not use cross-validation. Consequently, it may overfit. Does it do better than LDA?

```{r}
qda_mod1 <- qda(Smartphone ~ ., data=phone_type)
qda_mod1
```

```{r}
qlda1 <- predict(object = qda_mod1)
confusion(phone_type$Smartphone, qlda1$class)
```

Here, the QDA model is about 1% better than the LDA model without cross-validation. But let's see whether it is just better at fitting noise, or whether it will also be better when we fit it with cross-validation.

## Compute QDA with LOOC

Now, use CV=TRUE for the qda model:

```{r}
qda_mod2 <- qda(Smartphone ~ ., data=phone_type, CV=TRUE)
qda_mod2
```

## confusion() on qda_mod2

```{r}
confusion(phone_type$Smartphone, qda_mod2$class)
```

Accuracy goes down to 59.4%, which compares to 58.6% for the LDA. It looks like we get about a 1% boost in both cases.

# Split-half and N-fold Cross-Validation

LOOC can be computationally expensive if you have large data sets, and is not built into many classification schemes. A simpler scheme is to fit on one portion of the data and keep aside another portion for testing. We can do this repeatedly and get an overall estimate of accuracy. Sometimes, you might split your data into something like ten sets, and then fit the model ten times, each time training on 90% of the data and testing on the remaining 10%. This is called n-fold cross-validation; in this case, 10-fold.
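As a sketch of how n-fold cross-validation could be done by hand for the LDA model (this is ordinary R bookkeeping, not a built-in MASS feature, and it assumes the scaled phone_type data from above), we can assign each observation to one of ten folds, fit on the other nine, and average the held-out accuracy:

```{r}
## Minimal sketch of 10-fold CV: train on 9/10 of the data, test on the held-out tenth.
set.seed(1)
k <- 10
fold <- sample(rep(1:k, length.out = nrow(phone_type)))  # random fold assignment
acc <- numeric(k)
for (i in 1:k) {
  fit <- lda(Smartphone ~ ., data = phone_type[fold != i, ])
  pred <- predict(fit, newdata = phone_type[fold == i, ])$class
  acc[i] <- mean(pred == phone_type$Smartphone[fold == i])
}
mean(acc)  # average held-out accuracy across the ten folds
```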
Another scheme is to split the data in half, fitting on one half and testing on the other. This is called split-half cross-validation. You can do this repeatedly on multiple different splits to get an average performance. Below, I show a function that will fit an lda and a qda model, randomly picking half of the data each time. Each model fits on one half of the data and then predicts the other half. You could run this multiple times to get a better estimate of the fit of this class of model.

```{r}
cv.lda <- function(class, predictors)
{
  selection <- rep(c(T,F), length.out=length(class))[order(runif(length(class)))]

  out1 <- class[selection]
  pred1 <- predictors[selection,]
  joint1 <- data.frame(out=out1, pred1)

  out2 <- class[!selection]
  pred2 <- predictors[!selection,]
  joint2 <- data.frame(out=NA, pred2)

  ll <- lda(out~., data=joint1)
  table(predict(ll, newdata=joint2)$class, out2)
  out.ll <- sum(diag(table(predict(ll, newdata=joint2)$class, out2)))/length(out2)

  qq <- qda(out~., data=joint1)
  table(predict(qq, newdata=joint2)$class, out2)
  out.qq <- sum(diag(table(predict(qq, newdata=joint2)$class, out2)))/length(out2)

  c(out.ll, out.qq)
}

cv.lda(phone_type$Smartphone, phone_type[,2:12])
```

Each time we run this, we get a different pair of fits, so let's try it 250 times and see how the two models compare.

```{r}
out <- matrix(0, nrow=250, ncol=2)
for(i in 1:250)
{
  cat(".")
  out[i,] <- cv.lda(phone_type$Smartphone, phone_type[,2:12])
}
cat("\n")
colMeans(out)
```

There is an advantage for the lda here, and it seems to perform a bit better than the previous models--around 63%! The story might be that qda's extra parameters end up overfitting even with split-half CV, and that the simpler model wins out.

## Step from klaR

These models fit on the entire data set, but a good way to prevent overfitting is to do variable selection. In classification, this is referred to as feature selection or dimensionality reduction (although the latter sometimes uses PCA). We can automate variable selection with ```stepclass``` in the ```klaR``` library, which uses CV to determine whether a variable should be kept. We can specify the 'foldness' of the CV: here we use 2-fold, but 4-fold would hold out 1/4 of the data and fit on the other 3/4.

```{r}
library(klaR)
model <- stepclass(Smartphone ~ ., data=phone_type, method="lda", fold=2,
                   start.vars=1:11, direction="both", output=T)
modell2 <- lda(model$formula, data=phone_type)
confusion(phone_type$Smartphone, predict(modell2)$class)
```

Here, using variable selection and CV, our fit goes to 66.6% for the LDA model--the best fit yet. What about QDA?

```{r}
modelq <- stepclass(Smartphone ~ ., data=phone_type, method="qda", fold=2,
                    start.vars=1:11, direction="both", output=T)
modelq2 <- qda(modelq$formula, data=phone_type)
confusion(phone_type$Smartphone, predict(modelq2)$class)
```

This appears to improve things even more--to 68.5%; by fitting a smaller model we actually do better. Originally, we had 11 predictor variables. Each time this is run, the result is a bit different, but in my case the lda dropped to 8 predictors, whereas the qda dropped to 9. They tended to remove different variables: lda removed agreeableness but kept the other personality variables, and removed avoidance of similarity, phone as status object, and socio-economic status, while QDA kept these and removed more personality variables. This might just be luck, and not related to the greater complexity of qda.
Multicollinearity could mean that there are mutually-exclusive sets of variables that provide roughly equivalent solutions; if you start down one path early you end up with one set, but if you start down the other path you end up with the other set.

```{r}
model$formula
modelq$formula
```

# Applications of LDA

Although the performance of LDA can often be surpassed by more modern machine learning methods, there are several reasons it still sees widespread use.

* It is simple to use and understand. Like logistic regression, it can be used to make a simple model or decision tool that is both easy to implement and transparent.

* It is sufficient for many situations. Many times, the benefit you might get from using a more complex model is negligible, at the cost of complexity or (worse yet) the possibility of making large mistakes because of strange interactions that you might not be able to predict.

Some of the most widely-used LDA models are within finance. For example, Altman's (1968) bankruptcy model is based on LDA, predicting bankruptcy of firms within the next two years based on a handful of publicly-available statistics (see Altman, 1968, "Financial ratios, discriminant analysis and the prediction of corporate bankruptcy", The Journal of Finance, 23(4), 589-609). This is useful because the model can be implemented in a spreadsheet, and the model's parameters can be easily communicated so individuals can assess whether they want to invest in something.

# Alternatives and extensions in Machine Classification

There are hundreds of special-purpose methods available for machine classification, many of which were developed for special kinds of situations or work under different assumptions. We will cover several of these in this class, and here is a partial listing of methods you might want to be familiar with.

Within the klaR library, there are several implementations of related methods:

* rda: Regularized discriminant analysis. Attempts to build a discriminant model that is more robust to correlation between predictors (multicollinearity).

* Probabilistic LDA. This frames the LDA problem in a Bayesian and/or maximum likelihood format, and is increasingly used as part of deep neural nets as a 'fair' final decision that does not hide complexity.

* sknn: simple k-nearest-neighbors classification. Makes a classification based on a vote of the nearest observations.

* loclda: Makes a local lda for each point, based on its nearby neighbors. Similar to adding LDA to a KNN classifier, which we will discuss in an upcoming module.

* NaiveBayes: A common and simple classifier based on Bayes rule.

* svmlight: a lightweight 'support vector machine', which generalizes lda, focusing especially on identifying a good decision rule that separates the two groups.

The klaR library also has a lot of functions to help with variable selection and cross-validation.

Within the nnet library:

* nnet: a neural net classifier--essentially a network of LDA classifiers or logistic regressions.

* multinom: an extension of generalized linear regression for multiple groups.

Within the class library:

* knn: a straightforward set of knn tools.
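As a brief illustration of one of these alternatives, here is a hedged sketch using knn from the class library on the scaled phone data (assuming phone_type is still available). The choice of k = 5 is arbitrary, and because we predict the same cases we trained on, the resulting accuracy is optimistic; proper use would involve the same cross-validation ideas discussed above.

```{r}
## Illustrative sketch: 5-nearest-neighbor classification of smartphone ownership.
library(class)
knn.pred <- knn(train = phone_type[, 2:12], test = phone_type[, 2:12],
                cl = factor(phone_type$Smartphone), k = 5)
confusion(phone_type$Smartphone, knn.pred)
```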