Machine Learning
Machine Learning can be thought of as a set of algorithms that use advanced statistical techniques for classification, prediction, and segmentation. Let us look at two commonly used Machine Learning techniques: Support Vector Machines (SVM) and Naive Bayes.
Support Vector Machines (SVM)
Support Vector Machines can be used for both prediction and classification problems. The method constructs decision boundaries, which may be linear or non-linear. Support Vector Machines are very flexible in handling classification and prediction problems because of the nature of the feature space in which the decision boundaries are formed. Several kernel functions are commonly used, including linear, polynomial, RBF, and sigmoid.
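As a minimal sketch of this idea, the snippet below fits an SVM classifier with each of the kernels mentioned above using scikit-learn. The synthetic dataset, default parameters, and random seeds are illustrative assumptions, not part of the original article.

# Minimal SVM sketch: compare the kernels mentioned above on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class data purely for illustration
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The shape of the decision boundary depends on the chosen kernel
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel)
    clf.fit(X_train, y_train)
    print(kernel, clf.score(X_test, y_test))

In practice, kernel choice and hyperparameters such as C and gamma are usually tuned with cross-validation rather than left at their defaults.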
Naive Bayes (NB)
Naive Bayes is a classification technique based on Bayes' Theorem. Its simplicity makes it one of the most widely used classification techniques. The method assumes that the predictor variables are statistically independent given the class, which makes Naive Bayes models effective classifiers that are easy to use and interpret. Naive Bayes is particularly appropriate for high-dimensional data, that is, when there are many predictor (input) variables. It often outperforms more sophisticated machine learning algorithms used for classification. The method also offers several options for modeling the conditional distribution of the input variables, such as normal or log-normal.
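The sketch below illustrates one common variant, Gaussian Naive Bayes, which models each predictor's class-conditional distribution as normal. Again, the dataset and settings are assumptions made for illustration with scikit-learn, not the article's own example.

# Minimal Gaussian Naive Bayes sketch on synthetic high-dimensional data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=500, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# GaussianNB assumes predictors are conditionally independent given the class
nb = GaussianNB()
nb.fit(X_train, y_train)
print("Test accuracy:", nb.score(X_test, y_test))

Other variants, such as multinomial or Bernoulli Naive Bayes, swap in different conditional distributions and are better suited to count or binary features.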