
Entropy and Information Gain

Neither 'Entropy' nor 'Information' is a concept with a very intuitive definition. Most people first learn about entropy in chemistry class, where it is used to describe the amount of 'order' in a system. But how do you translate 'order' into a mathematical equation? And what about information? In data science the terms 'Entropy' and 'Information Gain' usually come up in the context of decision trees. Here entropy describes the 'purity' of a set, which is analogous to the order of a system in chemistry. A decision tree tries to split up a dataset based on differences in a single feature such that the split results in the 'purest' branches, meaning the ones with the lowest amount of variation in the target variable. The branches are then split again according to the same criterion, until all branches are pure, or we decide the model is strong enough. The entropy (you will often see it called cross-entropy or deviance) …
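To make the splitting criterion concrete, here is a minimal Python sketch. The function names and the toy churn/stay labels are my own illustration, not from the post: it computes the entropy of a set of class labels, and the information gain of a candidate split as the parent's entropy minus the weighted entropy of the branches.

from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(parent, children):
    """Entropy of the parent set minus the weighted entropy of the branches."""
    n = len(parent)
    return entropy(parent) - sum(len(ch) / n * entropy(ch) for ch in children)

# A pure set has entropy 0; a 50/50 split has entropy 1 bit.
parent = ['churn'] * 4 + ['stay'] * 4      # entropy = 1.0
split = [['churn'] * 4, ['stay'] * 4]      # both branches are pure
print(entropy(parent))                     # 1.0
print(information_gain(parent, split))     # 1.0, the best possible split

A decision tree would compute this gain for every candidate split and pick the one with the highest value.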

What is Logistic Regression?

Logistic Regression is closely related to Linear Regression (read my post on Linear Regression here). Logistic Regression is a classification technique, meaning that the target Y is qualitative instead of quantitative, for example trying to predict whether a customer will stop doing business with you, a.k.a. churn. Logistic Regression models the probability that a measurement belongs to a class:

p(X) = Pr(Y = 1 | X)

If we tried to predict the target value directly (let's say churn = 1 and not-churn = 0), as you would with Linear Regression, the model might output negative target values or values larger than 1 for certain predictor values. Probabilities smaller than zero or larger than 1 make no sense, so instead we can use the logistic, or sigmoid, function to model probabilities:

p(X) = e^(β₀ + β₁X) / (1 + e^(β₀ + β₁X))

This is also

p(X) = 1 / (1 + e^−(β₀ + β₁X))

which is the form you usually see when it is used as the activation function in a neural network layer. This function will only output values between 0 and 1, which you can then interpret as probabilities …
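As a quick sketch of how this looks in practice, assuming scikit-learn is available (the toy churn data below is made up for illustration):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up churn data: x = months since last purchase, y = churned (1) or not (0).
X = np.array([[1], [2], [3], [4], [8], [9], [10], [12]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(X, y)

# predict_proba returns P(Y=0) and P(Y=1); both are guaranteed to lie in [0, 1].
print(model.predict_proba([[6]]))

# The same P(Y=1), computed by hand with the sigmoid and the fitted coefficients:
beta0, beta1 = model.intercept_[0], model.coef_[0][0]
print(1 / (1 + np.exp(-(beta0 + beta1 * 6))))

The hand-computed sigmoid value matches the second column of predict_proba, which is the point of the formula above: the linear part β₀ + β₁X can be any real number, but the sigmoid squashes it into a valid probability.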

What is Linear Regression?

Linear regression is used to model the relationship between continuous variables. For example, to predict the price of a house when you have features like size in square meters, crime in the neighborhood, etc. A linear regression function takes the form

ŷ = β̂₀ + β̂₁x₁ + β̂₂x₂ + … + β̂ₚxₚ

Here y is the target we're trying to predict (house price), the x's are the p features or predictors (size, crime) and the β's are the coefficients, the parameters that we estimate by fitting the model to data. The little hats on top of the y and the β's are called 'hats', and indicate that we are dealing with estimates. With multiple features it is called Multiple Linear Regression, and when there's only one feature it is Simple Linear Regression. For Linear Regression the function does not have to be linear in the predictors, as long as it is linear in the parameters. This means that you can model interactions between predictors by, for example, multiplying x's together …
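A minimal sketch of fitting such a model with scikit-learn (the house data and the size × crime interaction column are my own made-up illustration):

import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up data: size in square meters, crime rate, and house price.
X = np.array([[50, 3.0], [80, 1.5], [100, 2.0], [120, 0.5], [150, 1.0]])
y = np.array([150_000, 260_000, 310_000, 400_000, 480_000])

model = LinearRegression().fit(X, y)
print(model.intercept_, model.coef_)   # the estimated β̂₀ and (β̂₁, β̂₂)

# Adding an interaction term (size * crime) keeps the model linear in the
# parameters, even though it is no longer linear in the original predictors.
X_interact = np.column_stack([X, X[:, 0] * X[:, 1]])
model2 = LinearRegression().fit(X_interact, y)
print(model2.intercept_, model2.coef_)

The second fit illustrates the closing point: the interaction column is a product of two x's, but the model is still just a weighted sum of columns, so ordinary least squares applies unchanged.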