How can reducing page memory size increase download times?
In 2009, YouTube made improvements which reduced the video player from 1.2MB to 98kB. However, the time taken to download the page seemed to increase.
Many predictive models learn correlations between your features and your target, and apply those to make predictions. If you change your strategies, you risk changing these correlations. We look at an example where increasing prices leads to...
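As a toy illustration of the general trap (a made-up simulation, not the article's example): if price was historically raised whenever demand was high, a naive regression learns a positive price coefficient even when the true causal effect of price on sales is negative.

```python
# Toy simulation (not the article's example): price tracks a hidden demand
# driver, so a naive model learns a *positive* price effect even though the
# true causal effect of price on sales here is -2.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
demand = rng.normal(100, 10, size=1_000)                     # hidden driver
price = 10 + 0.05 * demand + rng.normal(0, 0.5, size=1_000)  # strategy tracks demand
sales = demand - 2.0 * price + rng.normal(0, 5, size=1_000)  # true price effect: -2

model = LinearRegression().fit(price.reshape(-1, 1), sales)
print(model.coef_)  # positive, despite the negative causal effect
```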
p-values are commonly used to determine if an effect is statistically significant. Cohen's d gives a measure of how important an effect is. It is possible to see a statistically significant difference (small p-value) even if the effect isn't...
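A minimal sketch of the distinction, with made-up numbers: a large sample makes a tiny difference in means "significant" while Cohen's d stays negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
a = rng.normal(loc=0.00, scale=1.0, size=100_000)
b = rng.normal(loc=0.02, scale=1.0, size=100_000)

# Two-sample t-test: the p-value is tiny because the sample is huge
t_stat, p_value = stats.ttest_ind(a, b)

# Cohen's d: difference in means over the pooled standard deviation
pooled_std = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
cohens_d = (b.mean() - a.mean()) / pooled_std

print(f"p-value: {p_value:.2g}")     # small, i.e. "significant"
print(f"Cohen's d: {cohens_d:.3f}")  # ~0.02, a negligible effect
```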
As the Zen of Python states, "readability counts". With a few simple tips and tricks, we can make our Pandas dataframes a lot more readable.
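A sketch of the kind of tip the article has in mind (whether these are its exact examples is an assumption): method chaining with one step per line, and rename/assign instead of in-place mutation.

```python
import pandas as pd

df = pd.DataFrame({"first name": ["Ada", "Grace"], "salary": [120_000, 95_000]})

result = (
    df
    .rename(columns={"first name": "first_name"})    # consistent snake_case
    .assign(salary_k=lambda d: d["salary"] / 1_000)  # derived column, original untouched
    .sort_values("salary_k", ascending=False)
)
print(result)
```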
Goodhart's law claims "When a measure becomes a target, it ceases to be a good measure". This article explores how bad metrics can create perverse incentives, and how cross-validation fails to catch our errors.
A common technique for transforming categorical variables into a form suitable for machine learning is called "one-hot encoding" or "dummy encoding". This article discusses some of the limitations and folklore around this method (such as the...
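For reference, a minimal example of the encoding itself (the limitations are the article's subject):

```python
# One-hot/dummy encoding with pandas: one binary column per category level.
# drop_first=True keeps k-1 columns, which matters for unregularized linear
# models where the full k columns are linearly dependent.
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

print(pd.get_dummies(df, columns=["color"]))
print(pd.get_dummies(df, columns=["color"], drop_first=True))
```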
Non-numeric features generally have to be encoded into one or more numeric features before applying machine learning models. This article covers some of the different encoding techniques, the category_encoders package, and some of the pros and...
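A sketch of the category_encoders interface, which follows scikit-learn's fit/transform convention (the data here is made up):

```python
import pandas as pd
import category_encoders as ce

X = pd.DataFrame({"city": ["SF", "NYC", "SF", "LA"]})
y = pd.Series([1, 0, 1, 0])

# Ordinal: one integer per category. Target: mean of y per category.
print(ce.OrdinalEncoder(cols=["city"]).fit_transform(X))
print(ce.TargetEncoder(cols=["city"]).fit_transform(X, y))
```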
Scikit-learn's grid search functions include a scoring parameter. Scorers allow us to compare different trained models. Models try to minimize a loss function. While custom scoring is straightforward, custom losses are not.
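A sketch of the straightforward part: a custom scorer built with make_scorer (synthetic data, parameters chosen purely for illustration).

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, random_state=0)

# Custom scorer: F-beta with recall weighted twice as heavily (beta=2).
# This changes how models are *compared*, not the loss they minimize.
f2_scorer = make_scorer(fbeta_score, beta=2)

grid = GridSearchCV(LogisticRegression(max_iter=1_000),
                    param_grid={"C": [0.01, 0.1, 1, 10]},
                    scoring=f2_scorer, cv=5)
grid.fit(X, y)
print(grid.best_params_)
```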
A definition cannot be wrong, but it can fail to be useful. Can you repurpose a definition, or should you start from scratch?
In software engineering, it is important to have a single source of truth. In data science, it is a little more complicated.
How to prepare for those annoying questions about precision and recall in interviews.
There is a proliferation of different metrics in classification problems: accuracy, precision, recall, and more! Many of these metrics are defined in terms of True Positives, True Negatives, False Positives, and False Negatives. Here we give...
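The core definitions, as a quick sketch:

```python
# The common metrics written out in terms of TP, TN, FP, and FN.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)  # of the predicted positives, how many were right
recall    = tp / (tp + fn)  # of the actual positives, how many were found
print(accuracy, precision, recall)
```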
The ColumnTransformer allows us to easily apply different transformations to different features. For example, we can now scale some numerical features while leaving binary flags alone! This article walks through two examples using...
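A minimal sketch of that exact pattern (the column names are made up):

```python
# Scale the numeric column; pass the binary flag through untouched.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

df = pd.DataFrame({"income": [40_000, 85_000, 120_000],
                   "is_member": [0, 1, 1]})

ct = ColumnTransformer(
    [("scale", StandardScaler(), ["income"])],
    remainder="passthrough",  # leave is_member alone
)
print(ct.fit_transform(df))
```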
Many of the classifiers in sklearn support a predict_proba method for calculating probabilities. Often, these "probabilities" are really just a score from 0 to 1, where a higher score means the model is more confident in the prediction, but it...
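One way to turn those scores into honest probabilities is scikit-learn's CalibratedClassifierCV, sketched here on synthetic data:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1_000, random_state=0)

# Raw scores from predict_proba vs scores recalibrated so that "0.8"
# means roughly 80% of such predictions are actually positive.
raw = RandomForestClassifier(random_state=0).fit(X, y)
calibrated = CalibratedClassifierCV(RandomForestClassifier(random_state=0),
                                    method="isotonic", cv=5).fit(X, y)

print(raw.predict_proba(X[:3]))
print(calibrated.predict_proba(X[:3]))
```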
We know to split our data into a training and a testing set before we do our preprocessing, let alone our modeling. Often we are not as careful when doing cross-validation; we should really do things like scale our data within cross-validation...
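The standard fix is to put preprocessing inside a Pipeline, so each fold fits its own scaler; a sketch:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, random_state=0)

# The scaler is re-fit on the training portion of every fold, so no
# information from the validation fold leaks into the scaling step.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
print(cross_val_score(pipe, X, y, cv=5))
```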
This article contains derivations when applying the shrinkage methods of empirical Bayes to proportion problems.
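The punchline of those derivations, sketched with an assumed Beta prior (the alpha and beta values below are placeholders; empirical Bayes would fit them to the full dataset):

```python
# Beta-binomial shrinkage: the posterior mean pulls each item's raw rate
# toward the prior mean, most strongly when the item has few observations.
import pandas as pd

data = pd.DataFrame({"clicks": [2, 50, 900], "views": [3, 100, 2_000]})

alpha, beta = 10, 20  # assumed prior parameters

data["raw_rate"] = data["clicks"] / data["views"]
data["shrunk_rate"] = (data["clicks"] + alpha) / (data["views"] + alpha + beta)
print(data)
```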
This article contains derivations when applying the shrinkage methods of empirical Bayes to average rating problems.
The expression "Data science is more art than science" makes my skin crawl. Data science, like all sciences, requires both strict methodology and a lot of creativity.
The highest and lowest rated books, films, and music are those that have very few ratings. This is because for small samples, it is easier for small fluctuations to dominate. Shrinkage is the technique for moving the average for a particular item...
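A sketch of the idea, where C is an assumed prior strength (roughly, how many ratings the global average is worth):

```python
import pandas as pd

ratings = pd.DataFrame({"item": ["A", "B"],
                        "n_ratings": [3, 500],
                        "mean_rating": [5.0, 4.2]})

global_mean = 3.5
C = 25  # assumed prior strength

# Weighted average of the item's mean and the global mean: item A (3 ratings)
# is pulled hard toward 3.5, while item B (500 ratings) barely moves.
ratings["shrunk"] = ((ratings["n_ratings"] * ratings["mean_rating"]
                      + C * global_mean) / (ratings["n_ratings"] + C))
print(ratings)
```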
Introduces SimpleProphet, a less automated version of Facebook's time series analysis package Prophet. Compares the approach of Prophet to other standard approaches: ARIMA and LSTMs.
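For orientation, the Prophet interface being compared looks like this (synthetic data; SimpleProphet's own API is described in the article):

```python
import pandas as pd
from prophet import Prophet

# Prophet expects a dataframe with columns "ds" (dates) and "y" (values).
df = pd.DataFrame({"ds": pd.date_range("2020-01-01", periods=100),
                   "y": range(100)})

m = Prophet()
m.fit(df)
future = m.make_future_dataframe(periods=30)  # extend 30 days past the data
forecast = m.predict(future)
print(forecast[["ds", "yhat"]].tail())
```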
An article that outlines the standard approaches to time series analysis.
One technique, sometimes called "target" or "impact" encoding, uses the average value of the target variable per category to encode. The James-Stein encoder is a twist that "shrinks" the target value back to the global average to stop statistical...
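A sketch of the encoder itself, via category_encoders (made-up data):

```python
import pandas as pd
import category_encoders as ce

X = pd.DataFrame({"city": ["SF", "SF", "NYC", "LA", "LA", "LA"]})
y = pd.Series([1, 1, 0, 1, 0, 0])

# Each city is replaced by its mean target value, shrunk toward the
# global mean; rarer cities are shrunk harder.
print(ce.JamesSteinEncoder(cols=["city"]).fit_transform(X, y))
```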
The loss functions used in our models are often baked in (such as using cross-entropy in Logistic Regression). We do get some choices when cross-validating, however. For example, we can pick the regularization parameter by using the ROC area...
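For example, LogisticRegressionCV still minimizes cross-entropy to fit each candidate model, but can pick C by cross-validated ROC AUC (a sketch on synthetic data):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegressionCV

X, y = make_classification(n_samples=500, random_state=0)

# Training loss: cross-entropy (baked in). Model selection: ROC AUC.
clf = LogisticRegressionCV(Cs=[0.01, 0.1, 1, 10], scoring="roc_auc",
                           cv=5, max_iter=1_000).fit(X, y)
print(clf.C_)  # the regularization strength chosen by cross-validated AUC
```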