How can reducing page memory size increase download times?
In 2009, YouTube made improvements which reduced the video player from 1.2MB to 98kB. However, the time taken to download the page seemed to increase.
Many predictive models learn correlations between your features and your target, and apply those to make predictions. If you change your strategies, you risk changing these correlations. We look at an example where increasing prices leads to...
This article shows how you can run Jupyter on a remote server, connect to it, and have Jupyter continue to run - even if you get disconnected.
p-values are commonly used to determine if an effect is statistically significant. Cohen's D gives a measure of how important an effect is. It is possible to see a statistically significant difference (p value small) even if the effect isn't...
An earlier article, "Save the environment with conda", showed how to make a new environment and use it with Jupyter. This article walks through how to fix Jupyter if it isn't using the correct environment.
As the Zen of Python states, "readability counts". With a few simple tips and tricks, we can make our Pandas dataframes a lot more readable.
Goodhart's law claims "When a measure becomes a target, it ceases to be a good measure". This article explores how bad metrics can create perverse incentives, and how cross-validation fails to catch our errors.
A common technique for transforming categorical variables into a form suitable for machine learning is called "one-hot encoding" or "dummy encoding". This article discusses some of the limitations and folklore around this method (such as the...
Non-numeric features generally have to be encoded into one or more numeric features before applying machine learning models. This article covers some of the different encoding techniques, the category_encoders package, and some of the pros and...
Scikit-learn's grid search functions include a scoring parameter. Scorers allow us to compare different trained models. Models try to minimize a loss function. While custom scoring is straightforward, custom losses are not.
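As a sketch of the scorer side (the article's own example may differ), scikit-learn's make_scorer wraps any metric function so GridSearchCV can use it; the cost weights below are made up for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import make_scorer
from sklearn.model_selection import GridSearchCV

def error_cost(y_true, y_pred):
    # Hypothetical business cost: a false negative is 5x worse than a false positive.
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return fp + 5 * fn

X, y = make_classification(random_state=0)
grid = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.1, 1.0, 10.0]},
    scoring=make_scorer(error_cost, greater_is_better=False),  # lower cost is better
)
grid.fit(X, y)
```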
A definition cannot be wrong, but it can fail to be useful. Can you repurpose a definition, or should you start from scratch?
In software engineering, it is important to have a single source of truth. In data science, it is a little more complicated.
Jupyter's use for quick experimentation encourages the use of global variables, as we may only have one connection to a database, or one dataframe used by all functions. These globals can lead to subtle, hard-to-debug problems. This article shows...
Jupyter notebooks allow for quick experimentation and exploration, but can encourage some bad habits. One subtle error is the usage of global variables in a Jupyter notebook. This is a quick post to show the error, and some steps you can take to avoid it.
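A minimal illustration of the kind of bug this creates (hypothetical names):

```python
import pandas as pd

df = pd.DataFrame({"price": [1.0, 2.0, 3.0]})

def double_prices(data):
    # Bug: this refers to the global `df`, not the `data` argument,
    # so whatever DataFrame is passed in is silently ignored.
    return df["price"] * 2

other = pd.DataFrame({"price": [10.0, 20.0]})
print(double_prices(other))  # 2.0, 4.0, 6.0 -- not 20.0, 40.0
```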
How to prepare for those annoying questions about precision and recall in interviews.
There is a proliferation of different metrics for classification problems: accuracy, precision, recall, and more! Many of these metrics are defined in terms of True Positives, True Negatives, False Positives, and False Negatives. Here we give...
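For instance, precision and recall fall straight out of the four counts; a quick sketch with scikit-learn's confusion_matrix (the labels are made up):

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision = tp / (tp + fp)  # of everything we flagged positive, how much was right
recall = tp / (tp + fn)     # of everything truly positive, how much did we catch
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(precision, recall, accuracy)
```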
The ColumnTransformer allows us to easily apply different transformations to different features. For example, we can scale some numerical features while leaving binary flags alone! This article walks through two examples using...
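A minimal sketch of the idea, assuming made-up column names:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import StandardScaler

X = pd.DataFrame({
    "age": [25, 40, 31],
    "income": [40_000, 85_000, 62_000],
    "is_member": [0, 1, 1],   # binary flag we want left untouched
})

ct = ColumnTransformer(
    [("scale", StandardScaler(), ["age", "income"])],
    remainder="passthrough",  # pass the binary flag through unchanged
)
X_transformed = ct.fit_transform(X)
```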
Many of the classifiers in sklearn support a predict_proba method for calculating probabilities. Often, these "probabilities" are really just a score from 0 to 1, where a higher score means the model is more confident in the prediction, but it...
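A quick sketch of the method itself (not the article's calibration discussion), on synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

# One row per sample, one column per class; each row sums to 1,
# but the values are scores rather than calibrated probabilities.
print(clf.predict_proba(X[:5]))
```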
We know to split our data into a training and a testing set before we do our preprocessing, let alone our modeling. Often we are not as careful when doing cross-validation; we should really do things like scale our data within cross-validation...
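One standard way to keep the scaling inside each fold is a Pipeline, sketched here with synthetic data:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(random_state=0)

# The scaler is re-fit on each training fold, so no information
# leaks from the validation fold into the preprocessing.
pipe = Pipeline([("scale", StandardScaler()), ("clf", LogisticRegression(max_iter=1000))])
scores = cross_val_score(pipe, X, y, cv=5)
```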
If your Ubuntu server is shut down (for example, by your AWS instance rebooting), Postgres may be left in an inconsistent state. This post walks through the steps of locating the lock files and getting Postgres up and running again.
ROC (Receiver Operating Characteristic) curves are a great way of measuring the performance of binary classifiers. They show how well a classifier's score (where a higher score means more likely to be in the "positive" class) does at separating...
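A minimal sketch of computing and plotting one with scikit-learn (synthetic data):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

X, y = make_classification(random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_train, y_train).predict_proba(X_test)[:, 1]
fpr, tpr, _ = roc_curve(y_test, scores)   # one (FPR, TPR) point per threshold
plt.plot(fpr, tpr)
print("AUC:", roc_auc_score(y_test, scores))
```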
Instead of learning how to undo accidentally committing a large file, what if we could prevent the commit in the first place? This article shows how to use git hooks to automatically check commits for validity before actually making the commit.
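A rough sketch of the idea as a Python pre-commit hook (the 5 MB limit and the exact check are placeholders; the article's hook may differ):

```python
#!/usr/bin/env python
# Saved as .git/hooks/pre-commit and made executable.
import os
import subprocess
import sys

LIMIT = 5 * 1024 * 1024  # arbitrary 5 MB threshold for this sketch

staged = subprocess.run(
    ["git", "diff", "--cached", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

too_big = [f for f in staged if os.path.isfile(f) and os.path.getsize(f) > LIMIT]
if too_big:
    print("Refusing to commit large files:", ", ".join(too_big))
    sys.exit(1)  # a non-zero exit code aborts the commit
```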
Environments allow you to distribute software to other users when you don't know what packages they have installed. This is a better solution than using requirements.txt, as the packages you install won't interfere with the user's system.
This is the eighth in a series of blog posts where we go through the process of taking a collection of functions and turning them into a deployable Python package. In this post, we summarize the steps needed to make and deploy a Python package.
This is the seventh in a series of blog posts where we go through the process of taking a collection of functions and turning them into a deployable Python package. In this post, we show how to deploy to TestPyPI.
This is the sixth in a series of blog posts where we go through the process of taking a collection of functions and turning them into a deployable Python package. In this post, we show how to include a CSV file in your package. This should be...
This is the fifth in a series of blog posts where we go through the process of taking a collection of functions and turning them into a deployable Python package. In this post, we use the tox package to automate some of the deployment steps.
This is the fourth in a series of blog posts where we go through the process of taking a collection of functions and turning them into a deployable Python package. In this post, we use pytest to write unit tests for the Roman numeral package.
This is the third in a series of blog posts where we go through the process of taking a collection of functions and turning them into a deployable Python package. In this post, we use setuptools to allow people to install our package on their system.
This is the second in a series of blog posts where we go through the process of taking a collection of functions and turning them into a deployable Python package. In this post, we add docstrings so our users can understand what our package does.
This is the first in a series of blog posts where we go through the process of taking a collection of functions and turning them into a deployable Python package. In this post, we create a Roman Numerals function, and make it into a Python module.
This article contains derivations when applying the shrinkage methods of empirical Bayes to proportion problems.
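The flavour of the result, as a sketch rather than the article's full derivation: fit a Beta(α, β) prior to all of the observed rates, and shrink each item's estimate toward the prior mean,

$$
\hat{p}_i = \frac{x_i + \alpha}{n_i + \alpha + \beta}
$$

where item $i$ has $x_i$ successes out of $n_i$ trials, and $\alpha$, $\beta$ are fit from the data.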
This article contains derivations when applying the shrinkage methods of empirical Bayes to average rating problems.
The expression "Data science is more art than science" makes my skin crawl. Data science, like all sciences, requires both strict methodology and a lot of creativity.
The highest and lowest rated books, films, and music are those that have very few ratings. This is because for small samples, it is easier for small fluctuations to dominate. Shrinkage is the technique for moving the average for a particular item...
Introduces SimpleProphet, a less automated version of Facebook's time series analysis package Prophet. Compares the approach of Prophet to other standard approaches: ARIMA and LSTMs.
What is the difference between a production database and a data warehouse? How does that differ from a data lake? Why would I use one over the other? With the volume of data around, there are more and more use cases for data storage. This article...
An article that outlines the standard approach to time series.
Links to a couple of useful resources for preparing for the SQL interview, whether it is for a data science or data analyst position.
Second article in the advanced web-scraping series. Clarifies the difference between static and dynamic pages. Shows how to use Chrome's Network Panel to intercept JavaScript and AJAX calls.
First article in the advanced web-scraping series. Clarifies the difference between static and dynamic pages. Outlines different approaches for getting data from pages generated with JavaScript and AJAX.
An example of using OAuth 2.0 to access an API with Python's requests module, using Spotify as the example API.
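A minimal sketch of the client-credentials flow (your own Spotify client ID and secret are assumed; the article's flow may use a different grant type):

```python
import requests

CLIENT_ID = "your-spotify-client-id"          # placeholder
CLIENT_SECRET = "your-spotify-client-secret"  # placeholder

# Exchange the client credentials for a bearer token.
token_response = requests.post(
    "https://accounts.spotify.com/api/token",
    data={"grant_type": "client_credentials"},
    auth=(CLIENT_ID, CLIENT_SECRET),
)
token = token_response.json()["access_token"]

# Use the token to call the API.
result = requests.get(
    "https://api.spotify.com/v1/search",
    params={"q": "radiohead", "type": "artist"},
    headers={"Authorization": f"Bearer {token}"},
)
print(result.json())
```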
How much does your name say about your age? We use the database of names from the Social Security Administration, as well as age distribution data from the US Census, to find out! See what your own name's age distribution looks like here.
The principles of Hadley Wickham's tidy data, and how it relates to long and wide form data.
We show how to take an Excel spreadsheet, with merged column headings, and process it for further analysis.
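One way to approach it in pandas, as a sketch with a hypothetical file name:

```python
import pandas as pd

# Two header rows: the merged headings end up in a column MultiIndex.
df = pd.read_excel("report.xlsx", header=[0, 1])

# Flatten the MultiIndex into single, analysis-friendly column names.
df.columns = ["_".join(str(part) for part in col).strip("_") for col in df.columns]
```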
What does it mean for data to be in long form vs wide form, and when would you use each? In Pandas, how do you convert from one form to another?
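The two directions, as a small sketch with made-up data:

```python
import pandas as pd

wide = pd.DataFrame({
    "name": ["Alice", "Bob"],
    "2019": [10, 20],
    "2020": [15, 25],
})

# Wide -> long: one row per (name, year) observation.
long = wide.melt(id_vars="name", var_name="year", value_name="sales")

# Long -> wide: back to one column per year.
wide_again = long.pivot(index="name", columns="year", values="sales")
```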
How to roll back in GitHub
What do you do when you have committed a large file to GitHub?
One technique, sometimes called "target" or "impact" encoding, encodes each category with the average value of the target variable for that category. The James-Stein encoder is a twist that "shrinks" the target value back to the global average to stop statistical...
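A hedged sketch using the category_encoders package (the column names and target are made up):

```python
import pandas as pd
import category_encoders as ce

X = pd.DataFrame({"city": ["Seattle", "Seattle", "Portland", "Spokane"]})
y = pd.Series([1, 0, 1, 1])  # target used to build the encoding

# Each city is replaced by its target mean, shrunk toward the global mean;
# rarely seen categories get pulled more strongly toward the overall average.
encoder = ce.JamesSteinEncoder(cols=["city"])
X_encoded = encoder.fit_transform(X, y)
```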
The scoring functions used in our models are often baked in (such as using cross-entropy in Logistic Regression). We do get some choices when cross-validating, however. For example, we can pick the regularization parameter by using the ROC area...
It seems that Starbucks is ubiquitous in Seattle. Where in Seattle is furthest from a Starbucks store? In order to work this out, we need a list of all the stores in Seattle. The open data project Socrata makes it easy to find out - you can pull...
Determine the sample size needed to discover differences between two treatments, given your tolerance for falsely accepting an inferior treatment and falsely rejecting a good one. Also includes a simulation of a trial, so that you can see...
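A sketch of the calculation with statsmodels (the rates and thresholds below are placeholders):

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.10, 0.12)  # baseline vs hoped-for conversion rate

n_per_group = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,   # chance of "accepting" a treatment that is really no better
    power=0.80,   # chance of detecting a genuinely better treatment
    alternative="two-sided",
)
print(f"Roughly {n_per_group:.0f} samples per treatment")
```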