From e63471a14580c438f6f6091041a9620f0626dd1b Mon Sep 17 00:00:00 2001 From: Nikhil Reddy Date: Thu, 31 Oct 2024 19:00:15 -0700 Subject: [PATCH] publish note 19 --- _quarto.yml | 2 +- docs/case_study_HCE/case_study_HCE.html | 6 + .../loss_transformations.html | 34 +- .../figure-pdf/cell-13-output-1.pdf | Bin 9193 -> 9193 bytes .../figure-pdf/cell-14-output-1.pdf | Bin 15000 -> 15000 bytes .../figure-pdf/cell-15-output-1.pdf | Bin 8394 -> 8394 bytes .../figure-pdf/cell-4-output-1.pdf | Bin 11041 -> 11041 bytes .../figure-pdf/cell-5-output-1.pdf | Bin 103470 -> 103470 bytes .../figure-pdf/cell-7-output-2.pdf | Bin 11239 -> 11239 bytes .../figure-pdf/cell-8-output-1.pdf | Bin 9752 -> 9752 bytes docs/cv_regularization/cv_reg.html | 20 +- docs/eda/eda.html | 162 +- .../eda_files/figure-pdf/cell-62-output-1.pdf | Bin 16671 -> 16671 bytes .../eda_files/figure-pdf/cell-67-output-1.pdf | Bin 10991 -> 10991 bytes .../eda_files/figure-pdf/cell-68-output-1.pdf | Bin 12638 -> 12638 bytes .../eda_files/figure-pdf/cell-69-output-1.pdf | Bin 9239 -> 9239 bytes .../eda_files/figure-pdf/cell-71-output-1.pdf | Bin 19825 -> 19825 bytes .../eda_files/figure-pdf/cell-75-output-1.pdf | Bin 16799 -> 16799 bytes .../eda_files/figure-pdf/cell-76-output-1.pdf | Bin 21577 -> 21577 bytes .../eda_files/figure-pdf/cell-77-output-1.pdf | Bin 11851 -> 11851 bytes .../feature_engineering.html | 32 +- .../figure-pdf/cell-8-output-2.pdf | Bin 9247 -> 9247 bytes .../figure-pdf/cell-9-output-2.pdf | Bin 9545 -> 9545 bytes docs/gradient_descent/gradient_descent.html | 56 +- .../figure-pdf/cell-21-output-2.pdf | Bin 11767 -> 11767 bytes docs/index.html | 6 + docs/inference_causality/images/bootstrap.png | Bin 0 -> 380665 bytes .../images/bootstrapped_samples.png | Bin 0 -> 320160 bytes .../images/confidence_interval.png | Bin 0 -> 211921 bytes .../inference_causality/images/confounder.png | Bin 0 -> 34527 bytes .../inference_causality/images/experiment.png | Bin 0 -> 65589 bytes .../images/observational.png | Bin 0 -> 85917 bytes .../images/plover_eggs.jpg | Bin 0 -> 182433 bytes .../images/population_samples.png | Bin 0 -> 173371 bytes .../inference_causality.html | 2377 +++++++++++++++++ .../figure-html/cell-14-output-2.png | Bin 0 -> 155998 bytes .../figure-html/cell-16-output-2.png | Bin 0 -> 42642 bytes .../figure-pdf/cell-14-output-2.pdf | Bin 0 -> 20716 bytes .../figure-pdf/cell-16-output-2.pdf | Bin 0 -> 17984 bytes docs/intro_lec/introduction.html | 6 + docs/intro_to_modeling/intro_to_modeling.html | 22 +- .../figure-html/cell-2-output-1.png | Bin 86597 -> 86742 bytes .../figure-pdf/cell-2-output-1.pdf | Bin 9956 -> 9973 bytes .../figure-pdf/cell-3-output-1.pdf | Bin 15408 -> 15408 bytes .../figure-pdf/cell-7-output-1.pdf | Bin 14938 -> 14938 bytes .../figure-pdf/cell-9-output-1.pdf | Bin 16000 -> 16000 bytes docs/ols/ols.html | 12 +- docs/pandas_1/pandas_1.html | 100 +- docs/pandas_2/pandas_2.html | 148 +- docs/pandas_3/pandas_3.html | 1650 ++++++------ docs/probability_1/probability_1.html | 6 + docs/probability_2/probability_2.html | 10 + docs/regex/regex.html | 54 +- docs/sampling/sampling.html | 40 +- .../figure-html/cell-13-output-2.png | Bin 33053 -> 30932 bytes .../figure-html/cell-15-output-2.png | Bin 57756 -> 57172 bytes docs/search.json | 64 +- docs/visualization_1/visualization_1.html | 50 +- .../figure-pdf/cell-10-output-2.pdf | Bin 14751 -> 14751 bytes .../figure-pdf/cell-11-output-1.pdf | Bin 11421 -> 11421 bytes .../figure-pdf/cell-12-output-1.pdf | Bin 12962 -> 12962 bytes 
.../figure-pdf/cell-13-output-1.pdf | Bin 15653 -> 15653 bytes .../figure-pdf/cell-14-output-1.pdf | Bin 13198 -> 13198 bytes .../figure-pdf/cell-15-output-1.pdf | Bin 13903 -> 13903 bytes .../figure-pdf/cell-17-output-2.pdf | Bin 16169 -> 16169 bytes .../figure-pdf/cell-18-output-2.pdf | Bin 11504 -> 11504 bytes .../figure-pdf/cell-19-output-2.pdf | Bin 13869 -> 13869 bytes .../figure-pdf/cell-20-output-2.pdf | Bin 14660 -> 14660 bytes .../figure-pdf/cell-21-output-1.pdf | Bin 11648 -> 11648 bytes .../figure-pdf/cell-22-output-1.pdf | Bin 11461 -> 11461 bytes .../figure-pdf/cell-23-output-1.pdf | Bin 12128 -> 12128 bytes .../figure-pdf/cell-3-output-1.pdf | Bin 11274 -> 11274 bytes .../figure-pdf/cell-4-output-1.pdf | Bin 11328 -> 11328 bytes .../figure-pdf/cell-5-output-1.pdf | Bin 11395 -> 11395 bytes .../figure-pdf/cell-7-output-1.pdf | Bin 23251 -> 23251 bytes .../figure-pdf/cell-8-output-1.pdf | Bin 11931 -> 11931 bytes .../figure-pdf/cell-9-output-1.pdf | Bin 13379 -> 13379 bytes docs/visualization_2/visualization_2.html | 56 +- .../figure-html/cell-18-output-1.png | Bin 98963 -> 98531 bytes .../figure-pdf/cell-10-output-1.pdf | Bin 10169 -> 10169 bytes .../figure-pdf/cell-11-output-1.pdf | Bin 5887 -> 5887 bytes .../figure-pdf/cell-12-output-1.pdf | Bin 11927 -> 11927 bytes .../figure-pdf/cell-13-output-1.pdf | Bin 14012 -> 14012 bytes .../figure-pdf/cell-14-output-1.pdf | Bin 13643 -> 13643 bytes .../figure-pdf/cell-15-output-1.pdf | Bin 13905 -> 13905 bytes .../figure-pdf/cell-16-output-1.pdf | Bin 17703 -> 17703 bytes .../figure-pdf/cell-17-output-1.pdf | Bin 15914 -> 15914 bytes .../figure-pdf/cell-18-output-1.pdf | Bin 17753 -> 17762 bytes .../figure-pdf/cell-19-output-1.pdf | Bin 15715 -> 15715 bytes .../figure-pdf/cell-20-output-1.pdf | Bin 14911 -> 14911 bytes .../figure-pdf/cell-21-output-1.pdf | Bin 40952 -> 40952 bytes .../figure-pdf/cell-22-output-1.pdf | Bin 13919 -> 13919 bytes .../figure-pdf/cell-23-output-1.pdf | Bin 14978 -> 14978 bytes .../figure-pdf/cell-24-output-1.pdf | Bin 16210 -> 16210 bytes .../figure-pdf/cell-25-output-2.pdf | Bin 16563 -> 16563 bytes .../figure-pdf/cell-26-output-1.pdf | Bin 14791 -> 14791 bytes .../figure-pdf/cell-3-output-1.pdf | Bin 12068 -> 12068 bytes .../figure-pdf/cell-4-output-1.pdf | Bin 9274 -> 9274 bytes .../figure-pdf/cell-5-output-1.pdf | Bin 10244 -> 10244 bytes .../figure-pdf/cell-6-output-1.pdf | Bin 10243 -> 10243 bytes .../figure-pdf/cell-7-output-1.pdf | Bin 10130 -> 10130 bytes .../figure-pdf/cell-8-output-1.pdf | Bin 12591 -> 12591 bytes .../figure-pdf/cell-9-output-1.pdf | Bin 11286 -> 11286 bytes index.tex | 1124 +++++++- 104 files changed, 4770 insertions(+), 1267 deletions(-) create mode 100644 docs/inference_causality/images/bootstrap.png create mode 100644 docs/inference_causality/images/bootstrapped_samples.png create mode 100644 docs/inference_causality/images/confidence_interval.png create mode 100644 docs/inference_causality/images/confounder.png create mode 100644 docs/inference_causality/images/experiment.png create mode 100644 docs/inference_causality/images/observational.png create mode 100644 docs/inference_causality/images/plover_eggs.jpg create mode 100644 docs/inference_causality/images/population_samples.png create mode 100644 docs/inference_causality/inference_causality.html create mode 100644 docs/inference_causality/inference_causality_files/figure-html/cell-14-output-2.png create mode 100644 docs/inference_causality/inference_causality_files/figure-html/cell-16-output-2.png create mode 100644 
docs/inference_causality/inference_causality_files/figure-pdf/cell-14-output-2.pdf create mode 100644 docs/inference_causality/inference_causality_files/figure-pdf/cell-16-output-2.pdf diff --git a/_quarto.yml b/_quarto.yml index 543bf95ce..2fa9025b1 100644 --- a/_quarto.yml +++ b/_quarto.yml @@ -34,7 +34,7 @@ book: - cv_regularization/cv_reg.qmd - probability_1/probability_1.qmd - probability_2/probability_2.qmd - # - inference_causality/inference_causality.qmd + - inference_causality/inference_causality.qmd # - case_study_climate/case_study_climate.qmd # - sql_I/sql_I.qmd # - sql_II/sql_II.qmd diff --git a/docs/case_study_HCE/case_study_HCE.html b/docs/case_study_HCE/case_study_HCE.html index 612715fdf..cce21d7cc 100644 --- a/docs/case_study_HCE/case_study_HCE.html +++ b/docs/case_study_HCE/case_study_HCE.html @@ -259,6 +259,12 @@ 18  Estimators, Bias, and Variance + + diff --git a/docs/constant_model_loss_transformations/loss_transformations.html b/docs/constant_model_loss_transformations/loss_transformations.html index 9e7ff6dd6..bbe1e6458 100644 --- a/docs/constant_model_loss_transformations/loss_transformations.html +++ b/docs/constant_model_loss_transformations/loss_transformations.html @@ -288,6 +288,12 @@ 18  Estimators, Bias, and Variance + + @@ -495,7 +501,7 @@

+
Code
import numpy as np
@@ -510,7 +516,7 @@ 

data_linear = dugongs[["Length", "Age"]]

-
+
Code
# Big font helper
@@ -532,7 +538,7 @@ 

plt.style.use("default") # Revert style to default mpl

-
+
Code
# Constant Model + MSE
@@ -565,7 +571,7 @@ 

+
Code
# SLR + MSE
@@ -628,7 +634,7 @@ 

+
Code
# Predictions
@@ -640,7 +646,7 @@ 

yhats_linear = [theta_0_hat + theta_1_hat * x for x in xs]

-
+
Code
# Constant Model Rug Plot
@@ -670,7 +676,7 @@ 

+
Code
# SLR model scatter plot 
@@ -784,7 +790,7 @@ 

11.4 Comparing Loss Functions

We’ve now tried our hand at fitting a model under both MSE and MAE cost functions. How do the two results compare?

Let’s consider a dataset where each entry represents the number of drinks sold at a bubble tea store each day. We’ll fit a constant model to predict the number of drinks that will be sold tomorrow.

-
+
drinks = np.array([20, 21, 22, 29, 33])
 drinks
@@ -792,7 +798,7 @@

+
np.mean(drinks), np.median(drinks)
(np.float64(25.0), np.float64(22.0))
@@ -802,7 +808,7 @@

Notice that the MSE above is a smooth function – it is differentiable at all points, making it easy to minimize using numerical methods. The MAE, in contrast, is not differentiable at each of its “kinks.” We’ll explore how the smoothness of the cost function can impact our ability to apply numerical optimization in a few weeks.
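To see both behaviors concretely, we can evaluate each loss over a grid of candidate \(\theta\) values for the constant model. The sketch below (the grid bounds and resolution are arbitrary choices) shows the smooth MSE bottoming out at the mean, while the piecewise-linear MAE is minimized at the median.

import numpy as np

drinks = np.array([20, 21, 22, 29, 33])
thetas = np.linspace(15, 40, 501)

# For the constant model, each loss is a function of a single parameter theta
mse = np.array([np.mean((drinks - t) ** 2) for t in thetas])
mae = np.array([np.mean(np.abs(drinks - t)) for t in thetas])

# The smooth MSE is minimized at the mean; the kinked MAE at the median
print(thetas[np.argmin(mse)], np.mean(drinks))    # 25.0 25.0
print(thetas[np.argmin(mae)], np.median(drinks))  # 22.0 22.0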

How do outliers affect each cost function? Imagine we append a large outlying value, 1033, to the dataset. The mean of the data increases substantially, while the median is nearly unaffected.

-
+
drinks_with_outlier = np.append(drinks, 1033)
 display(drinks_with_outlier)
 np.mean(drinks_with_outlier), np.median(drinks_with_outlier)
@@ -816,7 +822,7 @@

This means that under the MSE, the optimal model parameter \(\hat{\theta}\) is strongly affected by the presence of outliers. Under the MAE, the optimal parameter is not as influenced by outlying data. We can generalize this by saying that the MSE is sensitive to outliers, while the MAE is robust to outliers.

Let’s try another experiment. This time, we’ll add an additional, non-outlying datapoint to the data.

-
+
drinks_with_additional_observation = np.append(drinks, 35)
 drinks_with_additional_observation
@@ -888,7 +894,7 @@

+
Code
# `corrcoef` computes the correlation coefficient between two variables
@@ -920,7 +926,7 @@ 

and "Length". What is making the raw data deviate from a linear relationship? Notice that the data points with "Length" greater than 2.6 have disproportionately high values of "Age" relative to the rest of the data. If we could manipulate these data points to have lower "Age" values, we’d “shift” these points downwards and reduce the curvature in the data. Applying a logarithmic transformation to \(y_i\) (that is, taking \(\log(\) "Age" \()\) ) would achieve just that.

An important word on \(\log\): in Data 100 (and most upper-division STEM courses), \(\log\) denotes the natural logarithm with base \(e\). The base-10 logarithm, where relevant, is indicated by \(\log_{10}\).
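NumPy follows the same convention: np.log computes the natural logarithm, while np.log10 computes the base-10 logarithm.

import numpy as np

np.log(np.e)   # 1.0 -- natural logarithm, base e
np.log10(100)  # 2.0 -- base-10 logarithm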

-
+
Code
z = np.log(y)
@@ -955,7 +961,7 @@ 

\[\log{(y)} = \theta_0 + \theta_1 x\] \[y = e^{\theta_0 + \theta_1 x}\] \[y = (e^{\theta_0})e^{\theta_1 x}\] \[y = C e^{k x}\]

for some constants \(C = e^{\theta_0}\) and \(k = \theta_1\).

\(y\) is an exponential function of \(x\). Applying an exponential fit to the untransformed variables corroborates this finding.
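As a minimal sketch of that fit (assuming x and y hold the untransformed "Length" and "Age" values used earlier in this chapter), we can estimate \(\theta_0\) and \(\theta_1\) on the log-transformed data, then exponentiate to recover \(C\) and \(k\):

import numpy as np

# Fit log(y) = theta_0 + theta_1 * x; polyfit returns the slope first
theta_1_hat, theta_0_hat = np.polyfit(x, np.log(y), deg=1)

C = np.exp(theta_0_hat)  # C = e^{theta_0}
k = theta_1_hat          # k = theta_1
y_fitted = C * np.exp(k * x)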

-
+
Code
plt.figure(dpi=120, figsize=(4, 3))
diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-13-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-13-output-1.pdf
index f42366fa7a3b6631b79790b6d8668e4507292250..07c1e6f65875057472c82471145fe34ff4dce962 100644
GIT binary patch
delta 21
dcmaFq{?dKJYXuHtLqiKw6BF~zpB26_0RUvN2!j9s

delta 21
dcmaFq{?dKJYXuG?OG9%56H|-LpB26_0RUv$2!;Rv

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-14-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-14-output-1.pdf
index 564822db532f08909dd19d021aca798f1087be51..343a8e2338e0b4d89e99793d72aab3262cbf40da 100644
GIT binary patch
delta 21
ccmbPHI-_($t{I21p`nGTiHZ51095S;oB#j-

delta 21
ccmbPHI-_($t{I1srJ=cjiK)frQZr>1096(Sp8x;=

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-15-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-15-output-1.pdf
index 2559b0b5ec5706cf7c91d10e7d2353991456e454..c9c04a5670bbfed9c17a1eb70268a969c020f85a 100644
GIT binary patch
delta 21
dcmX@*c*=3Z16dAZLqiKw6BF~zFJvDv0RUaQ2nhfH

delta 21
dcmX@*c*=3Z16d9uOG9%56H|-LFJvDv0RUa(2n+xK

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-4-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-4-output-1.pdf
index 5ba8efe0070e99ab2b2d2e967b5ee9c0ebca77b0..31b8de9564b87c597ad686a01b1d1cfa64974884 100644
GIT binary patch
delta 21
ccmZ1&wlHkNFLe%MLqiKw6BDz|Od9gc09U03X8-^I

delta 21
ccmZ1&wlHkNFLe$hOG9%56I1ieOd9gc09VciY5)KL

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-5-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-5-output-1.pdf
index 2a5dc156b51bd25c5c3a12ba5047d874d995985d..5dc7499a40600521a2db575f9a5d771d707072e5 100644
GIT binary patch
delta 26
icmZ3tf^FRjwuUW?^ZGfA4Gk?!O-#(UFY9OQU;zM#e+fMR

delta 26
icmZ3tf^FRjwuUW?^ZGfAEDg;KOiV4dFY9OQU;zM#s|i2=

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-7-output-2.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-7-output-2.pdf
index 058630ad7d5ec8755559c9ad7283b362429e6713..3dec4892ffb9b0ed8b6001599c61386c2aff0397 100644
GIT binary patch
delta 21
ccmaDJ{ycoc3=IxrLqiKw6BF~z3pCuB0bh6rTL1t6

delta 21
ccmaDJ{ycoc3=Iw=OG9%56H|-L3pCuB0bij9UH||9

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-8-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-8-output-1.pdf
index 68e9d839bab2ddae221880f7c36a48b426130667..95163c1d7f51ae3387f6ce88f820d61a1d4bc348 100644
GIT binary patch
delta 21
ccmbQ?Gs9=YCnXMJLqiKw6BF~zzm(*d0awTeCIA2c

delta 21
ccmbQ?Gs9=YCnXLeOG9%56H|-Lzm(*d0ax({DF6Tf

diff --git a/docs/cv_regularization/cv_reg.html b/docs/cv_regularization/cv_reg.html
index e6bef47d8..ab7d84677 100644
--- a/docs/cv_regularization/cv_reg.html
+++ b/docs/cv_regularization/cv_reg.html
@@ -291,6 +291,12 @@
   
  18  Estimators, Bias, and Variance
   
+ +
@@ -394,7 +400,7 @@


In sklearn, the train_test_split function (documentation) of the model_selection module allows us to automatically generate train-test splits.

We will work with the vehicles dataset from previous lectures. As before, we will attempt to predict the mpg of a vehicle from transformations of its hp. In the cell below, we allocate 20% of the full dataset to testing, and the remaining 80% to training.

-
+
Code
import pandas as pd
@@ -413,7 +419,7 @@ 

Y = vehicles["mpg"]

-
+
from sklearn.model_selection import train_test_split
 
 # `test_size` specifies the proportion of the full dataset that should be allocated to testing
@@ -435,7 +441,7 @@ 

After performing our train-test split, we fit a model to the training set and assess its performance on the test set.

-
+
import sklearn.linear_model as lm
 from sklearn.metrics import mean_squared_error
 
@@ -615,7 +621,7 @@ 

\(\lambda\) is the regularization penalty hyperparameter; it needs to be determined prior to training the model, so we must find the best value via cross-validation.
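One way to perform that search, sketched here with sklearn's GridSearchCV and the Lasso class introduced below (the candidate alpha grid is an arbitrary choice; sklearn's alpha parameter corresponds to our \(\lambda\)):

from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

# Evaluate several candidate lambda (alpha) values with 5-fold cross-validation
search = GridSearchCV(
    Lasso(),
    param_grid={"alpha": [0.001, 0.01, 0.1, 1, 10]},
    scoring="neg_mean_squared_error",
    cv=5,
)
search.fit(X_train, Y_train)
search.best_params_  # the alpha (lambda) with the lowest cross-validated MSE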

The process of finding the optimal \(\hat{\theta}\) to minimize our new objective function is called L1 regularization. It is also sometimes known by the acronym “LASSO”, which stands for “Least Absolute Shrinkage and Selection Operator.”

Unlike ordinary least squares, which has the closed-form solution \(\hat{\theta}_{OLS} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\), L1 regularization has no closed-form solution for the optimal parameter vector. Instead, we use the Lasso model class of sklearn.

-
+
import sklearn.linear_model as lm
 
 # The alpha parameter represents our lambda term
@@ -633,7 +639,7 @@ 

16.2.3 Scaling Features for Regularization

The regularization procedure we just performed had one subtle issue. To see what it is, let’s take a look at the design matrix for our lasso_model.

-
+
Code
X_train.head()
@@ -696,7 +702,7 @@

\(\hat{y}\) because it is so much greater than the values of the other features. For hp to have much of an impact at all on the prediction, it must be scaled by a large model parameter.

By inspecting the fitted parameters of our model, we see that this is the case – the parameter for hp is much larger in magnitude than the parameter for hp^4.

-
+
pd.DataFrame({"Feature":X_train.columns, "Parameter":lasso_model.coef_})
@@ -760,7 +766,7 @@

\[\hat\theta_{\text{ridge}} = (\mathbb{X}^{\top}\mathbb{X} + n\lambda I)^{-1}\mathbb{X}^{\top}\mathbb{Y}\]

This solution exists even if \(\mathbb{X}\) is not full column rank. This is a major reason why L2 regularization is often used – it can produce a solution even when there is collinearity in the features. We will discuss the concept of collinearity in a future lecture, but we will not derive this result in Data 100, as it involves a fair bit of matrix calculus.
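The formula above translates directly into code. Here is a minimal numpy sketch (it assumes \(\mathbb{X}\) and \(\mathbb{Y}\) are plain arrays, with any intercept handled by a column of ones in \(\mathbb{X}\)):

import numpy as np

def ridge_closed_form(X, Y, lam):
    # theta_hat = (X^T X + n * lambda * I)^{-1} X^T Y
    n, p = X.shape
    return np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y)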

In sklearn, we perform L2 regularization using the Ridge class, which minimizes the L2 objective function. Notice that we scale the data before regularizing.

-
+
ridge_model = lm.Ridge(alpha=1) # alpha represents the hyperparameter lambda
 ridge_model.fit(X_train, Y_train)
 
diff --git a/docs/eda/eda.html b/docs/eda/eda.html
index c6a078d4e..57ae3ec6d 100644
--- a/docs/eda/eda.html
+++ b/docs/eda/eda.html
@@ -291,6 +291,12 @@
   
  18  Estimators, Bias, and Variance
   
+ +
@@ -379,7 +385,7 @@

Data Cleaning and EDA

-
+
Code
import numpy as np
@@ -444,7 +450,7 @@ 

5.1.1.1 CSV

CSVs, which stand for Comma-Separated Values, are a common tabular data format. In the past two pandas lectures, we briefly touched on the idea of file format: the way data is encoded in a file for storage. Specifically, our elections and babynames datasets were stored and loaded as CSVs:

-
+
pd.read_csv("data/elections.csv").head(5)
@@ -515,7 +521,7 @@