From c83a632dd1248aff9c9ac07cec4ef5024858bd26 Mon Sep 17 00:00:00 2001
From: Nikhil Reddy
Date: Tue, 17 Dec 2024 11:48:49 -0800
Subject: [PATCH] minor sklearn fix

---
 .../loss_transformations.html | 28 +-
 .../figure-pdf/cell-13-output-1.pdf | Bin 9193 -> 9193 bytes
 .../figure-pdf/cell-14-output-1.pdf | Bin 15000 -> 15000 bytes
 .../figure-pdf/cell-15-output-1.pdf | Bin 8394 -> 8394 bytes
 .../figure-pdf/cell-4-output-1.pdf | Bin 11041 -> 11041 bytes
 .../figure-pdf/cell-5-output-1.pdf | Bin 103470 -> 103470 bytes
 .../figure-pdf/cell-7-output-2.pdf | Bin 11239 -> 11239 bytes
 .../figure-pdf/cell-8-output-1.pdf | Bin 9752 -> 9752 bytes
 docs/cv_regularization/cv_reg.html | 14 +-
 docs/eda/eda.html | 156 +++++-----
 .../eda_files/figure-pdf/cell-62-output-1.pdf | Bin 16671 -> 16671 bytes
 .../eda_files/figure-pdf/cell-67-output-1.pdf | Bin 10991 -> 10991 bytes
 .../eda_files/figure-pdf/cell-68-output-1.pdf | Bin 12638 -> 12638 bytes
 .../eda_files/figure-pdf/cell-69-output-1.pdf | Bin 9239 -> 9239 bytes
 .../eda_files/figure-pdf/cell-71-output-1.pdf | Bin 19825 -> 19825 bytes
 .../eda_files/figure-pdf/cell-75-output-1.pdf | Bin 16799 -> 16799 bytes
 .../eda_files/figure-pdf/cell-76-output-1.pdf | Bin 21577 -> 21577 bytes
 .../eda_files/figure-pdf/cell-77-output-1.pdf | Bin 11851 -> 11851 bytes
 .../feature_engineering.html | 24 +-
 .../figure-pdf/cell-8-output-2.pdf | Bin 9247 -> 9247 bytes
 .../figure-pdf/cell-9-output-2.pdf | Bin 9545 -> 9545 bytes
 docs/gradient_descent/gradient_descent.html | 52 ++--
 .../figure-pdf/cell-21-output-2.pdf | Bin 11767 -> 11767 bytes
 .../inference_causality.html | 46 +--
 .../figure-pdf/cell-14-output-2.pdf | Bin 20716 -> 20716 bytes
 .../figure-pdf/cell-16-output-2.pdf | Bin 17984 -> 17984 bytes
 docs/intro_to_modeling/intro_to_modeling.html | 16 +-
 .../figure-html/cell-2-output-1.png | Bin 86726 -> 86902 bytes
 .../figure-pdf/cell-2-output-1.pdf | Bin 9952 -> 9976 bytes
 .../figure-pdf/cell-3-output-1.pdf | Bin 15408 -> 15408 bytes
 .../figure-pdf/cell-7-output-1.pdf | Bin 14938 -> 14938 bytes
 .../figure-pdf/cell-9-output-1.pdf | Bin 16000 -> 16000 bytes
 .../logistic_regression_1/logistic_reg_1.html | 24 +-
 .../figure-html/cell-3-output-1.png | Bin 119024 -> 118464 bytes
 .../figure-html/cell-4-output-1.png | Bin 135747 -> 134361 bytes
 .../figure-html/cell-5-output-1.png | Bin 174579 -> 174071 bytes
 .../figure-html/cell-8-output-1.png | Bin 182152 -> 181822 bytes
 .../figure-pdf/cell-10-output-1.pdf | Bin 13791 -> 13791 bytes
 .../figure-pdf/cell-11-output-1.pdf | Bin 13937 -> 13937 bytes
 .../figure-pdf/cell-13-output-1.pdf | Bin 10478 -> 10478 bytes
 .../figure-pdf/cell-3-output-1.pdf | Bin 19603 -> 19601 bytes
 .../figure-pdf/cell-4-output-1.pdf | Bin 19678 -> 19615 bytes
 .../figure-pdf/cell-5-output-1.pdf | Bin 19973 -> 19991 bytes
 .../figure-pdf/cell-6-output-1.pdf | Bin 11733 -> 11733 bytes
 .../figure-pdf/cell-7-output-1.pdf | Bin 12423 -> 12423 bytes
 .../figure-pdf/cell-8-output-1.pdf | Bin 25384 -> 25411 bytes
 docs/ols/ols.html | 6 +-
 docs/pandas_1/pandas_1.html | 94 +++---
 docs/pandas_2/pandas_2.html | 134 ++++----
 docs/pandas_3/pandas_3.html | 116 +++----
 docs/pca_1/pca_1.html | 34 +--
 docs/pca_2/pca_2.html | 106 +++----
 docs/regex/regex.html | 48 +--
 docs/sampling/sampling.html | 34 +--
 .../figure-html/cell-13-output-2.png | Bin 34216 -> 33276 bytes
 .../figure-html/cell-15-output-2.png | Bin 56590 -> 58296 bytes
 docs/search.json | 286 +++++++++---------
 docs/sql_I/sql_I.html | 40 +--
 docs/sql_II/sql_II.html | 140 ++++-----
 docs/visualization_1/visualization_1.html | 44 +--
 .../figure-pdf/cell-10-output-2.pdf | Bin 14751 -> 14751 bytes
 .../figure-pdf/cell-11-output-1.pdf | Bin 11421 -> 11421 bytes
 .../figure-pdf/cell-12-output-1.pdf | Bin 12962 -> 12962 bytes
 .../figure-pdf/cell-13-output-1.pdf | Bin 15653 -> 15653 bytes
 .../figure-pdf/cell-14-output-1.pdf | Bin 13198 -> 13198 bytes
 .../figure-pdf/cell-15-output-1.pdf | Bin 13903 -> 13903 bytes
 .../figure-pdf/cell-17-output-2.pdf | Bin 16169 -> 16169 bytes
 .../figure-pdf/cell-18-output-2.pdf | Bin 11504 -> 11504 bytes
 .../figure-pdf/cell-19-output-2.pdf | Bin 13869 -> 13869 bytes
 .../figure-pdf/cell-20-output-2.pdf | Bin 14660 -> 14660 bytes
 .../figure-pdf/cell-21-output-1.pdf | Bin 11648 -> 11648 bytes
 .../figure-pdf/cell-22-output-1.pdf | Bin 11461 -> 11461 bytes
 .../figure-pdf/cell-23-output-1.pdf | Bin 12128 -> 12128 bytes
 .../figure-pdf/cell-3-output-1.pdf | Bin 11274 -> 11274 bytes
 .../figure-pdf/cell-4-output-1.pdf | Bin 11328 -> 11328 bytes
 .../figure-pdf/cell-5-output-1.pdf | Bin 11395 -> 11395 bytes
 .../figure-pdf/cell-7-output-1.pdf | Bin 23251 -> 23251 bytes
 .../figure-pdf/cell-8-output-1.pdf | Bin 11931 -> 11931 bytes
 .../figure-pdf/cell-9-output-1.pdf | Bin 13379 -> 13379 bytes
 docs/visualization_2/visualization_2.html | 50 +--
 .../figure-html/cell-18-output-1.png | Bin 99112 -> 98631 bytes
 .../figure-pdf/cell-10-output-1.pdf | Bin 10169 -> 10169 bytes
 .../figure-pdf/cell-11-output-1.pdf | Bin 5887 -> 5887 bytes
 .../figure-pdf/cell-12-output-1.pdf | Bin 11927 -> 11927 bytes
 .../figure-pdf/cell-13-output-1.pdf | Bin 14012 -> 14012 bytes
 .../figure-pdf/cell-14-output-1.pdf | Bin 13643 -> 13643 bytes
 .../figure-pdf/cell-15-output-1.pdf | Bin 13905 -> 13905 bytes
 .../figure-pdf/cell-16-output-1.pdf | Bin 17703 -> 17703 bytes
 .../figure-pdf/cell-17-output-1.pdf | Bin 15914 -> 15914 bytes
 .../figure-pdf/cell-18-output-1.pdf | Bin 17743 -> 17771 bytes
 .../figure-pdf/cell-19-output-1.pdf | Bin 15715 -> 15715 bytes
 .../figure-pdf/cell-20-output-1.pdf | Bin 14911 -> 14911 bytes
 .../figure-pdf/cell-21-output-1.pdf | Bin 40952 -> 40952 bytes
 .../figure-pdf/cell-22-output-1.pdf | Bin 13919 -> 13919 bytes
 .../figure-pdf/cell-23-output-1.pdf | Bin 14978 -> 14978 bytes
 .../figure-pdf/cell-24-output-1.pdf | Bin 16210 -> 16210 bytes
 .../figure-pdf/cell-25-output-2.pdf | Bin 16563 -> 16563 bytes
 .../figure-pdf/cell-26-output-1.pdf | Bin 14791 -> 14791 bytes
 .../figure-pdf/cell-3-output-1.pdf | Bin 12068 -> 12068 bytes
 .../figure-pdf/cell-4-output-1.pdf | Bin 9274 -> 9274 bytes
 .../figure-pdf/cell-5-output-1.pdf | Bin 10244 -> 10244 bytes
 .../figure-pdf/cell-6-output-1.pdf | Bin 10243 -> 10243 bytes
 .../figure-pdf/cell-7-output-1.pdf | Bin 10130 -> 10130 bytes
 .../figure-pdf/cell-8-output-1.pdf | Bin 12591 -> 12591 bytes
 .../figure-pdf/cell-9-output-1.pdf | Bin 11286 -> 11286 bytes
 gradient_descent/gradient_descent.qmd | 2 +-
 index.tex | 192 ++++------
 107 files changed, 843 insertions(+), 843 deletions(-)

diff --git a/docs/constant_model_loss_transformations/loss_transformations.html b/docs/constant_model_loss_transformations/loss_transformations.html
index 6d7bfbff..cf8378c2 100644
--- a/docs/constant_model_loss_transformations/loss_transformations.html
+++ b/docs/constant_model_loss_transformations/loss_transformations.html
@@ -543,7 +543,7 @@

+
Code
import numpy as np
@@ -558,7 +558,7 @@ 

data_linear = dugongs[["Length", "Age"]]

-
+
Code
# Big font helper
@@ -580,7 +580,7 @@ 

plt.style.use("default") # Revert style to default mpl

-
+
Code
# Constant Model + MSE
@@ -613,7 +613,7 @@ 

+
Code
# SLR + MSE
@@ -676,7 +676,7 @@ 

+
Code
# Predictions
@@ -688,7 +688,7 @@ 

yhats_linear = [theta_0_hat + theta_1_hat * x for x in xs]

-
+
Code
# Constant Model Rug Plot
@@ -718,7 +718,7 @@ 

+
Code
# SLR model scatter plot 
@@ -832,7 +832,7 @@ 

11.4 Comparing Loss Functions

We’ve now tried our hand at fitting a model under both MSE and MAE cost functions. How do the two results compare?

Let’s consider a dataset where each entry represents the number of drinks sold at a bubble tea store each day. We’ll fit a constant model to predict the number of drinks that will be sold tomorrow.

-
+
drinks = np.array([20, 21, 22, 29, 33])
 drinks
@@ -840,7 +840,7 @@

+
np.mean(drinks), np.median(drinks)
(np.float64(25.0), np.float64(22.0))
@@ -850,7 +850,7 @@

Notice that the MSE above is a smooth function – it is differentiable at all points, making it easy to minimize using numerical methods. The MAE, in contrast, is not differentiable at each of its “kinks.” We’ll explore how the smoothness of the cost function can impact our ability to apply numerical optimization in a few weeks.
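Both claims are easy to check numerically. Below is a minimal sketch (reusing the drinks array from above; the grid of candidate \(\theta\) values is arbitrary) that evaluates the average squared and absolute loss at each candidate. The grid minimizers land on the mean and the median, matching the computation above.

thetas = np.linspace(15, 40, 501)  # candidate values of theta
mse = np.array([np.mean((drinks - t) ** 2) for t in thetas])
mae = np.array([np.mean(np.abs(drinks - t)) for t in thetas])

# The MSE is minimized at the mean; the MAE at the median
thetas[np.argmin(mse)], thetas[np.argmin(mae)]  # (25.0, 22.0)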

How do outliers affect each cost function? Imagine we append an outlying value of 1033 to the dataset. The mean of the data increases substantially, while the median is nearly unaffected.

-
+
drinks_with_outlier = np.append(drinks, 1033)
 display(drinks_with_outlier)
 np.mean(drinks_with_outlier), np.median(drinks_with_outlier)
@@ -864,7 +864,7 @@

This means that under the MSE, the optimal model parameter \(\hat{\theta}\) is strongly affected by the presence of outliers. Under the MAE, the optimal parameter is not as influenced by outlying data. We can generalize this by saying that the MSE is sensitive to outliers, while the MAE is robust to outliers.

Let’s try another experiment. This time, we’ll add an additional, non-outlying datapoint to the data.

-
+
drinks_with_additional_observation = np.append(drinks, 35)
 drinks_with_additional_observation
@@ -936,7 +936,7 @@

+
Code
# `corrcoef` computes the correlation coefficient between two variables
@@ -968,7 +968,7 @@ 

and "Length". What is making the raw data deviate from a linear relationship? Notice that the data points with "Length" greater than 2.6 have disproportionately high values of "Age" relative to the rest of the data. If we could manipulate these data points to have lower "Age" values, we’d “shift” these points downwards and reduce the curvature in the data. Applying a logarithmic transformation to \(y_i\) (that is, taking \(\log(\) "Age" \()\) ) would achieve just that.

An important word on \(\log\): in Data 100 (and most upper-division STEM courses), \(\log\) denotes the natural logarithm with base \(e\). The base-10 logarithm, where relevant, is indicated by \(\log_{10}\).
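NumPy mirrors this convention. As a quick check (inputs chosen purely for illustration), np.log is the natural logarithm, while np.log10 is the base-10 logarithm.

np.log(np.e), np.log10(100)  # (1.0, 2.0)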

-
+
Code
z = np.log(y)
@@ -1003,7 +1003,7 @@ 

\[\log{(y)} = \theta_0 + \theta_1 x\] \[y = e^{\theta_0 + \theta_1 x}\] \[y = (e^{\theta_0})e^{\theta_1 x}\] \[y = C e^{kx}\]

for some constants \(C\) and \(k\).
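Equivalently, once a line is fit to the transformed data \((x, \log(y))\), the two constants can be read off the fitted coefficients. A minimal sketch, assuming the x ("Length") and y ("Age") arrays used in the surrounding cells (the variable names below are ours, for illustration):

# Fit a line to (x, log(y)); np.polyfit returns the slope, then the intercept
theta1_log, theta0_log = np.polyfit(x, np.log(y), 1)
C = np.exp(theta0_log)     # C = e^(theta_0)
k = theta1_log             # k = theta_1
y_fit = C * np.exp(k * x)  # the exponential fit on the original scale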

\(y\) is an exponential function of \(x\). Applying an exponential fit to the untransformed variables corroborates this finding.

-
+
Code
plt.figure(dpi=120, figsize=(4, 3))
diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-13-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-13-output-1.pdf
index 9490bafcbd6cac38695d916d71c68426f57b5ecf..9146aa019515ca8f08269bf6f4f4c21eec4cca3d 100644
GIT binary patch
delta 20
ccmaFq{?dKJ8wGZALqiioL&MFV6}~Y609~L6LI3~&

delta 20
ccmaFq{?dKJ8wGYlLrX(5L(|Ql6}~Y60A0-pN&o-=

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-14-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-14-output-1.pdf
index 7ffcc258dc64d74a89a321c077e8249b64d96538..a28ccf37e5181b39c321acbab66623930b4b09c7 100644
GIT binary patch
delta 20
bcmbPHI-_($o*BEjp`nSPq2cCIGi4S4PKpLw

delta 20
bcmbPHI-_($o*BELp{1djq3PyQGi4S4PPYbQ

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-15-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-15-output-1.pdf
index 2a78fe0712d99b77f6ea7e50d68d0c99765c20c9..d5e90e14dd63bbea41b11b057deefbd9cc595501 100644
GIT binary patch
delta 20
ccmX@*c*=3ZLs@onLqiioL&ME4WFIjB09OJB$N&HU

delta 20
ccmX@*c*=3ZLs@o1LrX(5L(|PKWFIjB09P*u&;S4c

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-4-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-4-output-1.pdf
index 7ff0b78d2642c70926d4d30b28fa93722448e16d..9a9713cd6f2719f43f545798c81cdb46f5c23cb7 100644
GIT binary patch
delta 20
bcmZ1&wlHkNZ*_KaLqiioLxass8uH8lQ1AvO

delta 20
bcmZ1&wlHkNZ*_Ju2m>0RVs}2>Sp4

delta 25
hcmZ3tf^FRjwuUW?^ZVHi4J{4L3{AH$>u2m>0RVtw2?PKD

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-7-output-2.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-7-output-2.pdf
index eb99b21d7a4bc470f8e4e286d21da64d5fbbae0b..568bf6d514c9e8e7d0d86eaacc27adb2ecc1e869 100644
GIT binary patch
delta 20
bcmaDJ{ycocObvE(LqiioL&MDrG~Ag1Se6GG

delta 20
bcmaDJ{ycocObvEJLrX(5L(|O*G~Ag1Si=V*

diff --git a/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-8-output-1.pdf b/docs/constant_model_loss_transformations/loss_transformations_files/figure-pdf/cell-8-output-1.pdf
index 25e6b7889ebda19b53ba96cc3196194fbbc25849..a9f7c97e00650b41dba1d7204107d420e8000cf0 100644
GIT binary patch
delta 20
bcmbQ?Gs9=YXC-!XLqiioL&MF#l;oHJQAP&k

delta 20
bcmbQ?Gs9=YXC-z+LrX(5L(|Q_l;oHJQF8|E

diff --git a/docs/cv_regularization/cv_reg.html b/docs/cv_regularization/cv_reg.html
index 83966994..64fa4759 100644
--- a/docs/cv_regularization/cv_reg.html
+++ b/docs/cv_regularization/cv_reg.html
@@ -442,7 +442,7 @@ 


In sklearn, the train_test_split function (documentation) of the model_selection module allows us to automatically generate train-test splits.

We will work with the vehicles dataset from previous lectures. As before, we will attempt to predict the mpg of a vehicle from transformations of its hp. In the cell below, we allocate 20% of the full dataset to testing, and the remaining 80% to training.

-
+
Code
import pandas as pd
@@ -461,7 +461,7 @@ 

Y = vehicles["mpg"]

-
+
from sklearn.model_selection import train_test_split
 
 # `test_size` specifies the proportion of the full dataset that should be allocated to testing
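# A typical invocation (the variable names are assumed from their use later
# in this lecture; 20% of the rows go to the test set, as described above):
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2)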
@@ -483,7 +483,7 @@ 

After performing our train-test split, we fit a model to the training set and assess its performance on the test set.

-
+
import sklearn.linear_model as lm
 from sklearn.metrics import mean_squared_error
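# A hedged sketch of the fit-and-evaluate step described above (the variable
# names below are ours; X_test and Y_test come from the split above):
model = lm.LinearRegression()
model.fit(X_train, Y_train)
train_error = mean_squared_error(Y_train, model.predict(X_train))
test_error = mean_squared_error(Y_test, model.predict(X_test))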
 
@@ -663,7 +663,7 @@ 

\(\lambda\) is the regularization penalty hyperparameter; it needs to be determined prior to training the model, so we must find the best value via cross-validation.

The process of finding the optimal \(\hat{\theta}\) to minimize our new objective function is called L1 regularization. It is also sometimes known by the acronym “LASSO”, which stands for “Least Absolute Shrinkage and Selection Operator.”

Unlike ordinary least squares, which can be solved via the closed-form solution \(\hat{\theta}_{OLS} = (\mathbb{X}^{\top}\mathbb{X})^{-1}\mathbb{X}^{\top}\mathbb{Y}\), there is no closed-form solution for the optimal parameter vector under L1 regularization. Instead, we use the Lasso model class of sklearn.

-
+
import sklearn.linear_model as lm
 
 # The alpha parameter represents our lambda term
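
# As a hedged sketch of choosing this alpha by cross-validation (the grid
# below is hypothetical), sklearn's LassoCV sweeps candidate alphas and
# keeps the one with the best average validation error:
from sklearn.linear_model import LassoCV
lasso_cv = LassoCV(alphas=[0.001, 0.01, 0.1, 1, 10], cv=5)
lasso_cv.fit(X_train, Y_train)
lasso_cv.alpha_  # the selected regularization strength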
@@ -681,7 +681,7 @@ 

16.2.3 Scaling Features for Regularization

The regularization procedure we just performed had one subtle issue. To see what it is, let’s take a look at the design matrix for our lasso_model.

-
+
Code
X_train.head()
@@ -744,7 +744,7 @@

\(\hat{y}\) because it is so much greater than the values of the other features. For hp to have much of an impact at all on the prediction, it must be scaled by a large model parameter.

By inspecting the fitted parameters of our model, we see that this is the case – the parameter for hp is much larger in magnitude than the parameter for hp^4.

-
+
pd.DataFrame({"Feature":X_train.columns, "Parameter":lasso_model.coef_})
@@ -808,7 +808,7 @@

\[\hat\theta_{\text{ridge}} = (\mathbb{X}^{\top}\mathbb{X} + n\lambda I)^{-1}\mathbb{X}^{\top}\mathbb{Y}\]

This solution exists even if \(\mathbb{X}\) is not full column rank. This is a major reason why L2 regularization is often used – it can produce a solution even when there is collinearity in the features. We will discuss the concept of collinearity in a future lecture, but we will not derive this result in Data 100, as it involves a fair bit of matrix calculus.
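Before turning to sklearn, here is a minimal NumPy sketch of the closed form above (the function name and arguments are ours, for illustration; X is a design matrix, Y the response vector):

import numpy as np

def ridge_closed_form(X, Y, lam):
    # theta_hat = (X^T X + n * lambda * I)^{-1} X^T Y
    n, p = X.shape
    # Solving the linear system is preferred to forming the inverse explicitly
    return np.linalg.solve(X.T @ X + n * lam * np.eye(p), X.T @ Y)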

In sklearn, we perform L2 regularization using the Ridge class. Note that sklearn's Ridge does not run gradient descent by default – it minimizes the L2 objective function with a direct (closed-form) solver, though iterative solvers can be selected via its solver argument. Notice that we scale the data before regularizing.

-
+
ridge_model = lm.Ridge(alpha=1) # alpha represents the hyperparameter lambda
 ridge_model.fit(X_train, Y_train)
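
# A hedged sketch of the scaling step mentioned above (StandardScaler
# standardizes each column to zero mean and unit variance; X_test is
# assumed from the earlier train-test split):
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit on training data only
X_test_scaled = scaler.transform(X_test)        # reuse the training statistics
ridge_model.fit(X_train_scaled, Y_train)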
 
diff --git a/docs/eda/eda.html b/docs/eda/eda.html
index 4c4d4712..4a43fe1e 100644
--- a/docs/eda/eda.html
+++ b/docs/eda/eda.html
@@ -427,7 +427,7 @@ 

Data Cleaning and EDA

-
+
Code
import numpy as np
@@ -492,7 +492,7 @@ 

5.1.1.1 CSV

CSVs, which stand for Comma-Separated Values, are a common tabular data format. In the past two pandas lectures, we briefly touched on the idea of file format: the way data is encoded in a file for storage. Specifically, our elections and babynames datasets were stored and loaded as CSVs:

-
+
pd.read_csv("data/elections.csv").head(5)
@@ -563,7 +563,7 @@