From 214bfb95d5e3009769ed896a72b97cac8a3bc8aa Mon Sep 17 00:00:00 2001
From: ishani07
Date: Wed, 27 Mar 2024 15:05:09 -0700
Subject: [PATCH] note 14 fix

---
 .../feature_engineering.html                |   4 +-
 docs/gradient_descent/gradient_descent.html |  16 +-
 .../figure-pdf/cell-6-output-2.pdf          | Bin 20716 -> 0 bytes
 .../figure-pdf/cell-8-output-1.pdf          | Bin 17984 -> 0 bytes
 .../figure-html/cell-2-output-1.png         | Bin 86415 -> 86874 bytes
 docs/pandas_2/pandas_2.html                 |  74 ++++----
 docs/pandas_3/pandas_3.html                 |  14 +-
 docs/regex/regex.html                       |   4 +-
 docs/sampling/sampling.html                 |   6 +-
 .../figure-html/cell-13-output-1.png        | Bin 33173 -> 30952 bytes
 .../figure-html/cell-15-output-1.png        | Bin 58438 -> 57246 bytes
 .../figure-html/cell-17-output-2.png        | Bin 99182 -> 98786 bytes
 feature_engineering/feature_engineering.qmd |   2 +-
 index.log                                   |   2 +-
 index.pdf                                   | Bin 408693 -> 408737 bytes
 index.tex                                   | 152 +++++++++---------
 16 files changed, 137 insertions(+), 137 deletions(-)
 delete mode 100644 docs/inference_causality/inference_causality_files/figure-pdf/cell-6-output-2.pdf
 delete mode 100644 docs/inference_causality/inference_causality_files/figure-pdf/cell-8-output-1.pdf

diff --git a/docs/feature_engineering/feature_engineering.html b/docs/feature_engineering/feature_engineering.html
index 026aba6a..f426070b 100644
--- a/docs/feature_engineering/feature_engineering.html
+++ b/docs/feature_engineering/feature_engineering.html
@@ -360,7 +360,7 @@

-Plugging in \(f_{\vec{\theta}}(\vec{x})\) for \(\hat{y}\), our loss function becomes \(l(\vec{\theta}, \vec{x}, \hat{y}) = (y_i - \theta_0x_0 - \theta_1x_1)^2\).
+Plugging in \(f_{\vec{\theta}}(\vec{x})\) for \(\hat{y}\), our loss function becomes \(l(\vec{\theta}, \vec{x}, y_i) = (y_i - \theta_0x_0 - \theta_1x_1)^2\).

To calculate our gradient vector, we can start by computing the partial derivative of the loss function with respect to \(\theta_0\): \[\frac{\partial}{\partial \theta_{0}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_0)\]

Let’s now do the same but with respect to \(\theta_1\): \[\frac{\partial}{\partial \theta_{1}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_1)\]

Putting this together, our gradient vector is: \[\nabla_{\theta} l(\vec{\theta}, \vec{x}, y_i) = \begin{bmatrix} -2(y_i - \theta_0x_0 - \theta_1x_1)(x_0) \\ -2(y_i - \theta_0x_0 - \theta_1x_1)(x_1) \end{bmatrix}\]
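As a quick sanity check on the derivation in this hunk (illustrative only, not part of the patch), the sketch below compares the analytic gradient of the single-observation squared loss against a central finite-difference estimate; the values chosen for \(\vec{\theta}\), \(x_0\), \(x_1\), and \(y_i\) are arbitrary.

```python
import numpy as np

# Single-observation squared loss: l(theta, x, y_i) = (y_i - theta_0*x_0 - theta_1*x_1)^2
def loss(theta, x, y):
    return (y - theta @ x) ** 2

# Analytic gradient from the derivation above:
# d l / d theta_j = -2 * (y_i - theta_0*x_0 - theta_1*x_1) * x_j
def gradient(theta, x, y):
    return -2 * (y - theta @ x) * x

# Arbitrary illustrative values (not taken from the text)
theta = np.array([0.5, -1.0])
x = np.array([1.0, 2.0])   # x_0, x_1
y = 3.0

# Central finite-difference estimate of each partial derivative
eps = 1e-6
numeric = np.array([
    (loss(theta + eps * e_j, x, y) - loss(theta - eps * e_j, x, y)) / (2 * eps)
    for e_j in np.eye(2)
])

print(gradient(theta, x, y))  # analytic: [-9. -18.]
print(numeric)                # should agree to ~6 decimal places
```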

@@ -1199,7 +1199,7 @@

 \end{align}
 $$
 
-Plugging in $f_{\vec{\theta}}(\vec{x})$ for $\hat{y}$, our loss function becomes $l(\vec{\theta}, \vec{x}, \hat{y}) = (y_i - \theta_0x_0 - \theta_1x_1)^2$.
+Plugging in $f_{\vec{\theta}}(\vec{x})$ for $\hat{y}$, our loss function becomes $l(\vec{\theta}, \vec{x}, y_i) = (y_i - \theta_0x_0 - \theta_1x_1)^2$.
 
 To calculate our gradient vector, we can start by computing the partial derivative of the loss function with respect to $\theta_0$: $$\frac{\partial}{\partial \theta_{0}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_0)$$
diff --git a/docs/gradient_descent/gradient_descent.html b/docs/gradient_descent/gradient_descent.html
index 1b459ccd..154abb3f 100644
--- a/docs/gradient_descent/gradient_descent.html
+++ b/docs/gradient_descent/gradient_descent.html
@@ -746,9 +746,9 @@

-