
Commit

note 14 fix
ishani07 committed Mar 27, 2024
1 parent c0f0aaf commit 214bfb9
Showing 16 changed files with 137 additions and 137 deletions.
4 changes: 2 additions & 2 deletions docs/feature_engineering/feature_engineering.html
Original file line number Diff line number Diff line change
@@ -360,7 +360,7 @@ <h3 data-number="14.1.1" class="anchored" data-anchor-id="gradient-descent-revie
\\l(y, \hat{y}) &amp;= (y - \hat{y})^2
\end{align}
\]</span></p>
<p>Plugging in <span class="math inline">\(f_{\vec{\theta}}(\vec{x})\)</span> for <span class="math inline">\(\hat{y}\)</span>, our loss function becomes <span class="math inline">\(l(\vec{\theta}, \vec{x}, \hat{y}) = (y_i - \theta_0x_0 - \theta_1x_1)^2\)</span>.</p>
<p>Plugging in <span class="math inline">\(f_{\vec{\theta}}(\vec{x})\)</span> for <span class="math inline">\(\hat{y}\)</span>, our loss function becomes <span class="math inline">\(l(\vec{\theta}, \vec{x}, y_i) = (y_i - \theta_0x_0 - \theta_1x_1)^2\)</span>.</p>
<p>To calculate our gradient vector, we can start by computing the partial derivative of the loss function with respect to <span class="math inline">\(\theta_0\)</span>: <span class="math display">\[\frac{\partial}{\partial \theta_{0}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_0)\]</span></p>
<p>Let’s now do the same but with respect to <span class="math inline">\(\theta_1\)</span>: <span class="math display">\[\frac{\partial}{\partial \theta_{1}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_1)\]</span></p>
<p>Putting this together, our gradient vector is: <span class="math display">\[\nabla_{\theta} l(\vec{\theta}, \vec{x}, y_i) = \begin{bmatrix} -2(y_i - \theta_0x_0 - \theta_1x_1)(x_0) \\ -2(y_i - \theta_0x_0 - \theta_1x_1)(x_1) \end{bmatrix}\]</span></p>
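The gradient vector derived in this hunk can be checked numerically. A minimal sketch, with hypothetical numbers chosen only to exercise the formula (not values from the notes):

```python
import numpy as np

def gradient_single_point(theta, x, y):
    """Gradient of the squared loss l(theta, x, y) = (y - x @ theta)**2
    with respect to theta, matching the derivation in the diff above."""
    residual = y - x @ theta      # y_i - theta_0*x_0 - theta_1*x_1
    return -2 * residual * x      # [-2*r*x_0, -2*r*x_1]

# hypothetical check values
theta = np.array([1.0, 2.0])
x = np.array([1.0, 3.0])   # x_0 = 1 (intercept term), x_1 = 3
y = 10.0
grad = gradient_single_point(theta, x, y)  # residual = 3, so [-6.0, -18.0]
```

Each component agrees term by term with the two partial derivatives above.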
@@ -1199,7 +1199,7 @@ <h2 data-number="14.7" class="anchored" data-anchor-id="bonus-stochastic-gradien
<span id="cb8-78"><a href="#cb8-78" aria-hidden="true" tabindex="-1"></a>\end{align}</span>
<span id="cb8-79"><a href="#cb8-79" aria-hidden="true" tabindex="-1"></a>$$</span>
<span id="cb8-80"><a href="#cb8-80" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb8-81"><a href="#cb8-81" aria-hidden="true" tabindex="-1"></a>Plugging in $f_{\vec{\theta}}(\vec{x})$ for $\hat{y}$, our loss function becomes $l(\vec{\theta}, \vec{x}, \hat{y}) = (y_i - \theta_0x_0 - \theta_1x_1)^2$.</span>
<span id="cb8-81"><a href="#cb8-81" aria-hidden="true" tabindex="-1"></a>Plugging in $f_{\vec{\theta}}(\vec{x})$ for $\hat{y}$, our loss function becomes $l(\vec{\theta}, \vec{x}, y_i) = (y_i - \theta_0x_0 - \theta_1x_1)^2$.</span>
<span id="cb8-82"><a href="#cb8-82" aria-hidden="true" tabindex="-1"></a></span>
<span id="cb8-83"><a href="#cb8-83" aria-hidden="true" tabindex="-1"></a>To calculate our gradient vector, we can start by computing the partial derivative of the loss function with respect to $\theta_0$: $$\frac{\partial}{\partial \theta_{0}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_0)$$</span>
<span id="cb8-84"><a href="#cb8-84" aria-hidden="true" tabindex="-1"></a></span>
16 changes: 8 additions & 8 deletions docs/gradient_descent/gradient_descent.html

Large diffs are not rendered by default.

Binary file not shown.
Binary file not shown.
74 changes: 37 additions & 37 deletions docs/pandas_2/pandas_2.html
@@ -1644,12 +1644,12 @@ <h3 data-number="3.3.4" class="anchored" data-anchor-id="sample"><span class="he
</thead>
<tbody>
<tr class="odd">
<td data-quarto-table-cell-role="th">67773</td>
<td data-quarto-table-cell-role="th">322054</td>
<td>CA</td>
<td>F</td>
<td>1973</td>
<td>James</td>
<td>35</td>
<td>M</td>
<td>1991</td>
<td>Sherwin</td>
<td>6</td>
</tr>
</tbody>
</table>
@@ -1676,34 +1676,34 @@ <h3 data-number="3.3.4" class="anchored" data-anchor-id="sample"><span class="he
</thead>
<tbody>
<tr class="odd">
<td data-quarto-table-cell-role="th">166333</td>
<td>2004</td>
<td>Kimberli</td>
<td>12</td>
<td data-quarto-table-cell-role="th">348524</td>
<td>2002</td>
<td>Kenji</td>
<td>17</td>
</tr>
<tr class="even">
<td data-quarto-table-cell-role="th">35003</td>
<td>1955</td>
<td>Bianca</td>
<td>11</td>
<td data-quarto-table-cell-role="th">197715</td>
<td>2012</td>
<td>Rylee</td>
<td>221</td>
</tr>
<tr class="odd">
<td data-quarto-table-cell-role="th">388142</td>
<td>2016</td>
<td>Bernardo</td>
<td>30</td>
<td data-quarto-table-cell-role="th">239390</td>
<td>2022</td>
<td>Nahia</td>
<td>5</td>
</tr>
<tr class="even">
<td data-quarto-table-cell-role="th">35814</td>
<td>1956</td>
<td>Marsha</td>
<td>175</td>
<td data-quarto-table-cell-role="th">175096</td>
<td>2006</td>
<td>Rylin</td>
<td>9</td>
</tr>
<tr class="odd">
<td data-quarto-table-cell-role="th">327479</td>
<td>1993</td>
<td>Sandor</td>
<td>5</td>
<td data-quarto-table-cell-role="th">29180</td>
<td>1951</td>
<td>Pearl</td>
<td>28</td>
</tr>
</tbody>
</table>
@@ -1729,28 +1729,28 @@ <h3 data-number="3.3.4" class="anchored" data-anchor-id="sample"><span class="he
</thead>
<tbody>
<tr class="odd">
<td data-quarto-table-cell-role="th">150866</td>
<td data-quarto-table-cell-role="th">150014</td>
<td>2000</td>
<td>Jakeline</td>
<td>12</td>
<td>Allyssa</td>
<td>31</td>
</tr>
<tr class="even">
<td data-quarto-table-cell-role="th">342578</td>
<td data-quarto-table-cell-role="th">152815</td>
<td>2000</td>
<td>Alberto</td>
<td>428</td>
<td>Zahraa</td>
<td>5</td>
</tr>
<tr class="odd">
<td data-quarto-table-cell-role="th">150363</td>
<td data-quarto-table-cell-role="th">342879</td>
<td>2000</td>
<td>Kaylene</td>
<td>20</td>
<td>Tucker</td>
<td>63</td>
</tr>
<tr class="even">
<td data-quarto-table-cell-role="th">149584</td>
<td data-quarto-table-cell-role="th">344867</td>
<td>2000</td>
<td>Makenzie</td>
<td>70</td>
<td>Nery</td>
<td>5</td>
</tr>
</tbody>
</table>
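Most of the changed cells in these pandas_2.html hunks are simply re-drawn random rows: `DataFrame.sample` without a seed returns different rows every time the notebook is rendered, which is why this commit touches so many table values. A minimal sketch with a hypothetical toy frame (not the course's actual babynames data):

```python
import pandas as pd

# hypothetical stand-in for the babynames DataFrame used in the notes
babynames = pd.DataFrame({
    "Year": [2000, 2000, 2001, 2001],
    "Name": ["Ava", "Liam", "Noah", "Mia"],
    "Count": [310, 250, 412, 199],
})

unseeded = babynames.sample(2)                  # differs between renders
seeded = babynames.sample(2, random_state=100)  # reproducible across renders
```

Passing `random_state` would keep the rendered tables stable across commits like this one.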
14 changes: 7 additions & 7 deletions docs/pandas_3/pandas_3.html

Large diffs are not rendered by default.

4 changes: 2 additions & 2 deletions docs/regex/regex.html
@@ -680,11 +680,11 @@ <h4 data-number="6.2.1.2" class="anchored" data-anchor-id="canonicalization-with
<span id="cb6-13"><a href="#cb6-13" aria-hidden="true" tabindex="-1"></a>county_and_state[<span class="st">'clean_county_pandas'</span>] <span class="op">=</span> canonicalize_county_series(county_and_state[<span class="st">'County'</span>])</span>
<span id="cb6-14"><a href="#cb6-14" aria-hidden="true" tabindex="-1"></a>display(county_and_pop), display(county_and_state)<span class="op">;</span></span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-stderr">
<pre><code>/var/folders/7t/zbwy02ts2m7cn64fvwjqb8xw0000gp/T/ipykernel_24239/2523629438.py:3: FutureWarning:
<pre><code>/var/folders/7t/zbwy02ts2m7cn64fvwjqb8xw0000gp/T/ipykernel_90453/2523629438.py:3: FutureWarning:

The default value of regex will change from True to False in a future version. In addition, single character regular expressions will *not* be treated as literal strings when regex=True.

/var/folders/7t/zbwy02ts2m7cn64fvwjqb8xw0000gp/T/ipykernel_24239/2523629438.py:3: FutureWarning:
/var/folders/7t/zbwy02ts2m7cn64fvwjqb8xw0000gp/T/ipykernel_90453/2523629438.py:3: FutureWarning:

The default value of regex will change from True to False in a future version. In addition, single character regular expressions will *not* be treated as literal strings when regex=True.
</code></pre>
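The FutureWarning in this hunk comes from calling `Series.str.replace` without an explicit `regex=` argument. A minimal sketch of the idea, using a hypothetical re-implementation (not the course's actual `canonicalize_county_series`), where passing `regex=` explicitly avoids the warning:

```python
import pandas as pd

def canonicalize_county_series(ser: pd.Series) -> pd.Series:
    """Hypothetical canonicalizer: lowercase, drop 'county'/'parish',
    and strip everything that is not a letter."""
    return (
        ser.str.lower()
           .str.replace(" county", "", regex=False)
           .str.replace(" parish", "", regex=False)
           # explicit regex=True, so no FutureWarning about the default
           .str.replace(r"[^a-z]", "", regex=True)
    )

cleaned = canonicalize_county_series(
    pd.Series(["De Witt County", "St. John the Baptist Parish"])
)
# cleaned holds "dewitt" and "stjohnthebaptist"
```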
6 changes: 3 additions & 3 deletions docs/sampling/sampling.html
@@ -698,7 +698,7 @@ <h4 data-number="9.3.3.3" class="anchored" data-anchor-id="simple-random-sample"
<span id="cb13-2"><a href="#cb13-2" aria-hidden="true" tabindex="-1"></a>random_sample <span class="op">=</span> movie.sample(n, replace <span class="op">=</span> <span class="va">False</span>) <span class="co">## By default, replace = False</span></span>
<span id="cb13-3"><a href="#cb13-3" aria-hidden="true" tabindex="-1"></a>np.mean(random_sample[<span class="st">"barbie"</span>])</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-display" data-execution_count="9">
<pre><code>0.5295718371935135</code></pre>
<pre><code>0.5302284944740621</code></pre>
</div>
</div>
<p>This is very close to the actual vote of 0.5302792307692308!</p>
@@ -716,7 +716,7 @@ <h4 data-number="9.3.3.3" class="anchored" data-anchor-id="simple-random-sample"
<span id="cb15-10"><a href="#cb15-10" aria-hidden="true" tabindex="-1"></a>Markdown(<span class="ss">f"**Actual** = </span><span class="sc">{</span>actual_barbie<span class="sc">:.4f}</span><span class="ss">, **Sample** = </span><span class="sc">{</span>sample_barbie<span class="sc">:.4f}</span><span class="ss">, "</span></span>
<span id="cb15-11"><a href="#cb15-11" aria-hidden="true" tabindex="-1"></a> <span class="ss">f"**Err** = </span><span class="sc">{</span><span class="dv">100</span><span class="op">*</span>err<span class="sc">:.2f}</span><span class="ss">%."</span>)</span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-display" data-execution_count="10">
<p><strong>Actual</strong> = 0.5303, <strong>Sample</strong> = 0.5075, <strong>Err</strong> = 4.30%.</p>
<p><strong>Actual</strong> = 0.5303, <strong>Sample</strong> = 0.5663, <strong>Err</strong> = 6.78%.</p>
</div>
</div>
<p>We’ll learn how to choose this number when we (re)learn the Central Limit Theorem later in the semester.</p>
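The changed sample means and error percentages in these hunks are again just unseeded re-runs of the same simple random sample. A minimal sketch of the computation, with a hypothetical stand-in for the movie poll data (the assumed 53% support rate and frame are illustrative, not the course dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # seed is arbitrary
# hypothetical poll frame: 1 means the respondent votes "barbie"
movie = pd.DataFrame({"barbie": (rng.random(10_000) < 0.53).astype(int)})

actual = movie["barbie"].mean()
# simple random sample without replacement, as in the notes
sample_mean = movie.sample(800, replace=False)["barbie"].mean()
err = abs(sample_mean - actual) / actual  # relative chance error
```

Because `sample` is unseeded here too, `sample_mean` and `err` shift on every run, mirroring the diff.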
@@ -749,7 +749,7 @@ <h4 data-number="9.3.3.4" class="anchored" data-anchor-id="quantifying-chance-er
<div class="sourceCode cell-code" id="cb18"><pre class="sourceCode python code-with-copy"><code class="sourceCode python"><span id="cb18-1"><a href="#cb18-1" aria-hidden="true" tabindex="-1"></a>poll_result <span class="op">=</span> pd.Series(poll_result)</span>
<span id="cb18-2"><a href="#cb18-2" aria-hidden="true" tabindex="-1"></a>np.<span class="bu">sum</span>(poll_result <span class="op">&gt;</span> <span class="fl">0.5</span>)<span class="op">/</span><span class="dv">1000</span></span></code><button title="Copy to Clipboard" class="code-copy-button"><i class="bi"></i></button></pre></div>
<div class="cell-output cell-output-display" data-execution_count="13">
<pre><code>0.957</code></pre>
<pre><code>0.96</code></pre>
</div>
</div>
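The `np.sum(poll_result > 0.5)/1000` line in this hunk computes the fraction of 1000 simulated polls that call the majority correctly. A minimal sketch of that simulation under assumed parameters (true support 0.53 and poll size 709 are hypothetical stand-ins, not necessarily the notes' values):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
p, n = 0.53, 709  # assumed true support and poll size
# each entry is one simulated poll's observed support fraction
poll_result = pd.Series(rng.binomial(n, p, size=1000) / n)
frac_majority = np.sum(poll_result > 0.5) / 1000
```

With a slim true majority, most but not all polls land above 0.5, so the fraction hovers near (not at) 1, which is why it drifted from 0.957 to 0.96 on re-render.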
<p>You can see the curve looks roughly Gaussian/normal. Using KDE:</p>
Binary file modified docs/sampling/sampling_files/figure-html/cell-13-output-1.png
Binary file modified docs/sampling/sampling_files/figure-html/cell-15-output-1.png
2 changes: 1 addition & 1 deletion feature_engineering/feature_engineering.qmd
@@ -78,7 +78,7 @@ f_{\vec{\theta}}(\vec{x}) &= \vec{x}^T\vec{\theta} = \theta_0x_0 + \theta_1x_1
\end{align}
$$

Plugging in $f_{\vec{\theta}}(\vec{x})$ for $\hat{y}$, our loss function becomes $l(\vec{\theta}, \vec{x}, \hat{y}) = (y_i - \theta_0x_0 - \theta_1x_1)^2$.
Plugging in $f_{\vec{\theta}}(\vec{x})$ for $\hat{y}$, our loss function becomes $l(\vec{\theta}, \vec{x}, y_i) = (y_i - \theta_0x_0 - \theta_1x_1)^2$.

To calculate our gradient vector, we can start by computing the partial derivative of the loss function with respect to $\theta_0$: $$\frac{\partial}{\partial \theta_{0}} l(\vec{\theta}, \vec{x}, y_i) = 2(y_i - \theta_0x_0 - \theta_1x_1)(-x_0)$$

2 changes: 1 addition & 1 deletion index.log
@@ -1,4 +1,4 @@
This is XeTeX, Version 3.141592653-2.6-0.999995 (TeX Live 2023) (preloaded format=xelatex 2024.3.3) 21 MAR 2024 22:28
This is XeTeX, Version 3.141592653-2.6-0.999995 (TeX Live 2023) (preloaded format=xelatex 2024.3.3) 27 MAR 2024 15:03
entering extended mode
restricted \write18 enabled.
%&-line parsing enabled.
Binary file modified index.pdf
Binary file not shown.

0 comments on commit 214bfb9
