From e61c7e7b11bf27c361ba7ce7e90576ace7461422 Mon Sep 17 00:00:00 2001 From: ishani07 Date: Sun, 25 Feb 2024 18:27:20 -0800 Subject: [PATCH] adding search --- docs/search.json | 471 +++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 471 insertions(+) create mode 100644 docs/search.json diff --git a/docs/search.json b/docs/search.json new file mode 100644 index 00000000..28689604 --- /dev/null +++ b/docs/search.json @@ -0,0 +1,471 @@ +[ + { + "objectID": "index.html", + "href": "index.html", + "title": "Principles and Techniques of Data Science", + "section": "", + "text": "Welcome" + }, + { + "objectID": "index.html#about-the-course-notes", + "href": "index.html#about-the-course-notes", + "title": "Principles and Techniques of Data Science", + "section": "About the Course Notes", + "text": "About the Course Notes\nThis text offers supplementary resources to accompany lectures presented in the Spring 2024 Edition of the UC Berkeley course Data 100: Principles and Techniques of Data Science.\nNew notes will be added each week to accompany live lectures. See the full calendar of lectures on the course website.\nIf you spot any typos or would like to suggest any changes, please fill out the Data 100 Content Feedback Form (Spring 2024). Note that this link will only work if you have an @berkeley.edu email address. If you’re not a student at Berkeley and would like to provide feedback, please email us at data100.instructors@berkeley.edu." + }, + { + "objectID": "intro_lec/introduction.html#data-science-lifecycle", + "href": "intro_lec/introduction.html#data-science-lifecycle", + "title": "1  Introduction", + "section": "1.1 Data Science Lifecycle", + "text": "1.1 Data Science Lifecycle\nThe data science lifecycle is a high-level overview of the data science workflow. It’s a cycle of stages that a data scientist should explore as they conduct a thorough analysis of a data-driven problem.\nThere are many variations of the key ideas present in the data science lifecycle. In Data 100, we visualize the stages of the lifecycle using a flow diagram. Notice how there are two entry points.\n\n\n\n\n1.1.1 Ask a Question\nWhether by curiosity or necessity, data scientists constantly ask questions. For example, in the business world, data scientists may be interested in predicting the profit generated by a certain investment. In the field of medicine, they may ask whether some patients are more likely than others to benefit from a treatment.\nPosing questions is one of the primary ways the data science lifecycle begins. It helps to fully define the question. Here are some things you should ask yourself before framing a question.\n\nWhat do we want to know?\n\nA question that is too ambiguous may lead to confusion.\n\nWhat problems are we trying to solve?\n\nThe goal of asking a question should be clear in order to justify your efforts to stakeholders.\n\nWhat are the hypotheses we want to test?\n\nThis gives a clear perspective from which to analyze final results.\n\nWhat are the metrics for our success?\n\nThis establishes a clear point to know when to conclude the project.\n\n\n\n\n\n\n\n1.1.2 Obtain Data\nThe second entry point to the lifecycle is by obtaining data. A careful analysis of any problem requires the use of data. Data may be readily available to us, or we may have to embark on a process to collect it. When doing so, it is crucial to ask the following:\n\nWhat data do we have, and what data do we need?\n\nDefine the units of the data (people, cities, points in time, etc.) 
and what features to measure.\n\nHow will we sample more data?\n\nScrape the web, collect manually, run experiments, etc.\n\nIs our data representative of the population we want to study?\n\nIf our data is not representative of our population of interest, then we can come to incorrect conclusions.\n\n\nKey procedures: data acquisition, data cleaning\n\n\n\n\n\n1.1.3 Understand the Data\nRaw data itself is not inherently useful. It’s impossible to discern all the patterns and relationships between variables without carefully investigating them. Therefore, translating pure data into actionable insights is a key job of a data scientist. For example, we may choose to ask:\n\nHow is our data organized, and what does it contain?\n\nKnowing what the data says about the world helps us better understand the world.\n\nDo we have relevant data?\n\nIf the data we have collected is not useful to the question at hand, then we must collect more data.\n\nWhat are the biases, anomalies, or other issues with the data?\n\nThese can lead to many false conclusions if ignored, so data scientists must always be aware of these issues.\n\nHow do we transform the data to enable effective analysis?\n\nData is not always easy to interpret at first glance, so a data scientist should strive to reveal the hidden insights.\n\n\nKey procedures: exploratory data analysis, data visualization.\n\n\n\n\n\n1.1.4 Understand the World\nAfter observing the patterns in our data, we can begin answering our questions. This may require that we predict a quantity (machine learning) or measure the effect of some treatment (inference).\nFrom here, we may choose to report our results, or possibly conduct more analysis. We may not be satisfied with our findings, or our initial exploration may have brought up new questions that require new data.\n\nWhat does the data say about the world?\n\nGiven our models, the data will lead us to certain conclusions about the real world.\n\n\nDoes it answer our questions or accurately solve the problem?\n\nIf our model and data can not accomplish our goals, then we must reform our question, model, or both.\n\n\nHow robust are our conclusions and can we trust the predictions?\n\nInaccurate models can lead to false conclusions.\n\n\nKey procedures: model creation, prediction, inference." + }, + { + "objectID": "intro_lec/introduction.html#conclusion", + "href": "intro_lec/introduction.html#conclusion", + "title": "1  Introduction", + "section": "1.2 Conclusion", + "text": "1.2 Conclusion\nThe data science lifecycle is meant to be a set of general guidelines rather than a hard set of requirements. In our journey exploring the lifecycle, we’ll cover both the underlying theory and technologies used in data science. By the end of the course, we hope that you start to see yourself as a data scientist.\nWith that, we’ll begin by introducing one of the most important tools in exploratory data analysis: pandas." + }, + { + "objectID": "pandas_1/pandas_1.html#tabular-data", + "href": "pandas_1/pandas_1.html#tabular-data", + "title": "2  Pandas I", + "section": "2.1 Tabular Data", + "text": "2.1 Tabular Data\nData scientists work with data stored in a variety of formats. This class focuses primarily on tabular data — data that is stored in a table.\nTabular data is one of the most common systems that data scientists use to organize data. This is in large part due to the simplicity and flexibility of tables. 
Tables allow us to represent each observation, or instance of collecting data from an individual, as its own row. We can record each observation’s distinct characteristics, or features, in separate columns.\nTo see this in action, we’ll explore the elections dataset, which stores information about political candidates who ran for president of the United States in previous years.\nIn the elections dataset, each row (blue box) represents one instance of a candidate running for president in a particular year. For example, the first row represents Andrew Jackson running for president in the year 1824. Each column (yellow box) represents one characteristic piece of information about each presidential candidate. For example, the column named “Result” stores whether or not the candidate won the election.\n\n\n\nYour work in Data 8 helped you grow very familiar with using and interpreting data stored in a tabular format. Back then, you used the Table class of the datascience library, a special programming library created specifically for Data 8 students.\nIn Data 100, we will be working with the programming library pandas, which is generally accepted in the data science community as the industry- and academia-standard tool for manipulating tabular data (as well as the inspiration for Petey, our panda bear mascot).\nUsing pandas, we can\n\nArrange data in a tabular format.\nExtract useful information filtered by specific conditions.\nOperate on data to gain new insights.\nApply NumPy functions to our data (our friends from Data 8).\nPerform vectorized computations to speed up our analysis (Lab 1)." + }, + { + "objectID": "pandas_1/pandas_1.html#series-dataframes-and-indices", + "href": "pandas_1/pandas_1.html#series-dataframes-and-indices", + "title": "2  Pandas I", + "section": "2.2 Series, DataFrames, and Indices", + "text": "2.2 Series, DataFrames, and Indices\nTo begin our work in pandas, we must first import the library into our Python environment. This will allow us to use pandas data structures and methods in our code.\n\n# `pd` is the conventional alias for Pandas, as `np` is for NumPy\nimport pandas as pd\n\nThere are three fundamental data structures in pandas:\n\nSeries: 1D labeled array data; best thought of as columnar data.\nDataFrame: 2D tabular data with rows and columns.\nIndex: A sequence of row/column labels.\n\nDataFrames, Series, and Indices can be represented visually in the following diagram, which considers the first few rows of the elections dataset.\n\n\n\nNotice how the DataFrame is a two-dimensional object — it contains both rows and columns. The Series above is a singular column of this DataFrame, namely the Result column. Both contain an Index, or a shared list of row labels (the integers from 0 to 4, inclusive).\n\n2.2.1 Series\nA Series represents a column of a DataFrame; more generally, it can be any 1-dimensional array-like object. It contains both:\n\nA sequence of values of the same type.\nA sequence of data labels called the index.\n\nIn the cell below, we create a Series named s.\n\ns = pd.Series([\"welcome\", \"to\", \"data 100\"])\ns\n\n0 welcome\n1 to\n2 data 100\ndtype: object\n\n\n\n # Accessing data values within the Series\n s.values\n\narray(['welcome', 'to', 'data 100'], dtype=object)\n\n\n\n # Accessing the Index of the Series\n s.index\n\nRangeIndex(start=0, stop=3, step=1)\n\n\nBy default, the index of a Series is a sequential list of integers beginning from 0. 
Optionally, a manually specified list of desired indices can be passed to the index argument.\n\ns = pd.Series([-1, 10, 2], index = [\"a\", \"b\", \"c\"])\ns\n\na -1\nb 10\nc 2\ndtype: int64\n\n\n\ns.index\n\nIndex(['a', 'b', 'c'], dtype='object')\n\n\nIndices can also be changed after initialization.\n\ns.index = [\"first\", \"second\", \"third\"]\ns\n\nfirst -1\nsecond 10\nthird 2\ndtype: int64\n\n\n\ns.index\n\nIndex(['first', 'second', 'third'], dtype='object')\n\n\n\n2.2.1.1 Selection in Series\nMuch like when working with NumPy arrays, we can select a single value or a set of values from a Series. To do so, there are three primary methods:\n\nA single label.\nA list of labels.\nA filtering condition.\n\nTo demonstrate this, let’s define the Series ser.\n\nser = pd.Series([4, -2, 0, 6], index = [\"a\", \"b\", \"c\", \"d\"])\nser\n\na 4\nb -2\nc 0\nd 6\ndtype: int64\n\n\n\n2.2.1.1.1 A Single Label\n\n# We return the value stored at the index label \"a\"\nser[\"a\"] \n\n4\n\n\n\n\n2.2.1.1.2 A List of Labels\n\n# We return a Series of the values stored at the index labels \"a\" and \"c\"\nser[[\"a\", \"c\"]] \n\na 4\nc 0\ndtype: int64\n\n\n\n\n2.2.1.1.3 A Filtering Condition\nPerhaps the most interesting (and useful) method of selecting data from a Series is by using a filtering condition.\nFirst, we apply a boolean operation to the Series. This creates a new Series of boolean values.\n\n# Filter condition: select all elements greater than 0\nser > 0 \n\na True\nb False\nc False\nd True\ndtype: bool\n\n\nWe then use this boolean condition to index into our original Series. pandas will select only the entries in the original Series that satisfy the condition.\n\nser[ser > 0] \n\na 4\nd 6\ndtype: int64\n\n\n\n\n\n\n2.2.2 DataFrames\nTypically, we will work with Series using the perspective that they are columns in a DataFrame. We can think of a DataFrame as a collection of Series that all share the same Index.\nIn Data 8, you encountered the Table class of the datascience library, which represented tabular data. In Data 100, we’ll be using the DataFrame class of the pandas library.\n\n2.2.2.1 Creating a DataFrame\nThere are many ways to create a DataFrame. Here, we will cover the most popular approaches:\n\nFrom a CSV file.\nUsing a list and column name(s).\nFrom a dictionary.\nFrom a Series.\n\nMore generally, the syntax for creating a DataFrame is:\n pandas.DataFrame(data, index, columns)\n\n2.2.2.1.1 From a CSV file\nIn Data 100, our data are typically stored in a CSV (comma-separated values) file format. We can import a CSV file into a DataFrame by passing the data path as an argument to the following pandas function.  pd.read_csv(\"filename.csv\")\nWith our new understanding of pandas in hand, let’s return to the elections dataset from before. 
Now, we can recognize that it is represented as a pandas DataFrame.\n\nelections = pd.read_csv(\"data/elections.csv\")\nelections\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\n\n\n4\n1832\nAndrew Jackson\nDemocratic\n702735\nwin\n54.574789\n\n\n...\n...\n...\n...\n...\n...\n...\n\n\n177\n2016\nJill Stein\nGreen\n1457226\nloss\n1.073699\n\n\n178\n2020\nJoseph Biden\nDemocratic\n81268924\nwin\n51.311515\n\n\n179\n2020\nDonald Trump\nRepublican\n74216154\nloss\n46.858542\n\n\n180\n2020\nJo Jorgensen\nLibertarian\n1865724\nloss\n1.177979\n\n\n181\n2020\nHoward Hawkins\nGreen\n405035\nloss\n0.255731\n\n\n\n\n182 rows × 6 columns\n\n\n\nThis code stores our DataFrame object in the elections variable. Upon inspection, our elections DataFrame has 182 rows and 6 columns (Year, Candidate, Party, Popular Vote, Result, %). Each row represents a single record — in our example, a presidential candidate from some particular year. Each column represents a single attribute or feature of the record.\n\n\n2.2.2.1.2 Using a List and Column Name(s)\nWe’ll now explore creating a DataFrame with data of our own.\nConsider the following examples. The first code cell creates a DataFrame with a single column Numbers.\n\ndf_list = pd.DataFrame([1, 2, 3], columns=[\"Numbers\"])\ndf_list\n\n\n\n\n\n\n\n\nNumbers\n\n\n\n\n0\n1\n\n\n1\n2\n\n\n2\n3\n\n\n\n\n\n\n\nThe second creates a DataFrame with the columns Numbers and Description. Notice how a 2D list of values is required to initialize the second DataFrame — each nested list represents a single row of data.\n\ndf_list = pd.DataFrame([[1, \"one\"], [2, \"two\"]], columns = [\"Number\", \"Description\"])\ndf_list\n\n\n\n\n\n\n\n\nNumber\nDescription\n\n\n\n\n0\n1\none\n\n\n1\n2\ntwo\n\n\n\n\n\n\n\n\n\n2.2.2.1.3 From a Dictionary\nA third (and more common) way to create a DataFrame is with a dictionary. The dictionary keys represent the column names, and the dictionary values represent the column values.\nBelow are two ways of implementing this approach. The first is based on specifying the columns of the DataFrame, whereas the second is based on specifying the rows of the DataFrame.\n\ndf_dict = pd.DataFrame({\n \"Fruit\": [\"Strawberry\", \"Orange\"], \n \"Price\": [5.49, 3.99]\n})\ndf_dict\n\n\n\n\n\n\n\n\nFruit\nPrice\n\n\n\n\n0\nStrawberry\n5.49\n\n\n1\nOrange\n3.99\n\n\n\n\n\n\n\n\ndf_dict = pd.DataFrame(\n [\n {\"Fruit\":\"Strawberry\", \"Price\":5.49}, \n {\"Fruit\": \"Orange\", \"Price\":3.99}\n ]\n)\ndf_dict\n\n\n\n\n\n\n\n\nFruit\nPrice\n\n\n\n\n0\nStrawberry\n5.49\n\n\n1\nOrange\n3.99\n\n\n\n\n\n\n\n\n\n2.2.2.1.4 From a Series\nEarlier, we explained how a Series was synonymous to a column in a DataFrame. It follows, then, that a DataFrame is equivalent to a collection of Series, which all share the same Index.\nIn fact, we can initialize a DataFrame by merging two or more Series. 
Consider the Series s_a and s_b.\n\n# Notice how our indices, or row labels, are the same\n\ns_a = pd.Series([\"a1\", \"a2\", \"a3\"], index = [\"r1\", \"r2\", \"r3\"])\ns_b = pd.Series([\"b1\", \"b2\", \"b3\"], index = [\"r1\", \"r2\", \"r3\"])\n\nWe can turn individual Series into a DataFrame using two common methods (shown below):\n\npd.DataFrame(s_a)\n\n\n\n\n\n\n\n\n0\n\n\n\n\nr1\na1\n\n\nr2\na2\n\n\nr3\na3\n\n\n\n\n\n\n\n\ns_b.to_frame()\n\n\n\n\n\n\n\n\n0\n\n\n\n\nr1\nb1\n\n\nr2\nb2\n\n\nr3\nb3\n\n\n\n\n\n\n\nTo merge the two Series and specify their column names, we use the following syntax:\n\npd.DataFrame({\n \"A-column\": s_a, \n \"B-column\": s_b\n})\n\n\n\n\n\n\n\n\nA-column\nB-column\n\n\n\n\nr1\na1\nb1\n\n\nr2\na2\nb2\n\n\nr3\na3\nb3\n\n\n\n\n\n\n\n\n\n\n\n2.2.3 Indices\nOn a more technical note, an index doesn’t have to be an integer, nor does it have to be unique. For example, we can set the index of the elections DataFrame to be the name of presidential candidates.\n\n# Creating a DataFrame from a CSV file and specifying the index column\nelections = pd.read_csv(\"data/elections.csv\", index_col = \"Candidate\")\nelections\n\n\n\n\n\n\n\n\nYear\nParty\nPopular vote\nResult\n%\n\n\nCandidate\n\n\n\n\n\n\n\n\n\nAndrew Jackson\n1824\nDemocratic-Republican\n151271\nloss\n57.210122\n\n\nJohn Quincy Adams\n1824\nDemocratic-Republican\n113142\nwin\n42.789878\n\n\nAndrew Jackson\n1828\nDemocratic\n642806\nwin\n56.203927\n\n\nJohn Quincy Adams\n1828\nNational Republican\n500897\nloss\n43.796073\n\n\nAndrew Jackson\n1832\nDemocratic\n702735\nwin\n54.574789\n\n\n...\n...\n...\n...\n...\n...\n\n\nJill Stein\n2016\nGreen\n1457226\nloss\n1.073699\n\n\nJoseph Biden\n2020\nDemocratic\n81268924\nwin\n51.311515\n\n\nDonald Trump\n2020\nRepublican\n74216154\nloss\n46.858542\n\n\nJo Jorgensen\n2020\nLibertarian\n1865724\nloss\n1.177979\n\n\nHoward Hawkins\n2020\nGreen\n405035\nloss\n0.255731\n\n\n\n\n182 rows × 5 columns\n\n\n\nWe can also select a new column and set it as the index of the DataFrame. For example, we can set the index of the elections DataFrame to represent the candidate’s party.\n\nelections.reset_index(inplace = True) # Resetting the index so we can set it again\n# This sets the index to the \"Party\" column\nelections.set_index(\"Party\")\n\n\n\n\n\n\n\n\nCandidate\nYear\nPopular vote\nResult\n%\n\n\nParty\n\n\n\n\n\n\n\n\n\nDemocratic-Republican\nAndrew Jackson\n1824\n151271\nloss\n57.210122\n\n\nDemocratic-Republican\nJohn Quincy Adams\n1824\n113142\nwin\n42.789878\n\n\nDemocratic\nAndrew Jackson\n1828\n642806\nwin\n56.203927\n\n\nNational Republican\nJohn Quincy Adams\n1828\n500897\nloss\n43.796073\n\n\nDemocratic\nAndrew Jackson\n1832\n702735\nwin\n54.574789\n\n\n...\n...\n...\n...\n...\n...\n\n\nGreen\nJill Stein\n2016\n1457226\nloss\n1.073699\n\n\nDemocratic\nJoseph Biden\n2020\n81268924\nwin\n51.311515\n\n\nRepublican\nDonald Trump\n2020\n74216154\nloss\n46.858542\n\n\nLibertarian\nJo Jorgensen\n2020\n1865724\nloss\n1.177979\n\n\nGreen\nHoward Hawkins\n2020\n405035\nloss\n0.255731\n\n\n\n\n182 rows × 5 columns\n\n\n\nAnd, if we’d like, we can revert the index back to the default list of integers.\n\n# This resets the index to be the default list of integer\nelections.reset_index(inplace=True) \nelections.index\n\nRangeIndex(start=0, stop=182, step=1)\n\n\nIt is also important to note that the row labels that constitute an index don’t have to be unique. 
While index values can be unique and numeric, acting as a row number, they can also be named and non-unique.\nHere we see unique and numeric index values.\n\n\n\nHowever, here the index values are not unique." + }, + { + "objectID": "pandas_1/pandas_1.html#dataframe-attributes-index-columns-and-shape", + "href": "pandas_1/pandas_1.html#dataframe-attributes-index-columns-and-shape", + "title": "2  Pandas I", + "section": "2.3 DataFrame Attributes: Index, Columns, and Shape", + "text": "2.3 DataFrame Attributes: Index, Columns, and Shape\nOn the other hand, column names in a DataFrame are almost always unique. Looking back to the elections dataset, it wouldn’t make sense to have two columns named \"Candidate\". Sometimes, you’ll want to extract these different values, in particular, the list of row and column labels.\nFor index/row labels, use DataFrame.index:\n\nelections.set_index(\"Party\", inplace = True)\nelections.index\n\nIndex(['Democratic-Republican', 'Democratic-Republican', 'Democratic',\n 'National Republican', 'Democratic', 'National Republican',\n 'Anti-Masonic', 'Whig', 'Democratic', 'Whig',\n ...\n 'Constitution', 'Republican', 'Independent', 'Libertarian',\n 'Democratic', 'Green', 'Democratic', 'Republican', 'Libertarian',\n 'Green'],\n dtype='object', name='Party', length=182)\n\n\nFor column labels, use DataFrame.columns:\n\nelections.columns\n\nIndex(['index', 'Candidate', 'Year', 'Popular vote', 'Result', '%'], dtype='object')\n\n\nAnd for the shape of the DataFrame, we can use DataFrame.shape to get the number of rows followed by the number of columns:\n\nelections.shape\n\n(182, 6)" + }, + { + "objectID": "pandas_1/pandas_1.html#slicing-in-dataframes", + "href": "pandas_1/pandas_1.html#slicing-in-dataframes", + "title": "2  Pandas I", + "section": "2.4 Slicing in DataFrames", + "text": "2.4 Slicing in DataFrames\nNow that we’ve learned more about DataFrames, let’s dive deeper into their capabilities.\nThe API (Application Programming Interface) for the DataFrame class is enormous. 
In this section, we’ll discuss several methods of the DataFrame API that allow us to extract subsets of data.\nThe simplest way to manipulate a DataFrame is to extract a subset of rows and columns, known as slicing.\nCommon ways we may want to extract data are grabbing:\n\nThe first or last n rows in the DataFrame.\nData with a certain label.\nData at a certain position.\n\nWe will do so with four primary methods of the DataFrame class:\n\n.head and .tail\n.loc\n.iloc\n[]\n\n\n2.4.1 Extracting data with .head and .tail\nThe simplest scenario in which we want to extract data is when we simply want to select the first or last few rows of the DataFrame.\nTo extract the first n rows of a DataFrame df, we use the syntax df.head(n).\n\n\nCode\nelections = pd.read_csv(\"data/elections.csv\")\n\n\n\n# Extract the first 5 rows of the DataFrame\nelections.head(5)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\n\n\n4\n1832\nAndrew Jackson\nDemocratic\n702735\nwin\n54.574789\n\n\n\n\n\n\n\nSimilarly, calling df.tail(n) allows us to extract the last n rows of the DataFrame.\n\n# Extract the last 5 rows of the DataFrame\nelections.tail(5)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n177\n2016\nJill Stein\nGreen\n1457226\nloss\n1.073699\n\n\n178\n2020\nJoseph Biden\nDemocratic\n81268924\nwin\n51.311515\n\n\n179\n2020\nDonald Trump\nRepublican\n74216154\nloss\n46.858542\n\n\n180\n2020\nJo Jorgensen\nLibertarian\n1865724\nloss\n1.177979\n\n\n181\n2020\nHoward Hawkins\nGreen\n405035\nloss\n0.255731\n\n\n\n\n\n\n\n\n\n2.4.2 Label-based Extraction: Indexing with .loc\nFor the more complex task of extracting data with specific column or index labels, we can use .loc. The .loc accessor allows us to specify the labels of rows and columns we wish to extract. The labels (commonly referred to as the indices) are the bold text on the far left of a DataFrame, while the column labels are the column names found at the top of a DataFrame.\n\n\n\nTo grab data with .loc, we must specify the row and column label(s) where the data exists. The row labels are the first argument to the .loc function; the column labels are the second.\nArguments to .loc can be:\n\nA single value.\nA slice.\nA list.\n\nFor example, to select a single value, we can select the row labeled 0 and the column labeled Candidate from the elections DataFrame.\n\nelections.loc[0, 'Candidate']\n\n'Andrew Jackson'\n\n\nKeep in mind that passing in just one argument as a single value will produce a Series. Below, we’ve extracted a subset of the \"Popular vote\" column as a Series.\n\nelections.loc[[87, 25, 179], \"Popular vote\"]\n\n87 15761254\n25 848019\n179 74216154\nName: Popular vote, dtype: int64\n\n\nTo select multiple rows and columns, we can use Python slice notation. Here, we select the rows from labels 0 to 3 and the columns from labels \"Year\" to \"Popular vote\". 
Notice that unlike Python slicing, .loc is inclusive of the right upper bound.\n\nelections.loc[0:3, 'Year':'Popular vote']\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\n\n\n\n\n\n\n\nSuppose that instead, we want to extract all column values for the first four rows in the elections DataFrame. The shorthand : is useful for this.\n\nelections.loc[0:3, :]\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\n\n\n\n\n\n\n\nWe can use the same shorthand to extract all rows.\n\nelections.loc[:, [\"Year\", \"Candidate\", \"Result\"]]\n\n\n\n\n\n\n\n\nYear\nCandidate\nResult\n\n\n\n\n0\n1824\nAndrew Jackson\nloss\n\n\n1\n1824\nJohn Quincy Adams\nwin\n\n\n2\n1828\nAndrew Jackson\nwin\n\n\n3\n1828\nJohn Quincy Adams\nloss\n\n\n4\n1832\nAndrew Jackson\nwin\n\n\n...\n...\n...\n...\n\n\n177\n2016\nJill Stein\nloss\n\n\n178\n2020\nJoseph Biden\nwin\n\n\n179\n2020\nDonald Trump\nloss\n\n\n180\n2020\nJo Jorgensen\nloss\n\n\n181\n2020\nHoward Hawkins\nloss\n\n\n\n\n182 rows × 3 columns\n\n\n\nThere are a couple of things we should note. Firstly, unlike conventional Python, pandas allows us to slice string values (in our example, the column labels). Secondly, slicing with .loc is inclusive. Notice how our resulting DataFrame includes every row and column between and including the slice labels we specified.\nEquivalently, we can use a list to obtain multiple rows and columns in our elections DataFrame.\n\nelections.loc[[0, 1, 2, 3], ['Year', 'Candidate', 'Party', 'Popular vote']]\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\n\n\n\n\n\n\n\nLastly, we can interchange list and slicing notation.\n\nelections.loc[[0, 1, 2, 3], :]\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\n\n\n\n\n\n\n\n\n\n2.4.3 Integer-based Extraction: Indexing with .iloc\nSlicing with .iloc works similarly to .loc. However, .iloc uses the index positions of rows and columns rather than the labels (think to yourself: loc uses lables; iloc uses indices). The arguments to the .iloc function also behave similarly — single values, lists, indices, and any combination of these are permitted.\nLet’s begin reproducing our results from above. We’ll begin by selecting the first presidential candidate in our elections DataFrame:\n\n# elections.loc[0, \"Candidate\"] - Previous approach\nelections.iloc[0, 1]\n\n'Andrew Jackson'\n\n\nNotice how the first argument to both .loc and .iloc are the same. 
This is because the row with a label of 0 is conveniently in the \\(0^{\\text{th}}\\) (equivalently, the first position) of the elections DataFrame. Generally, this is true of any DataFrame where the row labels are incremented in ascending order from 0.\nAnd, as before, if we were to pass in only one single value argument, our result would be a Series.\n\nelections.iloc[[1,2,3],1]\n\n1 John Quincy Adams\n2 Andrew Jackson\n3 John Quincy Adams\nName: Candidate, dtype: object\n\n\nHowever, when we select the first four rows and columns using .iloc, we notice something.\n\n# elections.loc[0:3, 'Year':'Popular vote'] - Previous approach\nelections.iloc[0:4, 0:4]\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\n\n\n\n\n\n\n\nSlicing is no longer inclusive in .iloc — it’s exclusive. In other words, the right end of a slice is not included when using .iloc. This is one of the subtleties of pandas syntax; you will get used to it with practice.\nList behavior works just as expected.\n\n#elections.loc[[0, 1, 2, 3], ['Year', 'Candidate', 'Party', 'Popular vote']] - Previous Approach\nelections.iloc[[0, 1, 2, 3], [0, 1, 2, 3]]\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\n\n\n\n\n\n\n\nAnd just like with .loc, we can use a colon with .iloc to extract all rows or columns.\n\nelections.iloc[:, 0:3]\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n\n\n4\n1832\nAndrew Jackson\nDemocratic\n\n\n...\n...\n...\n...\n\n\n177\n2016\nJill Stein\nGreen\n\n\n178\n2020\nJoseph Biden\nDemocratic\n\n\n179\n2020\nDonald Trump\nRepublican\n\n\n180\n2020\nJo Jorgensen\nLibertarian\n\n\n181\n2020\nHoward Hawkins\nGreen\n\n\n\n\n182 rows × 3 columns\n\n\n\nThis discussion begs the question: when should we use .loc vs. .iloc? In most cases, .loc is generally safer to use. You can imagine .iloc may return incorrect values when applied to a dataset where the ordering of data can change. However, .iloc can still be useful — for example, if you are looking at a DataFrame of sorted movie earnings and want to get the median earnings for a given year, you can use .iloc to index into the middle.\nOverall, it is important to remember that:\n\n.loc performances label-based extraction.\n.iloc performs integer-based extraction.\n\n\n\n2.4.4 Context-dependent Extraction: Indexing with []\nThe [] selection operator is the most baffling of all, yet the most commonly used. It only takes a single argument, which may be one of the following:\n\nA slice of row numbers.\nA list of column labels.\nA single-column label.\n\nThat is, [] is context-dependent. 
Let’s see some examples.\n\n2.4.4.1 A slice of row numbers\nSay we wanted the first four rows of our elections DataFrame.\n\nelections[0:4]\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\n\n\n\n\n\n\n\n\n\n2.4.4.2 A list of column labels\nSuppose we now want the first four columns.\n\nelections[[\"Year\", \"Candidate\", \"Party\", \"Popular vote\"]]\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\n\n\n4\n1832\nAndrew Jackson\nDemocratic\n702735\n\n\n...\n...\n...\n...\n...\n\n\n177\n2016\nJill Stein\nGreen\n1457226\n\n\n178\n2020\nJoseph Biden\nDemocratic\n81268924\n\n\n179\n2020\nDonald Trump\nRepublican\n74216154\n\n\n180\n2020\nJo Jorgensen\nLibertarian\n1865724\n\n\n181\n2020\nHoward Hawkins\nGreen\n405035\n\n\n\n\n182 rows × 4 columns\n\n\n\n\n\n2.4.4.3 A single-column label\nLastly, [] allows us to extract only the \"Candidate\" column.\n\nelections[\"Candidate\"]\n\n0 Andrew Jackson\n1 John Quincy Adams\n2 Andrew Jackson\n3 John Quincy Adams\n4 Andrew Jackson\n ... \n177 Jill Stein\n178 Joseph Biden\n179 Donald Trump\n180 Jo Jorgensen\n181 Howard Hawkins\nName: Candidate, Length: 182, dtype: object\n\n\nThe output is a Series! In this course, we’ll become very comfortable with [], especially for selecting columns. In practice, [] is much more common than .loc, especially since it is far more concise." + }, + { + "objectID": "pandas_1/pandas_1.html#parting-note", + "href": "pandas_1/pandas_1.html#parting-note", + "title": "2  Pandas I", + "section": "2.5 Parting Note", + "text": "2.5 Parting Note\nThe pandas library is enormous and contains many useful functions. Here is a link to its documentation. We certainly don’t expect you to memorize each and every method of the library, and we will give you a reference sheet for exams.\nThe introductory Data 100 pandas lectures will provide a high-level view of the key data structures and methods that will form the foundation of your pandas knowledge. A goal of this course is to help you build your familiarity with the real-world programming practice of … Googling! Answers to your questions can be found in documentation, Stack Overflow, etc. Being able to search for, read, and implement documentation is an important life skill for any data scientist.\nWith that, we will move on to Pandas II!" + }, + { + "objectID": "pandas_2/pandas_2.html#conditional-selection", + "href": "pandas_2/pandas_2.html#conditional-selection", + "title": "3  Pandas II", + "section": "3.1 Conditional Selection", + "text": "3.1 Conditional Selection\nConditional selection allows us to select a subset of rows in a DataFrame that satisfy some specified condition.\nTo understand how to use conditional selection, we must look at another possible input of the .loc and [] methods – a boolean array, which is simply an array or Series where each element is either True or False. This boolean array must have a length equal to the number of rows in the DataFrame. 
It will return all rows that correspond to a value of True in the array. We used a very similar technique when performing conditional extraction from a Series in the last lecture.\nTo see this in action, let’s select all even-indexed rows in the first 10 rows of our DataFrame.\n\n# Ask yourself: why is :9 is the correct slice to select the first 10 rows?\nbabynames_first_10_rows = babynames.loc[:9, :]\n\n# Notice how we have exactly 10 elements in our boolean array argument\nbabynames_first_10_rows[[True, False, True, False, True, False, True, False, True, False]]\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n\n\n2\nCA\nF\n1910\nDorothy\n220\n\n\n4\nCA\nF\n1910\nFrances\n134\n\n\n6\nCA\nF\n1910\nEvelyn\n126\n\n\n8\nCA\nF\n1910\nVirginia\n101\n\n\n\n\n\n\n\nWe can perform a similar operation using .loc.\n\nbabynames_first_10_rows.loc[[True, False, True, False, True, False, True, False, True, False], :]\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n\n\n2\nCA\nF\n1910\nDorothy\n220\n\n\n4\nCA\nF\n1910\nFrances\n134\n\n\n6\nCA\nF\n1910\nEvelyn\n126\n\n\n8\nCA\nF\n1910\nVirginia\n101\n\n\n\n\n\n\n\nThese techniques worked well in this example, but you can imagine how tedious it might be to list out True and Falsefor every row in a larger DataFrame. To make things easier, we can instead provide a logical condition as an input to .loc or [] that returns a boolean array with the necessary length.\nFor example, to return all names associated with F sex:\n\n# First, use a logical condition to generate a boolean array\nlogical_operator = (babynames[\"Sex\"] == \"F\")\n\n# Then, use this boolean array to filter the DataFrame\nbabynames[logical_operator].head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n\n\n1\nCA\nF\n1910\nHelen\n239\n\n\n2\nCA\nF\n1910\nDorothy\n220\n\n\n3\nCA\nF\n1910\nMargaret\n163\n\n\n4\nCA\nF\n1910\nFrances\n134\n\n\n\n\n\n\n\nRecall from the previous lecture that .head() will return only the first few rows in the DataFrame. In reality, babynames[logical operator] contains as many rows as there are entries in the original babynames DataFrame with sex \"F\".\nHere, logical_operator evaluates to a Series of boolean values with length 407428.\n\n\nCode\nprint(\"There are a total of {} values in 'logical_operator'\".format(len(logical_operator)))\n\n\nThere are a total of 407428 values in 'logical_operator'\n\n\nRows starting at row 0 and ending at row 239536 evaluate to True and are thus returned in the DataFrame. Rows from 239537 onwards evaluate to False and are omitted from the output.\n\n\nCode\nprint(\"The 0th item in this 'logical_operator' is: {}\".format(logical_operator.iloc[0]))\nprint(\"The 239536th item in this 'logical_operator' is: {}\".format(logical_operator.iloc[239536]))\nprint(\"The 239537th item in this 'logical_operator' is: {}\".format(logical_operator.iloc[239537]))\n\n\nThe 0th item in this 'logical_operator' is: True\nThe 239536th item in this 'logical_operator' is: True\nThe 239537th item in this 'logical_operator' is: False\n\n\nPassing a Series as an argument to babynames[] has the same effect as using a boolean array. In fact, the [] selection operator can take a boolean Series, array, and list as arguments. 
These three are used interchangeably throughout the course.\nWe can also use .loc to achieve similar results.\n\nbabynames.loc[babynames[\"Sex\"] == \"F\"].head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n\n\n1\nCA\nF\n1910\nHelen\n239\n\n\n2\nCA\nF\n1910\nDorothy\n220\n\n\n3\nCA\nF\n1910\nMargaret\n163\n\n\n4\nCA\nF\n1910\nFrances\n134\n\n\n\n\n\n\n\nBoolean conditions can be combined using various bitwise operators, allowing us to filter results by multiple conditions. In the table below, p and q are boolean arrays or Series.\n\n\n\nSymbol\nUsage\nMeaning\n\n\n\n\n~\n~p\nReturns negation of p\n\n\n|\np | q\np OR q\n\n\n&\np & q\np AND q\n\n\n^\np ^ q\np XOR q (exclusive or)\n\n\n\nWhen combining multiple conditions with logical operators, we surround each individual condition with a set of parenthesis (). This imposes an order of operations on pandas evaluating your logic and can avoid code erroring.\nFor example, if we want to return data on all names with sex \"F\" born before the year 2000, we can write:\n\nbabynames[(babynames[\"Sex\"] == \"F\") & (babynames[\"Year\"] < 2000)].head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n\n\n1\nCA\nF\n1910\nHelen\n239\n\n\n2\nCA\nF\n1910\nDorothy\n220\n\n\n3\nCA\nF\n1910\nMargaret\n163\n\n\n4\nCA\nF\n1910\nFrances\n134\n\n\n\n\n\n\n\nNote that we’re working with Series, so using and in place of &, or or in place | will error.\n\n# This line of code will raise a ValueError\n# babynames[(babynames[\"Sex\"] == \"F\") and (babynames[\"Year\"] < 2000)].head()\n\nIf we want to return data on all names with sex \"F\" or all born before the year 2000, we can write:\n\nbabynames[(babynames[\"Sex\"] == \"F\") | (babynames[\"Year\"] < 2000)].head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n\n\n1\nCA\nF\n1910\nHelen\n239\n\n\n2\nCA\nF\n1910\nDorothy\n220\n\n\n3\nCA\nF\n1910\nMargaret\n163\n\n\n4\nCA\nF\n1910\nFrances\n134\n\n\n\n\n\n\n\nBoolean array selection is a useful tool, but can lead to overly verbose code for complex conditions. In the example below, our boolean condition is long enough to extend for several lines of code.\n\n# Note: The parentheses surrounding the code make it possible to break the code on to multiple lines for readability\n(\n babynames[(babynames[\"Name\"] == \"Bella\") | \n (babynames[\"Name\"] == \"Alex\") |\n (babynames[\"Name\"] == \"Ani\") |\n (babynames[\"Name\"] == \"Lisa\")]\n).head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n6289\nCA\nF\n1923\nBella\n5\n\n\n7512\nCA\nF\n1925\nBella\n8\n\n\n12368\nCA\nF\n1932\nLisa\n5\n\n\n14741\nCA\nF\n1936\nLisa\n8\n\n\n17084\nCA\nF\n1939\nLisa\n5\n\n\n\n\n\n\n\nFortunately, pandas provides many alternative methods for constructing boolean filters.\nThe .isin function is one such example. This method evaluates if the values in a Series are contained in a different sequence (list, array, or Series) of values. 
In the cell below, we achieve equivalent results to the DataFrame above with far more concise code.\n\nnames = [\"Bella\", \"Alex\", \"Narges\", \"Lisa\"]\nbabynames[\"Name\"].isin(names).head()\n\n0 False\n1 False\n2 False\n3 False\n4 False\nName: Name, dtype: bool\n\n\n\nbabynames[babynames[\"Name\"].isin(names)].head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n6289\nCA\nF\n1923\nBella\n5\n\n\n7512\nCA\nF\n1925\nBella\n8\n\n\n12368\nCA\nF\n1932\nLisa\n5\n\n\n14741\nCA\nF\n1936\nLisa\n8\n\n\n17084\nCA\nF\n1939\nLisa\n5\n\n\n\n\n\n\n\nThe function str.startswith can be used to define a filter based on string values in a Series object. It checks to see if string values in a Series start with a particular character.\n\n# Identify whether names begin with the letter \"N\"\nbabynames[\"Name\"].str.startswith(\"N\").head()\n\n0 False\n1 False\n2 False\n3 False\n4 False\nName: Name, dtype: bool\n\n\n\n# Extracting names that begin with the letter \"N\"\nbabynames[babynames[\"Name\"].str.startswith(\"N\")].head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n76\nCA\nF\n1910\nNorma\n23\n\n\n83\nCA\nF\n1910\nNellie\n20\n\n\n127\nCA\nF\n1910\nNina\n11\n\n\n198\nCA\nF\n1910\nNora\n6\n\n\n310\nCA\nF\n1911\nNellie\n23" + }, + { + "objectID": "pandas_2/pandas_2.html#adding-removing-and-modifying-columns", + "href": "pandas_2/pandas_2.html#adding-removing-and-modifying-columns", + "title": "3  Pandas II", + "section": "3.2 Adding, Removing, and Modifying Columns", + "text": "3.2 Adding, Removing, and Modifying Columns\nIn many data science tasks, we may need to change the columns contained in our DataFrame in some way. Fortunately, the syntax to do so is fairly straightforward.\nTo add a new column to a DataFrame, we use a syntax similar to that used when accessing an existing column. Specify the name of the new column by writing df[\"column\"], then assign this to a Series or array containing the values that will populate this column.\n\n# Create a Series of the length of each name. \nbabyname_lengths = babynames[\"Name\"].str.len()\n\n# Add a column named \"name_lengths\" that includes the length of each name\nbabynames[\"name_lengths\"] = babyname_lengths\nbabynames.head(5)\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\nname_lengths\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n4\n\n\n1\nCA\nF\n1910\nHelen\n239\n5\n\n\n2\nCA\nF\n1910\nDorothy\n220\n7\n\n\n3\nCA\nF\n1910\nMargaret\n163\n8\n\n\n4\nCA\nF\n1910\nFrances\n134\n7\n\n\n\n\n\n\n\nIf we need to later modify an existing column, we can do so by referencing this column again with the syntax df[\"column\"], then re-assigning it to a new Series or array of the appropriate length.\n\n# Modify the “name_lengths” column to be one less than its original value\nbabynames[\"name_lengths\"] = babynames[\"name_lengths\"] - 1\nbabynames.head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\nname_lengths\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n3\n\n\n1\nCA\nF\n1910\nHelen\n239\n4\n\n\n2\nCA\nF\n1910\nDorothy\n220\n6\n\n\n3\nCA\nF\n1910\nMargaret\n163\n7\n\n\n4\nCA\nF\n1910\nFrances\n134\n6\n\n\n\n\n\n\n\nWe can rename a column using the .rename() method. 
It takes in a dictionary that maps old column names to their new ones.\n\n# Rename “name_lengths” to “Length”\nbabynames = babynames.rename(columns={\"name_lengths\":\"Length\"})\nbabynames.head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\nLength\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n3\n\n\n1\nCA\nF\n1910\nHelen\n239\n4\n\n\n2\nCA\nF\n1910\nDorothy\n220\n6\n\n\n3\nCA\nF\n1910\nMargaret\n163\n7\n\n\n4\nCA\nF\n1910\nFrances\n134\n6\n\n\n\n\n\n\n\nIf we want to remove a column or row of a DataFrame, we can call the .drop (documentation) method. Use the axis parameter to specify whether a column or row should be dropped. Unless otherwise specified, pandas will assume that we are dropping a row by default.\n\n# Drop our new \"Length\" column from the DataFrame\nbabynames = babynames.drop(\"Length\", axis=\"columns\")\nbabynames.head(5)\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n\n\n1\nCA\nF\n1910\nHelen\n239\n\n\n2\nCA\nF\n1910\nDorothy\n220\n\n\n3\nCA\nF\n1910\nMargaret\n163\n\n\n4\nCA\nF\n1910\nFrances\n134\n\n\n\n\n\n\n\nNotice that we re-assigned babynames to the result of babynames.drop(...). This is a subtle but important point: pandas table operations do not occur in-place. Calling df.drop(...) will output a copy of df with the row/column of interest removed without modifying the original df table.\nIn other words, if we simply call:\n\n# This creates a copy of `babynames` and removes the column \"Name\"...\nbabynames.drop(\"Name\", axis=\"columns\")\n\n# ...but the original `babynames` is unchanged! \n# Notice that the \"Name\" column is still present\nbabynames.head(5)\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n\n\n1\nCA\nF\n1910\nHelen\n239\n\n\n2\nCA\nF\n1910\nDorothy\n220\n\n\n3\nCA\nF\n1910\nMargaret\n163\n\n\n4\nCA\nF\n1910\nFrances\n134" + }, + { + "objectID": "pandas_2/pandas_2.html#useful-utility-functions", + "href": "pandas_2/pandas_2.html#useful-utility-functions", + "title": "3  Pandas II", + "section": "3.3 Useful Utility Functions", + "text": "3.3 Useful Utility Functions\npandas contains an extensive library of functions that can help shorten the process of setting and getting information from its data structures. In the following section, we will give overviews of each of the main utility functions that will help us in Data 100.\nDiscussing all functionality offered by pandas could take an entire semester! We will walk you through the most commonly-used functions and encourage you to explore and experiment on your own.\n\nNumPy and built-in function support\n.shape\n.size\n.describe()\n.sample()\n.value_counts()\n.unique()\n.sort_values()\n\nThe pandas documentation will be a valuable resource in Data 100 and beyond.\n\n3.3.1 NumPy\npandas is designed to work well with NumPy, the framework for array computations you encountered in Data 8. Just about any NumPy function can be applied to pandas DataFrames and Series.\n\n# Pull out the number of babies named Yash each year\nyash_count = babynames[babynames[\"Name\"] == \"Yash\"][\"Count\"]\nyash_count.head()\n\n331824 8\n334114 9\n336390 11\n338773 12\n341387 10\nName: Count, dtype: int64\n\n\n\n# Average number of babies named Yash each year\nnp.mean(yash_count)\n\n17.142857142857142\n\n\n\n# Max number of babies named Yash born in any one year\nnp.max(yash_count)\n\n29\n\n\n\n\n3.3.2 .shape and .size\n.shape and .size are attributes of Series and DataFrames that measure the “amount” of data stored in the structure. 
Calling .shape returns a tuple containing the number of rows and columns present in the DataFrame or Series. .size is used to find the total number of elements in a structure, equivalent to the number of rows times the number of columns.\nMany functions strictly require the dimensions of the arguments along certain axes to match. Calling these dimension-finding functions is much faster than counting all of the items by hand.\n\n# Return the shape of the DataFrame, in the format (num_rows, num_columns)\nbabynames.shape\n\n(407428, 5)\n\n\n\n# Return the size of the DataFrame, equal to num_rows * num_columns\nbabynames.size\n\n2037140\n\n\n\n\n3.3.3 .describe()\nIf many statistics are required from a DataFrame (minimum value, maximum value, mean value, etc.), then .describe() (documentation) can be used to compute all of them at once.\n\nbabynames.describe()\n\n\n\n\n\n\n\n\nYear\nCount\n\n\n\n\ncount\n407428.000000\n407428.000000\n\n\nmean\n1985.733609\n79.543456\n\n\nstd\n27.007660\n293.698654\n\n\nmin\n1910.000000\n5.000000\n\n\n25%\n1969.000000\n7.000000\n\n\n50%\n1992.000000\n13.000000\n\n\n75%\n2008.000000\n38.000000\n\n\nmax\n2022.000000\n8260.000000\n\n\n\n\n\n\n\nA different set of statistics will be reported if .describe() is called on a Series.\n\nbabynames[\"Sex\"].describe()\n\ncount 407428\nunique 2\ntop F\nfreq 239537\nName: Sex, dtype: object\n\n\n\n\n3.3.4 .sample()\nAs we will see later in the semester, random processes are at the heart of many data science techniques (for example, train-test splits, bootstrapping, and cross-validation). .sample() (documentation) lets us quickly select random entries (a row if called from a DataFrame, or a value if called from a Series).\nBy default, .sample() selects entries without replacement. Pass in the argument replace=True to sample with replacement.\n\n# Sample a single row\nbabynames.sample()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n110183\nCA\nF\n1989\nKaleigh\n24\n\n\n\n\n\n\n\nNaturally, this can be chained with other methods and operators (iloc, etc.).\n\n# Sample 5 random rows, and select all columns after column 2\nbabynames.sample(5).iloc[:, 2:]\n\n\n\n\n\n\n\n\nYear\nName\nCount\n\n\n\n\n9940\n1929\nGeorgia\n67\n\n\n311550\n1987\nDaron\n14\n\n\n270752\n1959\nDuke\n16\n\n\n110554\n1989\nBritny\n15\n\n\n247957\n1929\nLawrence\n183\n\n\n\n\n\n\n\n\n# Randomly sample 4 names from the year 2000, with replacement, and select all columns after column 2\nbabynames[babynames[\"Year\"] == 2000].sample(4, replace = True).iloc[:, 2:]\n\n\n\n\n\n\n\n\nYear\nName\nCount\n\n\n\n\n151301\n2000\nJaimee\n9\n\n\n150510\n2000\nKameron\n17\n\n\n342553\n2000\nCody\n532\n\n\n344582\n2000\nTyran\n6\n\n\n\n\n\n\n\n\n\n3.3.5 .value_counts()\nThe Series.value_counts() (documentation) method counts the number of occurrence of each unique value in a Series. In other words, it counts the number of times each unique value appears. This is often useful for determining the most or least common entries in a Series.\nIn the example below, we can determine the name with the most years in which at least one person has taken that name by counting the number of times each name appears in the \"Name\" column of babynames. Note that the return value is also a Series.\n\nbabynames[\"Name\"].value_counts().head()\n\nJean 223\nFrancis 221\nGuadalupe 218\nJessie 217\nMarion 214\nName: Name, dtype: int64\n\n\n\n\n3.3.6 .unique()\nIf we have a Series with many repeated values, then .unique() (documentation) can be used to identify only the unique values. 
Here we return an array of all the names in babynames.\n\nbabynames[\"Name\"].unique()\n\narray(['Mary', 'Helen', 'Dorothy', ..., 'Zae', 'Zai', 'Zayvier'],\n dtype=object)\n\n\n\n\n3.3.7 .sort_values()\nOrdering a DataFrame can be useful for isolating extreme values. For example, the first 5 entries of a row sorted in descending order (that is, from highest to lowest) are the largest 5 values. .sort_values (documentation) allows us to order a DataFrame or Series by a specified column. We can choose to either receive the rows in ascending order (default) or descending order.\n\n# Sort the \"Count\" column from highest to lowest\nbabynames.sort_values(by=\"Count\", ascending=False).head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n268041\nCA\nM\n1957\nMichael\n8260\n\n\n267017\nCA\nM\n1956\nMichael\n8258\n\n\n317387\nCA\nM\n1990\nMichael\n8246\n\n\n281850\nCA\nM\n1969\nMichael\n8245\n\n\n283146\nCA\nM\n1970\nMichael\n8196\n\n\n\n\n\n\n\nUnlike when calling .value_counts() on a DataFrame, we do not need to explicitly specify the column used for sorting when calling .value_counts() on a Series. We can still specify the ordering paradigm – that is, whether values are sorted in ascending or descending order.\n\n# Sort the \"Name\" Series alphabetically\nbabynames[\"Name\"].sort_values(ascending=True).head()\n\n366001 Aadan\n384005 Aadan\n369120 Aadan\n398211 Aadarsh\n370306 Aaden\nName: Name, dtype: object" + }, + { + "objectID": "pandas_2/pandas_2.html#parting-note", + "href": "pandas_2/pandas_2.html#parting-note", + "title": "3  Pandas II", + "section": "3.4 Parting Note", + "text": "3.4 Parting Note\nManipulating DataFrames is not a skill that is mastered in just one day. Due to the flexibility of pandas, there are many different ways to get from point A to point B. We recommend trying multiple different ways to solve the same problem to gain even more practice and reach that point of mastery sooner.\nNext, we will start digging deeper into the mechanics behind grouping data." + }, + { + "objectID": "pandas_3/pandas_3.html#custom-sorts", + "href": "pandas_3/pandas_3.html#custom-sorts", + "title": "4  Pandas III", + "section": "4.1 Custom Sorts", + "text": "4.1 Custom Sorts\nFirst, let’s finish our discussion about sorting. Let’s try to solve a sorting problem using different approaches. Assume we want to find the longest baby names and sort our data accordingly.\nWe’ll start by loading the babynames dataset. 
Note that this dataset is filtered to only contain data from California.\n\n\nCode\n# This code pulls census data and loads it into a DataFrame\n# We won't cover it explicitly in this class, but you are welcome to explore it on your own\nimport pandas as pd\nimport numpy as np\nimport urllib.request\nimport os.path\nimport zipfile\n\ndata_url = \"https://www.ssa.gov/oact/babynames/state/namesbystate.zip\"\nlocal_filename = \"data/babynamesbystate.zip\"\nif not os.path.exists(local_filename): # If the data exists don't download again\n with urllib.request.urlopen(data_url) as resp, open(local_filename, 'wb') as f:\n f.write(resp.read())\n\nzf = zipfile.ZipFile(local_filename, 'r')\n\nca_name = 'STATE.CA.TXT'\nfield_names = ['State', 'Sex', 'Year', 'Name', 'Count']\nwith zf.open(ca_name) as fh:\n babynames = pd.read_csv(fh, header=None, names=field_names)\n\nbabynames.tail(10)\n\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n407418\nCA\nM\n2022\nZach\n5\n\n\n407419\nCA\nM\n2022\nZadkiel\n5\n\n\n407420\nCA\nM\n2022\nZae\n5\n\n\n407421\nCA\nM\n2022\nZai\n5\n\n\n407422\nCA\nM\n2022\nZay\n5\n\n\n407423\nCA\nM\n2022\nZayvier\n5\n\n\n407424\nCA\nM\n2022\nZia\n5\n\n\n407425\nCA\nM\n2022\nZora\n5\n\n\n407426\nCA\nM\n2022\nZuriel\n5\n\n\n407427\nCA\nM\n2022\nZylo\n5\n\n\n\n\n\n\n\n\n4.1.1 Approach 1: Create a Temporary Column\nOne method to do this is to first start by creating a column that contains the lengths of the names.\n\n# Create a Series of the length of each name\nbabyname_lengths = babynames[\"Name\"].str.len()\n\n# Add a column named \"name_lengths\" that includes the length of each name\nbabynames[\"name_lengths\"] = babyname_lengths\nbabynames.head(5)\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\nname_lengths\n\n\n\n\n0\nCA\nF\n1910\nMary\n295\n4\n\n\n1\nCA\nF\n1910\nHelen\n239\n5\n\n\n2\nCA\nF\n1910\nDorothy\n220\n7\n\n\n3\nCA\nF\n1910\nMargaret\n163\n8\n\n\n4\nCA\nF\n1910\nFrances\n134\n7\n\n\n\n\n\n\n\nWe can then sort the DataFrame by that column using .sort_values():\n\n# Sort by the temporary column\nbabynames = babynames.sort_values(by=\"name_lengths\", ascending=False)\nbabynames.head(5)\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\nname_lengths\n\n\n\n\n334166\nCA\nM\n1996\nFranciscojavier\n8\n15\n\n\n337301\nCA\nM\n1997\nFranciscojavier\n5\n15\n\n\n339472\nCA\nM\n1998\nFranciscojavier\n6\n15\n\n\n321792\nCA\nM\n1991\nRyanchristopher\n7\n15\n\n\n327358\nCA\nM\n1993\nJohnchristopher\n5\n15\n\n\n\n\n\n\n\nFinally, we can drop the name_length column from babynames to prevent our table from getting cluttered.\n\n# Drop the 'name_length' column\nbabynames = babynames.drop(\"name_lengths\", axis='columns')\nbabynames.head(5)\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n334166\nCA\nM\n1996\nFranciscojavier\n8\n\n\n337301\nCA\nM\n1997\nFranciscojavier\n5\n\n\n339472\nCA\nM\n1998\nFranciscojavier\n6\n\n\n321792\nCA\nM\n1991\nRyanchristopher\n7\n\n\n327358\nCA\nM\n1993\nJohnchristopher\n5\n\n\n\n\n\n\n\n\n\n4.1.2 Approach 2: Sorting using the key Argument\nAnother way to approach this is to use the key argument of .sort_values(). 
Here we can specify that we want to sort \"Name\" values by their length.\n\nbabynames.sort_values(\"Name\", key=lambda x: x.str.len(), ascending=False).head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n334166\nCA\nM\n1996\nFranciscojavier\n8\n\n\n327472\nCA\nM\n1993\nRyanchristopher\n5\n\n\n337301\nCA\nM\n1997\nFranciscojavier\n5\n\n\n337477\nCA\nM\n1997\nRyanchristopher\n5\n\n\n312543\nCA\nM\n1987\nFranciscojavier\n5\n\n\n\n\n\n\n\n\n\n4.1.3 Approach 3: Sorting using the map Function\nWe can also use the map function on a Series to solve this. Say we want to sort the babynames table by the number of \"dr\"’s and \"ea\"’s in each \"Name\". We’ll define the function dr_ea_count to help us out.\n\n# First, define a function to count the number of times \"dr\" or \"ea\" appear in each name\ndef dr_ea_count(string):\n return string.count('dr') + string.count('ea')\n\n# Then, use `map` to apply `dr_ea_count` to each name in the \"Name\" column\nbabynames[\"dr_ea_count\"] = babynames[\"Name\"].map(dr_ea_count)\n\n# Sort the DataFrame by the new \"dr_ea_count\" column so we can see our handiwork\nbabynames = babynames.sort_values(by=\"dr_ea_count\", ascending=False)\nbabynames.head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\ndr_ea_count\n\n\n\n\n115957\nCA\nF\n1990\nDeandrea\n5\n3\n\n\n101976\nCA\nF\n1986\nDeandrea\n6\n3\n\n\n131029\nCA\nF\n1994\nLeandrea\n5\n3\n\n\n108731\nCA\nF\n1988\nDeandrea\n5\n3\n\n\n308131\nCA\nM\n1985\nDeandrea\n6\n3\n\n\n\n\n\n\n\nWe can drop the dr_ea_count once we’re done using it to maintain a neat table.\n\n# Drop the `dr_ea_count` column\nbabynames = babynames.drop(\"dr_ea_count\", axis = 'columns')\nbabynames.head(5)\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\n\n\n\n\n115957\nCA\nF\n1990\nDeandrea\n5\n\n\n101976\nCA\nF\n1986\nDeandrea\n6\n\n\n131029\nCA\nF\n1994\nLeandrea\n5\n\n\n108731\nCA\nF\n1988\nDeandrea\n5\n\n\n308131\nCA\nM\n1985\nDeandrea\n6" + }, + { + "objectID": "pandas_3/pandas_3.html#aggregating-data-with-.groupby", + "href": "pandas_3/pandas_3.html#aggregating-data-with-.groupby", + "title": "4  Pandas III", + "section": "4.2 Aggregating Data with .groupby", + "text": "4.2 Aggregating Data with .groupby\nUp until this point, we have been working with individual rows of DataFrames. As data scientists, we often wish to investigate trends across a larger subset of our data. For example, we may want to compute some summary statistic (the mean, median, sum, etc.) for a group of rows in our DataFrame. To do this, we’ll use pandas GroupBy objects. Our goal is to group together rows that fall under the same category and perform an operation that aggregates across all rows in the category.\nLet’s say we wanted to aggregate all rows in babynames for a given year.\n\nbabynames.groupby(\"Year\")\n\n<pandas.core.groupby.generic.DataFrameGroupBy object at 0x10545ac70>\n\n\nWhat does this strange output mean? Calling .groupby (documentation) has generated a GroupBy object. You can imagine this as a set of “mini” sub-DataFrames, where each subframe contains all of the rows from babynames that correspond to a particular year.\nThe diagram below shows a simplified view of babynames to help illustrate this idea.\n\n\n\nWe can’t work with a GroupBy object directly – that is why you saw that strange output earlier rather than a standard view of a DataFrame. To actually manipulate values within these “mini” DataFrames, we’ll need to call an aggregation method. This is a method that tells pandas how to aggregate the values within the GroupBy object. 
Once the aggregation is applied, pandas will return a normal (now grouped) DataFrame.\nThe first aggregation method we’ll consider is .agg. The .agg method takes in a function as its argument; this function is then applied to each column of a “mini” grouped DataFrame. We end up with a new DataFrame with one aggregated row per subframe. Let’s see this in action by finding the sum of all counts for each year in babynames – this is equivalent to finding the number of babies born in each year.\n\nbabynames[[\"Year\", \"Count\"]].groupby(\"Year\").agg(sum).head(5)\n\n\n\n\n\n\n\n\nCount\n\n\nYear\n\n\n\n\n\n1910\n9163\n\n\n1911\n9983\n\n\n1912\n17946\n\n\n1913\n22094\n\n\n1914\n26926\n\n\n\n\n\n\n\nWe can relate this back to the diagram we used above. Remember that the diagram uses a simplified version of babynames, which is why we see smaller values for the summed counts.\n\n\n\nPerforming an aggregation\n\n\nCalling .agg has condensed each subframe back into a single row. This gives us our final output: a DataFrame that is now indexed by \"Year\", with a single row for each unique year in the original babynames DataFrame.\nThere are many different aggregation functions we can use, all of which are useful in different applications.\n\nbabynames[[\"Year\", \"Count\"]].groupby(\"Year\").agg(min).head(5)\n\n\n\n\n\n\n\n\nCount\n\n\nYear\n\n\n\n\n\n1910\n5\n\n\n1911\n5\n\n\n1912\n5\n\n\n1913\n5\n\n\n1914\n5\n\n\n\n\n\n\n\n\nbabynames[[\"Year\", \"Count\"]].groupby(\"Year\").agg(max).head(5)\n\n\n\n\n\n\n\n\nCount\n\n\nYear\n\n\n\n\n\n1910\n295\n\n\n1911\n390\n\n\n1912\n534\n\n\n1913\n614\n\n\n1914\n773\n\n\n\n\n\n\n\n\n# Same result, but now we explicitly tell pandas to only consider the \"Count\" column when summing\nbabynames.groupby(\"Year\")[[\"Count\"]].agg(sum).head(5)\n\n\n\n\n\n\n\n\nCount\n\n\nYear\n\n\n\n\n\n1910\n9163\n\n\n1911\n9983\n\n\n1912\n17946\n\n\n1913\n22094\n\n\n1914\n26926\n\n\n\n\n\n\n\nThere are many different aggregations that can be applied to the grouped data. The primary requirement is that an aggregation function must:\n\nTake in a Series of data (a single column of the grouped subframe).\nReturn a single value that aggregates this Series.\n\n\n4.2.1 Aggregation Functions\nBecause of this fairly broad requirement, pandas offers many ways of computing an aggregation.\nIn-built Python operations – such as sum, max, and min – are automatically recognized by pandas.\n\n# What is the minimum count for each name in any year?\nbabynames.groupby(\"Name\")[[\"Count\"]].agg(min).head()\n\n\n\n\n\n\n\n\nCount\n\n\nName\n\n\n\n\n\nAadan\n5\n\n\nAadarsh\n6\n\n\nAaden\n10\n\n\nAadhav\n6\n\n\nAadhini\n6\n\n\n\n\n\n\n\n\n# What is the largest single-year count of each name?\nbabynames.groupby(\"Name\")[[\"Count\"]].agg(max).head()\n\n\n\n\n\n\n\n\nCount\n\n\nName\n\n\n\n\n\nAadan\n7\n\n\nAadarsh\n6\n\n\nAaden\n158\n\n\nAadhav\n8\n\n\nAadhini\n6\n\n\n\n\n\n\n\nAs mentioned previously, functions from the NumPy library, such as np.mean, np.max, np.min, and np.sum, are also fair game in pandas.\n\n# What is the average count for each name across all years?\nbabynames.groupby(\"Name\")[[\"Count\"]].agg(np.mean).head()\n\n\n\n\n\n\n\n\nCount\n\n\nName\n\n\n\n\n\nAadan\n6.000000\n\n\nAadarsh\n6.000000\n\n\nAaden\n46.214286\n\n\nAadhav\n6.750000\n\n\nAadhini\n6.000000\n\n\n\n\n\n\n\npandas also offers a number of in-built functions. Functions that are native to pandas can be referenced using their string name within a call to .agg. 
Some examples include:\n\n.agg(\"sum\")\n.agg(\"max\")\n.agg(\"min\")\n.agg(\"mean\")\n.agg(\"first\")\n.agg(\"last\")\n\nThe latter two entries in this list – \"first\" and \"last\" – are unique to pandas. They return the first or last entry in a subframe column. Why might this be useful? Consider a case where multiple columns in a group share identical information. To represent this information in the grouped output, we can simply grab the first or last entry, which we know will be identical to all other entries.\nLet’s illustrate this with an example. Say we add a new column to babynames that contains the first letter of each name.\n\n# Imagine we had an additional column, \"First Letter\". We'll explain this code next week\nbabynames[\"First Letter\"] = babynames[\"Name\"].str[0]\n\n# We construct a simplified DataFrame containing just a subset of columns\nbabynames_new = babynames[[\"Name\", \"First Letter\", \"Year\"]]\nbabynames_new.head()\n\n\n\n\n\n\n\n\nName\nFirst Letter\nYear\n\n\n\n\n115957\nDeandrea\nD\n1990\n\n\n101976\nDeandrea\nD\n1986\n\n\n131029\nLeandrea\nL\n1994\n\n\n108731\nDeandrea\nD\n1988\n\n\n308131\nDeandrea\nD\n1985\n\n\n\n\n\n\n\nIf we form groups for each name in the dataset, \"First Letter\" will be the same for all members of the group. This means that if we simply select the first entry for \"First Letter\" in the group, we’ll represent all data in that group.\nWe can use a dictionary to apply different aggregation functions to each column during grouping.\n\n\n\nAggregating using “first”\n\n\n\nbabynames_new.groupby(\"Name\").agg({\"First Letter\":\"first\", \"Year\":\"max\"}).head()\n\n\n\n\n\n\n\n\nFirst Letter\nYear\n\n\nName\n\n\n\n\n\n\nAadan\nA\n2014\n\n\nAadarsh\nA\n2019\n\n\nAaden\nA\n2020\n\n\nAadhav\nA\n2019\n\n\nAadhini\nA\n2022\n\n\n\n\n\n\n\n\n\n4.2.2 Plotting Birth Counts\nLet’s use .agg to find the total number of babies born in each year. Recall that using .agg with .groupby() follows the format: df.groupby(column_name).agg(aggregation_function). The line of code below gives us the total number of babies born in each year.\n\n\nCode\nbabynames.groupby(\"Year\")[[\"Count\"]].agg(sum).head(5)\n# Alternative 1\n# babynames.groupby(\"Year\")[[\"Count\"]].sum()\n# Alternative 2\n# babynames.groupby(\"Year\").sum(numeric_only=True)\n\n\n\n\n\n\n\n\n\nCount\n\n\nYear\n\n\n\n\n\n1910\n9163\n\n\n1911\n9983\n\n\n1912\n17946\n\n\n1913\n22094\n\n\n1914\n26926\n\n\n\n\n\n\n\nHere’s an illustration of the process:\n\nPlotting the Dataframe we obtain tells an interesting story.\n\n\nCode\nimport plotly.express as px\npuzzle2 = babynames.groupby(\"Year\")[[\"Count\"]].agg(sum)\npx.line(puzzle2, y = \"Count\")\n\n\n\n \n\n\nA word of warning: we made an enormous assumption when we decided to use this dataset to estimate birth rate. According to this article from the Legistlative Analyst Office, the true number of babies born in California in 2020 was 421,275. 
However, our plot shows 362,882 babies —— what happened?\n\n\n4.2.3 Summary of the .groupby() Function\nA groupby operation involves some combination of splitting a DataFrame into grouped subframes, applying a function, and combining the results.\nFor some arbitrary DataFrame df below, the code df.groupby(\"year\").agg(sum) does the following:\n\nSplits the DataFrame into sub-DataFrames with rows belonging to the same year.\nApplies the sum function to each column of each sub-DataFrame.\nCombines the results of sum into a single DataFrame, indexed by year.\n\n\n\n\n4.2.4 Revisiting the .agg() Function\n.agg() can take in any function that aggregates several values into one summary value. Some commonly-used aggregation functions can even be called directly, without explicit use of .agg(). For example, we can call .mean() on .groupby():\nbabynames.groupby(\"Year\").mean().head()\nWe can now put this all into practice. Say we want to find the baby name with sex “F” that has fallen in popularity the most in California. To calculate this, we can first create a metric: “Ratio to Peak” (RTP). The RTP is the ratio of babies born with a given name in 2022 to the maximum number of babies born with the name in any year.\nLet’s start with calculating this for one baby, “Jennifer”.\n\n# We filter by babies with sex \"F\" and sort by \"Year\"\nf_babynames = babynames[babynames[\"Sex\"] == \"F\"]\nf_babynames = f_babynames.sort_values([\"Year\"])\n\n# Determine how many Jennifers were born in CA per year\njenn_counts_series = f_babynames[f_babynames[\"Name\"] == \"Jennifer\"][\"Count\"]\n\n# Determine the max number of Jennifers born in a year and the number born in 2022 \n# to calculate RTP\nmax_jenn = max(f_babynames[f_babynames[\"Name\"] == \"Jennifer\"][\"Count\"])\ncurr_jenn = f_babynames[f_babynames[\"Name\"] == \"Jennifer\"][\"Count\"].iloc[-1]\nrtp = curr_jenn / max_jenn\nrtp\n\n0.018796372629843364\n\n\nBy creating a function to calculate RTP and applying it to our DataFrame by using .groupby(), we can easily compute the RTP for all names at once!\n\ndef ratio_to_peak(series):\n return series.iloc[-1] / max(series)\n\n#Using .groupby() to apply the function\nrtp_table = f_babynames.groupby(\"Name\")[[\"Year\", \"Count\"]].agg(ratio_to_peak)\nrtp_table.head()\n\n\n\n\n\n\n\n\nYear\nCount\n\n\nName\n\n\n\n\n\n\nAadhini\n1.0\n1.000000\n\n\nAadhira\n1.0\n0.500000\n\n\nAadhya\n1.0\n0.660000\n\n\nAadya\n1.0\n0.586207\n\n\nAahana\n1.0\n0.269231\n\n\n\n\n\n\n\nIn the rows shown above, we can see that every row shown has a Year value of 1.0.\nThis is the “pandas-ification” of logic you saw in Data 8. Much of the logic you’ve learned in Data 8 will serve you well in Data 100.\n\n\n4.2.5 Nuisance Columns\nNote that you must be careful with which columns you apply the .agg() function to. If we were to apply our function to the table as a whole by doing f_babynames.groupby(\"Name\").agg(ratio_to_peak), executing our .agg() call would result in a TypeError.\n\nWe can avoid this issue (and prevent unintentional loss of data) by explicitly selecting column(s) we want to apply our aggregation function to BEFORE calling .agg(),\n\n\n4.2.6 Renaming Columns After Grouping\nBy default, .groupby will not rename any aggregated columns. As we can see in the table above, the aggregated column is still named Count even though it now represents the RTP. 
For better readability, we can rename Count to Count RTP\n\nrtp_table = rtp_table.rename(columns = {\"Count\": \"Count RTP\"})\nrtp_table\n\n\n\n\n\n\n\n\nYear\nCount RTP\n\n\nName\n\n\n\n\n\n\nAadhini\n1.0\n1.000000\n\n\nAadhira\n1.0\n0.500000\n\n\nAadhya\n1.0\n0.660000\n\n\nAadya\n1.0\n0.586207\n\n\nAahana\n1.0\n0.269231\n\n\n...\n...\n...\n\n\nZyanya\n1.0\n0.466667\n\n\nZyla\n1.0\n1.000000\n\n\nZylah\n1.0\n1.000000\n\n\nZyra\n1.0\n1.000000\n\n\nZyrah\n1.0\n0.833333\n\n\n\n\n13782 rows × 2 columns\n\n\n\n\n\n4.2.7 Some Data Science Payoff\nBy sorting rtp_table, we can see the names whose popularity has decreased the most.\n\nrtp_table = rtp_table.rename(columns = {\"Count\": \"Count RTP\"})\nrtp_table.sort_values(\"Count RTP\").head()\n\n\n\n\n\n\n\n\nYear\nCount RTP\n\n\nName\n\n\n\n\n\n\nDebra\n1.0\n0.001260\n\n\nDebbie\n1.0\n0.002815\n\n\nCarol\n1.0\n0.003180\n\n\nTammy\n1.0\n0.003249\n\n\nSusan\n1.0\n0.003305\n\n\n\n\n\n\n\nTo visualize the above DataFrame, let’s look at the line plot below:\n\n\nCode\nimport plotly.express as px\npx.line(f_babynames[f_babynames[\"Name\"] == \"Debra\"], x = \"Year\", y = \"Count\")\n\n\n\n \n\n\nWe can get the list of the top 10 names and then plot popularity with the following code:\n\ntop10 = rtp_table.sort_values(\"Count RTP\").head(10).index\npx.line(\n f_babynames[f_babynames[\"Name\"].isin(top10)], \n x = \"Year\", \n y = \"Count\", \n color = \"Name\"\n)\n\n\n \n\n\nAs a quick exercise, consider what code would compute the total number of babies with each name.\n\n\nCode\nbabynames.groupby(\"Name\")[[\"Count\"]].agg(sum).head()\n# alternative solution: \n# babynames.groupby(\"Name\")[[\"Count\"]].sum()\n\n\n\n\n\n\n\n\n\nCount\n\n\nName\n\n\n\n\n\nAadan\n18\n\n\nAadarsh\n6\n\n\nAaden\n647\n\n\nAadhav\n27\n\n\nAadhini\n6" + }, + { + "objectID": "pandas_3/pandas_3.html#groupby-continued", + "href": "pandas_3/pandas_3.html#groupby-continued", + "title": "4  Pandas III", + "section": "4.3 .groupby(), Continued", + "text": "4.3 .groupby(), Continued\nWe’ll work with the elections DataFrame again.\n\n\nCode\nimport pandas as pd\nimport numpy as np\n\nelections = pd.read_csv(\"data/elections.csv\")\nelections.head(5)\n\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\n\n\n4\n1832\nAndrew Jackson\nDemocratic\n702735\nwin\n54.574789\n\n\n\n\n\n\n\n\n4.3.1 Raw GroupBy Objects\nThe result of groupby applied to a DataFrame is a DataFrameGroupBy object, not a DataFrame.\n\ngrouped_by_year = elections.groupby(\"Year\")\ntype(grouped_by_year)\n\npandas.core.groupby.generic.DataFrameGroupBy\n\n\nThere are several ways to look into DataFrameGroupBy objects:\n\ngrouped_by_party = elections.groupby(\"Party\")\ngrouped_by_party.groups\n\n{'American': [22, 126], 'American Independent': [115, 119, 124], 'Anti-Masonic': [6], 'Anti-Monopoly': [38], 'Citizens': [127], 'Communist': [89], 'Constitution': [160, 164, 172], 'Constitutional Union': [24], 'Democratic': [2, 4, 8, 10, 13, 14, 17, 20, 28, 29, 34, 37, 39, 45, 47, 52, 55, 57, 64, 70, 74, 77, 81, 83, 86, 91, 94, 97, 100, 105, 108, 111, 114, 116, 118, 123, 129, 134, 137, 140, 144, 151, 158, 162, 168, 176, 178], 'Democratic-Republican': [0, 1], 'Dixiecrat': [103], 'Farmer–Labor': [78], 'Free Soil': [15, 
18], 'Green': [149, 155, 156, 165, 170, 177, 181], 'Greenback': [35], 'Independent': [121, 130, 143, 161, 167, 174], 'Liberal Republican': [31], 'Libertarian': [125, 128, 132, 138, 139, 146, 153, 159, 163, 169, 175, 180], 'National Democratic': [50], 'National Republican': [3, 5], 'National Union': [27], 'Natural Law': [148], 'New Alliance': [136], 'Northern Democratic': [26], 'Populist': [48, 61, 141], 'Progressive': [68, 82, 101, 107], 'Prohibition': [41, 44, 49, 51, 54, 59, 63, 67, 73, 75, 99], 'Reform': [150, 154], 'Republican': [21, 23, 30, 32, 33, 36, 40, 43, 46, 53, 56, 60, 65, 69, 72, 79, 80, 84, 87, 90, 96, 98, 104, 106, 109, 112, 113, 117, 120, 122, 131, 133, 135, 142, 145, 152, 157, 166, 171, 173, 179], 'Socialist': [58, 62, 66, 71, 76, 85, 88, 92, 95, 102], 'Southern Democratic': [25], 'States' Rights': [110], 'Taxpayers': [147], 'Union': [93], 'Union Labor': [42], 'Whig': [7, 9, 11, 12, 16, 19]}\n\n\n\ngrouped_by_party.get_group(\"Socialist\")\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n58\n1904\nEugene V. Debs\nSocialist\n402810\nloss\n2.985897\n\n\n62\n1908\nEugene V. Debs\nSocialist\n420852\nloss\n2.850866\n\n\n66\n1912\nEugene V. Debs\nSocialist\n901551\nloss\n6.004354\n\n\n71\n1916\nAllan L. Benson\nSocialist\n590524\nloss\n3.194193\n\n\n76\n1920\nEugene V. Debs\nSocialist\n913693\nloss\n3.428282\n\n\n85\n1928\nNorman Thomas\nSocialist\n267478\nloss\n0.728623\n\n\n88\n1932\nNorman Thomas\nSocialist\n884885\nloss\n2.236211\n\n\n92\n1936\nNorman Thomas\nSocialist\n187910\nloss\n0.412876\n\n\n95\n1940\nNorman Thomas\nSocialist\n116599\nloss\n0.234237\n\n\n102\n1948\nNorman Thomas\nSocialist\n139569\nloss\n0.286312\n\n\n\n\n\n\n\n\n\n4.3.2 Other GroupBy Methods\nThere are many aggregation methods we can use with .agg. Some useful options are:\n\n.mean: creates a new DataFrame with the mean value of each group\n.sum: creates a new DataFrame with the sum of each group\n.max and .min: creates a new DataFrame with the maximum/minimum value of each group\n.first and .last: creates a new DataFrame with the first/last row in each group\n.size: creates a new Series with the number of entries in each group\n.count: creates a new DataFrame with the number of entries, excluding missing values.\n\nLet’s illustrate some examples by creating a DataFrame called df.\n\ndf = pd.DataFrame({'letter':['A','A','B','C','C','C'], \n 'num':[1,2,3,4,np.NaN,4], \n 'state':[np.NaN, 'tx', 'fl', 'hi', np.NaN, 'ak']})\ndf\n\n\n\n\n\n\n\n\nletter\nnum\nstate\n\n\n\n\n0\nA\n1.0\nNaN\n\n\n1\nA\n2.0\ntx\n\n\n2\nB\n3.0\nfl\n\n\n3\nC\n4.0\nhi\n\n\n4\nC\nNaN\nNaN\n\n\n5\nC\n4.0\nak\n\n\n\n\n\n\n\nNote the slight difference between .size() and .count(): while .size() returns a Series and counts the number of entries including the missing values, .count() returns a DataFrame and counts the number of entries in each column excluding missing values.\n\ndf.groupby(\"letter\").size()\n\nletter\nA 2\nB 1\nC 3\ndtype: int64\n\n\n\ndf.groupby(\"letter\").count()\n\n\n\n\n\n\n\n\nnum\nstate\n\n\nletter\n\n\n\n\n\n\nA\n2\n1\n\n\nB\n1\n1\n\n\nC\n2\n2\n\n\n\n\n\n\n\nYou might recall that the value_counts() function in the previous note does something similar. It turns out value_counts() and groupby.size() are the same, except value_counts() sorts the resulting Series in descending order automatically.\n\ndf[\"letter\"].value_counts()\n\nC 3\nA 2\nB 1\nName: letter, dtype: int64\n\n\nThese (and other) aggregation functions are so common that pandas allows for writing shorthand. 
Instead of explicitly stating the use of .agg, we can call the function directly on the GroupBy object.\nFor example, the following are equivalent:\n\nelections.groupby(\"Candidate\").agg(mean)\nelections.groupby(\"Candidate\").mean()\n\nThere are many other methods that pandas supports. You can check them out on the pandas documentation.\n\n\n4.3.3 Filtering by Group\nAnother common use for GroupBy objects is to filter data by group.\ngroupby.filter takes an argument func, where func is a function that:\n\nTakes a DataFrame object as input\nReturns a single True or False.\n\ngroupby.filter applies func to each group/sub-DataFrame:\n\nIf func returns True for a group, then all rows belonging to the group are preserved.\nIf func returns False for a group, then all rows belonging to that group are filtered out.\n\nIn other words, sub-DataFrames that correspond to True are returned in the final result, whereas those with a False value are not. Importantly, groupby.filter is different from groupby.agg in that an entire sub-DataFrame is returned in the final DataFrame, not just a single row. As a result, groupby.filter preserves the original indices and the column we grouped on does NOT become the index!\n\nTo illustrate how this happens, let’s go back to the elections dataset. Say we want to identify “tight” election years – that is, we want to find all rows that correspond to election years where all candidates in that year won a similar portion of the total vote. Specifically, let’s find all rows corresponding to a year where no candidate won more than 45% of the total vote.\nIn other words, we want to:\n\nFind the years where the maximum % in that year is less than 45%\nReturn all DataFrame rows that correspond to these years\n\nFor each year, we need to find the maximum % among all rows for that year. If this maximum % is lower than 45%, we will tell pandas to keep all rows corresponding to that year.\n\nelections.groupby(\"Year\").filter(lambda sf: sf[\"%\"].max() < 45).head(9)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n23\n1860\nAbraham Lincoln\nRepublican\n1855993\nwin\n39.699408\n\n\n24\n1860\nJohn Bell\nConstitutional Union\n590901\nloss\n12.639283\n\n\n25\n1860\nJohn C. Breckinridge\nSouthern Democratic\n848019\nloss\n18.138998\n\n\n26\n1860\nStephen A. Douglas\nNorthern Democratic\n1380202\nloss\n29.522311\n\n\n66\n1912\nEugene V. Debs\nSocialist\n901551\nloss\n6.004354\n\n\n67\n1912\nEugene W. Chafin\nProhibition\n208156\nloss\n1.386325\n\n\n68\n1912\nTheodore Roosevelt\nProgressive\n4122721\nloss\n27.457433\n\n\n69\n1912\nWilliam Taft\nRepublican\n3486242\nloss\n23.218466\n\n\n70\n1912\nWoodrow Wilson\nDemocratic\n6296284\nwin\n41.933422\n\n\n\n\n\n\n\nWhat’s going on here? In this example, we’ve defined our filtering function, func, to be lambda sf: sf[\"%\"].max() < 45. This filtering function will find the maximum \"%\" value among all entries in the grouped sub-DataFrame, which we call sf. If the maximum value is less than 45, then the filter function will return True and all rows in that grouped sub-DataFrame will appear in the final output DataFrame.\nExamine the DataFrame above. Notice how, in this preview of the first 9 rows, all entries from the years 1860 and 1912 appear. This means that in 1860 and 1912, no candidate in that year won more than 45% of the total vote.\nYou may ask: how is the groupby.filter procedure different to the boolean filtering we’ve seen previously? 
Boolean filtering considers individual rows when applying a boolean condition. For example, the code elections[elections[\"%\"] < 45] will check the \"%\" value of every single row in elections; if it is less than 45, then that row will be kept in the output. groupby.filter, in contrast, applies a boolean condition across all rows in a group. If not all rows in that group satisfy the condition specified by the filter, the entire group will be discarded in the output.\n\n\n4.3.4 Aggregation with lambda Functions\nWhat if we wish to aggregate our DataFrame using a non-standard function – for example, a function of our own design? We can do so by combining .agg with lambda expressions.\nLet’s first consider a puzzle to jog our memory. We will attempt to find the Candidate from each Party with the highest % of votes.\nA naive approach may be to group by the Party column and aggregate by the maximum.\n\nelections.groupby(\"Party\").agg(max).head(10)\n\n\n\n\n\n\n\n\nYear\nCandidate\nPopular vote\nResult\n%\n\n\nParty\n\n\n\n\n\n\n\n\n\nAmerican\n1976\nThomas J. Anderson\n873053\nloss\n21.554001\n\n\nAmerican Independent\n1976\nLester Maddox\n9901118\nloss\n13.571218\n\n\nAnti-Masonic\n1832\nWilliam Wirt\n100715\nloss\n7.821583\n\n\nAnti-Monopoly\n1884\nBenjamin Butler\n134294\nloss\n1.335838\n\n\nCitizens\n1980\nBarry Commoner\n233052\nloss\n0.270182\n\n\nCommunist\n1932\nWilliam Z. Foster\n103307\nloss\n0.261069\n\n\nConstitution\n2016\nMichael Peroutka\n203091\nloss\n0.152398\n\n\nConstitutional Union\n1860\nJohn Bell\n590901\nloss\n12.639283\n\n\nDemocratic\n2020\nWoodrow Wilson\n81268924\nwin\n61.344703\n\n\nDemocratic-Republican\n1824\nJohn Quincy Adams\n151271\nwin\n57.210122\n\n\n\n\n\n\n\nThis approach is clearly wrong – the DataFrame claims that Woodrow Wilson won the presidency in 2020.\nWhy is this happening? Here, the max aggregation function is taken over every column independently. Among Democrats, max is computing:\n\nThe most recent Year a Democratic candidate ran for president (2020)\nThe Candidate with the alphabetically “largest” name (“Woodrow Wilson”)\nThe Result with the alphabetically “largest” outcome (“win”)\n\nInstead, let’s try a different approach. We will:\n\nSort the DataFrame so that rows are in descending order of %\nGroup by Party and select the first row of each sub-DataFrame\n\nWhile it may seem unintuitive, sorting elections by descending order of % is extremely helpful. 
If we then group by Party, the first row of each GroupBy object will contain information about the Candidate with the highest voter %.\n\nelections_sorted_by_percent = elections.sort_values(\"%\", ascending=False)\nelections_sorted_by_percent.head(5)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n114\n1964\nLyndon Johnson\nDemocratic\n43127041\nwin\n61.344703\n\n\n91\n1936\nFranklin Roosevelt\nDemocratic\n27752648\nwin\n60.978107\n\n\n120\n1972\nRichard Nixon\nRepublican\n47168710\nwin\n60.907806\n\n\n79\n1920\nWarren Harding\nRepublican\n16144093\nwin\n60.574501\n\n\n133\n1984\nRonald Reagan\nRepublican\n54455472\nwin\n59.023326\n\n\n\n\n\n\n\n\nelections_sorted_by_percent.groupby(\"Party\").agg(lambda x : x.iloc[0]).head(10)\n\n# Equivalent to the below code\n# elections_sorted_by_percent.groupby(\"Party\").agg('first').head(10)\n\n\n\n\n\n\n\n\nYear\nCandidate\nPopular vote\nResult\n%\n\n\nParty\n\n\n\n\n\n\n\n\n\nAmerican\n1856\nMillard Fillmore\n873053\nloss\n21.554001\n\n\nAmerican Independent\n1968\nGeorge Wallace\n9901118\nloss\n13.571218\n\n\nAnti-Masonic\n1832\nWilliam Wirt\n100715\nloss\n7.821583\n\n\nAnti-Monopoly\n1884\nBenjamin Butler\n134294\nloss\n1.335838\n\n\nCitizens\n1980\nBarry Commoner\n233052\nloss\n0.270182\n\n\nCommunist\n1932\nWilliam Z. Foster\n103307\nloss\n0.261069\n\n\nConstitution\n2008\nChuck Baldwin\n199750\nloss\n0.152398\n\n\nConstitutional Union\n1860\nJohn Bell\n590901\nloss\n12.639283\n\n\nDemocratic\n1964\nLyndon Johnson\n43127041\nwin\n61.344703\n\n\nDemocratic-Republican\n1824\nAndrew Jackson\n151271\nloss\n57.210122\n\n\n\n\n\n\n\nHere’s an illustration of the process:\n\nNotice how our code correctly determines that Lyndon Johnson from the Democratic Party has the highest voter %.\nMore generally, lambda functions are used to design custom aggregation functions that aren’t pre-defined by Python. The input parameter x to the lambda function is a GroupBy object. Therefore, it should make sense why lambda x : x.iloc[0] selects the first row in each groupby object.\nIn fact, there’s a few different ways to approach this problem. Each approach has different tradeoffs in terms of readability, performance, memory consumption, complexity, etc. We’ve given a few examples below.\nNote: Understanding these alternative solutions is not required. They are given to demonstrate the vast number of problem-solving approaches in pandas.\n\n# Using the idxmax function\nbest_per_party = elections.loc[elections.groupby('Party')['%'].idxmax()]\nbest_per_party.head(5)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n22\n1856\nMillard Fillmore\nAmerican\n873053\nloss\n21.554001\n\n\n115\n1968\nGeorge Wallace\nAmerican Independent\n9901118\nloss\n13.571218\n\n\n6\n1832\nWilliam Wirt\nAnti-Masonic\n100715\nloss\n7.821583\n\n\n38\n1884\nBenjamin Butler\nAnti-Monopoly\n134294\nloss\n1.335838\n\n\n127\n1980\nBarry Commoner\nCitizens\n233052\nloss\n0.270182\n\n\n\n\n\n\n\n\n# Using the .drop_duplicates function\nbest_per_party2 = elections.sort_values('%').drop_duplicates(['Party'], keep='last')\nbest_per_party2.head(5)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n148\n1996\nJohn Hagelin\nNatural Law\n113670\nloss\n0.118219\n\n\n164\n2008\nChuck Baldwin\nConstitution\n199750\nloss\n0.152398\n\n\n110\n1956\nT. 
Coleman Andrews\nStates' Rights\n107929\nloss\n0.174883\n\n\n147\n1996\nHoward Phillips\nTaxpayers\n184656\nloss\n0.192045\n\n\n136\n1988\nLenora Fulani\nNew Alliance\n217221\nloss\n0.237804" + }, + { + "objectID": "pandas_3/pandas_3.html#aggregating-data-with-pivot-tables", + "href": "pandas_3/pandas_3.html#aggregating-data-with-pivot-tables", + "title": "4  Pandas III", + "section": "4.4 Aggregating Data with Pivot Tables", + "text": "4.4 Aggregating Data with Pivot Tables\nWe know now that .groupby gives us the ability to group and aggregate data across our DataFrame. The examples above formed groups using just one column in the DataFrame. It’s possible to group by multiple columns at once by passing in a list of column names to .groupby.\nLet’s consider the babynames dataset again. In this problem, we will find the total number of baby names associated with each sex for each year. To do this, we’ll group by both the \"Year\" and \"Sex\" columns.\n\nbabynames.head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\nFirst Letter\n\n\n\n\n115957\nCA\nF\n1990\nDeandrea\n5\nD\n\n\n101976\nCA\nF\n1986\nDeandrea\n6\nD\n\n\n131029\nCA\nF\n1994\nLeandrea\n5\nL\n\n\n108731\nCA\nF\n1988\nDeandrea\n5\nD\n\n\n308131\nCA\nM\n1985\nDeandrea\n6\nD\n\n\n\n\n\n\n\n\n# Find the total number of baby names associated with each sex for each \n# year in the data\nbabynames.groupby([\"Year\", \"Sex\"])[[\"Count\"]].agg(sum).head(6)\n\n\n\n\n\n\n\n\n\nCount\n\n\nYear\nSex\n\n\n\n\n\n1910\nF\n5950\n\n\nM\n3213\n\n\n1911\nF\n6602\n\n\nM\n3381\n\n\n1912\nF\n9804\n\n\nM\n8142\n\n\n\n\n\n\n\nNotice that both \"Year\" and \"Sex\" serve as the index of the DataFrame (they are both rendered in bold). We’ve created a multi-index DataFrame where two different index values, the year and sex, are used to uniquely identify each row.\nThis isn’t the most intuitive way of representing this data – and, because multi-indexed DataFrames have multiple dimensions in their index, they can often be difficult to use.\nAnother strategy to aggregate across two columns is to create a pivot table. You saw these back in Data 8. One set of values is used to create the index of the pivot table; another set is used to define the column names. The values contained in each cell of the table correspond to the aggregated data for each index-column pair.\nHere’s an illustration of the process:\n\nThe best way to understand pivot tables is to see one in action. Let’s return to our original goal of summing the total number of names associated with each combination of year and sex. We’ll call the pandas .pivot_table method to create a new table.\n\n# The `pivot_table` method is used to generate a Pandas pivot table\nimport numpy as np\nbabynames.pivot_table(\n index = \"Year\",\n columns = \"Sex\", \n values = \"Count\", \n aggfunc = np.sum, \n).head(5)\n\n\n\n\n\n\n\nSex\nF\nM\n\n\nYear\n\n\n\n\n\n\n1910\n5950\n3213\n\n\n1911\n6602\n3381\n\n\n1912\n9804\n8142\n\n\n1913\n11860\n10234\n\n\n1914\n13815\n13111\n\n\n\n\n\n\n\nLooks a lot better! Now, our DataFrame is structured with clear index-column combinations. 
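As a quick aside (a sketch, not code from the lecture), the same table can be built by pairing the two-column .groupby from earlier with .unstack, which rotates the inner index level ("Sex") into the columns; this assumes babynames is still loaded as above:

# Sketch: equivalent to the pivot table above
babynames.groupby(["Year", "Sex"])["Count"].sum().unstack().head(5)

Seeing both forms side by side emphasizes that a pivot table is a grouped aggregation with one grouping column moved into the column labels.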
Each entry in the pivot table represents the summed count of names for a given combination of \"Year\" and \"Sex\".\nLet’s take a closer look at the code implemented above.\n\nindex = \"Year\" specifies the column name in the original DataFrame that should be used as the index of the pivot table\ncolumns = \"Sex\" specifies the column name in the original DataFrame that should be used to generate the columns of the pivot table\nvalues = \"Count\" indicates what values from the original DataFrame should be used to populate the entry for each index-column combination\naggfunc = np.sum tells pandas what function to use when aggregating the data specified by values. Here, we are summing the name counts for each pair of \"Year\" and \"Sex\"\n\nWe can even include multiple values in the index or columns of our pivot tables.\n\nbabynames_pivot = babynames.pivot_table(\n index=\"Year\", # the rows (turned into index)\n columns=\"Sex\", # the column values\n values=[\"Count\", \"Name\"], \n aggfunc=max, # group operation\n)\nbabynames_pivot.head(6)\n\n\n\n\n\n\n\n\nCount\nName\n\n\nSex\nF\nM\nF\nM\n\n\nYear\n\n\n\n\n\n\n\n\n1910\n295\n237\nYvonne\nWilliam\n\n\n1911\n390\n214\nZelma\nWillis\n\n\n1912\n534\n501\nYvonne\nWoodrow\n\n\n1913\n584\n614\nZelma\nYoshio\n\n\n1914\n773\n769\nZelma\nYoshio\n\n\n1915\n998\n1033\nZita\nYukio\n\n\n\n\n\n\n\nNote that each row provides the number of girls and number of boys having that year’s most common name, and also lists the alphabetically largest girl name and boy name. The counts for number of girls/boys in the resulting DataFrame do not correspond to the names listed. For example, in 1910, the most popular girl name is given to 295 girls, but that name was likely not Yvonne." + }, + { + "objectID": "pandas_3/pandas_3.html#joining-tables", + "href": "pandas_3/pandas_3.html#joining-tables", + "title": "4  Pandas III", + "section": "4.5 Joining Tables", + "text": "4.5 Joining Tables\nWhen working on data science projects, we’re unlikely to have absolutely all the data we want contained in a single DataFrame – a real-world data scientist needs to grapple with data coming from multiple sources. If we have access to multiple datasets with related information, we can join two or more tables into a single DataFrame.\nTo put this into practice, we’ll revisit the elections dataset.\n\nelections.head(5)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\n\n\n4\n1832\nAndrew Jackson\nDemocratic\n702735\nwin\n54.574789\n\n\n\n\n\n\n\nSay we want to understand the popularity of the names of each presidential candidate in 2022. To do this, we’ll need the combined data of babynames and elections.\nWe’ll start by creating a new column containing the first name of each presidential candidate. 
This will help us join each name in elections to the corresponding name data in babynames.\n\n# This `str` operation splits each candidate's full name at each \n# blank space, then takes just the candidate's first name\nelections[\"First Name\"] = elections[\"Candidate\"].str.split().str[0]\nelections.head(5)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\nFirst Name\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\nAndrew\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\nJohn\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\nAndrew\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\nJohn\n\n\n4\n1832\nAndrew Jackson\nDemocratic\n702735\nwin\n54.574789\nAndrew\n\n\n\n\n\n\n\n\n# Here, we'll only consider `babynames` data from 2022\nbabynames_2022 = babynames[babynames[\"Year\"]==2022]\nbabynames_2022.head()\n\n\n\n\n\n\n\n\nState\nSex\nYear\nName\nCount\nFirst Letter\n\n\n\n\n237964\nCA\nF\n2022\nLeandra\n10\nL\n\n\n404916\nCA\nM\n2022\nLeandro\n99\nL\n\n\n405892\nCA\nM\n2022\nAndreas\n14\nA\n\n\n235927\nCA\nF\n2022\nAndrea\n322\nA\n\n\n405695\nCA\nM\n2022\nDeandre\n18\nD\n\n\n\n\n\n\n\nNow, we’re ready to join the two tables. pd.merge is the pandas method used to join DataFrames together.\n\nmerged = pd.merge(left = elections, right = babynames_2022, \\\n left_on = \"First Name\", right_on = \"Name\")\nmerged.head()\n# Notice that pandas automatically specifies `Year_x` and `Year_y` \n# when both merged DataFrames have the same column name to avoid confusion\n\n# Second option\n# merged = elections.merge(right = babynames_2022, \\\n # left_on = \"First Name\", right_on = \"Name\")\n\n\n\n\n\n\n\n\nYear_x\nCandidate\nParty\nPopular vote\nResult\n%\nFirst Name\nState\nSex\nYear_y\nName\nCount\nFirst Letter\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.210122\nAndrew\nCA\nM\n2022\nAndrew\n741\nA\n\n\n1\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.203927\nAndrew\nCA\nM\n2022\nAndrew\n741\nA\n\n\n2\n1832\nAndrew Jackson\nDemocratic\n702735\nwin\n54.574789\nAndrew\nCA\nM\n2022\nAndrew\n741\nA\n\n\n3\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.789878\nJohn\nCA\nM\n2022\nJohn\n490\nJ\n\n\n4\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.796073\nJohn\nCA\nM\n2022\nJohn\n490\nJ\n\n\n\n\n\n\n\nLet’s take a closer look at the parameters:\n\nleft and right parameters are used to specify the DataFrames to be joined.\nleft_on and right_on parameters are assigned to the string names of the columns to be used when performing the join. These two on parameters tell pandas what values should act as pairing keys to determine which rows to merge across the DataFrames. We’ll talk more about this idea of a pairing key next lecture." + }, + { + "objectID": "pandas_3/pandas_3.html#parting-note", + "href": "pandas_3/pandas_3.html#parting-note", + "title": "4  Pandas III", + "section": "4.6 Parting Note", + "text": "4.6 Parting Note\nCongratulations! We finally tackled pandas. Don’t worry if you are still not feeling very comfortable with it—you will have plenty of chances to practice over the next few weeks.\nNext, we will get our hands dirty with some real-world datasets and use our pandas knowledge to conduct some exploratory data analysis." 
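A supplementary sketch for the join above (not lecture code): pd.merge also accepts a how parameter controlling which rows are kept and a suffixes parameter that renames overlapping column names. Their defaults, how="inner" and suffixes=("_x", "_y"), are why the merged table kept only first names found in both tables and produced Year_x and Year_y. Assuming elections and babynames_2022 are defined as above:

# Make the defaults explicit and pick more descriptive suffixes for the clashing "Year" column
merged = pd.merge(
    left=elections, right=babynames_2022,
    left_on="First Name", right_on="Name",
    how="inner",                    # keep only rows whose keys appear in both tables (the default)
    suffixes=("_elec", "_baby")     # instead of the default "_x"/"_y"
)
merged.head()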
+ }, + { + "objectID": "eda/eda.html#structure", + "href": "eda/eda.html#structure", + "title": "5  Data Cleaning and EDA", + "section": "5.1 Structure", + "text": "5.1 Structure\nWe often prefer rectangular data for data analysis. Rectangular structures are easy to manipulate and analyze. A key element of data cleaning is about transforming data to be more rectangular.\nThere are two kinds of rectangular data: tables and matrices. Tables have named columns with different data types and are manipulated using data transformation languages. Matrices contain numeric data of the same type and are manipulated using linear algebra.\n\n5.1.1 File Formats\nThere are many file types for storing structured data: TSV, JSON, XML, ASCII, SAS, etc. We’ll only cover CSV, TSV, and JSON in lecture, but you’ll likely encounter other formats as you work with different datasets. Reading documentation is your best bet for understanding how to process the multitude of different file types.\n\n5.1.1.1 CSV\nCSVs, which stand for Comma-Separated Values, are a common tabular data format. In the past two pandas lectures, we briefly touched on the idea of file format: the way data is encoded in a file for storage. Specifically, our elections and babynames datasets were stored and loaded as CSVs:\n\npd.read_csv(\"data/elections.csv\").head(5)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.21\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.79\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.20\n\n\n3\n1828\nJohn Quincy Adams\nNational Republican\n500897\nloss\n43.80\n\n\n4\n1832\nAndrew Jackson\nDemocratic\n702735\nwin\n54.57\n\n\n\n\n\n\n\nTo better understand the properties of a CSV, let’s take a look at the first few rows of the raw data file to see what it looks like before being loaded into a DataFrame. We’ll use the repr() function to return the raw string with its special characters:\n\nwith open(\"data/elections.csv\", \"r\") as table:\n i = 0\n for row in table:\n print(repr(row))\n i += 1\n if i > 3:\n break\n\n'Year,Candidate,Party,Popular vote,Result,%\\n'\n'1824,Andrew Jackson,Democratic-Republican,151271,loss,57.21012204\\n'\n'1824,John Quincy Adams,Democratic-Republican,113142,win,42.78987796\\n'\n'1828,Andrew Jackson,Democratic,642806,win,56.20392707\\n'\n\n\nEach row, or record, in the data is delimited by a newline \\n. Each column, or field, in the data is delimited by a comma , (hence, comma-separated!).\n\n\n5.1.1.2 TSV\nAnother common file type is TSV (Tab-Separated Values). In a TSV, records are still delimited by a newline \\n, while fields are delimited by \\t tab character.\nLet’s check out the first few rows of the raw TSV file. Again, we’ll use the repr() function so that print shows the special characters.\n\nwith open(\"data/elections.txt\", \"r\") as table:\n i = 0\n for row in table:\n print(repr(row))\n i += 1\n if i > 3:\n break\n\n'\\ufeffYear\\tCandidate\\tParty\\tPopular vote\\tResult\\t%\\n'\n'1824\\tAndrew Jackson\\tDemocratic-Republican\\t151271\\tloss\\t57.21012204\\n'\n'1824\\tJohn Quincy Adams\\tDemocratic-Republican\\t113142\\twin\\t42.78987796\\n'\n'1828\\tAndrew Jackson\\tDemocratic\\t642806\\twin\\t56.20392707\\n'\n\n\nTSVs can be loaded into pandas using pd.read_csv. 
We’ll need to specify the delimiter with parametersep='\\t' (documentation).\n\npd.read_csv(\"data/elections.txt\", sep='\\t').head(3)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.21\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.79\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.20\n\n\n\n\n\n\n\nAn issue with CSVs and TSVs comes up whenever there are commas or tabs within the records. How does pandas differentiate between a comma delimiter vs. a comma within the field itself, for example 8,900? To remedy this, check out the quotechar parameter.\n\n\n5.1.1.3 JSON\nJSON (JavaScript Object Notation) files behave similarly to Python dictionaries. A raw JSON is shown below.\n\nwith open(\"data/elections.json\", \"r\") as table:\n i = 0\n for row in table:\n print(row)\n i += 1\n if i > 8:\n break\n\n[\n\n {\n\n \"Year\": 1824,\n\n \"Candidate\": \"Andrew Jackson\",\n\n \"Party\": \"Democratic-Republican\",\n\n \"Popular vote\": 151271,\n\n \"Result\": \"loss\",\n\n \"%\": 57.21012204\n\n },\n\n\n\nJSON files can be loaded into pandas using pd.read_json.\n\npd.read_json('data/elections.json').head(3)\n\n\n\n\n\n\n\n\nYear\nCandidate\nParty\nPopular vote\nResult\n%\n\n\n\n\n0\n1824\nAndrew Jackson\nDemocratic-Republican\n151271\nloss\n57.21\n\n\n1\n1824\nJohn Quincy Adams\nDemocratic-Republican\n113142\nwin\n42.79\n\n\n2\n1828\nAndrew Jackson\nDemocratic\n642806\nwin\n56.20\n\n\n\n\n\n\n\n\n5.1.1.3.1 EDA with JSON: Berkeley COVID-19 Data\nThe City of Berkeley Open Data website has a dataset with COVID-19 Confirmed Cases among Berkeley residents by date. Let’s download the file and save it as a JSON (note the source URL file type is also a JSON). In the interest of reproducible data science, we will download the data programatically. We have defined some helper functions in the ds100_utils.py file that we can reuse these helper functions in many different notebooks.\n\nfrom ds100_utils import fetch_and_cache\n\ncovid_file = fetch_and_cache(\n \"https://data.cityofberkeley.info/api/views/xn6j-b766/rows.json?accessType=DOWNLOAD\",\n \"confirmed-cases.json\",\n force=False)\ncovid_file # a file path wrapper object\n\nUsing cached version that was downloaded (UTC): Fri Aug 18 22:19:42 2023\n\n\nPosixPath('data/confirmed-cases.json')\n\n\n\n5.1.1.3.1.1 File Size\nLet’s start our analysis by getting a rough estimate of the size of the dataset to inform the tools we use to view the data. For relatively small datasets, we can use a text editor or spreadsheet. For larger datasets, more programmatic exploration or distributed computing tools may be more fitting. Here we will use Python tools to probe the file.\nSince there seem to be text files, let’s investigate the number of lines, which often corresponds to the number of records\n\nimport os\n\nprint(covid_file, \"is\", os.path.getsize(covid_file) / 1e6, \"MB\")\n\nwith open(covid_file, \"r\") as f:\n print(covid_file, \"is\", sum(1 for l in f), \"lines.\")\n\ndata/confirmed-cases.json is 0.116367 MB\ndata/confirmed-cases.json is 1110 lines.\n\n\n\n\n5.1.1.3.1.2 Unix Commands\nAs part of the EDA workflow, Unix commands can come in very handy. In fact, there’s an entire book called “Data Science at the Command Line” that explores this idea in depth! In Jupyter/IPython, you can prefix lines with ! 
to execute arbitrary Unix commands, and within those lines, you can refer to Python variables and expressions with the syntax {expr}.\nHere, we use the ls command to list files, using the -lh flags, which request “long format with information in human-readable form.” We also use the wc command for “word count,” but with the -l flag, which asks for line counts instead of words.\nThese two give us the same information as the code above, albeit in a slightly different form:\n\n!ls -lh {covid_file}\n!wc -l {covid_file}\n\n-rw-r--r-- 1 Ishani staff 114K Aug 18 2023 data/confirmed-cases.json\n\n\n 1109 data/confirmed-cases.json\n\n\n\n\n5.1.1.3.1.3 File Contents\nLet’s explore the data format using Python.\n\nwith open(covid_file, \"r\") as f:\n for i, row in enumerate(f):\n print(repr(row)) # print raw strings\n if i >= 4: break\n\n'{\\n'\n' \"meta\" : {\\n'\n' \"view\" : {\\n'\n' \"id\" : \"xn6j-b766\",\\n'\n' \"name\" : \"COVID-19 Confirmed Cases\",\\n'\n\n\nWe can use the head Unix command (which is where pandas’ head method comes from!) to see the first few lines of the file:\n\n!head -5 {covid_file}\n\n{\n \"meta\" : {\n \"view\" : {\n \"id\" : \"xn6j-b766\",\n \"name\" : \"COVID-19 Confirmed Cases\",\n\n\nIn order to load the JSON file into pandas, Let’s first do some EDA with Oython’s json package to understand the particular structure of this JSON file so that we can decide what (if anything) to load into pandas. Python has relatively good support for JSON data since it closely matches the internal python object model. In the following cell we import the entire JSON datafile into a python dictionary using the json package.\n\nimport json\n\nwith open(covid_file, \"rb\") as f:\n covid_json = json.load(f)\n\nThe covid_json variable is now a dictionary encoding the data in the file:\n\ntype(covid_json)\n\ndict\n\n\nWe can examine what keys are in the top level JSON object by listing out the keys.\n\ncovid_json.keys()\n\ndict_keys(['meta', 'data'])\n\n\nObservation: The JSON dictionary contains a meta key which likely refers to metadata (data about the data). Metadata is often maintained with the data and can be a good source of additional information.\nWe can investigate the metadata further by examining the keys associated with the metadata.\n\ncovid_json['meta'].keys()\n\ndict_keys(['view'])\n\n\nThe meta key contains another dictionary called view. This likely refers to metadata about a particular “view” of some underlying database. We will learn more about views when we study SQL later in the class.\n\ncovid_json['meta']['view'].keys()\n\ndict_keys(['id', 'name', 'assetType', 'attribution', 'averageRating', 'category', 'createdAt', 'description', 'displayType', 'downloadCount', 'hideFromCatalog', 'hideFromDataJson', 'newBackend', 'numberOfComments', 'oid', 'provenance', 'publicationAppendEnabled', 'publicationDate', 'publicationGroup', 'publicationStage', 'rowsUpdatedAt', 'rowsUpdatedBy', 'tableId', 'totalTimesRated', 'viewCount', 'viewLastModified', 'viewType', 'approvals', 'columns', 'grants', 'metadata', 'owner', 'query', 'rights', 'tableAuthor', 'tags', 'flags'])\n\n\nNotice that this a nested/recursive data structure. As we dig deeper we reveal more and more keys and the corresponding data:\nmeta\n|-> data\n | ... (haven't explored yet)\n|-> view\n | -> id\n | -> name\n | -> attribution \n ...\n | -> description\n ...\n | -> columns\n ...\nThere is a key called description in the view sub dictionary. 
This likely contains a description of the data:\n\nprint(covid_json['meta']['view']['description'])\n\nCounts of confirmed COVID-19 cases among Berkeley residents by date.\n\n\n\n\n5.1.1.3.1.4 Examining the Data Field for Records\nWe can look at a few entries in the data field. This is what we’ll load into pandas.\n\nfor i in range(3):\n print(f\"{i:03} | {covid_json['data'][i]}\")\n\n000 | ['row-kzbg.v7my-c3y2', '00000000-0000-0000-0405-CB14DE51DAA7', 0, 1643733903, None, 1643733903, None, '{ }', '2020-02-28T00:00:00', '1', '1']\n001 | ['row-jkyx_9u4r-h2yw', '00000000-0000-0000-F806-86D0DBE0E17F', 0, 1643733903, None, 1643733903, None, '{ }', '2020-02-29T00:00:00', '0', '1']\n002 | ['row-qifg_4aug-y3ym', '00000000-0000-0000-2DCE-4D1872F9B216', 0, 1643733903, None, 1643733903, None, '{ }', '2020-03-01T00:00:00', '0', '1']\n\n\nObservations: * These look like equal-length records, so maybe data is a table! * But what do each of values in the record mean? Where can we find column headers?\nFor that, we’ll need the columns key in the metadata dictionary. This returns a list:\n\ntype(covid_json['meta']['view']['columns'])\n\nlist\n\n\n\n\n5.1.1.3.1.5 Summary of exploring the JSON file\n\nThe above metadata tells us a lot about the columns in the data including column names, potential data anomalies, and a basic statistic.\nBecause of its non-tabular structure, JSON makes it easier (than CSV) to create self-documenting data, meaning that information about the data is stored in the same file as the data.\nSelf-documenting data can be helpful since it maintains its own description and these descriptions are more likely to be updated as data changes.\n\n\n\n5.1.1.3.1.6 Loading COVID Data into pandas\nFinally, let’s load the data (not the metadata) into a pandas DataFrame. In the following block of code we:\n\nTranslate the JSON records into a DataFrame:\n\nfields: covid_json['meta']['view']['columns']\nrecords: covid_json['data']\n\nRemove columns that have no metadata description. This would be a bad idea in general, but here we remove these columns since the above analysis suggests they are unlikely to contain useful information.\nExamine the tail of the table.\n\n\n# Load the data from JSON and assign column titles\ncovid = pd.DataFrame(\n covid_json['data'],\n columns=[c['name'] for c in covid_json['meta']['view']['columns']])\n\ncovid.tail()\n\n\n\n\n\n\n\n\nsid\nid\nposition\ncreated_at\ncreated_meta\nupdated_at\nupdated_meta\nmeta\nDate\nNew Cases\nCumulative Cases\n\n\n\n\n699\nrow-49b6_x8zv.gyum\n00000000-0000-0000-A18C-9174A6D05774\n0\n1643733903\nNone\n1643733903\nNone\n{ }\n2022-01-27T00:00:00\n106\n10694\n\n\n700\nrow-gs55-p5em.y4v9\n00000000-0000-0000-F41D-5724AEABB4D6\n0\n1643733903\nNone\n1643733903\nNone\n{ }\n2022-01-28T00:00:00\n223\n10917\n\n\n701\nrow-3pyj.tf95-qu67\n00000000-0000-0000-BEE3-B0188D2518BD\n0\n1643733903\nNone\n1643733903\nNone\n{ }\n2022-01-29T00:00:00\n139\n11056\n\n\n702\nrow-cgnd.8syv.jvjn\n00000000-0000-0000-C318-63CF75F7F740\n0\n1643733903\nNone\n1643733903\nNone\n{ }\n2022-01-30T00:00:00\n33\n11089\n\n\n703\nrow-qywv_24x6-237y\n00000000-0000-0000-FE92-9789FED3AA20\n0\n1643733903\nNone\n1643733903\nNone\n{ }\n2022-01-31T00:00:00\n42\n11131\n\n\n\n\n\n\n\n\n\n\n\n\n5.1.2 Primary and Foreign Keys\nLast time, we introduced .merge as the pandas method for joining multiple DataFrames together. In our discussion of joins, we touched on the idea of using a “key” to determine what rows should be merged from each table. 
Let’s take a moment to examine this idea more closely.\nThe primary key is the column or set of columns in a table that uniquely determine the values of the remaining columns. It can be thought of as the unique identifier for each individual row in the table. For example, a table of Data 100 students might use each student’s Cal ID as the primary key.\n\n\n\n\n\n\n\n\n\nCal ID\nName\nMajor\n\n\n\n\n0\n3034619471\nOski\nData Science\n\n\n1\n3035619472\nOllie\nComputer Science\n\n\n2\n3025619473\nOrrie\nData Science\n\n\n3\n3046789372\nOllie\nEconomics\n\n\n\n\n\n\n\nThe foreign key is the column or set of columns in a table that reference primary keys in other tables. Knowing a dataset’s foreign keys can be useful when assigning the left_on and right_on parameters of .merge. In the table of office hour tickets below, \"Cal ID\" is a foreign key referencing the previous table.\n\n\n\n\n\n\n\n\n\nOH Request\nCal ID\nQuestion\n\n\n\n\n0\n1\n3034619471\nHW 2 Q1\n\n\n1\n2\n3035619472\nHW 2 Q3\n\n\n2\n3\n3025619473\nLab 3 Q4\n\n\n3\n4\n3035619472\nHW 2 Q7\n\n\n\n\n\n\n\n\n\n5.1.3 Variable Types\nVariables are columns. A variable is a measurement of a particular concept. Variables have two common properties: data type/storage type and variable type/feature type. The data type of a variable indicates how each variable value is stored in memory (integer, floating point, boolean, etc.) and affects which pandas functions are used. The variable type is a conceptualized measurement of information (and therefore indicates what values a variable can take on). Variable type is identified through expert knowledge, exploring the data itself, or consulting the data codebook. The variable type affects how one visualizes and inteprets the data. In this class, “variable types” are conceptual.\nAfter loading data into a file, it’s a good idea to take the time to understand what pieces of information are encoded in the dataset. In particular, we want to identify what variable types are present in our data. Broadly speaking, we can categorize variables into one of two overarching types.\nQuantitative variables describe some numeric quantity or amount. We can divide quantitative data further into:\n\nContinuous quantitative variables: numeric data that can be measured on a continuous scale to arbitrary precision. Continuous variables do not have a strict set of possible values – they can be recorded to any number of decimal places. For example, weights, GPA, or CO2 concentrations.\nDiscrete quantitative variables: numeric data that can only take on a finite set of possible values. For example, someone’s age or the number of siblings they have.\n\nQualitative variables, also known as categorical variables, describe data that isn’t measuring some quantity or amount. The sub-categories of categorical data are:\n\nOrdinal qualitative variables: categories with ordered levels. Specifically, ordinal variables are those where the difference between levels has no consistent, quantifiable meaning. Some examples include levels of education (high school, undergrad, grad, etc.), income bracket (low, medium, high), or Yelp rating.\nNominal qualitative variables: categories with no specific order. For example, someone’s political affiliation or Cal ID number.\n\n\n\n\nClassification of variable types\n\n\nNote that many variables don’t sit neatly in just one of these categories. Qualitative variables could have numeric levels, and conversely, quantitative variables could be stored as strings." 
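To make the primary key idea above concrete, here is a minimal sketch (reusing the illustrative student table from this section) that checks whether a column uniquely identifies rows before it is used as a merge key:

import pandas as pd

# Hypothetical students table mirroring the example above
students = pd.DataFrame({
    "Cal ID": [3034619471, 3035619472, 3025619473, 3046789372],
    "Name":   ["Oski", "Ollie", "Orrie", "Ollie"],
    "Major":  ["Data Science", "Computer Science", "Data Science", "Economics"],
})

# A primary key must contain no duplicates
print(students["Cal ID"].is_unique)   # True: "Cal ID" can serve as the primary key
print(students["Name"].is_unique)     # False: "Name" alone cannot (two Ollies)

A column that fails this check may still work as part of a composite key when combined with other columns.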
+ }, + { + "objectID": "eda/eda.html#granularity-scope-and-temporality", + "href": "eda/eda.html#granularity-scope-and-temporality", + "title": "5  Data Cleaning and EDA", + "section": "5.2 Granularity, Scope, and Temporality", + "text": "5.2 Granularity, Scope, and Temporality\nAfter understanding the structure of the dataset, the next task is to determine what exactly the data represents. We’ll do so by considering the data’s granularity, scope, and temporality.\n\n5.2.1 Granularity\nThe granularity of a dataset is what a single row represents. You can also think of it as the level of detail included in the data. To determine the data’s granularity, ask: what does each row in the dataset represent? Fine-grained data contains a high level of detail, with a single row representing a small individual unit. For example, each record may represent one person. Coarse-grained data is encoded such that a single row represents a large individual unit – for example, each record may represent a group of people.\n\n\n5.2.2 Scope\nThe scope of a dataset is the subset of the population covered by the data. If we were investigating student performance in Data Science courses, a dataset with a narrow scope might encompass all students enrolled in Data 100 whereas a dataset with an expansive scope might encompass all students in California.\n\n\n5.2.3 Temporality\nThe temporality of a dataset describes the periodicity over which the data was collected as well as when the data was most recently collected or updated.\nTime and date fields of a dataset could represent a few things:\n\nwhen the “event” happened\nwhen the data was collected, or when it was entered into the system\nwhen the data was copied into the database\n\nTo fully understand the temporality of the data, it also may be necessary to standardize time zones or inspect recurring time-based trends in the data (do patterns recur in 24-hour periods? Over the course of a month? Seasonally?). The convention for standardizing time is the Coordinated Universal Time (UTC), an international time standard measured at 0 degrees latitude that stays consistent throughout the year (no daylight savings). We can represent Berkeley’s time zone, Pacific Standard Time (PST), as UTC-7 (with daylight savings).\n\n5.2.3.1 Temporality with pandas’ dt accessors\nLet’s briefly look at how we can use pandas’ dt accessors to work with dates/times in a dataset using the dataset you’ll see in Lab 3: the Berkeley PD Calls for Service dataset.\n\n\nCode\ncalls = pd.read_csv(\"data/Berkeley_PD_-_Calls_for_Service.csv\")\ncalls.head()\n\n\n\n\n\n\n\n\n\nCASENO\nOFFENSE\nEVENTDT\nEVENTTM\nCVLEGEND\nCVDOW\nInDbDate\nBlock_Location\nBLKADDR\nCity\nState\n\n\n\n\n0\n21014296\nTHEFT MISD. (UNDER $950)\n04/01/2021 12:00:00 AM\n10:58\nLARCENY\n4\n06/15/2021 12:00:00 AM\nBerkeley, CA\\n(37.869058, -122.270455)\nNaN\nBerkeley\nCA\n\n\n1\n21014391\nTHEFT MISD. (UNDER $950)\n04/01/2021 12:00:00 AM\n10:38\nLARCENY\n4\n06/15/2021 12:00:00 AM\nBerkeley, CA\\n(37.869058, -122.270455)\nNaN\nBerkeley\nCA\n\n\n2\n21090494\nTHEFT MISD. 
(UNDER $950)\n04/19/2021 12:00:00 AM\n12:15\nLARCENY\n1\n06/15/2021 12:00:00 AM\n2100 BLOCK HASTE ST\\nBerkeley, CA\\n(37.864908,...\n2100 BLOCK HASTE ST\nBerkeley\nCA\n\n\n3\n21090204\nTHEFT FELONY (OVER $950)\n02/13/2021 12:00:00 AM\n17:00\nLARCENY\n6\n06/15/2021 12:00:00 AM\n2600 BLOCK WARRING ST\\nBerkeley, CA\\n(37.86393...\n2600 BLOCK WARRING ST\nBerkeley\nCA\n\n\n4\n21090179\nBURGLARY AUTO\n02/08/2021 12:00:00 AM\n6:20\nBURGLARY - VEHICLE\n1\n06/15/2021 12:00:00 AM\n2700 BLOCK GARBER ST\\nBerkeley, CA\\n(37.86066,...\n2700 BLOCK GARBER ST\nBerkeley\nCA\n\n\n\n\n\n\n\nLooks like there are three columns with dates/times: EVENTDT, EVENTTM, and InDbDate.\nMost likely, EVENTDT stands for the date when the event took place, EVENTTM stands for the time of day the event took place (in 24-hr format), and InDbDate is the date this call is recorded onto the database.\nIf we check the data type of these columns, we will see they are stored as strings. We can convert them to datetime objects using pandas to_datetime function.\n\ncalls[\"EVENTDT\"] = pd.to_datetime(calls[\"EVENTDT\"])\ncalls.head()\n\n\n\n\n\n\n\n\nCASENO\nOFFENSE\nEVENTDT\nEVENTTM\nCVLEGEND\nCVDOW\nInDbDate\nBlock_Location\nBLKADDR\nCity\nState\n\n\n\n\n0\n21014296\nTHEFT MISD. (UNDER $950)\n2021-04-01\n10:58\nLARCENY\n4\n06/15/2021 12:00:00 AM\nBerkeley, CA\\n(37.869058, -122.270455)\nNaN\nBerkeley\nCA\n\n\n1\n21014391\nTHEFT MISD. (UNDER $950)\n2021-04-01\n10:38\nLARCENY\n4\n06/15/2021 12:00:00 AM\nBerkeley, CA\\n(37.869058, -122.270455)\nNaN\nBerkeley\nCA\n\n\n2\n21090494\nTHEFT MISD. (UNDER $950)\n2021-04-19\n12:15\nLARCENY\n1\n06/15/2021 12:00:00 AM\n2100 BLOCK HASTE ST\\nBerkeley, CA\\n(37.864908,...\n2100 BLOCK HASTE ST\nBerkeley\nCA\n\n\n3\n21090204\nTHEFT FELONY (OVER $950)\n2021-02-13\n17:00\nLARCENY\n6\n06/15/2021 12:00:00 AM\n2600 BLOCK WARRING ST\\nBerkeley, CA\\n(37.86393...\n2600 BLOCK WARRING ST\nBerkeley\nCA\n\n\n4\n21090179\nBURGLARY AUTO\n2021-02-08\n6:20\nBURGLARY - VEHICLE\n1\n06/15/2021 12:00:00 AM\n2700 BLOCK GARBER ST\\nBerkeley, CA\\n(37.86066,...\n2700 BLOCK GARBER ST\nBerkeley\nCA\n\n\n\n\n\n\n\nNow, we can use the dt accessor on this column.\nWe can get the month:\n\ncalls[\"EVENTDT\"].dt.month.head()\n\n0 4\n1 4\n2 4\n3 2\n4 2\nName: EVENTDT, dtype: int64\n\n\nWhich day of the week the date is on:\n\ncalls[\"EVENTDT\"].dt.dayofweek.head()\n\n0 3\n1 3\n2 0\n3 5\n4 0\nName: EVENTDT, dtype: int64\n\n\nCheck the mimimum values to see if there are any suspicious-looking, 70s dates:\n\ncalls.sort_values(\"EVENTDT\").head()\n\n\n\n\n\n\n\n\nCASENO\nOFFENSE\nEVENTDT\nEVENTTM\nCVLEGEND\nCVDOW\nInDbDate\nBlock_Location\nBLKADDR\nCity\nState\n\n\n\n\n2513\n20057398\nBURGLARY COMMERCIAL\n2020-12-17\n16:05\nBURGLARY - COMMERCIAL\n4\n06/15/2021 12:00:00 AM\n600 BLOCK GILMAN ST\\nBerkeley, CA\\n(37.878405,...\n600 BLOCK GILMAN ST\nBerkeley\nCA\n\n\n624\n20057207\nASSAULT/BATTERY MISD.\n2020-12-17\n16:50\nASSAULT\n4\n06/15/2021 12:00:00 AM\n2100 BLOCK SHATTUCK AVE\\nBerkeley, CA\\n(37.871...\n2100 BLOCK SHATTUCK AVE\nBerkeley\nCA\n\n\n154\n20092214\nTHEFT FROM AUTO\n2020-12-17\n18:30\nLARCENY - FROM VEHICLE\n4\n06/15/2021 12:00:00 AM\n800 BLOCK SHATTUCK AVE\\nBerkeley, CA\\n(37.8918...\n800 BLOCK SHATTUCK AVE\nBerkeley\nCA\n\n\n659\n20057324\nTHEFT MISD. 
(UNDER $950)\n2020-12-17\n15:44\nLARCENY\n4\n06/15/2021 12:00:00 AM\n1800 BLOCK 4TH ST\\nBerkeley, CA\\n(37.869888, -...\n1800 BLOCK 4TH ST\nBerkeley\nCA\n\n\n993\n20057573\nBURGLARY RESIDENTIAL\n2020-12-17\n22:15\nBURGLARY - RESIDENTIAL\n4\n06/15/2021 12:00:00 AM\n1700 BLOCK STUART ST\\nBerkeley, CA\\n(37.857495...\n1700 BLOCK STUART ST\nBerkeley\nCA\n\n\n\n\n\n\n\nDoesn’t look like it! We are good!\nWe can also do many things with the dt accessor like switching time zones and converting time back to UNIX/POSIX time. Check out the documentation on .dt accessor and time series/date functionality." + }, + { + "objectID": "eda/eda.html#faithfulness", + "href": "eda/eda.html#faithfulness", + "title": "5  Data Cleaning and EDA", + "section": "5.3 Faithfulness", + "text": "5.3 Faithfulness\nAt this stage in our data cleaning and EDA workflow, we’ve achieved quite a lot: we’ve identified how our data is structured, come to terms with what information it encodes, and gained insight as to how it was generated. Throughout this process, we should always recall the original intent of our work in Data Science – to use data to better understand and model the real world. To achieve this goal, we need to ensure that the data we use is faithful to reality; that is, that our data accurately captures the “real world.”\nData used in research or industry is often “messy” – there may be errors or inaccuracies that impact the faithfulness of the dataset. Signs that data may not be faithful include:\n\nUnrealistic or “incorrect” values, such as negative counts, locations that don’t exist, or dates set in the future\nViolations of obvious dependencies, like an age that does not match a birthday\nClear signs that data was entered by hand, which can lead to spelling errors or fields that are incorrectly shifted\nSigns of data falsification, such as fake email addresses or repeated use of the same names\nDuplicated records or fields containing the same information\nTruncated data, e.g. Microsoft Excel would limit the number of rows to 655536 and the number of columns to 255\n\nWe often solve some of these more common issues in the following ways:\n\nSpelling errors: apply corrections or drop records that aren’t in a dictionary\nTime zone inconsistencies: convert to a common time zone (e.g. UTC)\nDuplicated records or fields: identify and eliminate duplicates (using primary keys)\nUnspecified or inconsistent units: infer the units and check that values are in reasonable ranges in the data\n\n\n5.3.1 Missing Values\nAnother common issue encountered with real-world datasets is that of missing data. One strategy to resolve this is to simply drop any records with missing values from the dataset. This does, however, introduce the risk of inducing biases – it is possible that the missing or corrupt records may be systemically related to some feature of interest in the data. Another solution is to keep the data as NaN values.\nA third method to address missing data is to perform imputation: infer the missing values using other data available in the dataset. 
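For example, the simplest of these strategies (filling a numeric field with its average, described below) can be sketched in pandas as follows; the DataFrame and column name are hypothetical, not from a course dataset.

import numpy as np
import pandas as pd

# Hypothetical data with one missing measurement.
df = pd.DataFrame({"temperature": [68.0, 71.0, np.nan, 70.0]})

# Average (mean) imputation: fill the missing entry with the column mean.
df["temperature"] = df["temperature"].fillna(df["temperature"].mean())
print(df)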
There is a wide variety of imputation techniques that can be implemented; some of the most common are listed below.\n\nAverage imputation: replace missing values with the average value for that field\nHot deck imputation: replace missing values with some random value\nRegression imputation: develop a model to predict missing values and replace with the predicted value from the model.\nMultiple imputation: replace missing values with multiple random values\n\nRegardless of the strategy used to deal with missing data, we should think carefully about why particular records or fields may be missing – this can help inform whether or not the absence of these values is significant or meaningful." + }, + { + "objectID": "eda/eda.html#eda-demo-1-tuberculosis-in-the-united-states", + "href": "eda/eda.html#eda-demo-1-tuberculosis-in-the-united-states", + "title": "5  Data Cleaning and EDA", + "section": "5.4 EDA Demo 1: Tuberculosis in the United States", + "text": "5.4 EDA Demo 1: Tuberculosis in the United States\nNow, let’s walk through the data-cleaning and EDA workflow to see what can we learn about the presence of Tuberculosis in the United States!\nWe will examine the data included in the original CDC article published in 2021.\n\n5.4.1 CSVs and Field Names\nSuppose Table 1 was saved as a CSV file located in data/cdc_tuberculosis.csv.\nWe can then explore the CSV (which is a text file, and does not contain binary-encoded data) in many ways: 1. Using a text editor like emacs, vim, VSCode, etc. 2. Opening the CSV directly in DataHub (read-only), Excel, Google Sheets, etc. 3. The Python file object 4. pandas, using pd.read_csv()\nTo try out options 1 and 2, you can view or download the Tuberculosis from the lecture demo notebook under the data folder in the left hand menu. Notice how the CSV file is a type of rectangular data (i.e., tabular data) stored as comma-separated values.\nNext, let’s try out option 3 using the Python file object. We’ll look at the first four lines:\n\n\nCode\nwith open(\"data/cdc_tuberculosis.csv\", \"r\") as f:\n i = 0\n for row in f:\n print(row)\n i += 1\n if i > 3:\n break\n\n\n,No. of TB cases,,,TB incidence,,\n\nU.S. jurisdiction,2019,2020,2021,2019,2020,2021\n\nTotal,\"8,900\",\"7,173\",\"7,860\",2.71,2.16,2.37\n\nAlabama,87,72,92,1.77,1.43,1.83\n\n\n\nWhoa, why are there blank lines interspaced between the lines of the CSV?\nYou may recall that all line breaks in text files are encoded as the special newline character \\n. Python’s print() prints each string (including the newline), and an additional newline on top of that.\nIf you’re curious, we can use the repr() function to return the raw string with all special characters:\n\n\nCode\nwith open(\"data/cdc_tuberculosis.csv\", \"r\") as f:\n i = 0\n for row in f:\n print(repr(row)) # print raw strings\n i += 1\n if i > 3:\n break\n\n\n',No. of TB cases,,,TB incidence,,\\n'\n'U.S. jurisdiction,2019,2020,2021,2019,2020,2021\\n'\n'Total,\"8,900\",\"7,173\",\"7,860\",2.71,2.16,2.37\\n'\n'Alabama,87,72,92,1.77,1.43,1.83\\n'\n\n\nFinally, let’s try option 4 and use the tried-and-true Data 100 approach: pandas.\n\ntb_df = pd.read_csv(\"data/cdc_tuberculosis.csv\")\ntb_df.head()\n\n\n\n\n\n\n\n\nUnnamed: 0\nNo. of TB cases\nUnnamed: 2\nUnnamed: 3\nTB incidence\nUnnamed: 5\nUnnamed: 6\n\n\n\n\n0\nU.S. 
jurisdiction\n2019\n2020\n2021\n2019.00\n2020.00\n2021.00\n\n\n1\nTotal\n8,900\n7,173\n7,860\n2.71\n2.16\n2.37\n\n\n2\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n\n\n3\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n\n\n4\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n\n\n\n\n\n\n\nYou may notice some strange things about this table: what’s up with the “Unnamed” column names and the first row?\nCongratulations — you’re ready to wrangle your data! Because of how things are stored, we’ll need to clean the data a bit to name our columns better.\nA reasonable first step is to identify the row with the right header. The pd.read_csv() function (documentation) has the convenient header parameter that we can set to use the elements in row 1 as the appropriate columns:\n\ntb_df = pd.read_csv(\"data/cdc_tuberculosis.csv\", header=1) # row index\ntb_df.head(5)\n\n\n\n\n\n\n\n\nU.S. jurisdiction\n2019\n2020\n2021\n2019.1\n2020.1\n2021.1\n\n\n\n\n0\nTotal\n8,900\n7,173\n7,860\n2.71\n2.16\n2.37\n\n\n1\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n\n\n2\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n\n\n3\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n\n\n4\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n\n\n\n\n\n\n\nWait…but now we can’t differentiate betwen the “Number of TB cases” and “TB incidence” year columns. pandas has tried to make our lives easier by automatically adding “.1” to the latter columns, but this doesn’t help us, as humans, understand the data.\nWe can do this manually with df.rename() (documentation):\n\nrename_dict = {'2019': 'TB cases 2019',\n '2020': 'TB cases 2020',\n '2021': 'TB cases 2021',\n '2019.1': 'TB incidence 2019',\n '2020.1': 'TB incidence 2020',\n '2021.1': 'TB incidence 2021'}\ntb_df = tb_df.rename(columns=rename_dict)\ntb_df.head(5)\n\n\n\n\n\n\n\n\nU.S. jurisdiction\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n\n\n\n\n0\nTotal\n8,900\n7,173\n7,860\n2.71\n2.16\n2.37\n\n\n1\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n\n\n2\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n\n\n3\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n\n\n4\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n\n\n\n\n\n\n\n\n\n5.4.2 Record Granularity\nYou might already be wondering: what’s up with that first record?\nRow 0 is what we call a rollup record, or summary record. It’s often useful when displaying tables to humans. The granularity of record 0 (Totals) vs the rest of the records (States) is different.\nOkay, EDA step two. How was the rollup record aggregated?\nLet’s check if Total TB cases is the sum of all state TB cases. If we sum over all rows, we should get 2x the total cases in each of our TB cases by year (why do you think this is?).\n\n\nCode\ntb_df.sum(axis=0)\n\n\nU.S. jurisdiction TotalAlabamaAlaskaArizonaArkansasCaliforniaCol...\nTB cases 2019 8,9008758183642,111666718245583029973261085237...\nTB cases 2020 7,1737258136591,706525417194122219282169239376...\nTB cases 2021 7,8609258129691,750585443194992281064255127494...\nTB incidence 2019 109.94\nTB incidence 2020 93.09\nTB incidence 2021 102.94\ndtype: object\n\n\nWhoa, what’s going on with the TB cases in 2019, 2020, and 2021? Check out the column types:\n\n\nCode\ntb_df.dtypes\n\n\nU.S. 
jurisdiction object\nTB cases 2019 object\nTB cases 2020 object\nTB cases 2021 object\nTB incidence 2019 float64\nTB incidence 2020 float64\nTB incidence 2021 float64\ndtype: object\n\n\nSince there are commas in the values for TB cases, the numbers are read as the object datatype, or storage type (close to the Python string datatype), so pandas is concatenating strings instead of adding integers (recall that Python can “sum”, or concatenate, strings together: \"data\" + \"100\" evaluates to \"data100\").\nFortunately read_csv also has a thousands parameter (documentation):\n\n# improve readability: chaining method calls with outer parentheses/line breaks\ntb_df = (\n pd.read_csv(\"data/cdc_tuberculosis.csv\", header=1, thousands=',')\n .rename(columns=rename_dict)\n)\ntb_df.head(5)\n\n\n\n\n\n\n\n\nU.S. jurisdiction\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n\n\n\n\n0\nTotal\n8900\n7173\n7860\n2.71\n2.16\n2.37\n\n\n1\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n\n\n2\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n\n\n3\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n\n\n4\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n\n\n\n\n\n\n\n\ntb_df.sum()\n\nU.S. jurisdiction TotalAlabamaAlaskaArizonaArkansasCaliforniaCol...\nTB cases 2019 17800\nTB cases 2020 14346\nTB cases 2021 15720\nTB incidence 2019 109.94\nTB incidence 2020 93.09\nTB incidence 2021 102.94\ndtype: object\n\n\nThe total TB cases look right. Phew!\nLet’s just look at the records with state-level granularity:\n\n\nCode\nstate_tb_df = tb_df[1:]\nstate_tb_df.head(5)\n\n\n\n\n\n\n\n\n\nU.S. jurisdiction\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n\n\n\n\n1\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n\n\n2\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n\n\n3\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n\n\n4\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n\n\n5\nCalifornia\n2111\n1706\n1750\n5.35\n4.32\n4.46\n\n\n\n\n\n\n\n\n\n5.4.3 Gather Census Data\nU.S. Census population estimates source (2019), source (2020-2021).\nRunning the below cells cleans the data. There are a few new methods here: * df.convert_dtypes() (documentation) conveniently converts all float dtypes into ints and is out of scope for the class. 
* df.drop_na() (documentation) will be explained in more detail next time.\n\n\nCode\n# 2010s census data\ncensus_2010s_df = pd.read_csv(\"data/nst-est2019-01.csv\", header=3, thousands=\",\")\ncensus_2010s_df = (\n census_2010s_df\n .reset_index()\n .drop(columns=[\"index\", \"Census\", \"Estimates Base\"])\n .rename(columns={\"Unnamed: 0\": \"Geographic Area\"})\n .convert_dtypes() # \"smart\" converting of columns, use at your own risk\n .dropna() # we'll introduce this next time\n)\ncensus_2010s_df['Geographic Area'] = census_2010s_df['Geographic Area'].str.strip('.')\n\n# with pd.option_context('display.min_rows', 30): # shows more rows\n# display(census_2010s_df)\n \ncensus_2010s_df.head(5)\n\n\n\n\n\n\n\n\n\nGeographic Area\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\n\n\n\n\n0\nUnited States\n309321666\n311556874\n313830990\n315993715\n318301008\n320635163\n322941311\n324985539\n326687501\n328239523\n\n\n1\nNortheast\n55380134\n55604223\n55775216\n55901806\n56006011\n56034684\n56042330\n56059240\n56046620\n55982803\n\n\n2\nMidwest\n66974416\n67157800\n67336743\n67560379\n67745167\n67860583\n67987540\n68126781\n68236628\n68329004\n\n\n3\nSouth\n114866680\n116006522\n117241208\n118364400\n119624037\n120997341\n122351760\n123542189\n124569433\n125580448\n\n\n4\nWest\n72100436\n72788329\n73477823\n74167130\n74925793\n75742555\n76559681\n77257329\n77834820\n78347268\n\n\n\n\n\n\n\nOccasionally, you will want to modify code that you have imported. To reimport those modifications you can either use python’s importlib library:\nfrom importlib import reload\nreload(utils)\nor use iPython magic which will intelligently import code when files change:\n%load_ext autoreload\n%autoreload 2\n\n\nCode\n# census 2020s data\ncensus_2020s_df = pd.read_csv(\"data/NST-EST2022-POP.csv\", header=3, thousands=\",\")\ncensus_2020s_df = (\n census_2020s_df\n .reset_index()\n .drop(columns=[\"index\", \"Unnamed: 1\"])\n .rename(columns={\"Unnamed: 0\": \"Geographic Area\"})\n .convert_dtypes() # \"smart\" converting of columns, use at your own risk\n .dropna() # we'll introduce this next time\n)\ncensus_2020s_df['Geographic Area'] = census_2020s_df['Geographic Area'].str.strip('.')\n\ncensus_2020s_df.head(5)\n\n\n\n\n\n\n\n\n\nGeographic Area\n2020\n2021\n2022\n\n\n\n\n0\nUnited States\n331511512\n332031554\n333287557\n\n\n1\nNortheast\n57448898\n57259257\n57040406\n\n\n2\nMidwest\n68961043\n68836505\n68787595\n\n\n3\nSouth\n126450613\n127346029\n128716192\n\n\n4\nWest\n78650958\n78589763\n78743364\n\n\n\n\n\n\n\n\n\n5.4.4 Joining Data (Merging DataFrames)\nTime to merge! Here we use the DataFrame method df1.merge(right=df2, ...) on DataFrame df1 (documentation). Contrast this with the function pd.merge(left=df1, right=df2, ...) (documentation). Feel free to use either.\n\n# merge TB DataFrame with two US census DataFrames\ntb_census_df = (\n tb_df\n .merge(right=census_2010s_df,\n left_on=\"U.S. jurisdiction\", right_on=\"Geographic Area\")\n .merge(right=census_2020s_df,\n left_on=\"U.S. jurisdiction\", right_on=\"Geographic Area\")\n)\ntb_census_df.head(5)\n\n\n\n\n\n\n\n\nU.S. 
jurisdiction\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\nGeographic Area_x\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\nGeographic Area_y\n2020\n2021\n2022\n\n\n\n\n0\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\nAlabama\n4785437\n4799069\n4815588\n4830081\n4841799\n4852347\n4863525\n4874486\n4887681\n4903185\nAlabama\n5031362\n5049846\n5074296\n\n\n1\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\nAlaska\n713910\n722128\n730443\n737068\n736283\n737498\n741456\n739700\n735139\n731545\nAlaska\n732923\n734182\n733583\n\n\n2\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\nArizona\n6407172\n6472643\n6554978\n6632764\n6730413\n6829676\n6941072\n7044008\n7158024\n7278717\nArizona\n7179943\n7264877\n7359197\n\n\n3\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\nArkansas\n2921964\n2940667\n2952164\n2959400\n2967392\n2978048\n2989918\n3001345\n3009733\n3017804\nArkansas\n3014195\n3028122\n3045637\n\n\n4\nCalifornia\n2111\n1706\n1750\n5.35\n4.32\n4.46\nCalifornia\n37319502\n37638369\n37948800\n38260787\n38596972\n38918045\n39167117\n39358497\n39461588\n39512223\nCalifornia\n39501653\n39142991\n39029342\n\n\n\n\n\n\n\nHaving all of these columns is a little unwieldy. We could either drop the unneeded columns now, or just merge on smaller census DataFrames. Let’s do the latter.\n\n# try merging again, but cleaner this time\ntb_census_df = (\n tb_df\n .merge(right=census_2010s_df[[\"Geographic Area\", \"2019\"]],\n left_on=\"U.S. jurisdiction\", right_on=\"Geographic Area\")\n .drop(columns=\"Geographic Area\")\n .merge(right=census_2020s_df[[\"Geographic Area\", \"2020\", \"2021\"]],\n left_on=\"U.S. jurisdiction\", right_on=\"Geographic Area\")\n .drop(columns=\"Geographic Area\")\n)\ntb_census_df.head(5)\n\n\n\n\n\n\n\n\nU.S. jurisdiction\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n2019\n2020\n2021\n\n\n\n\n0\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n4903185\n5031362\n5049846\n\n\n1\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n731545\n732923\n734182\n\n\n2\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n7278717\n7179943\n7264877\n\n\n3\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n3017804\n3014195\n3028122\n\n\n4\nCalifornia\n2111\n1706\n1750\n5.35\n4.32\n4.46\n39512223\n39501653\n39142991\n\n\n\n\n\n\n\n\n\n5.4.5 Reproducing Data: Compute Incidence\nLet’s recompute incidence to make sure we know where the original CDC numbers came from.\nFrom the CDC report: TB incidence is computed as “Cases per 100,000 persons using mid-year population estimates from the U.S. Census Bureau.”\nIf we define a group as 100,000 people, then we can compute the TB incidence for a given state population as\n\\[\\text{TB incidence} = \\frac{\\text{TB cases in population}}{\\text{groups in population}} = \\frac{\\text{TB cases in population}}{\\text{population}/100000} \\]\n\\[= \\frac{\\text{TB cases in population}}{\\text{population}} \\times 100000\\]\nLet’s try this for 2019:\n\ntb_census_df[\"recompute incidence 2019\"] = tb_census_df[\"TB cases 2019\"]/tb_census_df[\"2019\"]*100000\ntb_census_df.head(5)\n\n\n\n\n\n\n\n\nU.S. 
jurisdiction\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n2019\n2020\n2021\nrecompute incidence 2019\n\n\n\n\n0\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n4903185\n5031362\n5049846\n1.77\n\n\n1\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n731545\n732923\n734182\n7.93\n\n\n2\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n7278717\n7179943\n7264877\n2.51\n\n\n3\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n3017804\n3014195\n3028122\n2.12\n\n\n4\nCalifornia\n2111\n1706\n1750\n5.35\n4.32\n4.46\n39512223\n39501653\n39142991\n5.34\n\n\n\n\n\n\n\nAwesome!!!\nLet’s use a for-loop and Python format strings to compute TB incidence for all years. Python f-strings are just used for the purposes of this demo, but they’re handy to know when you explore data beyond this course (documentation).\n\n# recompute incidence for all years\nfor year in [2019, 2020, 2021]:\n tb_census_df[f\"recompute incidence {year}\"] = tb_census_df[f\"TB cases {year}\"]/tb_census_df[f\"{year}\"]*100000\ntb_census_df.head(5)\n\n\n\n\n\n\n\n\nU.S. jurisdiction\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n2019\n2020\n2021\nrecompute incidence 2019\nrecompute incidence 2020\nrecompute incidence 2021\n\n\n\n\n0\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n4903185\n5031362\n5049846\n1.77\n1.43\n1.82\n\n\n1\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n731545\n732923\n734182\n7.93\n7.91\n7.90\n\n\n2\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n7278717\n7179943\n7264877\n2.51\n1.89\n1.78\n\n\n3\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n3017804\n3014195\n3028122\n2.12\n1.96\n2.28\n\n\n4\nCalifornia\n2111\n1706\n1750\n5.35\n4.32\n4.46\n39512223\n39501653\n39142991\n5.34\n4.32\n4.47\n\n\n\n\n\n\n\nThese numbers look pretty close!!! There are a few errors in the hundredths place, particularly in 2021. It may be useful to further explore reasons behind this discrepancy.\n\ntb_census_df.describe()\n\n\n\n\n\n\n\n\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n2019\n2020\n2021\nrecompute incidence 2019\nrecompute incidence 2020\nrecompute incidence 2021\n\n\n\n\ncount\n51.00\n51.00\n51.00\n51.00\n51.00\n51.00\n51.00\n51.00\n51.00\n51.00\n51.00\n51.00\n\n\nmean\n174.51\n140.65\n154.12\n2.10\n1.78\n1.97\n6436069.08\n6500225.73\n6510422.63\n2.10\n1.78\n1.97\n\n\nstd\n341.74\n271.06\n286.78\n1.50\n1.34\n1.48\n7360660.47\n7408168.46\n7394300.08\n1.50\n1.34\n1.47\n\n\nmin\n1.00\n0.00\n2.00\n0.17\n0.00\n0.21\n578759.00\n577605.00\n579483.00\n0.17\n0.00\n0.21\n\n\n25%\n25.50\n29.00\n23.00\n1.29\n1.21\n1.23\n1789606.00\n1820311.00\n1844920.00\n1.30\n1.21\n1.23\n\n\n50%\n70.00\n67.00\n69.00\n1.80\n1.52\n1.70\n4467673.00\n4507445.00\n4506589.00\n1.81\n1.52\n1.69\n\n\n75%\n180.50\n139.00\n150.00\n2.58\n1.99\n2.22\n7446805.00\n7451987.00\n7502811.00\n2.58\n1.99\n2.22\n\n\nmax\n2111.00\n1706.00\n1750.00\n7.91\n7.92\n7.92\n39512223.00\n39501653.00\n39142991.00\n7.93\n7.91\n7.90\n\n\n\n\n\n\n\n\n\n5.4.6 Bonus EDA: Reproducing the Reported Statistic\nHow do we reproduce that reported statistic in the original CDC report?\n\nReported TB incidence (cases per 100,000 persons) increased 9.4%, from 2.2 during 2020 to 2.4 during 2021 but was lower than incidence during 2019 (2.7). Increases occurred among both U.S.-born and non–U.S.-born persons.\n\nThis is TB incidence computed across the entire U.S. population! How do we reproduce this? * We need to reproduce the “Total” TB incidences in our rolled record. 
* But our current tb_census_df only has 51 entries (50 states plus Washington, D.C.). There is no rolled record. * What happened…?\nLet’s get exploring!\nBefore we keep exploring, we’ll set all indexes to more meaningful values, instead of just numbers that pertain to some row at some point. This will make our cleaning slightly easier.\n\n\nCode\ntb_df = tb_df.set_index(\"U.S. jurisdiction\")\ntb_df.head(5)\n\n\n\n\n\n\n\n\n\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n\n\nU.S. jurisdiction\n\n\n\n\n\n\n\n\n\n\nTotal\n8900\n7173\n7860\n2.71\n2.16\n2.37\n\n\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n\n\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n\n\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n\n\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n\n\n\n\n\n\n\n\ncensus_2010s_df = census_2010s_df.set_index(\"Geographic Area\")\ncensus_2010s_df.head(5)\n\n\n\n\n\n\n\n\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\n\n\nGeographic Area\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nUnited States\n309321666\n311556874\n313830990\n315993715\n318301008\n320635163\n322941311\n324985539\n326687501\n328239523\n\n\nNortheast\n55380134\n55604223\n55775216\n55901806\n56006011\n56034684\n56042330\n56059240\n56046620\n55982803\n\n\nMidwest\n66974416\n67157800\n67336743\n67560379\n67745167\n67860583\n67987540\n68126781\n68236628\n68329004\n\n\nSouth\n114866680\n116006522\n117241208\n118364400\n119624037\n120997341\n122351760\n123542189\n124569433\n125580448\n\n\nWest\n72100436\n72788329\n73477823\n74167130\n74925793\n75742555\n76559681\n77257329\n77834820\n78347268\n\n\n\n\n\n\n\n\ncensus_2020s_df = census_2020s_df.set_index(\"Geographic Area\")\ncensus_2020s_df.head(5)\n\n\n\n\n\n\n\n\n2020\n2021\n2022\n\n\nGeographic Area\n\n\n\n\n\n\n\nUnited States\n331511512\n332031554\n333287557\n\n\nNortheast\n57448898\n57259257\n57040406\n\n\nMidwest\n68961043\n68836505\n68787595\n\n\nSouth\n126450613\n127346029\n128716192\n\n\nWest\n78650958\n78589763\n78743364\n\n\n\n\n\n\n\nIt turns out that our merge above only kept state records, even though our original tb_df had the “Total” rolled record:\n\ntb_df.head()\n\n\n\n\n\n\n\n\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n\n\nU.S. 
jurisdiction\n\n\n\n\n\n\n\n\n\n\nTotal\n8900\n7173\n7860\n2.71\n2.16\n2.37\n\n\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n\n\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n\n\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n\n\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n\n\n\n\n\n\n\nRecall that merge by default does an inner merge by default, meaning that it only preserves keys that are present in both DataFrames.\nThe rolled records in our census DataFrame have different Geographic Area fields, which was the key we merged on:\n\ncensus_2010s_df.head(5)\n\n\n\n\n\n\n\n\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\n\n\nGeographic Area\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nUnited States\n309321666\n311556874\n313830990\n315993715\n318301008\n320635163\n322941311\n324985539\n326687501\n328239523\n\n\nNortheast\n55380134\n55604223\n55775216\n55901806\n56006011\n56034684\n56042330\n56059240\n56046620\n55982803\n\n\nMidwest\n66974416\n67157800\n67336743\n67560379\n67745167\n67860583\n67987540\n68126781\n68236628\n68329004\n\n\nSouth\n114866680\n116006522\n117241208\n118364400\n119624037\n120997341\n122351760\n123542189\n124569433\n125580448\n\n\nWest\n72100436\n72788329\n73477823\n74167130\n74925793\n75742555\n76559681\n77257329\n77834820\n78347268\n\n\n\n\n\n\n\nThe Census DataFrame has several rolled records. The aggregate record we are looking for actually has the Geographic Area named “United States”.\nOne straightforward way to get the right merge is to rename the value itself. Because we now have the Geographic Area index, we’ll use df.rename() (documentation):\n\n# rename rolled record for 2010s\ncensus_2010s_df.rename(index={'United States':'Total'}, inplace=True)\ncensus_2010s_df.head(5)\n\n\n\n\n\n\n\n\n2010\n2011\n2012\n2013\n2014\n2015\n2016\n2017\n2018\n2019\n\n\nGeographic Area\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nTotal\n309321666\n311556874\n313830990\n315993715\n318301008\n320635163\n322941311\n324985539\n326687501\n328239523\n\n\nNortheast\n55380134\n55604223\n55775216\n55901806\n56006011\n56034684\n56042330\n56059240\n56046620\n55982803\n\n\nMidwest\n66974416\n67157800\n67336743\n67560379\n67745167\n67860583\n67987540\n68126781\n68236628\n68329004\n\n\nSouth\n114866680\n116006522\n117241208\n118364400\n119624037\n120997341\n122351760\n123542189\n124569433\n125580448\n\n\nWest\n72100436\n72788329\n73477823\n74167130\n74925793\n75742555\n76559681\n77257329\n77834820\n78347268\n\n\n\n\n\n\n\n\n# same, but for 2020s rename rolled record\ncensus_2020s_df.rename(index={'United States':'Total'}, inplace=True)\ncensus_2020s_df.head(5)\n\n\n\n\n\n\n\n\n2020\n2021\n2022\n\n\nGeographic Area\n\n\n\n\n\n\n\nTotal\n331511512\n332031554\n333287557\n\n\nNortheast\n57448898\n57259257\n57040406\n\n\nMidwest\n68961043\n68836505\n68787595\n\n\nSouth\n126450613\n127346029\n128716192\n\n\nWest\n78650958\n78589763\n78743364\n\n\n\n\n\n\n\n\nNext let’s rerun our merge. 
Note the different chaining, because we are now merging on indexes (df.merge() documentation).\n\ntb_census_df = (\n tb_df\n .merge(right=census_2010s_df[[\"2019\"]],\n left_index=True, right_index=True)\n .merge(right=census_2020s_df[[\"2020\", \"2021\"]],\n left_index=True, right_index=True)\n)\ntb_census_df.head(5)\n\n\n\n\n\n\n\n\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n2019\n2020\n2021\n\n\n\n\nTotal\n8900\n7173\n7860\n2.71\n2.16\n2.37\n328239523\n331511512\n332031554\n\n\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n4903185\n5031362\n5049846\n\n\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n731545\n732923\n734182\n\n\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n7278717\n7179943\n7264877\n\n\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n3017804\n3014195\n3028122\n\n\n\n\n\n\n\n\nFinally, let’s recompute our incidences:\n\n# recompute incidence for all years\nfor year in [2019, 2020, 2021]:\n tb_census_df[f\"recompute incidence {year}\"] = tb_census_df[f\"TB cases {year}\"]/tb_census_df[f\"{year}\"]*100000\ntb_census_df.head(5)\n\n\n\n\n\n\n\n\nTB cases 2019\nTB cases 2020\nTB cases 2021\nTB incidence 2019\nTB incidence 2020\nTB incidence 2021\n2019\n2020\n2021\nrecompute incidence 2019\nrecompute incidence 2020\nrecompute incidence 2021\n\n\n\n\nTotal\n8900\n7173\n7860\n2.71\n2.16\n2.37\n328239523\n331511512\n332031554\n2.71\n2.16\n2.37\n\n\nAlabama\n87\n72\n92\n1.77\n1.43\n1.83\n4903185\n5031362\n5049846\n1.77\n1.43\n1.82\n\n\nAlaska\n58\n58\n58\n7.91\n7.92\n7.92\n731545\n732923\n734182\n7.93\n7.91\n7.90\n\n\nArizona\n183\n136\n129\n2.51\n1.89\n1.77\n7278717\n7179943\n7264877\n2.51\n1.89\n1.78\n\n\nArkansas\n64\n59\n69\n2.12\n1.96\n2.28\n3017804\n3014195\n3028122\n2.12\n1.96\n2.28\n\n\n\n\n\n\n\nWe reproduced the total U.S. incidences correctly!\nWe’re almost there. Let’s revisit the quote:\n\nReported TB incidence (cases per 100,000 persons) increased 9.4%, from 2.2 during 2020 to 2.4 during 2021 but was lower than incidence during 2019 (2.7). Increases occurred among both U.S.-born and non–U.S.-born persons.\n\nRecall that percent change from \\(A\\) to \\(B\\) is computed as \\(\\text{percent change} = \\frac{B - A}{A} \\times 100\\).\n\nincidence_2020 = tb_census_df.loc['Total', 'recompute incidence 2020']\nincidence_2020\n\n2.1637257652759883\n\n\n\nincidence_2021 = tb_census_df.loc['Total', 'recompute incidence 2021']\nincidence_2021\n\n2.3672448914298068\n\n\n\ndifference = (incidence_2021 - incidence_2020)/incidence_2020 * 100\ndifference\n\n9.405957511804143" + }, + { + "objectID": "eda/eda.html#eda-demo-2-mauna-loa-co2-data-a-lesson-in-data-faithfulness", + "href": "eda/eda.html#eda-demo-2-mauna-loa-co2-data-a-lesson-in-data-faithfulness", + "title": "5  Data Cleaning and EDA", + "section": "5.5 EDA Demo 2: Mauna Loa CO2 Data – A Lesson in Data Faithfulness", + "text": "5.5 EDA Demo 2: Mauna Loa CO2 Data – A Lesson in Data Faithfulness\nMauna Loa Observatory has been monitoring CO2 concentrations since 1958.\n\nco2_file = \"data/co2_mm_mlo.txt\"\n\nLet’s do some EDA!!\n\n5.5.1 Reading this file into Pandas?\nLet’s instead check out this .txt file. Some questions to keep in mind: Do we trust this file extension? 
What structure is it?\nLines 71-78 (inclusive) are shown below:\nline number | file contents\n\n71 | # decimal average interpolated trend #days\n72 | # date (season corr)\n73 | 1958 3 1958.208 315.71 315.71 314.62 -1\n74 | 1958 4 1958.292 317.45 317.45 315.29 -1\n75 | 1958 5 1958.375 317.50 317.50 314.71 -1\n76 | 1958 6 1958.458 -99.99 317.10 314.85 -1\n77 | 1958 7 1958.542 315.86 315.86 314.98 -1\n78 | 1958 8 1958.625 314.93 314.93 315.94 -1\nNotice how:\n\nThe values are separated by white space, possibly tabs.\nThe data line up down the rows. For example, the month appears in 7th to 8th position of each line.\nThe 71st and 72nd lines in the file contain column headings split over two lines.\n\nWe can use read_csv to read the data into a pandas DataFrame, and we provide several arguments to specify that the separators are white space, there is no header (we will set our own column names), and to skip the first 72 rows of the file.\n\nco2 = pd.read_csv(\n co2_file, header = None, skiprows = 72,\n sep = r'\\s+' #delimiter for continuous whitespace (stay tuned for regex next lecture))\n)\nco2.head()\n\n\n\n\n\n\n\n\n0\n1\n2\n3\n4\n5\n6\n\n\n\n\n0\n1958\n3\n1958.21\n315.71\n315.71\n314.62\n-1\n\n\n1\n1958\n4\n1958.29\n317.45\n317.45\n315.29\n-1\n\n\n2\n1958\n5\n1958.38\n317.50\n317.50\n314.71\n-1\n\n\n3\n1958\n6\n1958.46\n-99.99\n317.10\n314.85\n-1\n\n\n4\n1958\n7\n1958.54\n315.86\n315.86\n314.98\n-1\n\n\n\n\n\n\n\nCongratulations! You’ve wrangled the data!\n\n…But our columns aren’t named. We need to do more EDA.\n\n\n5.5.2 Exploring Variable Feature Types\nThe NOAA webpage might have some useful tidbits (in this case it doesn’t).\nUsing this information, we’ll rerun pd.read_csv, but this time with some custom column names.\n\nco2 = pd.read_csv(\n co2_file, header = None, skiprows = 72,\n sep = '\\s+', #regex for continuous whitespace (next lecture)\n names = ['Yr', 'Mo', 'DecDate', 'Avg', 'Int', 'Trend', 'Days']\n)\nco2.head()\n\n\n\n\n\n\n\n\nYr\nMo\nDecDate\nAvg\nInt\nTrend\nDays\n\n\n\n\n0\n1958\n3\n1958.21\n315.71\n315.71\n314.62\n-1\n\n\n1\n1958\n4\n1958.29\n317.45\n317.45\n315.29\n-1\n\n\n2\n1958\n5\n1958.38\n317.50\n317.50\n314.71\n-1\n\n\n3\n1958\n6\n1958.46\n-99.99\n317.10\n314.85\n-1\n\n\n4\n1958\n7\n1958.54\n315.86\n315.86\n314.98\n-1\n\n\n\n\n\n\n\n\n\n5.5.3 Visualizing CO2\nScientific studies tend to have very clean data, right…? Let’s jump right in and make a time series plot of CO2 monthly averages.\n\n\nCode\nsns.lineplot(x='DecDate', y='Avg', data=co2);\n\n\n\n\n\nThe code above uses the seaborn plotting library (abbreviated sns). We will cover this in the Visualization lecture, but now you don’t need to worry about how it works!\nYikes! Plotting the data uncovered a problem. The sharp vertical lines suggest that we have some missing values. 
What happened here?\n\nco2.head()\n\n\n\n\n\n\n\n\nYr\nMo\nDecDate\nAvg\nInt\nTrend\nDays\n\n\n\n\n0\n1958\n3\n1958.21\n315.71\n315.71\n314.62\n-1\n\n\n1\n1958\n4\n1958.29\n317.45\n317.45\n315.29\n-1\n\n\n2\n1958\n5\n1958.38\n317.50\n317.50\n314.71\n-1\n\n\n3\n1958\n6\n1958.46\n-99.99\n317.10\n314.85\n-1\n\n\n4\n1958\n7\n1958.54\n315.86\n315.86\n314.98\n-1\n\n\n\n\n\n\n\n\nco2.tail()\n\n\n\n\n\n\n\n\nYr\nMo\nDecDate\nAvg\nInt\nTrend\nDays\n\n\n\n\n733\n2019\n4\n2019.29\n413.32\n413.32\n410.49\n26\n\n\n734\n2019\n5\n2019.38\n414.66\n414.66\n411.20\n28\n\n\n735\n2019\n6\n2019.46\n413.92\n413.92\n411.58\n27\n\n\n736\n2019\n7\n2019.54\n411.77\n411.77\n411.43\n23\n\n\n737\n2019\n8\n2019.62\n409.95\n409.95\n411.84\n29\n\n\n\n\n\n\n\nSome data have unusual values like -1 and -99.99.\nLet’s check the description at the top of the file again.\n\n-1 signifies a missing value for the number of days Days the equipment was in operation that month.\n-99.99 denotes a missing monthly average Avg\n\nHow can we fix this? First, let’s explore other aspects of our data. Understanding our data will help us decide what to do with the missing values.\n\n\n\n5.5.4 Sanity Checks: Reasoning about the data\nFirst, we consider the shape of the data. How many rows should we have?\n\nIf chronological order, we should have one record per month.\nData from March 1958 to August 2019.\nWe should have $ 12 (2019-1957) - 2 - 4 = 738 $ records.\n\n\nco2.shape\n\n(738, 7)\n\n\nNice!! The number of rows (i.e. records) match our expectations.\nLet’s now check the quality of each feature.\n\n\n5.5.5 Understanding Missing Value 1: Days\nDays is a time field, so let’s analyze other time fields to see if there is an explanation for missing values of days of operation.\nLet’s start with months, Mo.\nAre we missing any records? The number of months should have 62 or 61 instances (March 1957-August 2019).\n\nco2[\"Mo\"].value_counts().sort_index()\n\n1 61\n2 61\n3 62\n4 62\n5 62\n6 62\n7 62\n8 62\n9 61\n10 61\n11 61\n12 61\nName: Mo, dtype: int64\n\n\nAs expected Jan, Feb, Sep, Oct, Nov, and Dec have 61 occurrences and the rest 62.\n\nNext let’s explore days Days itself, which is the number of days that the measurement equipment worked.\n\n\nCode\nsns.displot(co2['Days']);\nplt.title(\"Distribution of days feature\"); # suppresses unneeded plotting output\n\n\n/Users/Ishani/micromamba/lib/python3.9/site-packages/seaborn/axisgrid.py:118: UserWarning:\n\nThe figure layout has changed to tight\n\n\n\n\n\n\nIn terms of data quality, a handful of months have averages based on measurements taken on fewer than half the days. In addition, there are nearly 200 missing values–that’s about 27% of the data!\n\nFinally, let’s check the last time feature, year Yr.\nLet’s check to see if there is any connection between missing-ness and the year of the recording.\n\n\nCode\nsns.scatterplot(x=\"Yr\", y=\"Days\", data=co2);\nplt.title(\"Day field by Year\"); # the ; suppresses output\n\n\n\n\n\nObservations:\n\nAll of the missing data are in the early years of operation.\nIt appears there may have been problems with equipment in the mid to late 80s.\n\nPotential Next Steps:\n\nConfirm these explanations through documentation about the historical readings.\nMaybe drop the earliest recordings? 
However, we would want to delay such action until after we have examined the time trends and assess whether there are any potential problems.\n\n\n\n\n5.5.6 Understanding Missing Value 2: Avg\nNext, let’s return to the -99.99 values in Avg to analyze the overall quality of the CO2 measurements. We’ll plot a histogram of the average CO2 measurements\n\n\nCode\n# Histograms of average CO2 measurements\nsns.displot(co2['Avg']);\n\n\n/Users/Ishani/micromamba/lib/python3.9/site-packages/seaborn/axisgrid.py:118: UserWarning:\n\nThe figure layout has changed to tight\n\n\n\n\n\n\nThe non-missing values are in the 300-400 range (a regular range of CO2 levels).\nWe also see that there are only a few missing Avg values (<1% of values). Let’s examine all of them:\n\nco2[co2[\"Avg\"] < 0]\n\n\n\n\n\n\n\n\nYr\nMo\nDecDate\nAvg\nInt\nTrend\nDays\n\n\n\n\n3\n1958\n6\n1958.46\n-99.99\n317.10\n314.85\n-1\n\n\n7\n1958\n10\n1958.79\n-99.99\n312.66\n315.61\n-1\n\n\n71\n1964\n2\n1964.12\n-99.99\n320.07\n319.61\n-1\n\n\n72\n1964\n3\n1964.21\n-99.99\n320.73\n319.55\n-1\n\n\n73\n1964\n4\n1964.29\n-99.99\n321.77\n319.48\n-1\n\n\n213\n1975\n12\n1975.96\n-99.99\n330.59\n331.60\n0\n\n\n313\n1984\n4\n1984.29\n-99.99\n346.84\n344.27\n2\n\n\n\n\n\n\n\nThere doesn’t seem to be a pattern to these values, other than that most records also were missing Days data.\n\n\n5.5.7 Drop, NaN, or Impute Missing Avg Data?\nHow should we address the invalid Avg data?\n\nDrop records\nSet to NaN\nImpute using some strategy\n\nRemember we want to fix the following plot:\n\n\nCode\nsns.lineplot(x='DecDate', y='Avg', data=co2)\nplt.title(\"CO2 Average By Month\");\n\n\n\n\n\nSince we are plotting Avg vs DecDate, we should just focus on dealing with missing values for Avg.\nLet’s consider a few options: 1. Drop those records 2. Replace -99.99 with NaN 3. Substitute it with a likely value for the average CO2?\nWhat do you think are the pros and cons of each possible action?\nLet’s examine each of these three options.\n\n# 1. Drop missing values\nco2_drop = co2[co2['Avg'] > 0]\nco2_drop.head()\n\n\n\n\n\n\n\n\nYr\nMo\nDecDate\nAvg\nInt\nTrend\nDays\n\n\n\n\n0\n1958\n3\n1958.21\n315.71\n315.71\n314.62\n-1\n\n\n1\n1958\n4\n1958.29\n317.45\n317.45\n315.29\n-1\n\n\n2\n1958\n5\n1958.38\n317.50\n317.50\n314.71\n-1\n\n\n4\n1958\n7\n1958.54\n315.86\n315.86\n314.98\n-1\n\n\n5\n1958\n8\n1958.62\n314.93\n314.93\n315.94\n-1\n\n\n\n\n\n\n\n\n# 2. Replace NaN with -99.99\nco2_NA = co2.replace(-99.99, np.NaN)\nco2_NA.head()\n\n\n\n\n\n\n\n\nYr\nMo\nDecDate\nAvg\nInt\nTrend\nDays\n\n\n\n\n0\n1958\n3\n1958.21\n315.71\n315.71\n314.62\n-1\n\n\n1\n1958\n4\n1958.29\n317.45\n317.45\n315.29\n-1\n\n\n2\n1958\n5\n1958.38\n317.50\n317.50\n314.71\n-1\n\n\n3\n1958\n6\n1958.46\nNaN\n317.10\n314.85\n-1\n\n\n4\n1958\n7\n1958.54\n315.86\n315.86\n314.98\n-1\n\n\n\n\n\n\n\nWe’ll also use a third version of the data.\nFirst, we note that the dataset already comes with a substitute value for the -99.99.\nFrom the file description:\n\nThe interpolated column includes average values from the preceding column (average) and interpolated values where data are missing. Interpolated values are computed in two steps…\n\nThe Int feature has values that exactly match those in Avg, except when Avg is -99.99, and then a reasonable estimate is used instead.\nSo, the third version of our data will use the Int feature instead of Avg.\n\n# 3. 
Use interpolated column which estimates missing Avg values\nco2_impute = co2.copy()\nco2_impute['Avg'] = co2['Int']\nco2_impute.head()\n\n\n\n\n\n\n\n\nYr\nMo\nDecDate\nAvg\nInt\nTrend\nDays\n\n\n\n\n0\n1958\n3\n1958.21\n315.71\n315.71\n314.62\n-1\n\n\n1\n1958\n4\n1958.29\n317.45\n317.45\n315.29\n-1\n\n\n2\n1958\n5\n1958.38\n317.50\n317.50\n314.71\n-1\n\n\n3\n1958\n6\n1958.46\n317.10\n317.10\n314.85\n-1\n\n\n4\n1958\n7\n1958.54\n315.86\n315.86\n314.98\n-1\n\n\n\n\n\n\n\nWhat’s a reasonable estimate?\nTo answer this question, let’s zoom in on a short time period, say the measurements in 1958 (where we know we have two missing values).\n\n\nCode\n# results of plotting data in 1958\n\ndef line_and_points(data, ax, title):\n # assumes single year, hence Mo\n ax.plot('Mo', 'Avg', data=data)\n ax.scatter('Mo', 'Avg', data=data)\n ax.set_xlim(2, 13)\n ax.set_title(title)\n ax.set_xticks(np.arange(3, 13))\n\ndef data_year(data, year):\n return data[data[\"Yr\"] == 1958]\n \n# uses matplotlib subplots\n# you may see more next week; focus on output for now\nfig, axes = plt.subplots(ncols = 3, figsize=(12, 4), sharey=True)\n\nyear = 1958\nline_and_points(data_year(co2_drop, year), axes[0], title=\"1. Drop Missing\")\nline_and_points(data_year(co2_NA, year), axes[1], title=\"2. Missing Set to NaN\")\nline_and_points(data_year(co2_impute, year), axes[2], title=\"3. Missing Interpolated\")\n\nfig.suptitle(f\"Monthly Averages for {year}\")\nplt.tight_layout()\n\n\n\n\n\nIn the big picture since there are only 7 Avg values missing (<1% of 738 months), any of these approaches would work.\nHowever there is some appeal to option C, Imputing:\n\nShows seasonal trends for CO2\nWe are plotting all months in our data as a line plot\n\nLet’s replot our original figure with option 3:\n\n\nCode\nsns.lineplot(x='DecDate', y='Avg', data=co2_impute)\nplt.title(\"CO2 Average By Month, Imputed\");\n\n\n\n\n\nLooks pretty close to what we see on the NOAA website!\n\n\n5.5.8 Presenting the Data: A Discussion on Data Granularity\nFrom the description:\n\nMonthly measurements are averages of average day measurements.\nThe NOAA GML website has datasets for daily/hourly measurements too.\n\nThe data you present depends on your research question.\nHow do CO2 levels vary by season?\n\nYou might want to keep average monthly data.\n\nAre CO2 levels rising over the past 50+ years, consistent with global warming predictions?\n\nYou might be happier with a coarser granularity of average year data!\n\n\n\nCode\nco2_year = co2_impute.groupby('Yr').mean()\nsns.lineplot(x='Yr', y='Avg', data=co2_year)\nplt.title(\"CO2 Average By Year\");\n\n\n\n\n\nIndeed, we see a rise by nearly 100 ppm of CO2 since Mauna Loa began recording in 1958." + }, + { + "objectID": "eda/eda.html#summary", + "href": "eda/eda.html#summary", + "title": "5  Data Cleaning and EDA", + "section": "5.6 Summary", + "text": "5.6 Summary\nWe went over a lot of content this lecture; let’s summarize the most important points:\n\n5.6.1 Dealing with Missing Values\nThere are a few options we can take to deal with missing data:\n\nDrop missing records\nKeep NaN missing values\nImpute using an interpolated column\n\n\n\n5.6.2 EDA and Data Wrangling\nThere are several ways to approach EDA and Data Wrangling:\n\nExamine the data and metadata: what is the date, size, organization, and structure of the data?\nExamine each field/attribute/dimension individually.\nExamine pairs of related dimensions (e.g. 
breaking down grades by major).\nAlong the way, we can:\n\nVisualize or summarize the data.\nValidate assumptions about data and its collection process. Pay particular attention to when the data was collected.\nIdentify and address anomalies.\nApply data transformations and corrections (we’ll cover this in the upcoming lecture).\nRecord everything you do! Developing in Jupyter Notebook promotes reproducibility of your own work!" + }, + { + "objectID": "regex/regex.html#why-work-with-text", + "href": "regex/regex.html#why-work-with-text", + "title": "6  Regular Expressions", + "section": "6.1 Why Work with Text?", + "text": "6.1 Why Work with Text?\nLast lecture, we learned of the difference between quantitative and qualitative variable types. The latter includes string data — the primary focus of lecture 6. In this note, we’ll discuss the necessary tools to manipulate text: Python string manipulation and regular expressions.\nThere are two main reasons for working with text.\n\nCanonicalization: Convert data that has multiple formats into a standard form.\n\nBy manipulating text, we can join tables with mismatched string labels.\n\nExtract information into a new feature.\n\nFor example, we can extract date and time features from text." + }, + { + "objectID": "regex/regex.html#python-string-methods", + "href": "regex/regex.html#python-string-methods", + "title": "6  Regular Expressions", + "section": "6.2 Python String Methods", + "text": "6.2 Python String Methods\nFirst, we’ll introduce a few methods useful for string manipulation. The following table includes a number of string operations supported by Python and pandas. The Python functions operate on a single string, while their equivalent in pandas are vectorized — they operate on a Series of string data.\n\n\n\n\n\n\n\n\nOperation\nPython\nPandas (Series)\n\n\n\n\nTransformation\n\ns.lower()\ns.upper()\n\n\nser.str.lower()\nser.str.upper()\n\n\n\nReplacement + Deletion\n\ns.replace(_)\n\n\nser.str.replace(_)\n\n\n\nSplit\n\ns.split(_)\n\n\nser.str.split(_)\n\n\n\nSubstring\n\ns[1:4]\n\n\nser.str[1:4]\n\n\n\nMembership\n\n'_' in s\n\n\nser.str.contains(_)\n\n\n\nLength\n\nlen(s)\n\n\nser.str.len()\n\n\n\n\nWe’ll discuss the differences between Python string functions and pandas Series methods in the following section on canonicalization.\n\n6.2.1 Canonicalization\nAssume we want to merge the given tables.\n\n\nCode\nimport pandas as pd\n\nwith open('data/county_and_state.csv') as f:\n county_and_state = pd.read_csv(f)\n \nwith open('data/county_and_population.csv') as f:\n county_and_pop = pd.read_csv(f)\n\n\n\ndisplay(county_and_state), display(county_and_pop);\n\n\n\n\n\n\n\n\nCounty\nState\n\n\n\n\n0\nDe Witt County\nIL\n\n\n1\nLac qui Parle County\nMN\n\n\n2\nLewis and Clark County\nMT\n\n\n3\nSt John the Baptist Parish\nLS\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nCounty\nPopulation\n\n\n\n\n0\nDeWitt\n16798\n\n\n1\nLac Qui Parle\n8067\n\n\n2\nLewis & Clark\n55716\n\n\n3\nSt. John the Baptist\n43044\n\n\n\n\n\n\n\nLast time, we used a primary key and foreign key to join two tables. While neither of these keys exist in our DataFrames, the \"County\" columns look similar enough. Can we convert these columns into one standard, canonical form to merge the two tables?\n\n6.2.1.1 Canonicalization with Python String Manipulation\nThe following function uses Python string manipulation to convert a single county name into canonical form. 
It does so by eliminating whitespace, punctuation, and unnecessary text.\n\ndef canonicalize_county(county_name):\n return (\n county_name\n .lower()\n .replace(' ', '')\n .replace('&', 'and')\n .replace('.', '')\n .replace('county', '')\n .replace('parish', '')\n )\n\ncanonicalize_county(\"St. John the Baptist\")\n\n'stjohnthebaptist'\n\n\nWe will use the pandas map function to apply the canonicalize_county function to every row in both DataFrames. In doing so, we’ll create a new column in each called clean_county_python with the canonical form.\n\ncounty_and_pop['clean_county_python'] = county_and_pop['County'].map(canonicalize_county)\ncounty_and_state['clean_county_python'] = county_and_state['County'].map(canonicalize_county)\ndisplay(county_and_state), display(county_and_pop);\n\n\n\n\n\n\n\n\nCounty\nState\nclean_county_python\n\n\n\n\n0\nDe Witt County\nIL\ndewitt\n\n\n1\nLac qui Parle County\nMN\nlacquiparle\n\n\n2\nLewis and Clark County\nMT\nlewisandclark\n\n\n3\nSt John the Baptist Parish\nLS\nstjohnthebaptist\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nCounty\nPopulation\nclean_county_python\n\n\n\n\n0\nDeWitt\n16798\ndewitt\n\n\n1\nLac Qui Parle\n8067\nlacquiparle\n\n\n2\nLewis & Clark\n55716\nlewisandclark\n\n\n3\nSt. John the Baptist\n43044\nstjohnthebaptist\n\n\n\n\n\n\n\n\n\n6.2.1.2 Canonicalization with Pandas Series Methods\nAlternatively, we can use pandas Series methods to create this standardized column. To do so, we must call the .str attribute of our Series object prior to calling any methods, like .lower and .replace. Notice how these method names match their equivalent built-in Python string functions.\nChaining multiple Series methods in this manner eliminates the need to use the map function (as this code is vectorized).\n\ndef canonicalize_county_series(county_series):\n return (\n county_series\n .str.lower()\n .str.replace(' ', '')\n .str.replace('&', 'and')\n .str.replace('.', '')\n .str.replace('county', '')\n .str.replace('parish', '')\n )\n\ncounty_and_pop['clean_county_pandas'] = canonicalize_county_series(county_and_pop['County'])\ncounty_and_state['clean_county_pandas'] = canonicalize_county_series(county_and_state['County'])\ndisplay(county_and_pop), display(county_and_state);\n\n/var/folders/7t/zbwy02ts2m7cn64fvwjqb8xw0000gp/T/ipykernel_73675/2523629438.py:3: FutureWarning:\n\nThe default value of regex will change from True to False in a future version. In addition, single character regular expressions will *not* be treated as literal strings when regex=True.\n\n/var/folders/7t/zbwy02ts2m7cn64fvwjqb8xw0000gp/T/ipykernel_73675/2523629438.py:3: FutureWarning:\n\nThe default value of regex will change from True to False in a future version. In addition, single character regular expressions will *not* be treated as literal strings when regex=True.\n\n\n\n\n\n\n\n\n\n\nCounty\nPopulation\nclean_county_python\nclean_county_pandas\n\n\n\n\n0\nDeWitt\n16798\ndewitt\ndewitt\n\n\n1\nLac Qui Parle\n8067\nlacquiparle\nlacquiparle\n\n\n2\nLewis & Clark\n55716\nlewisandclark\nlewisandclark\n\n\n3\nSt. 
John the Baptist\n43044\nstjohnthebaptist\nstjohnthebaptist\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nCounty\nState\nclean_county_python\nclean_county_pandas\n\n\n\n\n0\nDe Witt County\nIL\ndewitt\ndewitt\n\n\n1\nLac qui Parle County\nMN\nlacquiparle\nlacquiparle\n\n\n2\nLewis and Clark County\nMT\nlewisandclark\nlewisandclark\n\n\n3\nSt John the Baptist Parish\nLS\nstjohnthebaptist\nstjohnthebaptist\n\n\n\n\n\n\n\n\n\n\n6.2.2 Extraction\nExtraction explores the idea of obtaining useful information from text data. This will be particularily important in model building, which we’ll study in a few weeks.\nSay we want to read some data from a .txt file.\n\nwith open('data/log.txt', 'r') as f:\n log_lines = f.readlines()\n\nlog_lines\n\n['169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] \"GET /stat141/Winter04/ HTTP/1.1\" 200 2585 \"http://anson.ucdavis.edu/courses/\"\\n',\n '193.205.203.3 - - [2/Feb/2005:17:23:6 -0800] \"GET /stat141/Notes/dim.html HTTP/1.0\" 404 302 \"http://eeyore.ucdavis.edu/stat141/Notes/session.html\"\\n',\n '169.237.46.240 - \"\" [3/Feb/2006:10:18:37 -0800] \"GET /stat141/homework/Solutions/hw1Sol.pdf HTTP/1.1\"\\n']\n\n\nSuppose we want to extract the day, month, year, hour, minutes, seconds, and time zone. Unfortunately, these items are not in a fixed position from the beginning of the string, so slicing by some fixed offset won’t work.\nInstead, we can use some clever thinking. Notice how the relevant information is contained within a set of brackets, further separated by / and :. We can hone in on this region of text, and split the data on these characters. Python’s built-in .split function makes this easy.\n\nfirst = log_lines[0] # Only considering the first row of data\n\npertinent = first.split(\"[\")[1].split(']')[0]\nday, month, rest = pertinent.split('/')\nyear, hour, minute, rest = rest.split(':')\nseconds, time_zone = rest.split(' ')\nday, month, year, hour, minute, seconds, time_zone\n\n('26', 'Jan', '2014', '10', '47', '58', '-0800')\n\n\nThere are two problems with this code:\n\nPython’s built-in functions limit us to extract data one record at a time,\n\nThis can be resolved using the map function or pandas Series methods.\n\nThe code is quite verbose.\n\nThis is a larger issue that is trickier to solve\n\n\nIn the next section, we’ll introduce regular expressions - a tool that solves problem 2." + }, + { + "objectID": "regex/regex.html#regex-basics", + "href": "regex/regex.html#regex-basics", + "title": "6  Regular Expressions", + "section": "6.3 RegEx Basics", + "text": "6.3 RegEx Basics\nA regular expression (“RegEx”) is a sequence of characters that specifies a search pattern. They are written to extract specific information from text. Regular expressions are essentially part of a smaller programming language embedded in Python, made available through the re module. As such, they have a stand-alone syntax and methods for various capabilities.\nRegular expressions are useful in many applications beyond data science. For example, Social Security Numbers (SSNs) are often validated with regular expressions.\n\nr\"[0-9]{3}-[0-9]{2}-[0-9]{4}\" # Regular Expression Syntax\n\n# 3 of any digit, then a dash,\n# then 2 of any digit, then a dash,\n# then 4 of any digit\n\n'[0-9]{3}-[0-9]{2}-[0-9]{4}'\n\n\n\n\nThere are a ton of resources to learn and experiment with regular expressions. 
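Before turning to those resources, here is a minimal sketch of how the SSN pattern above could be checked with Python’s built-in re module (re.fullmatch is a standard-library function; the test strings are made up):

import re

ssn_pattern = r"[0-9]{3}-[0-9]{2}-[0-9]{4}"

# fullmatch succeeds only if the entire string fits the pattern.
print(re.fullmatch(ssn_pattern, "123-45-6789") is not None)   # True
print(re.fullmatch(ssn_pattern, "1234-5-6789") is not None)   # False: wrong group sizes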
A few are provided below:\n\nOfficial Regex Guide\nData 100 Reference Sheet\nRegex101.com\n\nBe sure to check Python under the category on the left.\n\n\n\n6.3.1 Basics RegEx Syntax\nThere are four basic operations with regular expressions.\n\n\n\n\n\n\n\n\n\n\nOperation\nOrder\nSyntax Example\nMatches\nDoesn’t Match\n\n\n\n\nOr: |\n4\nAA|BAAB\nAA BAAB\nevery other string\n\n\nConcatenation\n3\nAABAAB\nAABAAB\nevery other string\n\n\nClosure: * (zero or more)\n2\nAB*A\nAA ABBBBBBA\nAB ABABA\n\n\nGroup: () (parenthesis)\n1\nA(A|B)AAB (AB)*A\nAAAAB ABAAB A ABABABABA\nevery other string AA ABBA\n\n\n\nNotice how these metacharacter operations are ordered. Rather than being literal characters, these metacharacters manipulate adjacent characters. () takes precedence, followed by *, and finally |. This allows us to differentiate between very different regex commands like AB* and (AB)*. The former reads “A then zero or more copies of B”, while the latter specifies “zero or more copies of AB”.\n\n6.3.1.1 Examples\nQuestion 1: Give a regular expression that matches moon, moooon, etc. Your expression should match any even number of os except zero (i.e. don’t match mn).\nAnswer 1: moo(oo)*n\n\nHardcoding oo before the capture group ensures that mn is not matched.\nA capture group of (oo)* ensures the number of o’s is even.\n\nQuestion 2: Using only basic operations, formulate a regex that matches muun, muuuun, moon, moooon, etc. Your expression should match any even number of us or os except zero (i.e. don’t match mn).\nAnswer 2: m(uu(uu)*|oo(oo)*)n\n\nThe leading m and trailing n ensures that only strings beginning with m and ending with n are matched.\nNotice how the outer capture group surrounds the |.\n\nConsider the regex m(uu(uu)*)|(oo(oo)*)n. This incorrectly matches muu and oooon.\n\nEach OR clause is everything to the left and right of |. The incorrect solution matches only half of the string, and ignores either the beginning m or trailing n.\nA set of parenthesis must surround |. That way, each OR clause is everything to the left and right of | within the group. This ensures both the beginning m and trailing n are matched." + }, + { + "objectID": "regex/regex.html#regex-expanded", + "href": "regex/regex.html#regex-expanded", + "title": "6  Regular Expressions", + "section": "6.4 RegEx Expanded", + "text": "6.4 RegEx Expanded\nProvided below are more complex regular expression functions.\n\n\n\n\n\n\n\n\n\nOperation\nSyntax Example\nMatches\nDoesn’t Match\n\n\n\n\nAny Character: . (except newline)\n.U.U.U.\nCUMULUS JUGULUM\nSUCCUBUS TUMULTUOUS\n\n\nCharacter Class: [] (match one character in [])\n[A-Za-z][a-z]*\nword Capitalized\ncamelCase 4illegal\n\n\nRepeated \"a\" Times: {a}\nj[aeiou]{3}hn\njaoehn jooohn\njhn jaeiouhn\n\n\nRepeated \"from a to b\" Times: {a, b}\nj[0u]{1,2}hn\njohn juohn\njhn jooohn\n\n\nAt Least One: +\njo+hn\njohn joooooohn\njhn jjohn\n\n\nZero or One: ?\njoh?n\njon john\nany other string\n\n\n\nA character class matches a single character in its class. These characters can be hardcoded —— in the case of [aeiou] —— or shorthand can be specified to mean a range of characters. 
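To see this one-character-at-a-time behavior, here is a minimal sketch using re.findall (the test strings are made up):

import re

# Each character class consumes exactly one character from its set per match.
print(re.findall(r"[aeiou]", "regular"))    # ['e', 'u', 'a']
print(re.findall(r"[0-9]", "Data 100"))     # ['1', '0', '0']

Shorthand ranges make these classes much more compact to write.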
Examples include:\n\n[A-Z]: Any capitalized letter\n[a-z]: Any lowercase letter\n[0-9]: Any single digit\n[A-Za-z]: Any capitalized of lowercase letter\n[A-Za-z0-9]: Any capitalized or lowercase letter or single digit\n\n\n6.4.0.1 Examples\nLet’s analyze a few examples of complex regular expressions.\n\n\n\n\n\n\n\nMatches\nDoes Not Match\n\n\n\n\n\n.*SPB.*\n\n\n\n\nRASPBERRY SPBOO\nSUBSPACE SUBSPECIES\n\n\n\n[0-9]{3}-[0-9]{2}-[0-9]{4}\n\n\n\n\n231-41-5121 573-57-1821\n231415121 57-3571821\n\n\n\n[a-z]+@([a-z]+\\.)+(edu|com)\n\n\n\n\nhorse@pizza.com horse@pizza.food.com\nfrank_99@yahoo.com hug@cs\n\n\n\nExplanations\n\n.*SPB.* only matches strings that contain the substring SPB.\n\nThe .* metacharacter matches any amount of non-negative characters. Newlines do not count.\n\n\nThis regular expression matches 3 of any digit, then a dash, then 2 of any digit, then a dash, then 4 of any digit.\n\nYou’ll recognize this as the familiar Social Security Number regular expression.\n\nMatches any email with a com or edu domain, where all characters of the email are letters.\n\nAt least one . must precede the domain name. Including a backslash \\ before any metacharacter (in this case, the .) tells RegEx to match that character exactly." + }, + { + "objectID": "regex/regex.html#convenient-regex", + "href": "regex/regex.html#convenient-regex", + "title": "6  Regular Expressions", + "section": "6.5 Convenient RegEx", + "text": "6.5 Convenient RegEx\nHere are a few more convenient regular expressions.\n\n\n\n\n\n\n\n\n\nOperation\nSyntax Example\nMatches\nDoesn’t Match\n\n\n\n\nbuilt in character class\n\\w+ \\d+ \\s+ \nFawef_03 231123 whitespace\nthis person 423 people non-whitespace\n\n\ncharacter class negation: [^] (everything except the given characters)\n[^a-z]+.\nPEPPERS3982 17211!↑å\nporch CLAmS\n\n\nescape character: \\ (match the literal next character)\ncow\\.com\ncow.com\ncowscom\n\n\nbeginning of line: ^\n^ark\nark two ark o ark\ndark\n\n\nend of line: $\nark$\ndark ark o ark\nark two\n\n\nlazy version of zero or more : *?\n5.*?5\n5005 55\n5005005\n\n\n\n\n6.5.1 Greediness\nIn order to fully understand the last operation in the table, we have to discuss greediness. RegEx is greedy – it will look for the longest possible match in a string. To motivate this with an example, consider the pattern <div>.*</div>. Given the sentence below, we would hope that the bolded portions would be matched:\n“This is a <div>example</div> of greediness <div>in</div> regular expressions.” ”\nIn actuality, the way RegEx processes the text given that pattern is as follows:\n\n“Look for the exact string <>”\nthen, “look for any character 0 or more times”\nthen, “look for the exact string </div>”\n\nThe result would be all the characters starting from the leftmost <div> and the rightmost </div> (inclusive). We can fix this by making our pattern non-greedy, <div>.*?</div>. You can read up more on the documentation here.\n\n\n6.5.2 Examples\nLet’s revisit our earlier problem of extracting date/time data from the given .txt files. Here is how the data looked.\n\nlog_lines[0]\n\n'169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] \"GET /stat141/Winter04/ HTTP/1.1\" 200 2585 \"http://anson.ucdavis.edu/courses/\"\\n'\n\n\nQuestion: Give a regular expression that matches everything contained within and including the brackets - the day, month, year, hour, minutes, seconds, and time zone.\nAnswer: \\[.*\\]\n\nNotice how matching the literal [ and ] is necessary. 
Therefore, an escape character \\ is required before both [ and ] — otherwise these metacharacters will match character classes.\nWe need to match a particular format between [ and ]. For this example, .* will suffice.\n\nAlternative Solution: \\[\\w+/\\w+/\\w+:\\w+:\\w+:\\w+\\s-\\w+\\]\n\nThis solution is much safer.\n\nImagine the data between [ and ] was garbage - .* will still match that.\nThe alternate solution will only match data that follows the correct format." + }, + { + "objectID": "regex/regex.html#regex-in-python-and-pandas-regex-groups", + "href": "regex/regex.html#regex-in-python-and-pandas-regex-groups", + "title": "6  Regular Expressions", + "section": "6.6 Regex in Python and Pandas (RegEx Groups)", + "text": "6.6 Regex in Python and Pandas (RegEx Groups)\n\n6.6.1 Canonicalization\n\n6.6.1.1 Canonicalization with RegEx\nEarlier in this note, we examined the process of canonicalization using python string manipulation and pandas Series methods. However, we mentioned this approach had a major flaw: our code was unnecessarily verbose. Equipped with our knowledge of regular expressions, let’s fix this.\nTo do so, we need to understand a few functions in the re module. The first of these is the substitute function: re.sub(pattern, rep1, text). It behaves similarly to python’s built-in .replace function, and returns text with all instances of pattern replaced by rep1.\nThe regular expression here removes text surrounded by <> (also known as HTML tags).\nIn order, the pattern matches … 1. a single < 2. any character that is not a > : div, td valign…, /td, /div 3. a single >\nAny substring in text that fulfills all three conditions will be replaced by ''.\n\nimport re\n\ntext = \"<div><td valign='top'>Moo</td></div>\"\npattern = r\"<[^>]+>\"\nre.sub(pattern, '', text) \n\n'Moo'\n\n\nNotice the r preceding the regular expression pattern; this specifies the regular expression is a raw string. Raw strings do not recognize escape sequences (i.e., the Python newline metacharacter \\n). This makes them useful for regular expressions, which often contain literal \\ characters.\nIn other words, don’t forget to tag your RegEx with an r.\n\n\n6.6.1.2 Canonicalization with pandas\nWe can also use regular expressions with pandas Series methods. This gives us the benefit of operating on an entire column of data as opposed to a single value. The code is simple: ser.str.replace(pattern, repl, regex=True).\nConsider the following DataFrame html_data with a single column.\n\n\nCode\ndata = {\"HTML\": [\"<div><td valign='top'>Moo</td></div>\", \\\n \"<a href='http://ds100.org'>Link</a>\", \\\n \"<b>Bold text</b>\"]}\nhtml_data = pd.DataFrame(data)\n\n\n\nhtml_data\n\n\n\n\n\n\n\n\nHTML\n\n\n\n\n0\n<div><td valign='top'>Moo</td></div>\n\n\n1\n<a href='http://ds100.org'>Link</a>\n\n\n2\n<b>Bold text</b>\n\n\n\n\n\n\n\n\npattern = r\"<[^>]+>\"\nhtml_data['HTML'].str.replace(pattern, '', regex=True)\n\n0 Moo\n1 Link\n2 Bold text\nName: HTML, dtype: object\n\n\n\n\n\n6.6.2 Extraction\n\n6.6.2.1 Extraction with RegEx\nJust like with canonicalization, the re module provides capability to extract relevant text from a string: re.findall(pattern, text). 
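As a brief aside (a sketch that is not part of the original notes), the re module also provides re.search, which stops at the first occurrence and returns a Match object, or None when nothing matches:

import re

m = re.search(r'[0-9]{3}-[0-9]{2}-[0-9]{4}', 'My SSN is 123-45-6789, not 321-45-6789')
print(m.group())  # '123-45-6789', only the first occurrence is returned

When we want every occurrence at once, re.findall is the more convenient choice.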
This function returns a list of all matches to pattern.\nUsing the familiar regular expression for Social Security Numbers:\n\ntext = \"My social security number is 123-45-6789 bro, or maybe it’s 321-45-6789.\"\npattern = r\"[0-9]{3}-[0-9]{2}-[0-9]{4}\"\nre.findall(pattern, text) \n\n['123-45-6789', '321-45-6789']\n\n\n\n\n6.6.2.2 Extraction with pandas\npandas similarily provides extraction functionality on a Series of data: ser.str.findall(pattern)\nConsider the following DataFrame ssn_data.\n\n\nCode\ndata = {\"SSN\": [\"987-65-4321\", \"forty\", \\\n \"123-45-6789 bro or 321-45-6789\",\n \"999-99-9999\"]}\nssn_data = pd.DataFrame(data)\n\n\n\nssn_data\n\n\n\n\n\n\n\n\nSSN\n\n\n\n\n0\n987-65-4321\n\n\n1\nforty\n\n\n2\n123-45-6789 bro or 321-45-6789\n\n\n3\n999-99-9999\n\n\n\n\n\n\n\n\nssn_data[\"SSN\"].str.findall(pattern)\n\n0 [987-65-4321]\n1 []\n2 [123-45-6789, 321-45-6789]\n3 [999-99-9999]\nName: SSN, dtype: object\n\n\nThis function returns a list for every row containing the pattern matches in a given string.\nAs you may expect, there are similar pandas equivalents for other re functions as well. Series.str.extract takes in a pattern and returns a DataFrame of each capture group’s first match in the string. In contrast, Series.str.extractall returns a multi-indexed DataFrame of all matches for each capture group. You can see the difference in the outputs below:\n\npattern_cg = r\"([0-9]{3})-([0-9]{2})-([0-9]{4})\"\nssn_data[\"SSN\"].str.extract(pattern_cg)\n\n\n\n\n\n\n\n\n0\n1\n2\n\n\n\n\n0\n987\n65\n4321\n\n\n1\nNaN\nNaN\nNaN\n\n\n2\n123\n45\n6789\n\n\n3\n999\n99\n9999\n\n\n\n\n\n\n\n\nssn_data[\"SSN\"].str.extractall(pattern_cg)\n\n\n\n\n\n\n\n\n\n0\n1\n2\n\n\n\nmatch\n\n\n\n\n\n\n\n0\n0\n987\n65\n4321\n\n\n2\n0\n123\n45\n6789\n\n\n1\n321\n45\n6789\n\n\n3\n0\n999\n99\n9999\n\n\n\n\n\n\n\n\n\n\n6.6.3 Regular Expression Capture Groups\nEarlier we used parentheses ( ) to specify the highest order of operation in regular expressions. However, they have another meaning; parentheses are often used to represent capture groups. Capture groups are essentially, a set of smaller regular expressions that match multiple substrings in text data.\nLet’s take a look at an example.\n\n6.6.3.1 Example 1\n\ntext = \"Observations: 03:04:53 - Horse awakens. \\\n 03:05:14 - Horse goes back to sleep.\"\n\nSay we want to capture all occurences of time data (hour, minute, and second) as separate entities.\n\npattern_1 = r\"(\\d\\d):(\\d\\d):(\\d\\d)\"\nre.findall(pattern_1, text)\n\n[('03', '04', '53'), ('03', '05', '14')]\n\n\nNotice how the given pattern has 3 capture groups, each specified by the regular expression (\\d\\d). We then use re.findall to return these capture groups, each as tuples containing 3 matches.\nThese regular expression capture groups can be different. 
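They can even be named (a short sketch that is not part of the original notes, using Python's (?P<name>...) syntax on the same text variable; the pattern_named variable is ours):

pattern_named = r'(?P<hour>\d\d):(?P<minute>\d\d):(?P<second>\d\d)'
re.search(pattern_named, text).groupdict()  # {'hour': '03', 'minute': '04', 'second': '53'}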
We can use the (\\d{2}) shorthand to extract the same data.\n\npattern_2 = r\"(\\d\\d):(\\d\\d):(\\d{2})\"\nre.findall(pattern_2, text)\n\n[('03', '04', '53'), ('03', '05', '14')]\n\n\n\n\n6.6.3.2 Example 2\nWith the notion of capture groups, convince yourself how the following regular expression works.\n\nfirst = log_lines[0]\nfirst\n\n'169.237.46.168 - - [26/Jan/2014:10:47:58 -0800] \"GET /stat141/Winter04/ HTTP/1.1\" 200 2585 \"http://anson.ucdavis.edu/courses/\"\\n'\n\n\n\npattern = r'\\[(\\d+)\\/(\\w+)\\/(\\d+):(\\d+):(\\d+):(\\d+) (.+)\\]'\nday, month, year, hour, minute, second, time_zone = re.findall(pattern, first)[0]\nprint(day, month, year, hour, minute, second, time_zone)\n\n26 Jan 2014 10 47 58 -0800" + }, + { + "objectID": "regex/regex.html#limitations-of-regular-expressions", + "href": "regex/regex.html#limitations-of-regular-expressions", + "title": "6  Regular Expressions", + "section": "6.7 Limitations of Regular Expressions", + "text": "6.7 Limitations of Regular Expressions\nToday, we explored the capabilities of regular expressions in data wrangling with text data. However, there are a few things to be wary of.\nWriting regular expressions is like writing a program.\n\nNeed to know the syntax well.\nCan be easier to write than to read.\nCan be difficult to debug.\n\nRegular expressions are terrible at certain types of problems:\n\nFor parsing a hierarchical structure, such as JSON, use the json.load() parser, not RegEx!\nComplex features (e.g. valid email address).\nCounting (same number of instances of a and b). (impossible)\nComplex properties (palindromes, balanced parentheses). (impossible)\n\nUltimately, the goal is not to memorize all regular expressions. Rather, the aim is to:\n\nUnderstand what RegEx is capable of.\nParse and create RegEx, with a reference table\nUse vocabulary (metacharacter, escape character, groups, etc.) to describe regex metacharacters.\nDifferentiate between (), [], {}\nDesign your own character classes with , , […-…], ^, etc.\nUse python and pandas RegEx methods." + }, + { + "objectID": "visualization_1/visualization_1.html#visualizations-in-data-8-and-data-100-so-far", + "href": "visualization_1/visualization_1.html#visualizations-in-data-8-and-data-100-so-far", + "title": "7  Visualization I", + "section": "7.1 Visualizations in Data 8 and Data 100 (so far)", + "text": "7.1 Visualizations in Data 8 and Data 100 (so far)\nYou’ve likely encountered several forms of data visualizations in your studies. You may remember two such examples from Data 8: line plots, scatter plots, and histograms. Each of these served a unique purpose. For example, line plots displayed how numerical quantities changed over time, while histograms were useful in understanding a variable’s distribution.\n\n\n\n\n\n\n\nLine Chart\nScatter Plot\n\n\n\n\n\n\n\n\n\n\n\n\nHistogram" + }, + { + "objectID": "visualization_1/visualization_1.html#goals-of-visualization", + "href": "visualization_1/visualization_1.html#goals-of-visualization", + "title": "7  Visualization I", + "section": "7.2 Goals of Visualization", + "text": "7.2 Goals of Visualization\nVisualizations are useful for a number of reasons. In Data 100, we consider two areas in particular:\n\nTo broaden your understanding of the data. Summarizing trends visually before in-depth analysis is a key part of exploratory data analysis. Creating these graphs is a lightweight, iterative and flexible process that helps us investigate relationships between variables.\nTo communicate results/conclusions to others. 
These visualizations are highly editorial, selective, and fine-tuned to achieve a communications goal, so be thoughtful and careful about its clarity, accessibility, and necessary context.\n\nAltogether, these goals emphasize the fact that visualizations aren’t a matter of making “pretty” pictures; we need to do a lot of thinking about what stylistic choices communicate ideas most effectively.\nThis course note will focus on the first half of visualization topics in Data 100. The goal here is to understand how to choose the “right” plot depending on different variable types and, secondly, how to generate these plots using code." + }, + { + "objectID": "visualization_1/visualization_1.html#an-overview-of-distributions", + "href": "visualization_1/visualization_1.html#an-overview-of-distributions", + "title": "7  Visualization I", + "section": "7.3 An Overview of Distributions", + "text": "7.3 An Overview of Distributions\nA distribution describes both the set of values that a single variable can take and the frequency of unique values in a single variable. For example, if we’re interested in the distribution of students across Data 100 discussion sections, the set of possible values is a list of discussion sections (10-11am, 11-12pm, etc.), and the frequency that each of those values occur is the number of students enrolled in each section. In other words, the we’re interested in how a variable is distributed across it’s possible values. Therefore, distributions must satisfy two properties:\n\nThe total frequency of all categories must sum to 100%\nTotal count should sum to the total number of datapoints if we’re using raw counts.\n\n\n\n\n\n\n\n\nNot a Valid Distribution\nValid Distribution\n\n\n\n\n\n\n\n\nThis is not a valid distribution since individuals can be associated with more than one category and the bar values demonstrate values in minutes and not probability.\nThis example satisfies the two properties of distributions, so it is a valid distribution." + }, + { + "objectID": "visualization_1/visualization_1.html#variable-types-should-inform-plot-choice", + "href": "visualization_1/visualization_1.html#variable-types-should-inform-plot-choice", + "title": "7  Visualization I", + "section": "7.4 Variable Types Should Inform Plot Choice", + "text": "7.4 Variable Types Should Inform Plot Choice\nDifferent plots are more or less suited for displaying particular types of variables, laid out in the diagram below:\n\n\n\nThe first step of any visualization is to identify the type(s) of variables we’re working with. From here, we can select an appropriate plot type:" + }, + { + "objectID": "visualization_1/visualization_1.html#qualitative-variables-bar-plots", + "href": "visualization_1/visualization_1.html#qualitative-variables-bar-plots", + "title": "7  Visualization I", + "section": "7.5 Qualitative Variables: Bar Plots", + "text": "7.5 Qualitative Variables: Bar Plots\nA bar plot is one of the most common ways of displaying the distribution of a qualitative (categorical) variable. The length of a bar plot encodes the frequency of a category; the width encodes no useful information. The color could indicate a sub-category, but this is not necessarily the case.\nLet’s contextualize this in an example. 
We will use the World Bank dataset (wb) in our analysis.\n\n\nCode\nimport pandas as pd\nimport numpy as np\n\nwb = pd.read_csv(\"data/world_bank.csv\", index_col=0)\nwb.head()\n\n\n\n\n\n\n\n\n\nContinent\nCountry\nPrimary completion rate: Male: % of relevant age group: 2015\nPrimary completion rate: Female: % of relevant age group: 2015\nLower secondary completion rate: Male: % of relevant age group: 2015\nLower secondary completion rate: Female: % of relevant age group: 2015\nYouth literacy rate: Male: % of ages 15-24: 2005-14\nYouth literacy rate: Female: % of ages 15-24: 2005-14\nAdult literacy rate: Male: % ages 15 and older: 2005-14\nAdult literacy rate: Female: % ages 15 and older: 2005-14\n...\nAccess to improved sanitation facilities: % of population: 1990\nAccess to improved sanitation facilities: % of population: 2015\nChild immunization rate: Measles: % of children ages 12-23 months: 2015\nChild immunization rate: DTP3: % of children ages 12-23 months: 2015\nChildren with acute respiratory infection taken to health provider: % of children under age 5 with ARI: 2009-2016\nChildren with diarrhea who received oral rehydration and continuous feeding: % of children under age 5 with diarrhea: 2009-2016\nChildren sleeping under treated bed nets: % of children under age 5: 2009-2016\nChildren with fever receiving antimalarial drugs: % of children under age 5 with fever: 2009-2016\nTuberculosis: Treatment success rate: % of new cases: 2014\nTuberculosis: Cases detection rate: % of new estimated cases: 2015\n\n\n\n\n0\nAfrica\nAlgeria\n106.0\n105.0\n68.0\n85.0\n96.0\n92.0\n83.0\n68.0\n...\n80.0\n88.0\n95.0\n95.0\n66.0\n42.0\nNaN\nNaN\n88.0\n80.0\n\n\n1\nAfrica\nAngola\nNaN\nNaN\nNaN\nNaN\n79.0\n67.0\n82.0\n60.0\n...\n22.0\n52.0\n55.0\n64.0\nNaN\nNaN\n25.9\n28.3\n34.0\n64.0\n\n\n2\nAfrica\nBenin\n83.0\n73.0\n50.0\n37.0\n55.0\n31.0\n41.0\n18.0\n...\n7.0\n20.0\n75.0\n79.0\n23.0\n33.0\n72.7\n25.9\n89.0\n61.0\n\n\n3\nAfrica\nBotswana\n98.0\n101.0\n86.0\n87.0\n96.0\n99.0\n87.0\n89.0\n...\n39.0\n63.0\n97.0\n95.0\nNaN\nNaN\nNaN\nNaN\n77.0\n62.0\n\n\n5\nAfrica\nBurundi\n58.0\n66.0\n35.0\n30.0\n90.0\n88.0\n89.0\n85.0\n...\n42.0\n48.0\n93.0\n94.0\n55.0\n43.0\n53.8\n25.4\n91.0\n51.0\n\n\n\n\n5 rows × 47 columns\n\n\n\nWe can visualize the distribution of the Continent column using a bar plot. There are a few ways to do this.\n\n7.5.1 Plotting in Pandas\n\nwb['Continent'].value_counts().plot(kind='bar');\n\n\n\n\nRecall that .value_counts() returns a Series with the total count of each unique value. We call .plot(kind='bar') on this result to visualize these counts as a bar plot.\nPlotting methods in pandas are the least preferred and not supported in Data 100, as their functionality is limited. Instead, future examples will focus on other libraries built specifically for visualizing data. The most well-known library here is matplotlib.\n\n\n7.5.2 Plotting in Matplotlib\n\nimport matplotlib.pyplot as plt # matplotlib is typically given the alias plt\n\ncontinent = wb['Continent'].value_counts()\nplt.bar(continent.index, continent)\nplt.xlabel('Continent')\nplt.ylabel('Count');\n\n\n\n\nWhile more code is required to achieve the same result, matplotlib is often used over pandas for its ability to plot more complex visualizations, some of which are discussed shortly.\nHowever, note how we needed to label the axes with plt.xlabel and plt.ylabel, as matplotlib does not support automatic axis labeling. 
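Each additional touch requires another call. For example (a small sketch that is not part of the original notes, reusing the continent counts computed above), adding a title and rotating the tick labels takes two more lines:

plt.bar(continent.index, continent)
plt.xlabel('Continent')
plt.ylabel('Count')
plt.title('Distribution of continents in the World Bank dataset')
plt.xticks(rotation=45);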
To get around these inconveniences, we can use a more efficient plotting library: seaborn.\n\n\n7.5.3 Plotting in Seaborn\n\nimport seaborn as sns # seaborn is typically given the alias sns\nsns.countplot(data = wb, x = 'Continent');\n\n\n\n\nIn contrast to matplotlib, the general structure of a seaborn call involves passing in an entire DataFrame, and then specifying what column(s) to plot. seaborn.countplot both counts and visualizes the number of unique values in a given column. This column is specified by the x argument to sns.countplot, while the DataFrame is specified by the data argument.\nFor the vast majority of visualizations, seaborn is far more concise and aesthetically pleasing than matplotlib. However, the color scheme of this particular bar plot is arbitrary - it encodes no additional information about the categories themselves. This is not always true; color may signify meaningful detail in other visualizations. We’ll explore this more in-depth during the next lecture.\nBy now, you’ll have noticed that each of these plotting libraries have a very different syntax. As with pandas, we’ll teach you the important methods in matplotlib and seaborn, but you’ll learn more through documentation.\n\nMatplotlib Documentation\nSeaborn Documentation" + }, + { + "objectID": "visualization_1/visualization_1.html#distributions-of-quantitative-variables", + "href": "visualization_1/visualization_1.html#distributions-of-quantitative-variables", + "title": "7  Visualization I", + "section": "7.6 Distributions of Quantitative Variables", + "text": "7.6 Distributions of Quantitative Variables\nRevisiting our example with the wb DataFrame, let’s plot the distribution of Gross national income per capita.\n\n\nCode\nwb.head(5)\n\n\n\n\n\n\n\n\n\nContinent\nCountry\nPrimary completion rate: Male: % of relevant age group: 2015\nPrimary completion rate: Female: % of relevant age group: 2015\nLower secondary completion rate: Male: % of relevant age group: 2015\nLower secondary completion rate: Female: % of relevant age group: 2015\nYouth literacy rate: Male: % of ages 15-24: 2005-14\nYouth literacy rate: Female: % of ages 15-24: 2005-14\nAdult literacy rate: Male: % ages 15 and older: 2005-14\nAdult literacy rate: Female: % ages 15 and older: 2005-14\n...\nAccess to improved sanitation facilities: % of population: 1990\nAccess to improved sanitation facilities: % of population: 2015\nChild immunization rate: Measles: % of children ages 12-23 months: 2015\nChild immunization rate: DTP3: % of children ages 12-23 months: 2015\nChildren with acute respiratory infection taken to health provider: % of children under age 5 with ARI: 2009-2016\nChildren with diarrhea who received oral rehydration and continuous feeding: % of children under age 5 with diarrhea: 2009-2016\nChildren sleeping under treated bed nets: % of children under age 5: 2009-2016\nChildren with fever receiving antimalarial drugs: % of children under age 5 with fever: 2009-2016\nTuberculosis: Treatment success rate: % of new cases: 2014\nTuberculosis: Cases detection rate: % of new estimated cases: 
2015\n\n\n\n\n0\nAfrica\nAlgeria\n106.0\n105.0\n68.0\n85.0\n96.0\n92.0\n83.0\n68.0\n...\n80.0\n88.0\n95.0\n95.0\n66.0\n42.0\nNaN\nNaN\n88.0\n80.0\n\n\n1\nAfrica\nAngola\nNaN\nNaN\nNaN\nNaN\n79.0\n67.0\n82.0\n60.0\n...\n22.0\n52.0\n55.0\n64.0\nNaN\nNaN\n25.9\n28.3\n34.0\n64.0\n\n\n2\nAfrica\nBenin\n83.0\n73.0\n50.0\n37.0\n55.0\n31.0\n41.0\n18.0\n...\n7.0\n20.0\n75.0\n79.0\n23.0\n33.0\n72.7\n25.9\n89.0\n61.0\n\n\n3\nAfrica\nBotswana\n98.0\n101.0\n86.0\n87.0\n96.0\n99.0\n87.0\n89.0\n...\n39.0\n63.0\n97.0\n95.0\nNaN\nNaN\nNaN\nNaN\n77.0\n62.0\n\n\n5\nAfrica\nBurundi\n58.0\n66.0\n35.0\n30.0\n90.0\n88.0\n89.0\n85.0\n...\n42.0\n48.0\n93.0\n94.0\n55.0\n43.0\n53.8\n25.4\n91.0\n51.0\n\n\n\n\n5 rows × 47 columns\n\n\n\nHow should we define our categories for this variable? In the previous example, these were a few unique values of the Continent column. If we use similar logic here, our categories are the different numerical values contained in the Gross national income per capita column.\nUnder this assumption, let’s plot this distribution using the seaborn.countplot function.\n\nsns.countplot(data = wb, x = 'Gross national income per capita, Atlas method: $: 2016');\n\n\n\n\nWhat happened? A bar plot (either plt.bar or sns.countplot) will create a separate bar for each unique value of a variable. With a continuous variable, we may not have a finite number of possible values, which can lead to situations like above where we would need many, many bars to display each unique value.\nSpecifically, we can say this histogram suffers from overplotting as we are unable to interpret the plot and gain any meaningful insight.\nRather than bar plots, to visualize the distribution of a continuous variable, we use one of the following types of plots:\n\nHistogram\nBox plot\nViolin plot\n\n\n7.6.1 Box Plots and Violin Plots\nBox plots and violin plots are two very similar kinds of visualizations. Both display the distribution of a variable using information about quartiles.\nIn a box plot, the width of the box at any point does not encode meaning. In a violin plot, the width of the plot indicates the density of the distribution at each possible value.\n\nsns.boxplot(data=wb, y='Gross national income per capita, Atlas method: $: 2016');\n\n\n\n\n\nsns.violinplot(data=wb, y=\"Gross national income per capita, Atlas method: $: 2016\");\n\n\n\n\nA quartile represents a 25% portion of the data. We say that:\n\nThe first quartile (Q1) represents the 25th percentile – 25% of the data is smaller than or equal to the first quartile.\nThe second quartile (Q2) represents the 50th percentile, also known as the median – 50% of the data is smaller than or equal to the second quartile.\nThe third quartile (Q3) represents the 75th percentile – 75% of the data is smaller than or equal to the third quartile.\n\nThis means that the middle 50% of the data lies between the first and third quartiles. This is demonstrated in the histogram below. 
The three quartiles are marked with red vertical bars.\n\n\nCode\ngdp = wb['Gross domestic product: % growth : 2016']\ngdp = gdp[~gdp.isna()]\n\nq1, q2, q3 = np.percentile(gdp, [25, 50, 75])\n\nwb_quartiles = wb.copy()\nwb_quartiles['category'] = None\nwb_quartiles.loc[(wb_quartiles['Gross domestic product: % growth : 2016'] < q1) | (wb_quartiles['Gross domestic product: % growth : 2016'] > q3), 'category'] = 'Outside of the middle 50%'\nwb_quartiles.loc[(wb_quartiles['Gross domestic product: % growth : 2016'] > q1) & (wb_quartiles['Gross domestic product: % growth : 2016'] < q3), 'category'] = 'In the middle 50%'\n\nsns.histplot(wb_quartiles, x=\"Gross domestic product: % growth : 2016\", hue=\"category\")\nsns.rugplot([q1, q2, q3], c=\"firebrick\", lw=6, height=0.1);\n\n\n\n\n\nIn a box plot, the lower extent of the box lies at Q1, while the upper extent of the box lies at Q3. The horizontal line in the middle of the box corresponds to Q2 (equivalently, the median).\n\nsns.boxplot(data=wb, y='Gross domestic product: % growth : 2016');\n\n\n\n\nThe whiskers of a box-plot are the two points that lie at the [\\(1^{st}\\) Quartile \\(-\\) (\\(1.5\\times\\) IQR)], and the [\\(3^{rd}\\) Quartile \\(+\\) (\\(1.5\\times\\) IQR)]. They are the lower and upper ranges of “normal” data (the points excluding outliers).\nThe different forms of information contained in a box plot can be summarised as follows:\n\n\n\nA violin plot displays quartile information, albeit a bit more subtly through smoothed density curves. Look closely at the center vertical bar of the violin plot below; the three quartiles and “whiskers” are still present!\n\nsns.violinplot(data=wb, y='Gross domestic product: % growth : 2016');\n\n\n\n\n\n\n7.6.2 Side-by-Side Box and Violin Plots\nPlotting side-by-side box or violin plots allows us to compare distributions across different categories. In other words, they enable us to plot both a qualitative variable and a quantitative continuous variable in one visualization.\nWith seaborn, we can easily create side-by-side plots by specifying both an x and y column.\n\nsns.boxplot(data=wb, x=\"Continent\", y='Gross domestic product: % growth : 2016');\n\n\n\n\n\n\n7.6.3 Histograms\nYou are likely familiar with histograms from Data 8. A histogram collects continuous data into bins, then plots this binned data. Each bin reflects the density of datapoints with values that lie between the left and right ends of the bin; in other words, the area of each bin is proportional to the percentage of datapoints it contains.\n\n7.6.3.1 Plotting Histograms\nBelow, we plot a histogram using matplotlib and seaborn. Which graph do you prefer?\n\n# The `edgecolor` argument controls the color of the bin edges\ngni = wb[\"Gross national income per capita, Atlas method: $: 2016\"]\nplt.hist(gni, density=True, edgecolor=\"white\")\n\n# Add labels\nplt.xlabel(\"Gross national income per capita\")\nplt.ylabel(\"Density\")\nplt.title(\"Distribution of gross national income per capita\");\n\n\n\n\n\nsns.histplot(data=wb, x=\"Gross national income per capita, Atlas method: $: 2016\", stat=\"density\")\nplt.title(\"Distribution of gross national income per capita\");\n\n\n\n\n\n\n7.6.3.2 Overlaid Histograms\nWe can overlay histograms (or density curves) to compare distributions across qualitative categories.\nThe hue parameter of sns.histplot specifies the column that should be used to determine the color of each category. 
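For instance (a minimal sketch that is not part of the original notes, reusing the existing Continent column of wb), passing the same argument to sns.kdeplot draws one smoothed density curve per group:

sns.kdeplot(data=wb, x='Gross national income per capita, Atlas method: $: 2016', hue='Continent');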
hue can be used in many seaborn plotting functions.\nNotice that the resulting plot includes a legend describing which color corresponds to each hemisphere – a legend should always be included if color is used to encode information in a visualization!\n\n# Create a new variable to store the hemisphere in which each country is located\nnorth = [\"Asia\", \"Europe\", \"N. America\"]\nsouth = [\"Africa\", \"Oceania\", \"S. America\"]\nwb.loc[wb[\"Continent\"].isin(north), \"Hemisphere\"] = \"Northern\"\nwb.loc[wb[\"Continent\"].isin(south), \"Hemisphere\"] = \"Southern\"\n\n\nsns.histplot(data=wb, x=\"Gross national income per capita, Atlas method: $: 2016\", hue=\"Hemisphere\", stat=\"density\")\nplt.title(\"Distribution of gross national income per capita\");\n\n\n\n\nAgain, each bin of a histogram is scaled such that its area is proportional to the percentage of all datapoints that it contains.\n\ndensities, bins, _ = plt.hist(gni, density=True, edgecolor=\"white\", bins=5)\nplt.xlabel(\"Gross national income per capita\")\nplt.ylabel(\"Density\")\n\nprint(f\"First bin has width {bins[1]-bins[0]} and height {densities[0]}\")\nprint(f\"This corresponds to {bins[1]-bins[0]} * {densities[0]} = {(bins[1]-bins[0])*densities[0]*100}% of the data\")\n\nFirst bin has width 16410.0 and height 4.7741589911386953e-05\nThis corresponds to 16410.0 * 4.7741589911386953e-05 = 78.343949044586% of the data\n\n\n\n\n\n\n\n7.6.3.3 Evaluating Histograms\nHistograms allow us to assess a distribution by their shape. There are a few properties of histograms we can analyze:\n\nSkewness and Tails\n\nSkewed left vs skewed right\nLeft tail vs right tail\n\nOutliers\n\nUsing percentiles\n\nModes\n\nMost commonly occuring data\n\n\n\n7.6.3.3.1 Skewness and Tails\nThe skew of a histogram describes the direction in which its “tail” extends. - A distribution with a long right tail is skewed right (such as Gross national income per capita). In a right-skewed distribution, the few large outliers “pull” the mean to the right of the median.\n\nsns.histplot(data = wb, x = 'Gross national income per capita, Atlas method: $: 2016', stat = 'density');\nplt.title('Distribution with a long right tail')\n\nText(0.5, 1.0, 'Distribution with a long right tail')\n\n\n\n\n\n\nA distribution with a long left tail is skewed left (such as Access to an improved water source). In a left-skewed distribution, the few small outliers “pull” the mean to the left of the median.\n\nIn the case where a distribution has equal-sized right and left tails, it is symmetric. The mean is approximately equal to the median. Think of mean as the balancing point of the distribution.\n\nsns.histplot(data = wb, x = 'Access to an improved water source: % of population: 2015', stat = 'density');\nplt.title('Distribution with a long left tail')\n\nText(0.5, 1.0, 'Distribution with a long left tail')\n\n\n\n\n\n\n\n7.6.3.3.2 Outliers\nLoosely speaking, an outlier is defined as a data point that lies an abnormally large distance away from other values. Let’s make this more concrete. As you may have observed in the box plot infographic earlier, we define outliers to be the data points that fall beyond the whiskers. Specifically, values that are less than the [\\(1^{st}\\) Quartile \\(-\\) (\\(1.5\\times\\) IQR)], or greater than [\\(3^{rd}\\) Quartile \\(+\\) (\\(1.5\\times\\) IQR).]\n\n\n7.6.3.3.3 Modes\nIn Data 100, we describe a “mode” of a histogram as a peak in the distribution. 
Often, however, it is difficult to determine what counts as its own “peak.” For example, the number of peaks in the distribution of HIV rates across different countries varies depending on the number of histogram bins we plot.\nIf we set the number of bins to 5, the distribution appears unimodal.\n\n# Rename the very long column name for convenience\nwb = wb.rename(columns={'Antiretroviral therapy coverage: % of people living with HIV: 2015':\"HIV rate\"})\n# With 5 bins, it seems that there is only one peak\nsns.histplot(data=wb, x=\"HIV rate\", stat=\"density\", bins=5)\nplt.title(\"5 histogram bins\");\n\n\n\n\n\n# With 10 bins, there seem to be two peaks\n\nsns.histplot(data=wb, x=\"HIV rate\", stat=\"density\", bins=10)\nplt.title(\"10 histogram bins\");\n\n\n\n\n\n# And with 20 bins, it becomes hard to say what counts as a \"peak\"!\n\nsns.histplot(data=wb, x =\"HIV rate\", stat=\"density\", bins=20)\nplt.title(\"20 histogram bins\");\n\n\n\n\nIn part, it is these ambiguities that motivate us to consider using Kernel Density Estimation (KDE), which we will explore more in the next lecture." + }, + { + "objectID": "visualization_2/visualization_2.html#kernel-density-estimation", + "href": "visualization_2/visualization_2.html#kernel-density-estimation", + "title": "8  Visualization II", + "section": "8.1 Kernel Density Estimation", + "text": "8.1 Kernel Density Estimation\nOften, we want to identify general trends across a distribution, rather than focus on detail. Smoothing a distribution helps generalize the structure of the data and eliminate noise.\n\n8.1.1 KDE Theory\nA kernel density estimate (KDE) is a smooth, continuous function that approximates a curve. It allows us to represent general trends in a distribution without focusing on the details, which is useful for analyzing the broad structure of a dataset.\nMore formally, a KDE attempts to approximate the underlying probability distribution from which our dataset was drawn. You may have encountered the idea of a probability distribution in your other classes; if not, we’ll discuss it at length in the next lecture. For now, you can think of a probability distribution as a description of how likely it is for us to sample a particular value in our dataset.\nA KDE curve estimates the probability density function of a random variable. 
Consider the example below, where we have used sns.displot to plot both a histogram (containing the data points we actually collected) and a KDE curve (representing the approximated probability distribution from which this data was drawn) using data from the World Bank dataset (wb).\n\n\nCode\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nwb = pd.read_csv(\"data/world_bank.csv\", index_col=0)\nwb = wb.rename(columns={'Antiretroviral therapy coverage: % of people living with HIV: 2015':\"HIV rate\",\n 'Gross national income per capita, Atlas method: $: 2016':'gni'})\nwb.head()\n\n\n\n\n\n\n\n\n\nContinent\nCountry\nPrimary completion rate: Male: % of relevant age group: 2015\nPrimary completion rate: Female: % of relevant age group: 2015\nLower secondary completion rate: Male: % of relevant age group: 2015\nLower secondary completion rate: Female: % of relevant age group: 2015\nYouth literacy rate: Male: % of ages 15-24: 2005-14\nYouth literacy rate: Female: % of ages 15-24: 2005-14\nAdult literacy rate: Male: % ages 15 and older: 2005-14\nAdult literacy rate: Female: % ages 15 and older: 2005-14\n...\nAccess to improved sanitation facilities: % of population: 1990\nAccess to improved sanitation facilities: % of population: 2015\nChild immunization rate: Measles: % of children ages 12-23 months: 2015\nChild immunization rate: DTP3: % of children ages 12-23 months: 2015\nChildren with acute respiratory infection taken to health provider: % of children under age 5 with ARI: 2009-2016\nChildren with diarrhea who received oral rehydration and continuous feeding: % of children under age 5 with diarrhea: 2009-2016\nChildren sleeping under treated bed nets: % of children under age 5: 2009-2016\nChildren with fever receiving antimalarial drugs: % of children under age 5 with fever: 2009-2016\nTuberculosis: Treatment success rate: % of new cases: 2014\nTuberculosis: Cases detection rate: % of new estimated cases: 2015\n\n\n\n\n0\nAfrica\nAlgeria\n106.0\n105.0\n68.0\n85.0\n96.0\n92.0\n83.0\n68.0\n...\n80.0\n88.0\n95.0\n95.0\n66.0\n42.0\nNaN\nNaN\n88.0\n80.0\n\n\n1\nAfrica\nAngola\nNaN\nNaN\nNaN\nNaN\n79.0\n67.0\n82.0\n60.0\n...\n22.0\n52.0\n55.0\n64.0\nNaN\nNaN\n25.9\n28.3\n34.0\n64.0\n\n\n2\nAfrica\nBenin\n83.0\n73.0\n50.0\n37.0\n55.0\n31.0\n41.0\n18.0\n...\n7.0\n20.0\n75.0\n79.0\n23.0\n33.0\n72.7\n25.9\n89.0\n61.0\n\n\n3\nAfrica\nBotswana\n98.0\n101.0\n86.0\n87.0\n96.0\n99.0\n87.0\n89.0\n...\n39.0\n63.0\n97.0\n95.0\nNaN\nNaN\nNaN\nNaN\n77.0\n62.0\n\n\n5\nAfrica\nBurundi\n58.0\n66.0\n35.0\n30.0\n90.0\n88.0\n89.0\n85.0\n...\n42.0\n48.0\n93.0\n94.0\n55.0\n43.0\n53.8\n25.4\n91.0\n51.0\n\n\n\n\n5 rows × 47 columns\n\n\n\n\nimport seaborn as sns\nimport matplotlib.pyplot as plt\n\nsns.displot(data = wb, x = 'HIV rate', \\\n kde = True, stat = \"density\")\n\nplt.title(\"Distribution of HIV rates\");\n\n/Users/Ishani/micromamba/lib/python3.9/site-packages/seaborn/axisgrid.py:118: UserWarning:\n\nThe figure layout has changed to tight\n\n\n\n\n\n\nNotice that the smooth KDE curve is higher when the histogram bins are taller. You can think of the height of the KDE curve as representing how “probable” it is that we randomly sample a datapoint with the corresponding value. 
This intuitively makes sense – if we have already collected more datapoints with a particular value (resulting in a tall histogram bin), it is more likely that, if we randomly sample another datapoint, we will sample one with a similar value (resulting in a high KDE curve).\nThe area under a probability density function should always integrate to 1, representing the fact that the total probability of a distribution should always sum to 100%. Hence, a KDE curve will always have an area under the curve of 1.\n\n\n8.1.2 Constructing a KDE\nWe perform kernel density estimation using three steps.\n\nPlace a kernel at each datapoint.\nNormalize the kernels to have a total area of 1 (across all kernels).\nSum the normalized kernels.\n\nWe’ll explain what a “kernel” is momentarily.\nTo make things simpler, let’s construct a KDE for a small, artificially generated dataset of 5 datapoints: \\([2.2, 2.8, 3.7, 5.3, 5.7]\\). In the plot below, each vertical bar represents one data point.\n\n\nCode\ndata = [2.2, 2.8, 3.7, 5.3, 5.7]\n\nsns.rugplot(data, height=0.3)\n\nplt.xlabel(\"Data\")\nplt.ylabel(\"Density\")\nplt.xlim(-3, 10)\nplt.ylim(0, 0.5);\n\n\n\n\n\nOur goal is to create the following KDE curve, which was generated automatically by sns.kdeplot.\n\n\nCode\nsns.kdeplot(data)\n\nplt.xlabel(\"Data\")\nplt.xlim(-3, 10)\nplt.ylim(0, 0.5);\n\n\n\n\n\n\n8.1.2.1 Step 1: Place a Kernel at Each Data Point\nTo begin generating a density curve, we need to choose a kernel and bandwidth value (\\(\\alpha\\)). What are these exactly?\nA kernel is a density curve. It is the mathematical function that attempts to capture the randomness of each data point in our sampled data. To explain what this means, consider just one of the datapoints in our dataset: \\(2.2\\). We obtained this datapoint by randomly sampling some information out in the real world (you can imagine \\(2.2\\) as representing a single measurement taken in an experiment, for example). If we were to sample a new datapoint, we may obtain a slightly different value. It could be higher than \\(2.2\\); it could also be lower than \\(2.2\\). We make the assumption that any future sampled datapoints will likely be similar in value to the data we’ve already drawn. This means that our kernel – our description of the probability of randomly sampling any new value – will be greatest at the datapoint we’ve already drawn but still have non-zero probability above and below it. The area under any kernel should integrate to 1, representing the total probability of drawing a new datapoint.\nA bandwidth value, usually denoted by \\(\\alpha\\), represents the width of the kernel. A large value of \\(\\alpha\\) will result in a wide, short kernel function, while a small value with result in a narrow, tall kernel.\nBelow, we place a Gaussian kernel, plotted in orange, over the datapoint \\(2.2\\). A Gaussian kernel is simply the normal distribution, which you may have called a bell curve in Data 8.\n\n\nCode\ndef gaussian_kernel(x, z, a):\n # We'll discuss where this mathematical formulation came from later\n return (1/np.sqrt(2*np.pi*a**2)) * np.exp((-(x - z)**2 / (2 * a**2)))\n\n# Plot our datapoint\nsns.rugplot([2.2], height=0.3)\n\n# Plot the kernel\nx = np.linspace(-3, 10, 1000)\nplt.plot(x, gaussian_kernel(x, 2.2, 1))\n\nplt.xlabel(\"Data\")\nplt.ylabel(\"Density\")\nplt.xlim(-3, 10)\nplt.ylim(0, 0.5);\n\n\n\n\n\nTo begin creating our KDE, we place a kernel on each datapoint in our dataset. 
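Written out (this anticipates the general formula discussed later in this note), the finished estimate will be the average of these kernels:
\[\hat{f}_{\alpha}(x) = \frac{1}{n} \sum_{i=1}^{n} K_{\alpha}(x, x_i)\]
where each \(K_{\alpha}(x, x_i)\) is a kernel centered at the observed datapoint \(x_i\), \(n\) is the number of datapoints, and \(\alpha\) is the bandwidth.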
For our dataset of 5 points, we will have 5 kernels.\n\n\nCode\n# You will work with the functions below in Lab 4\ndef create_kde(kernel, pts, a):\n # Takes in a kernel, set of points, and alpha\n # Returns the KDE as a function\n def f(x):\n output = 0\n for pt in pts:\n output += kernel(x, pt, a)\n return output / len(pts) # Normalization factor\n return f\n\ndef plot_kde(kernel, pts, a):\n # Calls create_kde and plots the corresponding KDE\n f = create_kde(kernel, pts, a)\n x = np.linspace(min(pts) - 5, max(pts) + 5, 1000)\n y = [f(xi) for xi in x]\n plt.plot(x, y);\n \ndef plot_separate_kernels(kernel, pts, a, norm=False):\n # Plots individual kernels, which are then summed to create the KDE\n x = np.linspace(min(pts) - 5, max(pts) + 5, 1000)\n for pt in pts:\n y = kernel(x, pt, a)\n if norm:\n y /= len(pts)\n plt.plot(x, y)\n \n plt.show();\n \nplt.xlim(-3, 10)\nplt.ylim(0, 0.5)\nplt.xlabel(\"Data\")\nplt.ylabel(\"Density\")\n\nplot_separate_kernels(gaussian_kernel, data, a = 1)\n\n\n\n\n\n\n\n8.1.2.2 Step 2: Normalize Kernels to Have a Total Area of 1\nAbove, we said that each kernel has an area of 1. Earlier, we also said that our goal is to construct a KDE curve using these kernels with a total area of 1. If we were to directly sum the kernels as they are, we would produce a KDE curve with an integrated area of (5 kernels) \\(\\times\\) (area of 1 each) = 5. To avoid this, we will normalize each of our kernels. This involves multiplying each kernel by \\(\\frac{1}{\\#\\:\\text{datapoints}}\\).\nIn the cell below, we multiply each of our 5 kernels by \\(\\frac{1}{5}\\) to apply normalization.\n\n\nCode\nplt.xlim(-3, 10)\nplt.ylim(0, 0.5)\nplt.xlabel(\"Data\")\nplt.ylabel(\"Density\")\n\n# The `norm` argument specifies whether or not to normalize the kernels\nplot_separate_kernels(gaussian_kernel, data, a = 1, norm = True)\n\n\n\n\n\n\n\n8.1.2.3 Step 3: Sum the Normalized Kernels\nOur KDE curve is the sum of the normalized kernels. Notice that the final curve is identical to the plot generated by sns.kdeplot we saw earlier!\n\n\nCode\nplt.xlim(-3, 10)\nplt.ylim(0, 0.5)\nplt.xlabel(\"Data\")\nplt.ylabel(\"Density\")\n\nplot_kde(gaussian_kernel, data, a = 1)\n\n\n\n\n\n\n\n\n8.1.3 Kernel Functions and Bandwidths\n\n\n\nA general “KDE formula” function is given above.\n\n\\(K_{\\alpha}(x, x_i)\\) is the kernel centered on the observation i.\n\nEach kernel individually has area 1.\nx represents any number on the number line. It is the input to our function.\n\n\\(n\\) is the number of observed datapoints that we have.\n\nWe multiply by \\(\\frac{1}{n}\\) so that the total area of the KDE is still 1.\n\nEach \\(x_i \\in \\{x_1, x_2, \\dots, x_n\\}\\) represents an observed datapoint.\n\nThese are what we use to create our KDE by summing multiple shifted kernels centered at these points.\n\n\n\n\\(\\alpha\\) (alpha) is the bandwidth or smoothing parameter.\n\nA kernel (for our purposes) is a valid density function. This means it:\n\nMust be non-negative for all inputs.\nMust integrate to 1.\n\n\n8.1.3.1 Gaussian Kernel\nThe most common kernel is the Gaussian kernel. 
The Gaussian kernel is equivalent to the Gaussian probability density function (the Normal distribution), centered at the observed value with a standard deviation of (this is known as the bandwidth parameter).\n\\[K_a(x, x_i) = \\frac{1}{\\sqrt{2\\pi\\alpha^{2}}}e^{-\\frac{(x-x_i)^{2}}{2\\alpha^{2}}}\\]\nIn this formula:\n\n\\(x\\) (no subscript) represents any value along the x-axis of our plot\n\\(x_i\\) represents the \\(i\\) -th datapoint in our dataset. It is one of the values that we have actually collected in our data sampling process. In our example earlier, \\(x_i=2.2\\). Those of you who have taken a probability class may recognize \\(x_i\\) as the mean of the normal distribution.\nEach kernel is centered on our observed values, so its distribution mean is \\(x_i\\).\n\\(\\alpha\\) is the bandwidth parameter, representing the width of our kernel. More formally, \\(\\alpha\\) is the standard deviation of the Gaussian curve.\n\nA large value of \\(\\alpha\\) will produce a kernel that is wider and shorter – this leads to a smoother KDE when the kernels are summed together.\nA small value of \\(\\alpha\\) will produce a narrower, taller kernel, and, with it, a noisier KDE.\n\n\nThe details of this (admittedly intimidating) formula are less important than understanding its role in kernel density estimation – this equation gives us the shape of each kernel.\n\n\n\n\n\n\n\nGaussian Kernel, \\(\\alpha\\) = 0.1\nGaussian Kernel, \\(\\alpha\\) = 1\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nGaussian Kernel, \\(\\alpha\\) = 2\nGaussian Kernel, \\(\\alpha\\) = 10\n\n\n\n\n\n\n\n\n\n\n\n8.1.3.2 Boxcar Kernel\nAnother example of a kernel is the Boxcar kernel. The boxcar kernel assigns a uniform density to points within a “window” of the observation, and a density of 0 elsewhere. The equation below is a boxcar kernel with the center at \\(x_i\\) and the bandwidth of \\(\\alpha\\).\n\\[K_a(x, x_i) = \\begin{cases}\n \\frac{1}{\\alpha}, & |x - x_i| \\le \\frac{\\alpha}{2}\\\\\n 0, & \\text{else }\n \\end{cases}\\]\nThe boxcar kernel is seldom used in practice – we include it here to demonstrate that a kernel function can take whatever form you would like, provided it integrates to 1 and does not output negative values.\n\n\nCode\ndef boxcar_kernel(alpha, x, z):\n return (((x-z)>=-alpha/2)&((x-z)<=alpha/2))/alpha\n\nxs = np.linspace(-5, 5, 200)\nalpha=1\nkde_curve = [boxcar_kernel(alpha, x, 0) for x in xs]\nplt.plot(xs, kde_curve);\n\n\n\n\n\nThe Boxcar kernel centered at 0 with bandwidth \\(\\alpha\\) = 1.\n\n\n\n\nThe diagram on the right is how the density curve for our 5 point dataset would have looked had we used the Boxcar kernel with bandwidth \\(\\alpha\\) = 1.\n\n\n\n\n\n\n\nKDE\nBoxcar" + }, + { + "objectID": "visualization_2/visualization_2.html#diving-deeper-into-displot", + "href": "visualization_2/visualization_2.html#diving-deeper-into-displot", + "title": "8  Visualization II", + "section": "8.2 Diving Deeper into displot", + "text": "8.2 Diving Deeper into displot\nAs we saw earlier, we can use seaborn’s displot function to plot various distributions. In particular, displot allows you to specify the kind of plot and is a wrapper for histplot, kdeplot, and ecdfplot.\nBelow, we can see a couple of examples of how sns.displot can be used to plot various distributions.\nFirst, we can plot a histogram by setting kind to \"hist\". 
Note that here we’ve specified stat = density to normalize the histogram such that the area under the histogram is equal to 1.\n\nsns.displot(data=wb, \n x=\"gni\", \n kind=\"hist\", \n stat=\"density\") # default: stat=count and density integrates to 1\nplt.title(\"Distribution of gross national income per capita\");\n\n/Users/Ishani/micromamba/lib/python3.9/site-packages/seaborn/axisgrid.py:118: UserWarning:\n\nThe figure layout has changed to tight\n\n\n\n\n\n\nNow, what if we want to generate a KDE plot? We can set kind = to \"kde\"!\n\nsns.displot(data=wb, \n x=\"gni\", \n kind='kde')\nplt.title(\"Distribution of gross national income per capita\");\n\n/Users/Ishani/micromamba/lib/python3.9/site-packages/seaborn/axisgrid.py:118: UserWarning:\n\nThe figure layout has changed to tight\n\n\n\n\n\n\nAnd finally, if we want to generate an Empirical Cumulative Distribution Function (ECDF), we can specify kind = \"ecdf\".\n\nsns.displot(data=wb, \n x=\"gni\", \n kind='ecdf')\nplt.title(\"Cumulative Distribution of gross national income per capita\");\n\n/Users/Ishani/micromamba/lib/python3.9/site-packages/seaborn/axisgrid.py:118: UserWarning:\n\nThe figure layout has changed to tight" + }, + { + "objectID": "visualization_2/visualization_2.html#relationships-between-quantitative-variables", + "href": "visualization_2/visualization_2.html#relationships-between-quantitative-variables", + "title": "8  Visualization II", + "section": "8.3 Relationships Between Quantitative Variables", + "text": "8.3 Relationships Between Quantitative Variables\nUp until now, we’ve discussed how to visualize single-variable distributions. Going beyond this, we want to understand the relationship between pairs of numerical variables.\n\n8.3.0.1 Scatter Plots\nScatter plots are one of the most useful tools in representing the relationship between pairs of quantitative variables. They are particularly important in gauging the strength, or correlation, of the relationship between variables. Knowledge of these relationships can then motivate decisions in our modeling process.\nIn matplotlib, we use the function plt.scatter to generate a scatter plot. Notice that, unlike our examples of plotting single-variable distributions, now we specify sequences of values to be plotted along the x-axis and the y-axis.\n\nplt.scatter(wb[\"per capita: % growth: 2016\"], \\\n wb['Adult literacy rate: Female: % ages 15 and older: 2005-14'])\n\nplt.xlabel(\"% growth per capita\")\nplt.ylabel(\"Female adult literacy rate\")\nplt.title(\"Female adult literacy against % growth\");\n\n\n\n\nIn seaborn, we call the function sns.scatterplot. We use the x and y parameters to indicate the values to be plotted along the x and y axes, respectively. By using the hue parameter, we can specify a third variable to be used for coloring each scatter point.\n\nsns.scatterplot(data = wb, x = \"per capita: % growth: 2016\", \\\n y = \"Adult literacy rate: Female: % ages 15 and older: 2005-14\", \n hue = \"Continent\")\n\nplt.title(\"Female adult literacy against % growth\");\n\n\n\n\n\n8.3.0.1.1 Overplotting\nAlthough the plots above communicate the general relationship between the two plotted variables, they both suffer a major limitation – overplotting. Overplotting occurs when scatter points with similar values are stacked on top of one another, making it difficult to see the number of scatter points actually plotted in the visualization. 
Notice how in the upper righthand region of the plots, we cannot easily tell just how many points have been plotted. This makes our visualizations difficult to interpret.\nWe have a few methods to help reduce overplotting:\n\nDecreasing the size of the scatter point markers can improve readability. We do this by setting a new value to the size parameter, s, of plt.scatter or sns.scatterplot.\nJittering is the process of adding a small amount of random noise to all x and y values to slightly shift the position of each datapoint. By randomly shifting all the data by some small distance, we can discern individual points more clearly without modifying the major trends of the original dataset.\n\nIn the cell below, we first jitter the data using np.random.uniform, then re-plot it with smaller markers. The resulting plot is much easier to interpret.\n\n# Setting a seed ensures that we produce the same plot each time\n# This means that the course notes will not change each time you access them\nnp.random.seed(150)\n\n# This call to np.random.uniform generates random numbers between -1 and 1\n# We add these random numbers to the original x data to jitter it slightly\nx_noise = np.random.uniform(-1, 1, len(wb))\njittered_x = wb[\"per capita: % growth: 2016\"] + x_noise\n\n# Repeat for y data\ny_noise = np.random.uniform(-5, 5, len(wb))\njittered_y = wb[\"Adult literacy rate: Female: % ages 15 and older: 2005-14\"] + y_noise\n\n# Setting the size parameter `s` changes the size of each point\nplt.scatter(jittered_x, jittered_y, s=15)\n\nplt.xlabel(\"% growth per capita (jittered)\")\nplt.ylabel(\"Female adult literacy rate (jittered)\")\nplt.title(\"Female adult literacy against % growth\");\n\n\n\n\n\n\n\n8.3.0.2 lmplot and jointplot\nseaborn also includes several built-in functions for creating more sophisticated scatter plots. Two of the most commonly used examples are sns.lmplot and sns.jointplot.\nsns.lmplot plots both a scatter plot and a linear regression line, all in one function call. We’ll discuss linear regression in a few lectures.\n\nsns.lmplot(data = wb, x = \"per capita: % growth: 2016\", \\\n y = \"Adult literacy rate: Female: % ages 15 and older: 2005-14\")\n\nplt.title(\"Female adult literacy against % growth\");\n\n/Users/Ishani/micromamba/lib/python3.9/site-packages/seaborn/axisgrid.py:118: UserWarning:\n\nThe figure layout has changed to tight\n\n\n\n\n\n\nsns.jointplot creates a visualization with three components: a scatter plot, a histogram of the distribution of x values, and a histogram of the distribution of y values.\n\nsns.jointplot(data = wb, x = \"per capita: % growth: 2016\", \\\n y = \"Adult literacy rate: Female: % ages 15 and older: 2005-14\")\n\n# plt.suptitle allows us to shift the title up so it does not overlap with the histogram\nplt.suptitle(\"Female adult literacy against % growth\")\nplt.subplots_adjust(top=0.9);\n\n\n\n\n\n\n8.3.0.3 Hex plots\nFor datasets with a very large number of datapoints, jittering is unlikely to fully resolve the issue of overplotting. In these cases, we can attempt to visualize our data by its density, rather than displaying each individual datapoint.\nHex plots can be thought of as two-dimensional histograms that show the joint distribution between two variables. This is particularly useful when working with very dense data. In a hex plot, the x-y plane is binned into hexagons. 
Hexagons that are darker in color indicate a greater density of data – that is, there are more data points that lie in the region enclosed by the hexagon.\nWe can generate a hex plot using sns.jointplot modified with the kind parameter.\n\nsns.jointplot(data = wb, x = \"per capita: % growth: 2016\", \\\n y = \"Adult literacy rate: Female: % ages 15 and older: 2005-14\", \\\n kind = \"hex\")\n\n# plt.suptitle allows us to shift the title up so it does not overlap with the histogram\nplt.suptitle(\"Female adult literacy against % growth\")\nplt.subplots_adjust(top=0.9);\n\n\n\n\n\n\n8.3.0.4 Contour Plots\nContour plots are an alternative way of plotting the joint distribution of two variables. You can think of them as the 2-dimensional versions of KDE plots. A contour plot can be interpreted in a similar way to a topographic map. Each contour line represents an area that has the same density of datapoints throughout the region. Contours marked with darker colors contain more datapoints (a higher density) in that region.\nsns.kdeplot will generate a contour plot if we specify both x and y data.\n\nsns.kdeplot(data = wb, x = \"per capita: % growth: 2016\", \\\n y = \"Adult literacy rate: Female: % ages 15 and older: 2005-14\", \\\n fill = True)\n\nplt.title(\"Female adult literacy against % growth\");" + }, + { + "objectID": "visualization_2/visualization_2.html#transformations", + "href": "visualization_2/visualization_2.html#transformations", + "title": "8  Visualization II", + "section": "8.4 Transformations", + "text": "8.4 Transformations\nWe have now covered visualizations in great depth, looking into various forms of visualizations, plotting libraries, and high-level theory.\nMuch of this was done to uncover insights in data, which will prove necessary when we begin building models of data later in the course. A strong graphical correlation between two variables hints at an underlying relationship that we may want to study in greater detail. However, relying on visual relationships alone is limiting - not all plots show association. The presence of outliers and other statistical anomalies makes it hard to interpret data.\nTransformations are the process of manipulating data to find significant relationships between variables. These are often found by applying mathematical functions to variables that “transform” their range of possible values and highlight some previously hidden associations between data.\nTo see why we may want to transform data, consider the following plot of adult literacy rates against gross national income.\n\n\nCode\n# Some data cleaning to help with the next example\ndf = pd.DataFrame(index=wb.index)\ndf['lit'] = wb['Adult literacy rate: Female: % ages 15 and older: 2005-14'] \\\n + wb[\"Adult literacy rate: Male: % ages 15 and older: 2005-14\"]\ndf['inc'] = wb['gni']\ndf.dropna(inplace=True)\n\nplt.scatter(df[\"inc\"], df[\"lit\"])\nplt.xlabel(\"Gross national income per capita\")\nplt.ylabel(\"Adult literacy rate\")\nplt.title(\"Adult literacy rate against GNI per capita\");\n\n\n\n\n\nThis plot is difficult to interpret for two reasons:\n\nThe data shown in the visualization appears almost “smushed” – it is heavily concentrated in the upper lefthand region of the plot. Even if we jittered the dataset, we likely would not be able to fully assess all datapoints in that area.\nIt is hard to generalize a clear relationship between the two plotted variables. 
While adult literacy rate appears to share some positive relationship with gross national income, we are not able to describe the specifics of this trend in much detail.\n\nA transformation would allow us to visualize this data more clearly, which, in turn, would enable us to describe the underlying relationship between our variables of interest.\nWe will most commonly apply a transformation to linearize a relationship between variables. If we find a transformation to make a scatter plot of two variables linear, we can “backtrack” to find the exact relationship between the variables. This helps us in two major ways. Firstly, linear relationships are particularly simple to interpret – we have an intuitive sense of what the slope and intercept of a linear trend represent, and how they can help us understand the relationship between two variables. Secondly, linear relationships are the backbone of linear models. We will begin exploring linear modeling in great detail next week. As we’ll soon see, linear models become much more effective when we are working with linearized data.\nIn the remainder of this note, we will discuss how to linearize a dataset to produce the result below. Notice that the resulting plot displays a rough linear relationship between the values plotted on the x and y axes.\n\n\n8.4.1 Linearization and Applying Transformations\nTo linearize a relationship, begin by asking yourself: what makes the data non-linear? It is helpful to repeat this question for each variable in your visualization.\nLet’s start by considering the gross national income variable in our plot above. Looking at the y values in the scatter plot, we can see that many large y values are all clumped together, compressing the vertical axis. The scale of the horizontal axis is also being distorted by the few large outlying x values on the right.\n\nIf we decreased the size of these outliers relative to the bulk of the data, we could reduce the distortion of the horizontal axis. How can we do this? We need a transformation that will:\n\nDecrease the magnitude of large x values by a significant amount.\nNot drastically change the magnitude of small x values.\n\nOne function that produces this result is the log transformation. When we take the logarithm of a large number, the original number will decrease in magnitude dramatically. Conversely, when we take the logarithm of a small number, the original number does not change its value by as significant of an amount (to illustrate this, consider the difference between \\(\\log{(100)} = 4.61\\) and \\(\\log{(10)} = 2.3\\)).\nIn Data 100 (and most upper-division STEM classes), \\(\\log\\) is used to refer to the natural logarithm with base \\(e\\).\n\n# np.log takes the logarithm of an array or Series\nplt.scatter(np.log(df[\"inc\"]), df[\"lit\"])\n\nplt.xlabel(\"Log(gross national income per capita)\")\nplt.ylabel(\"Adult literacy rate\")\nplt.title(\"Adult literacy rate against Log(GNI per capita)\");\n\n\n\n\nAfter taking the logarithm of our x values, our plot appears much more balanced in its horizontal scale. We no longer have many datapoints clumped on one end and a few outliers out at extreme values.\nLet’s repeat this reasoning for the y values. Considering only the vertical axis of the plot, notice how there are many datapoints concentrated at large y values. Only a few datapoints lie at smaller values of y.\nIf we were to “spread out” these large values of y more, we would no longer see the dense concentration in one region of the y-axis. 
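As a brief aside (an illustrative sketch that is not part of the original notes, using made-up values), it can help to see numerically how these transformations treat values of different magnitudes: np.log compresses large values far more than small ones, while raising to a power spreads them out.

import numpy as np

vals = np.array([1, 10, 100, 1000, 10000])
np.log(vals)   # approx. [0, 2.3, 4.61, 6.91, 9.21]: large values are pulled together
vals ** 2      # [1, 100, 10000, 1000000, 100000000]: large values are pushed far apart

This is exactly the intuition we will use for the y values.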
We need a transformation that will:\n\nIncrease the magnitude of large values of y so these datapoints are distributed more broadly on the vertical scale,\nNot substantially alter the scaling of small values of y (we do not want to drastically modify the lower end of the y axis, which is already distributed evenly on the vertical scale).\n\nIn this case, it is helpful to apply a power transformation – that is, raise our y values to a power. Let’s try raising our adult literacy rate values to the power of 4. Large values raised to the power of 4 will increase in magnitude proportionally much more than small values raised to the power of 4 (consider the difference between \\(2^4 = 16\\) and \\(200^4 = 1600000000\\)).\n\n# Apply a log transformation to the x values and a power transformation to the y values\nplt.scatter(np.log(df[\"inc\"]), df[\"lit\"]**4)\n\nplt.xlabel(\"Log(gross national income per capita)\")\nplt.ylabel(\"Adult literacy rate (4th power)\")\nplt.suptitle(\"Adult literacy rate (4th power) against Log(GNI per capita)\")\nplt.subplots_adjust(top=0.9);\n\n\n\n\nOur scatter plot is looking a lot better! Now, we are plotting the log of our original x values on the horizontal axis, and the 4th power of our original y values on the vertical axis. We start to see an approximate linear relationship between our transformed variables.\nWhat can we take away from this? We now know that the log of gross national income and adult literacy to the power of 4 are roughly linearly related. If we denote the original, untransformed gross national income values as \\(x\\) and the original adult literacy rate values as \\(y\\), we can use the standard form of a linear fit to express this relationship:\n\\[y^4 = m(\\log{x}) + b\\]\nWhere \\(m\\) represents the slope of the linear fit, while \\(b\\) represents the intercept.\nThe cell below computes \\(m\\) and \\(b\\) for our transformed data. We’ll discuss how this code was generated in a future lecture.\n\n\nCode\n# The code below fits a linear regression model. We'll discuss it at length in a future lecture\nfrom sklearn.linear_model import LinearRegression\n\nmodel = LinearRegression()\nmodel.fit(np.log(df[[\"inc\"]]), df[\"lit\"]**4)\nm, b = model.coef_[0], model.intercept_\n\nprint(f\"The slope, m, of the transformed data is: {m}\")\nprint(f\"The intercept, b, of the transformed data is: {b}\")\n\ndf = df.sort_values(\"inc\")\nplt.scatter(np.log(df[\"inc\"]), df[\"lit\"]**4, label=\"Transformed data\")\nplt.plot(np.log(df[\"inc\"]), m*np.log(df[\"inc\"])+b, c=\"red\", label=\"Linear regression\")\nplt.xlabel(\"Log(gross national income per capita)\")\nplt.ylabel(\"Adult literacy rate (4th power)\")\nplt.legend();\n\n\nThe slope, m, of the transformed data is: 336400693.43172693\nThe intercept, b, of the transformed data is: -1802204836.0479977\n\n\n\n\n\nWhat if we want to understand the underlying relationship between our original variables, before they were transformed? 
We can simply rearrange our linear expression above!\nRecall our linear relationship between the transformed variables \\(\\log{x}\\) and \\(y^4\\).\n\\[y^4 = m(\\log{x}) + b\\]\nBy rearranging the equation, we find a relationship between the untransformed variables \\(x\\) and \\(y\\).\n\\[y = [m(\\log{x}) + b]^{(1/4)}\\]\nWhen we plug in the values for \\(m\\) and \\(b\\) computed above, something interesting happens.\n\n\nCode\n# Now, plug the values for m and b into the relationship between the untransformed x and y\nplt.scatter(df[\"inc\"], df[\"lit\"], label=\"Untransformed data\")\nplt.plot(df[\"inc\"], (m*np.log(df[\"inc\"])+b)**(1/4), c=\"red\", label=\"Modeled relationship\")\nplt.xlabel(\"Gross national income per capita\")\nplt.ylabel(\"Adult literacy rate\")\nplt.legend();\n\n\n\n\n\nWe have found a relationship between our original variables – gross national income and adult literacy rate!\nTransformations are powerful tools for understanding our data in greater detail. To summarize what we just achieved:\n\nWe identified appropriate transformations to linearize the original data.\nWe used our knowledge of linear curves to compute the slope and intercept of the transformed data.\nWe used this slope and intercept information to derive a relationship in the untransformed data.\n\nLinearization will be an important tool as we begin our work on linear modeling next week.\n\n8.4.1.1 Tukey-Mosteller Bulge Diagram\nThe Tukey-Mosteller Bulge Diagram is a good guide when determining possible transformations to achieve linearity. It is a visual summary of the reasoning we just worked through above.\n\nHow does it work? Each curved “bulge” represents a possible shape of non-linear data. To use the diagram, find which of the four bulges resembles your dataset the most closely. Then, look at the axes of the quadrant for this bulge. The horizontal axis will list possible transformations that could be applied to your x data for linearization. Similarly, the vertical axis will list possible transformations that could be applied to your y data. Note that each axis lists two possible transformations. While either of these transformations has the potential to linearize your dataset, note that this is an iterative process. It’s important to try out these transformations and look at the results to see whether you’ve actually achieved linearity. If not, you’ll need to continue testing other possible transformations.\nGenerally:\n\n\\(\\sqrt{}\\) and \\(\\log{}\\) will reduce the magnitude of large values.\nPowers (\\(^2\\) and \\(^3\\)) will increase the spread in magnitude of large values.\n\n\n\n\nImportant: You should still understand the logic we worked through to determine how best to transform the data. The bulge diagram is just a summary of this same reasoning. You will be expected to be able to explain why a given transformation is or is not appropriate for linearization.\n\n\n\n8.4.2 Additional Remarks\nVisualization requires a lot of thought!\n\nThere are many tools for visualizing distributions.\n\nDistribution of a single variable:\n\nRugplot\nHistogram\nDensity plot\nBox plot\nViolin plot\n\nJoint distribution of two quantitative variables:\n\nScatter plot\nHex plot\nContour plot\n\n\n\nThis class primarily uses seaborn and matplotlib, but pandas also has basic built-in plotting methods. 
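As a small hedged sketch (not from the original lecture code), the income-versus-literacy scatter plot from earlier in this note could be drawn with the pandas plotting interface alone, using the cleaned df constructed above:

# pandas .plot methods wrap matplotlib and return an Axes object we can label
ax = df.plot.scatter(x="inc", y="lit")
ax.set_xlabel("Gross national income per capita")
ax.set_ylabel("Adult literacy rate")
ax.set_title("Adult literacy rate against GNI per capita");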
Many other visualization libraries exist, and plotly is one of them.\n\nplotly creates very easily creates interactive plots.\nplotly will occasionally appear in lecture code, labs, and assignments!\n\nNext, we’ll go deeper into the theory behind visualization." + }, + { + "objectID": "visualization_2/visualization_2.html#visualization-theory", + "href": "visualization_2/visualization_2.html#visualization-theory", + "title": "8  Visualization II", + "section": "8.5 Visualization Theory", + "text": "8.5 Visualization Theory\nThis section marks a pivot to the second major topic of this lecture - visualization theory. We’ll discuss the abstract nature of visualizations and analyze how they convey information.\nRemember, we had two goals for visualizing data. This section is particularly important in:\n\nHelping us understand the data and results,\nCommunicating our results and conclusions with others.\n\n\n8.5.1 Information Channels\nVisualizations are able to convey information through various encodings. In the remainder of this lecture, we’ll look at the use of color, scale, and depth, to name a few.\n\n8.5.1.1 Encodings in Rugplots\nOne detail that we may have overlooked in our earlier discussion of rugplots is the importance of encodings. Rugplots are effective visuals because they utilize line thickness to encode frequency. Consider the following diagram:\n\n\n\n8.5.1.2 Multi-Dimensional Encodings\nEncodings are also useful for representing multi-dimensional data. Notice how the following visual highlights four distinct “dimensions” of data:\n\nX-axis\nY-axis\nArea\nColor\n\n\nThe human visual perception system is only capable of visualizing data in a three-dimensional plane, but as you’ve seen, we can encode many more channels of information.\n\n\n\n8.5.2 Harnessing the Axes\n\n8.5.2.1 Consider the Scale of the Data\nHowever, we should be careful to not misrepresent relationships in our data by manipulating the scale or axes. The visualization below improperly portrays two seemingly independent relationships on the same plot. The authors have clearly changed the scale of the y-axis to mislead their audience.\n\nNotice how the downwards-facing line segment contains values in the millions, while the upwards-trending segment only contains values near three hundred thousand. These lines should not be intersecting.\nWhen there is a large difference in the magnitude of the data, it’s advised to analyze percentages instead of counts. The following diagrams correctly display the trends in cancer screening and abortion rates.\n\n\n\n\n\n\n\n\n\n\n\n8.5.2.2 Reveal the Data\nGreat visualizations not only consider the scale of the data but also utilize the axes in a way that best conveys information. For example, data scientists commonly set certain axes limits to highlight parts of the visualization they are most interested in.\n\n\n\n\n\n\n\n\n\nThe visualization on the right captures the trend in coronavirus cases during March of 2020. From only looking at the visualization on the left, a viewer may incorrectly believe that coronavirus began to skyrocket on March 4th, 2020. However, the second illustration tells a different story - cases rose closer to March 21th, 2020.\n\n\n\n8.5.3 Harnessing Color\nColor is another important feature in visualizations that does more than what meets the eye.\nWe already explored using color to encode a categorical variable in our scatter plot. 
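For reference, here is a minimal sketch of that idea; the hue column named below is hypothetical and only stands in for whatever categorical column your data actually has:

# `hue` asks seaborn to assign a distinct color to each category
sns.scatterplot(data=wb, x="per capita: % growth: 2016",
                y="Adult literacy rate: Female: % ages 15 and older: 2005-14",
                hue="Region")  # "Region" is a made-up column used only for illustration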
Let’s now discuss the uses of color in novel visualizations like colormaps and heatmaps.\n5-8% of the world is red-green color blind, so we have to be very particular about our color scheme. We want to make these as accessible as possible. Choosing a set of colors that work together is evidently a challenging task!\n\n8.5.3.1 Colormaps\nColormaps are mappings from pixel data to color values, and they’re often used to highlight distinct parts of an image. Let’s investigate a few properties of colormaps.\n\n\nJet Colormap \n\n\n\nViridis Colormap \n\n\nThe jet colormap is infamous for being misleading. While it seems more vibrant than viridis, the aggressive colors poorly encode numerical data. To understand why, let’s analyze the following images.\n\n\n\n\n\n\n\n\n\nThe diagram on the left compares how a variety of colormaps represent pixel data that transitions from a high to low intensity. These include the jet colormap (row a) and grayscale (row b). Notice how the grayscale images do the best job in smoothly transitioning between pixel data. The jet colormap is the worst at this - the four images in row (a) look like a conglomeration of individual colors.\nThe difference is also evident in the images labeled (a) and (b) on the left side. The grayscale image is better at preserving finer detail in the vertical line strokes. Additionally, grayscale is preferred in X-ray scans for being more neutral. The intensity of the dark red color in the jet colormap is frightening and indicates something is wrong.\nWhy is the jet colormap so much worse? The answer lies in how its color composition is perceived to the human eye.\n\n\nJet Colormap Perception \n\n\n\nViridis Colormap Perception \n\n\nThe jet colormap is largely misleading because it is not perceptually uniform. Perceptually uniform colormaps have the property that if the pixel data goes from 0.1 to 0.2, the perceptual change is the same as when the data goes from 0.8 to 0.9.\nNotice how the said uniformity is present within the linear trend displayed in the viridis colormap. On the other hand, the jet colormap is largely non-linear - this is precisely why it’s considered a worse colormap.\n\n\n\n8.5.4 Harnessing Markings\nIn our earlier discussion of multi-dimensional encodings, we analyzed a scatter plot with four pseudo-dimensions: the two axes, area, and color. Were these appropriate to use? The following diagram analyzes how well the human eye can distinguish between these “markings”.\n\nThere are a few key takeaways from this diagram\n\nLengths are easy to discern. Don’t use plots with jiggled baselines - keep everything axis-aligned.\nAvoid pie charts! Angle judgments are inaccurate.\nAreas and volumes are hard to distinguish (area charts, word clouds, etc.).\n\n\n\n8.5.5 Harnessing Conditioning\nConditioning is the process of comparing data that belong to separate groups. We’ve seen this before in overlayed distributions, side-by-side box plots, and scatter plots with categorical encodings. Here, we’ll introduce terminology that formalizes these examples.\nConsider an example where we want to analyze income earnings for males and females with varying levels of education. There are multiple ways to compare this data.\n\n\n\n\n\n\n\n\n\nThe barplot is an example of juxtaposition: placing multiple plots side by side, with the same scale. The scatter plot is an example of superposition: placing multiple density curves and scatter plots on top of each other.\nWhich is better depends on the problem at hand. 
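In matplotlib terms, the two approaches are simply different figure layouts. Below is a rough sketch with made-up numbers (not the wage data shown in lecture):

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(5)
group_a = np.array([3, 4, 5, 6, 7])
group_b = np.array([2, 4, 4, 7, 8])

# Juxtaposition: separate panels that share the same y-scale
fig, axs = plt.subplots(1, 2, sharey=True)
axs[0].bar(x, group_a)
axs[1].bar(x, group_b)

# Superposition: both groups drawn on a single set of axes
fig, ax = plt.subplots()
ax.plot(x, group_a, label="Group A")
ax.plot(x, group_b, label="Group B")
ax.legend();

Returning to the income example: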
Here, superposition makes the precise wage difference very clear from a quick glance. However, many sophisticated plots convey information that favors the use of juxtaposition. Below is one example.\n\n\n\n8.5.6 Harnessing Context\nThe last component of a great visualization is perhaps the most critical - the use of context. Adding informative titles, axis labels, and descriptive captions are all best practices that we’ve heard repeatedly in Data 8.\nA publication-ready plot (and every Data 100 plot) needs:\n\nInformative title (takeaway, not description),\nAxis labels,\nReference lines, markers, etc,\nLegends, if appropriate,\nCaptions that describe data,\n\nCaptions should:\n\nBe comprehensive and self-contained,\nDescribe what has been graphed,\nDraw attention to important features,\nDescribe conclusions drawn from graphs." + }, + { + "objectID": "sampling/sampling.html#censuses-and-surveys", + "href": "sampling/sampling.html#censuses-and-surveys", + "title": "9  Sampling", + "section": "9.1 Censuses and Surveys", + "text": "9.1 Censuses and Surveys\nIn general: a census is “a complete count or survey of a population, typically recording various details of individuals.” An example is the U.S. Decennial Census which was held in April 2020. It counts every person living in all 50 states, DC, and US territories, not just citizens. Participation is required by law (it is mandated by the U.S. Constitution). Important uses include the allocation of Federal funds, congressional representation, and drawing congressional and state legislative districts. The census is composed of a survey mailed to different housing addresses in the United States.\nA survey is a set of questions. An example is workers sampling individuals and households. What is asked and how it is asked can affect how the respondent answers or even whether or not they answer in the first place.\nWhile censuses are great, it is often very difficult and expensive to survey everyone in a population. Imagine the amount of resources, money, time, and energy the U.S. spent on the 2020 Census. While this does give us more accurate information about the population, it’s often infeasible to execute. Thus, we usually survey a subset of the population instead.\nA sample is (usually) a subset of the population that is often used to make inferences about the population. If our sample is a good representation of our population, then we can use it to glean useful information at a lower cost. That being said, how the sample is drawn will affect the reliability of such inferences. Two common sources of error in sampling are chance error, where random samples can vary from what is expected in any direction, and bias, which is a systematic error in one direction. Biases can be the result of many things, for example, our sampling scheme or survey methods.\nLet’s define some useful vocabulary:\n\nPopulation: The group that you want to learn something about.\n\nIndividuals in a population are not always people. Other populations include bacteria in your gut (sampled using DNA sequencing), trees of a certain species, small businesses receiving a microloan, or published results in an academic journal or field.\n\nSampling Frame: The list from which the sample is drawn.\n\nFor example, if sampling people, then the sampling frame is the set of all people that could possibly end up in your sample.\n\nSample: Who you actually end up sampling. 
The sample is therefore a subset of your sampling frame.\n\nWhile ideally, these three sets would be exactly the same, they usually aren’t in practice. For example, there may be individuals in your sampling frame (and hence, your sample) that are not in your population. And generally, sample sizes are much smaller than population sizes.\n\n\n\nSampling_Frames" + }, + { + "objectID": "sampling/sampling.html#bias-a-case-study", + "href": "sampling/sampling.html#bias-a-case-study", + "title": "9  Sampling", + "section": "9.2 Bias: A Case Study", + "text": "9.2 Bias: A Case Study\nThe following case study is adapted from Statistics by Freedman, Pisani, and Purves, W.W. Norton NY, 1978.\nIn 1936, President Franklin D. Roosevelt (Democratic) went up for re-election against Alf Landon (Republican). As is usual, polls were conducted in the months leading up to the election to try and predict the outcome. The Literary Digest was a magazine that had successfully predicted the outcome of 5 general elections coming into 1936. In their polling for the 1936 election, they sent out their survey to 10 million individuals whom they found from phone books, lists of magazine subscribers, and lists of country club members. Of the roughly 2.4 million people who filled out the survey, only 43% reported they would vote for Roosevelt; thus, the Digest predicted that Landon would win.\nOn election day, Roosevelt won in a landslide, winning 61% of the popular vote of about 45 million voters. How could the Digest have been so wrong with their polling?\nIt turns out that the Literary Digest sample was not representative of the population. Their sampling frame of people found in phone books, lists of magazine subscribers, and lists of country club members were more affluent and tended to vote Republican. As such, their sampling frame was inherently skewed in Landon’s favor. The Literary Digest completely overlooked the lion’s share of voters who were still suffering through the Great Depression. Furthermore, they had a dismal response rate (about 24%); who knows how the other non-respondents would have polled? The Digest folded just 18 months after this disaster.\nAt the same time, George Gallup, a rising statistician, also made predictions about the 1936 elections. Despite having a smaller sample size of “only” 50,000 (this is still more than necessary; more when we cover the Central Limit Theorem), his estimate that 56% of voters would choose Roosevelt was much closer to the actual result (61%). Gallup also predicted the Digest’s prediction within 1% with a sample size of only 3000 people by anticipating the Digest’s affluent sampling frame and subsampling those individuals.\nSo what’s the moral of the story? Samples, while convenient, are subject to chance error and bias. Election polling, in particular, can involve many sources of bias. To name a few:\n\nSelection bias systematically excludes (or favors) particular groups.\n\nExample: the Literary Digest poll excludes people not in phone books.\nHow to avoid: Examine the sampling frame and the method of sampling.\n\nResponse bias occurs because people don’t always respond truthfully. Survey designers pay special detail to the nature and wording of questions to avoid this type of bias.\n\nExample: Illegal immigrants might not answer truthfully when asked citizenship questions on the census survey.\nHow to avoid: Examine the nature of questions and the method of surveying. 
Randomized response - flip a coin and answer yes if heads or answer truthfully if tails.\n\nNon-response bias occurs because people don’t always respond to survey requests, which can skew responses.\n\nExample: Only 2.4m out of 10m people responded to the Literary Digest’s poll.\nHow to avoid: Keep surveys short, and be persistent.\n\n\nRandomized Response\nSuppose you want to ask someone a sensitive question: “Have you ever cheated on an exam?” An individual may be embarrassed or afraid to answer truthfully and might lie or not answer the question. One solution is to leverage a randomized response:\nFirst, you can ask the individual to secretly flip a fair coin; you (the surveyor) don’t know the outcome of the coin flip.\nThen, you ask them to answer “Yes” if the coin landed heads and to answer truthfully if the coin landed tails.\nThe surveyor doesn’t know if the “Yes” means that the person cheated or if it means that the coin landed heads. The individual’s sensitive information remains secret. However, if the response is “No”, then the surveyor knows the individual didn’t cheat. We assume the individual is comfortable revealing this information.\nGenerally, we can assume that the coin lands heads 50% of the time, masking the remaining 50% of the “No” answers. We can therefore double the proportion of “No” answers to estimate the true fraction of “No” answers.\nElection Polls\nToday, the Gallup Poll is one of the leading polls for election results. The many sources of biases – who responds to polls? Do voters tell the truth? How can we predict turnout? – still remain, but the Gallup Poll uses several tactics to mitigate them. Within their sampling frame of “civilian, non-institutionalized population” of adults in telephone households in continental U.S., they use random digit dialing to include both listed/unlisted phone numbers and to avoid selection bias. Additionally, they use a within-household selection process to randomly select households with one or more adults. If no one answers, re-call multiple times to avoid non-response bias." + }, + { + "objectID": "sampling/sampling.html#probability-samples", + "href": "sampling/sampling.html#probability-samples", + "title": "9  Sampling", + "section": "9.3 Probability Samples", + "text": "9.3 Probability Samples\nWhen sampling, it is essential to focus on the quality of the sample rather than the quantity of the sample. A huge sample size does not fix a bad sampling method. Our main goal is to gather a sample that is representative of the population it came from. In this section, we’ll explore the different types of sampling and their pros and cons.\nA convenience sample is whatever you can get ahold of; this type of sampling is non-random. Note that haphazard sampling is not necessarily random sampling; there are many potential sources of bias.\nIn a probability sample, we provide the chance that any specified set of individuals will be in the sample (individuals in the population can have different chances of being selected; they don’t all have to be uniform), and we sample at random based off this known chance. For this reason, probability samples are also called random samples. The randomness provides a few benefits:\n\nBecause we know the source probabilities, we can measure the errors.\nSampling at random gives us a more representative sample of the population, which reduces bias. (Note: this is only the case when the probability distribution we’re sampling from is accurate. 
Random samples using “bad” or inaccurate distributions can produce biased estimates of population quantities.)\nProbability samples allow us to estimate the bias and chance error, which helps us quantify uncertainty (more in a future lecture).\n\nThe real world is usually more complicated, and we often don’t know the initial probabilities. For example, we do not generally know the probability that a given bacterium is in a microbiome sample or whether people will answer when Gallup calls landlines. That being said, still we try to model probability sampling to the best of our ability even when the sampling or measurement process is not fully under our control.\nA few common random sampling schemes:\n\nA uniform random sample with replacement is a sample drawn uniformly at random with replacement.\n\nRandom doesn’t always mean “uniformly at random,” but in this specific context, it does.\nSome individuals in the population might get picked more than once.\n\nA simple random sample (SRS) is a sample drawn uniformly at random without replacement.\n\nEvery individual (and subset of individuals) has the same chance of being selected.\nEvery pair has the same chance as every other pair.\nEvery triple has the same chance as every other triple.\nAnd so on.\n\nA stratified random sample, where random sampling is performed on strata (specific groups), and the groups together compose a sample.\n\n\n9.3.1 Example Scheme 1: Probability Sample\nSuppose we have 3 TA’s (Arman, Boyu, Charlie): I decide to sample 2 of them as follows:\n\nI choose A with probability 1.0\nI choose either B or C, each with a probability of 0.5.\n\nWe can list all the possible outcomes and their respective probabilities in a table:\n\n\n\nOutcome\nProbability\n\n\n\n\n{A, B}\n0.5\n\n\n{A, C}\n0.5\n\n\n{B, C}\n0\n\n\n\nThis is a probability sample (though not a great one). Of the 3 people in my population, I know the chance of getting each subset. Suppose I’m measuring the average distance TAs live from campus.\n\nThis scheme does not see the entire population!\nMy estimate using the single sample I take has some chance error depending on if I see AB or AC.\nThis scheme is biased towards A’s response.\n\n\n\n9.3.2 Example Scheme 2: Simple Random Sample\nConsider the following sampling scheme:\n\nA class roster has 1100 students listed alphabetically.\nPick one of the first 10 students on the list at random (e.g. Student 8).\nTo create your sample, take that student and every 10th student listed after that (e.g. Students 8, 18, 28, 38, etc.).\n\n\n\nIs this a probability sample?\n\nYes. For a sample [n, n + 10, n + 20, …, n + 1090], where 1 <= n <= 10, the probability of that sample is 1/10. Otherwise, the probability is 0.\nOnly 10 possible samples!\n\n\n\nDoes each student have the same probability of being selected?\n\nYes. Each student is chosen with a probability of 1/10.\n\n\n\nIs this a simple random sample?\n\nNo. The chance of selecting (8, 18) is 1/10; the chance of selecting (8, 9) is 0.\n\n\n\n9.3.3 Demo: Barbie v. Oppenheimer\nWe are trying to collect a sample from Berkeley residents to predict the which one of Barbie and Oppenheimer would perform better on their opening day, July 21st.\nFirst, let’s grab a dataset that has every single resident in Berkeley (this is a fake dataset) and which movie they actually watched on July 21st.\nLet’s load in the movie.csv table. 
We can assume that:\n\nis_male is a boolean that indicates if a resident identifies as male.\nThere are only two movies they can watch on July 21st: Barbie and Oppenheimer.\nEvery resident watches a movie (either Barbie or Oppenheimer) on July 21st.\n\n\n\nCode\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\nsns.set_theme(style='darkgrid', font_scale = 1.5,\n rc={'figure.figsize':(7,5)})\n\nrng = np.random.default_rng()\n\n\n\nmovie = pd.read_csv(\"data/movie.csv\")\n\n# create a 1/0 int that indicates Barbie vote\nmovie['barbie'] = (movie['movie'] == 'Barbie').astype(int)\nmovie.head()\n\n\n\n\n\n\n\n\nage\nis_male\nmovie\nbarbie\n\n\n\n\n0\n35\nFalse\nBarbie\n1\n\n\n1\n42\nTrue\nOppenheimer\n0\n\n\n2\n55\nFalse\nBarbie\n1\n\n\n3\n77\nTrue\nOppenheimer\n0\n\n\n4\n31\nFalse\nBarbie\n1\n\n\n\n\n\n\n\nWhat fraction of Berkeley residents chose Barbie?\n\nactual_barbie = np.mean(movie[\"barbie\"])\nactual_barbie\n\n0.5302792307692308\n\n\nThis is the actual outcome of the competition. Based on this result, Barbie would win. How did our sample of retirees do?\n\n9.3.3.1 Convenience Sample: Retirees\nLet’s take a convenience sample of people who have retired (>= 65 years old). What proportion of them went to see Barbie instead of Oppenheimer?\n\nconvenience_sample = movie[movie['age'] >= 65] # take a convenience sample of retirees\nnp.mean(convenience_sample[\"barbie\"]) # what proportion of them saw Barbie? \n\n0.3744755089093924\n\n\nBased on this result, we would have predicted that Oppenheimer would win! What happened? Is it possible that our sample is too small or noisy?\n\n# what's the size of our sample? \nlen(convenience_sample)\n\n359396\n\n\n\n# what proportion of our data is in the convenience sample? 
\nlen(convenience_sample)/len(movie)\n\n0.27645846153846154\n\n\nSeems like our sample is rather large (roughly 360,000 people), so the error is likely not due to solely to chance.\n\n\n9.3.3.2 Check for Bias\nLet us aggregate all choices by age and visualize the fraction of Barbie views, split by gender.\n\nvotes_by_barbie = movie.groupby([\"age\",\"is_male\"]).agg(\"mean\", numeric_only=True).reset_index()\nvotes_by_barbie.head()\n\n\n\n\n\n\n\n\nage\nis_male\nbarbie\n\n\n\n\n0\n18\nFalse\n0.819594\n\n\n1\n18\nTrue\n0.667001\n\n\n2\n19\nFalse\n0.812214\n\n\n3\n19\nTrue\n0.661252\n\n\n4\n20\nFalse\n0.805281\n\n\n\n\n\n\n\n\n\nCode\n# A common matplotlib/seaborn pattern: create the figure and axes object, pass ax\n# to seaborn for drawing into, and later fine-tune the figure via ax.\nfig, ax = plt.subplots();\n\nred_blue = [\"#bf1518\", \"#397eb7\"]\nwith sns.color_palette(red_blue):\n sns.pointplot(data=votes_by_barbie, x = \"age\", y = \"barbie\", hue = \"is_male\", ax=ax)\n\nnew_ticks = [i.get_text() for i in ax.get_xticklabels()]\nax.set_xticks(range(0, len(new_ticks), 10), new_ticks[::10])\nax.set_title(\"Preferences by Demographics\");\n\n\n\n\n\n\nWe see that retirees (in Berkeley) tend to watch Oppenheimer.\nWe also see that residents who identify as non-male tend to prefer Barbie.\n\n\n\n9.3.3.3 Simple Random Sample\nSuppose we took a simple random sample (SRS) of the same size as our retiree sample:\n\nn = len(convenience_sample)\nrandom_sample = movie.sample(n, replace = False) ## By default, replace = False\nnp.mean(random_sample[\"barbie\"])\n\n0.5302813609500384\n\n\nThis is very close to the actual vote of 0.5302792307692308!\nIt turns out that we can get similar results with a much smaller sample size, say, 800:\n\nn = 800\nrandom_sample = movie.sample(n, replace = False)\n\n# Compute the sample average and the resulting relative error\nsample_barbie = np.mean(random_sample[\"barbie\"])\nerr = abs(sample_barbie-actual_barbie)/actual_barbie\n\n# We can print output with Markdown formatting too...\nfrom IPython.display import Markdown\nMarkdown(f\"**Actual** = {actual_barbie:.4f}, **Sample** = {sample_barbie:.4f}, \"\n f\"**Err** = {100*err:.2f}%.\")\n\nActual = 0.5303, Sample = 0.5288, Err = 0.29%.\n\n\nWe’ll learn how to choose this number when we (re)learn the Central Limit Theorem later in the semester.\n\n\n9.3.3.4 Quantifying Chance Error\nIn our SRS of size 800, what would be our chance error?\nLet’s simulate 1000 versions of taking the 800-sized SRS from before:\n\nnrep = 1000 # number of simulations\nn = 800 # size of our sample\npoll_result = []\nfor i in range(0, nrep):\n random_sample = movie.sample(n, replace = False)\n poll_result.append(np.mean(random_sample[\"barbie\"]))\n\n\n\nCode\nfig, ax = plt.subplots()\nsns.histplot(poll_result, stat='density', ax=ax)\nax.axvline(actual_barbie, color=\"orange\", lw=4);\n\n\n\n\n\nWhat fraction of these simulated samples would have predicted Barbie?\n\npoll_result = pd.Series(poll_result)\nnp.sum(poll_result > 0.5)/1000\n\n0.94\n\n\nYou can see the curve looks roughly Gaussian/normal. Using KDE:\n\n\nCode\nsns.histplot(poll_result, stat='density', kde=True);" + }, + { + "objectID": "sampling/sampling.html#summary", + "href": "sampling/sampling.html#summary", + "title": "9  Sampling", + "section": "9.4 Summary", + "text": "9.4 Summary\nUnderstanding the sampling process is what lets us go from describing the data to understanding the world. 
Without knowing / assuming something about how the data were collected, there is no connection between the sample and the population. Ultimately, the dataset doesn’t tell us about the world behind the data." + }, + { + "objectID": "intro_to_modeling/intro_to_modeling.html#what-is-a-model", + "href": "intro_to_modeling/intro_to_modeling.html#what-is-a-model", + "title": "10  Introduction to Modeling", + "section": "10.1 What is a Model?", + "text": "10.1 What is a Model?\nA model is an idealized representation of a system. A system is a set of principles or procedures according to which something functions. We live in a world full of systems: the procedure of turning on a light happens according to a specific set of rules dictating the flow of electricity. The truth behind how any event occurs is usually complex, and many times the specifics are unknown. The workings of the world can be viewed as its own giant procedure. Models seek to simplify the world and distill them into workable pieces.\nExample: We model the fall of an object on Earth as subject to a constant acceleration of \\(9.81 m/s^2\\) due to gravity.\n\nWhile this describes the behavior of our system, it is merely an approximation.\nIt doesn’t account for the effects of air resistance, local variations in gravity, etc.\nIn practice, it’s accurate enough to be useful!\n\n\n10.1.1 Reasons for Building Models\nWhy do we want to build models? As far as data scientists and statisticians are concerned, there are three reasons, and each implies a different focus on modeling.\n\nTo explain complex phenomena occurring in the world we live in. Examples of this might be:\n\nHow are the parents’ average height related to their children’s average height?\nHow does an object’s velocity and acceleration impact how far it travels? (Physics: \\(d = d_0 + vt + \\frac{1}{2}at^2\\))\n\nIn these cases, we care about creating models that are simple and interpretable, allowing us to understand what the relationships between our variables are.\nTo make accurate predictions about unseen data. Some examples include:\n\nCan we predict if an email is spam or not?\nCan we generate a one-sentence summary of this 10-page long article?\n\nWhen making predictions, we care more about making extremely accurate predictions, at the cost of having an uninterpretable model. These are sometimes called black-box models and are common in fields like deep learning.\nTo measure the causal effects of one event on some other event. For example,\n\nDoes smoking cause lung cancer?\nDoes a job training program cause increases in employment and wages?\n\nThis is a much harder question because most statistical tools are designed to infer association, not causation. 
We will not focus on this task in Data 100, but you can take other advanced classes on causal inference (e.g., Stat 156, Data 102) if you are intrigued!\n\nMost of the time, we aim to strike a balance between building interpretable models and building accurate models.\n\n\n10.1.2 Common Types of Models\nIn general, models can be split into two categories:\n\nDeterministic physical (mechanistic) models: Laws that govern how the world works.\n\nKepler’s Third Law of Planetary Motion (1619): The ratio of the square of an object’s orbital period with the cube of the semi-major axis of its orbit is the same for all objects orbiting the same primary.\n\n\\(T^2 \\propto R^3\\)\n\nNewton’s Laws: motion and gravitation (1687): Newton’s second law of motion models the relationship between the mass of an object and the force required to accelerate it.\n\n\\(F = ma\\)\n\\(F_g = G \\frac{m_1 m_2}{r^2}\\) \n\n\nProbabilistic models: Models that attempt to understand how random processes evolve. These are more general and can be used to describe many phenomena in the real world. These models commonly make simplifying assumptions about the nature of the world.\n\nPoisson Process models: Used to model random events that happen with some probability at any point in time and are strictly increasing in count, such as the arrival of customers at a store.\n\n\nNote: These specific models are not in the scope of Data 100 and exist to serve as motivation." + }, + { + "objectID": "intro_to_modeling/intro_to_modeling.html#simple-linear-regression", + "href": "intro_to_modeling/intro_to_modeling.html#simple-linear-regression", + "title": "10  Introduction to Modeling", + "section": "10.2 Simple Linear Regression", + "text": "10.2 Simple Linear Regression\nThe regression line is the unique straight line that minimizes the mean squared error of estimation among all straight lines. As with any straight line, it can be defined by a slope and a y-intercept:\n\n\\(\\text{slope} = r \\cdot \\frac{\\text{Standard Deviation of } y}{\\text{Standard Deviation of }x}\\)\n\\(y\\text{-intercept} = \\text{average of }y - \\text{slope}\\cdot\\text{average of }x\\)\n\\(\\text{regression estimate} = y\\text{-intercept} + \\text{slope}\\cdot\\text{}x\\)\n\\(\\text{residual} =\\text{observed }y - \\text{regression estimate}\\)\n\n\n\nCode\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n# Set random seed for consistency \nnp.random.seed(43)\nplt.style.use('default') \n\n#Generate random noise for plotting\nx = np.linspace(-3, 3, 100)\ny = x * 0.5 - 1 + np.random.randn(100) * 0.3\n\n#plot regression line\nsns.regplot(x=x,y=y);\n\n\n\n\n\n\n10.2.1 Notations and Definitions\nFor a pair of variables \\(x\\) and \\(y\\) representing our data \\(\\mathcal{D} = \\{(x_1, y_1), (x_2, y_2), \\dots, (x_n, y_n)\\}\\), we denote their means/averages as \\(\\bar x\\) and \\(\\bar y\\) and standard deviations as \\(\\sigma_x\\) and \\(\\sigma_y\\).\n\n10.2.1.1 Standard Units\nA variable is represented in standard units if the following are true:\n\n0 in standard units is equal to the mean (\\(\\bar{x}\\)) in the original variable’s units.\nAn increase of 1 standard unit is an increase of 1 standard deviation (\\(\\sigma_x\\)) in the original variable’s units.\n\nTo convert a variable \\(x_i\\) into standard units, we subtract its mean from it and divide it by its standard deviation. 
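In code, this is a one-line helper; the same standard_units function is used again in the demo code later in these notes:

def standard_units(x):
    # Center at the mean and scale by the standard deviation
    return (x - np.mean(x)) / np.std(x)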
For example, \\(x_i\\) in standard units is \\(\\frac{x_i - \\bar x}{\\sigma_x}\\).\n\n\n10.2.1.2 Correlation\nThe correlation (\\(r\\)) is the average of the product of \\(x\\) and \\(y\\), both measured in standard units.\n\\[r = \\frac{1}{n} \\sum_{i=1}^n (\\frac{x_i - \\bar{x}}{\\sigma_x})(\\frac{y_i - \\bar{y}}{\\sigma_y})\\]\n\nCorrelation measures the strength of a linear association between two variables.\nCorrelations range between -1 and 1: \\(|r| \\leq 1\\), with \\(r=1\\) indicating perfect linear association, and \\(r=-1\\) indicating perfect negative association. The closer \\(r\\) is to \\(0\\), the weaker the linear association is.\nCorrelation says nothing about causation and non-linear association. Correlation does not imply causation. When \\(r = 0\\), the two variables are uncorrelated. However, they could still be related through some non-linear relationship.\n\n\n\nCode\ndef plot_and_get_corr(ax, x, y, title):\n ax.set_xlim(-3, 3)\n ax.set_ylim(-3, 3)\n ax.set_xticks([])\n ax.set_yticks([])\n ax.scatter(x, y, alpha = 0.73)\n r = np.corrcoef(x, y)[0, 1]\n ax.set_title(title + \" (corr: {})\".format(r.round(2)))\n return r\n\nfig, axs = plt.subplots(2, 2, figsize = (10, 10))\n\n# Just noise\nx1, y1 = np.random.randn(2, 100)\ncorr1 = plot_and_get_corr(axs[0, 0], x1, y1, title = \"noise\")\n\n# Strong linear\nx2 = np.linspace(-3, 3, 100)\ny2 = x2 * 0.5 - 1 + np.random.randn(100) * 0.3\ncorr2 = plot_and_get_corr(axs[0, 1], x2, y2, title = \"strong linear\")\n\n# Unequal spread\nx3 = np.linspace(-3, 3, 100)\ny3 = - x3/3 + np.random.randn(100)*(x3)/2.5\ncorr3 = plot_and_get_corr(axs[1, 0], x3, y3, title = \"strong linear\")\nextent = axs[1, 0].get_window_extent().transformed(fig.dpi_scale_trans.inverted())\n\n# Strong non-linear\nx4 = np.linspace(-3, 3, 100)\ny4 = 2*np.sin(x3 - 1.5) + np.random.randn(100) * 0.3\ncorr4 = plot_and_get_corr(axs[1, 1], x4, y4, title = \"strong non-linear\")\n\nplt.show()\n\n\n\n\n\n\n\n\n10.2.2 Alternate Form\nWhen the variables \\(y\\) and \\(x\\) are measured in standard units, the regression line for predicting \\(y\\) based on \\(x\\) has slope \\(r\\) and passes through the origin.\n\\[\\hat{y}_{su} = r \\cdot x_{su}\\]\n\n\nIn the original units, this becomes\n\n\\[\\frac{\\hat{y} - \\bar{y}}{\\sigma_y} = r \\cdot \\frac{x - \\bar{x}}{\\sigma_x}\\]\n\n\n\n10.2.3 Derivation\nStarting from the top, we have our claimed form of the regression line, and we want to show that it is equivalent to the optimal linear regression line: \\(\\hat{y} = \\hat{a} + \\hat{b}x\\).\nRecall:\n\n\\(\\hat{b} = r \\cdot \\frac{\\text{Standard Deviation of }y}{\\text{Standard Deviation of }x}\\)\n\\(\\hat{a} = \\text{average of }y - \\text{slope}\\cdot\\text{average of }x\\)\n\n\n\n\n\n\n\nProof:\n\\[\\frac{\\hat{y} - \\bar{y}}{\\sigma_y} = r \\cdot \\frac{x - \\bar{x}}{\\sigma_x}\\]\nMultiply by \\(\\sigma_y\\), and add \\(\\bar{y}\\) on both sides.\n\\[\\hat{y} = \\sigma_y \\cdot r \\cdot \\frac{x - \\bar{x}}{\\sigma_x} + \\bar{y}\\]\nDistribute coefficient \\(\\sigma_{y}\\cdot r\\) to the \\(\\frac{x - \\bar{x}}{\\sigma_x}\\) term\n\\[\\hat{y} = (\\frac{r\\sigma_y}{\\sigma_x} ) \\cdot x + (\\bar{y} - (\\frac{r\\sigma_y}{\\sigma_x} ) \\bar{x})\\]\nWe now see that we have a line that matches our claim:\n\nslope: \\(r\\cdot\\frac{\\text{SD of y}}{\\text{SD of x}} = r\\cdot\\frac{\\sigma_y}{\\sigma_x}\\)\nintercept: \\(\\bar{y} - \\text{slope}\\cdot \\bar{x}\\)\n\nNote that the error for the i-th datapoint is: \\(e_i = y_i - \\hat{y_i}\\)" + }, + { + "objectID": 
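As a quick sanity check (a sketch on synthetic data, not part of the original notes), the closed-form slope and intercept derived above agree with a generic least-squares fit:

import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 2 * x + 1 + rng.normal(scale=0.5, size=100)

r = np.corrcoef(x, y)[0, 1]
slope = r * np.std(y) / np.std(x)
intercept = np.mean(y) - slope * np.mean(x)

# np.polyfit with degree 1 returns the least-squares slope and intercept
print(slope, intercept)
print(np.polyfit(x, y, 1))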
"intro_to_modeling/intro_to_modeling.html#the-modeling-process", + "href": "intro_to_modeling/intro_to_modeling.html#the-modeling-process", + "title": "10  Introduction to Modeling", + "section": "10.3 The Modeling Process", + "text": "10.3 The Modeling Process\nAt a high level, a model is a way of representing a system. In Data 100, we’ll treat a model as some mathematical rule we use to describe the relationship between variables.\nWhat variables are we modeling? Typically, we use a subset of the variables in our sample of collected data to model another variable in this data. To put this more formally, say we have the following dataset \\(\\mathcal{D}\\):\n\\[\\mathcal{D} = \\{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\\}\\]\nEach pair of values \\((x_i, y_i)\\) represents a datapoint. In a modeling setting, we call these observations. \\(y_i\\) is the dependent variable we are trying to model, also called an output or response. \\(x_i\\) is the independent variable inputted into the model to make predictions, also known as a feature.\nOur goal in modeling is to use the observed data \\(\\mathcal{D}\\) to predict the output variable \\(y_i\\). We denote each prediction as \\(\\hat{y}_i\\) (read: “y hat sub i”).\nHow do we generate these predictions? Some examples of models we’ll encounter in the next few lectures are given below:\n\\[\\hat{y}_i = \\theta\\] \\[\\hat{y}_i = \\theta_0 + \\theta_1 x_i\\]\nThe examples above are known as parametric models. They relate the collected data, \\(x_i\\), to the prediction we make, \\(\\hat{y}_i\\). A few parameters (\\(\\theta\\), \\(\\theta_0\\), \\(\\theta_1\\)) are used to describe the relationship between \\(x_i\\) and \\(\\hat{y}_i\\).\nNotice that we don’t immediately know the values of these parameters. While the features, \\(x_i\\), are taken from our observed data, we need to decide what values to give \\(\\theta\\), \\(\\theta_0\\), and \\(\\theta_1\\) ourselves. This is the heart of parametric modeling: what parameter values should we choose so our model makes the best possible predictions?\nTo choose our model parameters, we’ll work through the modeling process.\n\nChoose a model: how should we represent the world?\nChoose a loss function: how do we quantify prediction error?\nFit the model: how do we choose the best parameters of our model given our data?\nEvaluate model performance: how do we evaluate whether this process gave rise to a good model?" + }, + { + "objectID": "intro_to_modeling/intro_to_modeling.html#choosing-a-model", + "href": "intro_to_modeling/intro_to_modeling.html#choosing-a-model", + "title": "10  Introduction to Modeling", + "section": "10.4 Choosing a Model", + "text": "10.4 Choosing a Model\nOur first step is choosing a model: defining the mathematical rule that describes the relationship between the features, \\(x_i\\), and predictions \\(\\hat{y}_i\\).\nIn Data 8, you learned about the Simple Linear Regression (SLR) model. You learned that the model takes the form: \\[\\hat{y}_i = a + bx_i\\]\nIn Data 100, we’ll use slightly different notation: we will replace \\(a\\) with \\(\\theta_0\\) and \\(b\\) with \\(\\theta_1\\). This will allow us to use the same notation when we explore more complex models later on in the course.\n\\[\\hat{y}_i = \\theta_0 + \\theta_1 x_i\\]\nThe parameters of the SLR model are \\(\\theta_0\\), also called the intercept term, and \\(\\theta_1\\), also called the slope term. 
To create an effective model, we want to choose values for \\(\\theta_0\\) and \\(\\theta_1\\) that most accurately predict the output variable. The “best” fitting model parameters are given the special names: \\(\\hat{\\theta}_0\\) and \\(\\hat{\\theta}_1\\); they are the specific parameter values that allow our model to generate the best possible predictions.\nIn Data 8, you learned that the best SLR model parameters are: \\[\\hat{\\theta}_0 = \\bar{y} - \\hat{\\theta}_1\\bar{x} \\qquad \\qquad \\hat{\\theta}_1 = r \\frac{\\sigma_y}{\\sigma_x}\\]\nA quick reminder on notation:\n\n\\(\\bar{y}\\) and \\(\\bar{x}\\) indicate the mean value of \\(y\\) and \\(x\\), respectively\n\\(\\sigma_y\\) and \\(\\sigma_x\\) indicate the standard deviations of \\(y\\) and \\(x\\)\n\\(r\\) is the correlation coefficient, defined as the average of the product of \\(x\\) and \\(y\\) measured in standard units: \\(\\frac{1}{n} \\sum_{i=1}^n (\\frac{x_i-\\bar{x}}{\\sigma_x})(\\frac{y_i-\\bar{y}}{\\sigma_y})\\)\n\nIn Data 100, we want to understand how to derive these best model coefficients. To do so, we’ll introduce the concept of a loss function." + }, + { + "objectID": "intro_to_modeling/intro_to_modeling.html#choosing-a-loss-function", + "href": "intro_to_modeling/intro_to_modeling.html#choosing-a-loss-function", + "title": "10  Introduction to Modeling", + "section": "10.5 Choosing a Loss Function", + "text": "10.5 Choosing a Loss Function\nWe’ve talked about the idea of creating the “best” possible predictions. This begs the question: how do we decide how “good” or “bad” our model’s predictions are?\nA loss function characterizes the cost, error, or fit resulting from a particular choice of model or model parameters. This function, \\(L(y, \\hat{y})\\), quantifies how “bad” or “far off” a single prediction by our model is from a true, observed value in our collected data.\nThe choice of loss function for a particular model will affect the accuracy and computational cost of estimation, and it’ll also depend on the estimation task at hand. For example,\n\nAre outputs quantitative or qualitative?\nDo outliers matter?\nAre all errors equally costly? (e.g., a false negative on a cancer test is arguably more dangerous than a false positive)\n\nRegardless of the specific function used, a loss function should follow two basic principles:\n\nIf the prediction \\(\\hat{y}_i\\) is close to the actual value \\(y_i\\), loss should be low.\nIf the prediction \\(\\hat{y}_i\\) is far from the actual value \\(y_i\\), loss should be high.\n\nTwo common choices of loss function are squared loss and absolute loss.\nSquared loss, also known as L2 loss, computes loss as the square of the difference between the observed \\(y_i\\) and predicted \\(\\hat{y}_i\\): \\[L(y_i, \\hat{y}_i) = (y_i - \\hat{y}_i)^2\\]\nAbsolute loss, also known as L1 loss, computes loss as the absolute difference between the observed \\(y_i\\) and predicted \\(\\hat{y}_i\\): \\[L(y_i, \\hat{y}_i) = |y_i - \\hat{y}_i|\\]\nL1 and L2 loss give us a tool for quantifying our model’s performance on a single data point. This is a good start, but ideally, we want to understand how our model performs across our entire dataset. A natural way to do this is to compute the average loss across all data points in the dataset. This is known as the cost function, \\(\\hat{R}(\\theta)\\): \\[\\hat{R}(\\theta) = \\frac{1}{n} \\sum^n_{i=1} L(y_i, \\hat{y}_i)\\]\nThe cost function has many names in the statistics literature. 
You may also encounter the terms:\n\nEmpirical risk (this is why we give the cost function the name \\(R\\))\nError function\nAverage loss\n\nWe can substitute our L1 and L2 loss into the cost function definition. The Mean Squared Error (MSE) is the average squared loss across a dataset: \\[\\text{MSE} = \\frac{1}{n} \\sum_{i=1}^n (y_i - \\hat{y}_i)^2\\]\nThe Mean Absolute Error (MAE) is the average absolute loss across a dataset: \\[\\text{MAE}= \\frac{1}{n} \\sum_{i=1}^n |y_i - \\hat{y}_i|\\]" + }, + { + "objectID": "intro_to_modeling/intro_to_modeling.html#fitting-the-model", + "href": "intro_to_modeling/intro_to_modeling.html#fitting-the-model", + "title": "10  Introduction to Modeling", + "section": "10.6 Fitting the Model", + "text": "10.6 Fitting the Model\nNow that we’ve established the concept of a loss function, we can return to our original goal of choosing model parameters. Specifically, we want to choose the best set of model parameters that will minimize the model’s cost on our dataset. This process is called fitting the model.\nWe know from calculus that a function is minimized when (1) its first derivative is equal to zero and (2) its second derivative is positive. We often call the function being minimized the objective function (our objective is to find its minimum).\nTo find the optimal model parameter, we:\n\nTake the derivative of the cost function with respect to that parameter\nSet the derivative equal to 0\nSolve for the parameter\n\nWe repeat this process for each parameter present in the model. For now, we’ll disregard the second derivative condition.\nTo help us make sense of this process, let’s put it into action by deriving the optimal model parameters for simple linear regression using the mean squared error as our cost function. Remember: although the notation may look tricky, all we are doing is following the three steps above!\nStep 1: take the derivative of the cost function with respect to each model parameter. We substitute the SLR model, \\(\\hat{y}_i = \\theta_0+\\theta_1 x_i\\), into the definition of MSE above and differentiate with respect to \\(\\theta_0\\) and \\(\\theta_1\\). 
\\[\\text{MSE} = \\frac{1}{n} \\sum_{i=1}^{n} (y_i - \\hat{y}_i)^2 = \\frac{1}{n} \\sum_{i=1}^{n} (y_i - \\theta_0 - \\theta_1 x_i)^2\\]\n\\[\\frac{\\partial}{\\partial \\theta_0} \\text{MSE} = \\frac{-2}{n} \\sum_{i=1}^{n} y_i - \\theta_0 - \\theta_1 x_i\\]\n\\[\\frac{\\partial}{\\partial \\theta_1} \\text{MSE} = \\frac{-2}{n} \\sum_{i=1}^{n} (y_i - \\theta_0 - \\theta_1 x_i)x_i\\]\nLet’s walk through these derivations in more depth, starting with the derivative of MSE with respect to \\(\\theta_0\\).\nGiven our MSE above, we know that: \\[\\frac{\\partial}{\\partial \\theta_0} \\text{MSE} = \\frac{\\partial}{\\partial \\theta_0} \\frac{1}{n} \\sum_{i=1}^{n} {(y_i - \\theta_0 - \\theta_1 x_i)}^{2}\\]\nNoting that the derivative of sum is equivalent to the sum of derivatives, this then becomes: \\[ = \\frac{1}{n} \\sum_{i=1}^{n} \\frac{\\partial}{\\partial \\theta_0} {(y_i - \\theta_0 - \\theta_1 x_i)}^{2}\\]\nWe can then apply the chain rule.\n\\[ = \\frac{1}{n} \\sum_{i=1}^{n} 2 \\cdot{(y_i - \\theta_0 - \\theta_1 x_i)}\\dot(-1)\\]\nFinally, we can simplify the constants, leaving us with our answer.\n\\[\\frac{\\partial}{\\partial \\theta_0} \\text{MSE} = \\frac{-2}{n} \\sum_{i=1}^{n}{(y_i - \\theta_0 - \\theta_1 x_i)}\\]\nFollowing the same procedure, we can take the derivative of MSE with respect to \\(\\theta_1\\).\n\\[\\frac{\\partial}{\\partial \\theta_1} \\text{MSE} = \\frac{\\partial}{\\partial \\theta_1} \\frac{1}{n} \\sum_{i=1}^{n} {(y_i - \\theta_0 - \\theta_1 x_i)}^{2}\\]\n\\[ = \\frac{1}{n} \\sum_{i=1}^{n} \\frac{\\partial}{\\partial \\theta_1} {(y_i - \\theta_0 - \\theta_1 x_i)}^{2}\\]\n\\[ = \\frac{1}{n} \\sum_{i=1}^{n} 2 \\dot{(y_i - \\theta_0 - \\theta_1 x_i)}\\dot(-x_i)\\]\n\\[= \\frac{-2}{n} \\sum_{i=1}^{n} {(y_i - \\theta_0 - \\theta_1 x_i)}x_i\\]\nStep 2: set the derivatives equal to 0. After simplifying terms, this produces two estimating equations. The best set of model parameters \\((\\hat{\\theta}_0, \\hat{\\theta}_1)\\) must satisfy these two optimality conditions. \\[0 = \\frac{-2}{n} \\sum_{i=1}^{n} y_i - \\hat{\\theta}_0 - \\hat{\\theta}_1 x_i \\Longleftrightarrow \\frac{1}{n}\\sum_{i=1}^{n} y_i - \\hat{y}_i = 0\\] \\[0 = \\frac{-2}{n} \\sum_{i=1}^{n} (y_i - \\hat{\\theta}_0 - \\hat{\\theta}_1 x_i)x_i \\Longleftrightarrow \\frac{1}{n}\\sum_{i=1}^{n} (y_i - \\hat{y}_i)x_i = 0\\]\nStep 3: solve the estimating equations to compute estimates for \\(\\hat{\\theta}_0\\) and \\(\\hat{\\theta}_1\\).\nTaking the first equation gives the estimate of \\(\\hat{\\theta}_0\\): \\[\\frac{1}{n} \\sum_{i=1}^n y_i - \\hat{\\theta}_0 - \\hat{\\theta}_1 x_i = 0 \\]\n\\[\\left(\\frac{1}{n} \\sum_{i=1}^n y_i \\right) - \\hat{\\theta}_0 - \\hat{\\theta}_1\\left(\\frac{1}{n} \\sum_{i=1}^n x_i \\right) = 0\\]\n\\[ \\hat{\\theta}_0 = \\bar{y} - \\hat{\\theta}_1 \\bar{x}\\]\nWith a bit more maneuvering, the second equation gives the estimate of \\(\\hat{\\theta}_1\\). 
Start by multiplying the first estimating equation by \\(\\bar{x}\\), then subtracting the result from the second estimating equation.\n\\[\\frac{1}{n} \\sum_{i=1}^n (y_i - \\hat{y}_i)x_i - \\frac{1}{n} \\sum_{i=1}^n (y_i - \\hat{y}_i)\\bar{x} = 0 \\]\n\\[\\frac{1}{n} \\sum_{i=1}^n (y_i - \\hat{y}_i)(x_i - \\bar{x}) = 0 \\]\nNext, plug in \\(\\hat{y}_i = \\hat{\\theta}_0 + \\hat{\\theta}_1 x_i = \\bar{y} + \\hat{\\theta}_1(x_i - \\bar{x})\\):\n\\[\\frac{1}{n} \\sum_{i=1}^n (y_i - \\bar{y} - \\hat{\\theta}_1(x - \\bar{x}))(x_i - \\bar{x}) = 0 \\]\n\\[\\frac{1}{n} \\sum_{i=1}^n (y_i - \\bar{y})(x_i - \\bar{x}) = \\hat{\\theta}_1 \\times \\frac{1}{n} \\sum_{i=1}^n (x_i - \\bar{x})^2\n\\]\nBy using the definition of correlation \\(\\left(r = \\frac{1}{n} \\sum_{i=1}^n (\\frac{x_i-\\bar{x}}{\\sigma_x})(\\frac{y_i-\\bar{y}}{\\sigma_y}) \\right)\\) and standard deviation \\(\\left(\\sigma_x = \\sqrt{\\frac{1}{n} \\sum_{i=1}^n (x_i - \\bar{x})^2} \\right)\\), we can conclude: \\[r \\sigma_x \\sigma_y = \\hat{\\theta}_1 \\times \\sigma_x^2\\] \\[\\hat{\\theta}_1 = r \\frac{\\sigma_y}{\\sigma_x}\\]\nJust as was given in Data 8!\nRemember, this derivation found the optimal model parameters for SLR when using the MSE cost function. If we had used a different model or different loss function, we likely would have found different values for the best model parameters. However, regardless of the model and loss used, we can always follow these three steps to fit the model." + }, + { + "objectID": "constant_model_loss_transformations/loss_transformations.html#step-4-evaluating-the-slr-model", + "href": "constant_model_loss_transformations/loss_transformations.html#step-4-evaluating-the-slr-model", + "title": "11  Constant Model, Loss, and Transformations", + "section": "11.1 Step 4: Evaluating the SLR Model", + "text": "11.1 Step 4: Evaluating the SLR Model\nNow that we’ve explored the mathematics behind (1) choosing a model, (2) choosing a loss function, and (3) fitting the model, we’re left with one final question – how “good” are the predictions made by this “best” fitted model? To determine this, we can:\n\nVisualize data and compute statistics:\n\nPlot the original data.\nCompute each column’s mean and standard deviation. If the mean and standard deviation of our predictions are close to those of the original observed \\(y_i\\)s, we might be inclined to say that our model has done well.\n(If we’re fitting a linear model) compute the correlation \\(r\\). A large magnitude for the correlation coefficient between the feature and response variables could also indicate that our model has done well.\n\nPerformance metrics:\n\nWe can take the Root Mean Squared Error (RMSE).\n\nIt’s the square root of the mean squared error (MSE), which is the average loss that we’ve been minimizing to determine optimal model parameters.\nRMSE is in the same units as \\(y\\).\nA lower RMSE indicates more “accurate” predictions, as we have a lower “average loss” across the data.\n\n\n\\[\\text{RMSE} = \\sqrt{\\frac{1}{n} \\sum_{i=1}^n (y_i - \\hat{y}_i)^2}\\]\nVisualization:\n\nLook at the residual plot of \\(e_i = y_i - \\hat{y_i}\\) to visualize the difference between actual and predicted values. 
The good residual plot should not show any pattern between input/features \\(x_i\\) and residual values \\(e_i\\).\n\n\nTo illustrate this process, let’s take a look at Anscombe’s quartet.\n\n11.1.1 Four Mysterious Datasets (Anscombe’s quartet)\nLet’s take a look at four different datasets.\n\n\nCode\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nimport itertools\nfrom mpl_toolkits.mplot3d import Axes3D\n\n\n\n\nCode\n# Big font helper\ndef adjust_fontsize(size=None):\n SMALL_SIZE = 8\n MEDIUM_SIZE = 10\n BIGGER_SIZE = 12\n if size != None:\n SMALL_SIZE = MEDIUM_SIZE = BIGGER_SIZE = size\n\n plt.rc(\"font\", size=SMALL_SIZE) # controls default text sizes\n plt.rc(\"axes\", titlesize=SMALL_SIZE) # fontsize of the axes title\n plt.rc(\"axes\", labelsize=MEDIUM_SIZE) # fontsize of the x and y labels\n plt.rc(\"xtick\", labelsize=SMALL_SIZE) # fontsize of the tick labels\n plt.rc(\"ytick\", labelsize=SMALL_SIZE) # fontsize of the tick labels\n plt.rc(\"legend\", fontsize=SMALL_SIZE) # legend fontsize\n plt.rc(\"figure\", titlesize=BIGGER_SIZE) # fontsize of the figure title\n\n\n# Helper functions\ndef standard_units(x):\n return (x - np.mean(x)) / np.std(x)\n\n\ndef correlation(x, y):\n return np.mean(standard_units(x) * standard_units(y))\n\n\ndef slope(x, y):\n return correlation(x, y) * np.std(y) / np.std(x)\n\n\ndef intercept(x, y):\n return np.mean(y) - slope(x, y) * np.mean(x)\n\n\ndef fit_least_squares(x, y):\n theta_0 = intercept(x, y)\n theta_1 = slope(x, y)\n return theta_0, theta_1\n\n\ndef predict(x, theta_0, theta_1):\n return theta_0 + theta_1 * x\n\n\ndef compute_mse(y, yhat):\n return np.mean((y - yhat) ** 2)\n\n\nplt.style.use(\"default\") # Revert style to default mpl\n\n\n\n\nCode\nplt.style.use(\"default\") # Revert style to default mpl\nNO_VIZ, RESID, RESID_SCATTER = range(3)\n\n\ndef least_squares_evaluation(x, y, visualize=NO_VIZ):\n # statistics\n print(f\"x_mean : {np.mean(x):.2f}, y_mean : {np.mean(y):.2f}\")\n print(f\"x_stdev: {np.std(x):.2f}, y_stdev: {np.std(y):.2f}\")\n print(f\"r = Correlation(x, y): {correlation(x, y):.3f}\")\n\n # Performance metrics\n ahat, bhat = fit_least_squares(x, y)\n yhat = predict(x, ahat, bhat)\n print(f\"\\theta_0: {ahat:.2f}, \\theta_1: {bhat:.2f}\")\n print(f\"RMSE: {np.sqrt(compute_mse(y, yhat)):.3f}\")\n\n # visualization\n fig, ax_resid = None, None\n if visualize == RESID_SCATTER:\n fig, axs = plt.subplots(1, 2, figsize=(8, 3))\n axs[0].scatter(x, y)\n axs[0].plot(x, yhat)\n axs[0].set_title(\"LS fit\")\n ax_resid = axs[1]\n elif visualize == RESID:\n fig = plt.figure(figsize=(4, 3))\n ax_resid = plt.gca()\n\n if ax_resid is not None:\n ax_resid.scatter(x, y - yhat, color=\"red\")\n ax_resid.plot([4, 14], [0, 0], color=\"black\")\n ax_resid.set_title(\"Residuals\")\n\n return fig\n\n\n\n\nCode\n# Load in four different datasets: I, II, III, IV\nx = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]\ny1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]\ny2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]\ny3 = [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]\nx4 = [8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8]\ny4 = [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]\n\nanscombe = {\n \"I\": pd.DataFrame(list(zip(x, y1)), columns=[\"x\", \"y\"]),\n \"II\": pd.DataFrame(list(zip(x, y2)), columns=[\"x\", \"y\"]),\n \"III\": pd.DataFrame(list(zip(x, y3)), columns=[\"x\", \"y\"]),\n \"IV\": 
pd.DataFrame(list(zip(x4, y4)), columns=[\"x\", \"y\"]),\n}\n\n# Plot the scatter plot and line of best fit\nfig, axs = plt.subplots(2, 2, figsize=(10, 10))\n\nfor i, dataset in enumerate([\"I\", \"II\", \"III\", \"IV\"]):\n ans = anscombe[dataset]\n x, y = ans[\"x\"], ans[\"y\"]\n ahat, bhat = fit_least_squares(x, y)\n yhat = predict(x, ahat, bhat)\n axs[i // 2, i % 2].scatter(x, y, alpha=0.6, color=\"red\") # plot the x, y points\n axs[i // 2, i % 2].plot(x, yhat) # plot the line of best fit\n axs[i // 2, i % 2].set_xlabel(f\"$x_{i+1}$\")\n axs[i // 2, i % 2].set_ylabel(f\"$y_{i+1}$\")\n axs[i // 2, i % 2].set_title(f\"Dataset {dataset}\")\n\nplt.show()\n\n\n\n\n\nWhile these four sets of datapoints look very different, they actually all have identical \\(\\bar x\\), \\(\\bar y\\), \\(\\sigma_x\\), \\(\\sigma_y\\), correlation \\(r\\), and RMSE! If we only look at these statistics, we would probably be inclined to say that these datasets are similar.\n\n\nCode\nfor dataset in [\"I\", \"II\", \"III\", \"IV\"]:\n print(f\">>> Dataset {dataset}:\")\n ans = anscombe[dataset]\n fig = least_squares_evaluation(ans[\"x\"], ans[\"y\"], visualize=NO_VIZ)\n print()\n print()\n\n\n>>> Dataset I:\nx_mean : 9.00, y_mean : 7.50\nx_stdev: 3.16, y_stdev: 1.94\nr = Correlation(x, y): 0.816\n heta_0: 3.00, heta_1: 0.50\nRMSE: 1.119\n\n\n>>> Dataset II:\nx_mean : 9.00, y_mean : 7.50\nx_stdev: 3.16, y_stdev: 1.94\nr = Correlation(x, y): 0.816\n heta_0: 3.00, heta_1: 0.50\nRMSE: 1.119\n\n\n>>> Dataset III:\nx_mean : 9.00, y_mean : 7.50\nx_stdev: 3.16, y_stdev: 1.94\nr = Correlation(x, y): 0.816\n heta_0: 3.00, heta_1: 0.50\nRMSE: 1.118\n\n\n>>> Dataset IV:\nx_mean : 9.00, y_mean : 7.50\nx_stdev: 3.16, y_stdev: 1.94\nr = Correlation(x, y): 0.817\n heta_0: 3.00, heta_1: 0.50\nRMSE: 1.118\n\n\n\n\nWe may also wish to visualize the model’s residuals, defined as the difference between the observed and predicted \\(y_i\\) value (\\(e_i = y_i - \\hat{y}_i\\)). This gives a high-level view of how “off” each prediction is from the true observed value. Recall that you explored this concept in Data 8: a good regression fit should display no clear pattern in its plot of residuals. The residual plots for Anscombe’s quartet are displayed below. Note how only the first plot shows no clear pattern to the magnitude of residuals. This is an indication that SLR is not the best choice of model for the remaining three sets of points.\n\n\n\nCode\n# Residual visualization\nfig, axs = plt.subplots(2, 2, figsize=(10, 10))\n\nfor i, dataset in enumerate([\"I\", \"II\", \"III\", \"IV\"]):\n ans = anscombe[dataset]\n x, y = ans[\"x\"], ans[\"y\"]\n ahat, bhat = fit_least_squares(x, y)\n yhat = predict(x, ahat, bhat)\n axs[i // 2, i % 2].scatter(\n x, y - yhat, alpha=0.6, color=\"red\"\n ) # plot the x, y points\n axs[i // 2, i % 2].plot(\n x, np.zeros_like(x), color=\"black\"\n ) # plot the residual line\n axs[i // 2, i % 2].set_xlabel(f\"$x_{i+1}$\")\n axs[i // 2, i % 2].set_ylabel(f\"$e_{i+1}$\")\n axs[i // 2, i % 2].set_title(f\"Dataset {dataset} Residuals\")\n\nplt.show()" + }, + { + "objectID": "constant_model_loss_transformations/loss_transformations.html#constant-model-mse", + "href": "constant_model_loss_transformations/loss_transformations.html#constant-model-mse", + "title": "11  Constant Model, Loss, and Transformations", + "section": "11.2 Constant Model + MSE", + "text": "11.2 Constant Model + MSE\nNow, we’ll shift from the SLR model to the constant model, also known as a summary statistic. 
The constant model is slightly different from the simple linear regression model we’ve explored previously. Rather than generating predictions from an inputted feature variable, the constant model always predicts the same constant number. This ignores any relationships between variables. For example, let’s say we want to predict the number of drinks a boba shop sells in a day. Boba tea sales likely depend on the time of year, the weather, how the customers feel, whether school is in session, etc., but the constant model ignores these factors in favor of a simpler model. In other words, the constant model employs a simplifying assumption.\nIt is also a parametric, statistical model:\n\\[\\hat{y} = \\theta_0\\]\n\\(\\theta_0\\) is the parameter of the constant model, just as \\(\\theta_0\\) and \\(\\theta_1\\) were the parameters in SLR. Since our parameter \\(\\theta_0\\) is 1-dimensional (\\(\\theta_0 \\in \\mathbb{R}\\)), we now have no input to our model and will always predict \\(\\hat{y} = \\theta_0\\).\n\n11.2.1 Deriving the optimal \\(\\theta_0\\)\nOur task now is to determine what value of \\(\\theta_0\\) best represents the optimal model – in other words, what number should we guess each time to have the lowest possible average loss on our data?\nLike before, we’ll use Mean Squared Error (MSE). Recall that the MSE is average squared loss (L2 loss) over the data \\(D = \\{y_1, y_2, ..., y_n\\}\\).\n\\[\\hat{R}(\\theta) = \\frac{1}{n}\\sum^{n}_{i=1} (y_i - \\hat{y_i})^2 \\]\nOur modeling process now looks like this:\n\nChoose a model: constant model\nChoose a loss function: L2 loss\nFit the model\nEvaluate model performance\n\nGiven the constant model \\(\\hat{y} = \\theta_0\\), we can rewrite the MSE equation as\n\\[\\hat{R}(\\theta) = \\frac{1}{n}\\sum^{n}_{i=1} (y_i - \\theta_0)^2 \\]\nWe can fit the model by finding the optimal \\(\\hat{\\theta_0}\\) that minimizes the MSE using a calculus approach.\n\nDifferentiate with respect to \\(\\theta_0\\)\n\n\\[\n\\begin{align}\n\\frac{d}{d\\theta_0}\\text{R}(\\theta) & = \\frac{d}{d\\theta_0}(\\frac{1}{n}\\sum^{n}_{i=1} (y_i - \\theta_0)^2)\n\\\\ &= \\frac{1}{n}\\sum^{n}_{i=1} \\frac{d}{d\\theta_0} (y_i - \\theta_0)^2 \\quad \\quad \\text{derivative of sum is a sum of derivatives}\n\\\\ &= \\frac{1}{n}\\sum^{n}_{i=1} 2 (y_i - \\theta_0) (-1) \\quad \\quad \\text{chain rule}\n\\\\ &= {\\frac{-2}{n}}\\sum^{n}_{i=1} (y_i - \\theta_0) \\quad \\quad \\text{simply constants}\n\\end{align}\n\\]\n\nSet equal to 0\n\\[\n0 = {\\frac{-2}{n}}\\sum^{n}_{i=1} (y_i - \\hat{\\theta_0})\n\\]\nSolve for \\(\\hat{\\theta_0}\\)\n\n\\[\n\\begin{align}\n0 &= {\\frac{-2}{n}}\\sum^{n}_{i=1} (y_i - \\hat{\\theta_0})\n\\\\ &= \\sum^{n}_{i=1} (y_i - \\hat{\\theta_0}) \\quad \\quad \\text{divide both sides by} \\frac{-2}{n}\n\\\\ &= \\sum^{n}_{i=1} y_i - \\sum^{n}_{i=1} \\theta_0 \\quad \\quad \\text{separate sums}\n\\\\ &= \\sum^{n}_{i=1} y_i - n \\cdot \\hat{\\theta_0} \\quad \\quad \\text{c + c + … + c = nc}\n\\\\ n \\cdot \\hat{\\theta_0} &= \\sum^{n}_{i=1} y_i\n\\\\ \\hat{\\theta_0} &= \\frac{1}{n} \\sum^{n}_{i=1} y_i\n\\\\ \\hat{\\theta_0} &= \\bar{y}\n\\end{align}\n\\]\nLet’s take a moment to interpret this result. \\(\\hat{\\theta_0} = \\bar{y}\\) is the optimal parameter for constant model + MSE. It holds true regardless of what data sample you have, and it provides some formal reasoning as to why the mean is such a common summary statistic.\nOur optimal model parameter is the value of the parameter that minimizes the cost function. 
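As a quick sanity check of this result (a sketch only, not part of the lecture code), we can evaluate the constant model’s MSE over a fine grid of candidate \\(\\theta_0\\) values; the numbers in y below are arbitrary and used purely for illustration. The grid minimizer lands at the sample mean, as derived above; the notation for this minimum is formalized next.

Code
import numpy as np

# Arbitrary sample, for illustration only
y = np.array([10.0, 12.0, 15.0, 19.0, 24.0])

def mse_constant(theta_0, y):
    # Average squared loss of the constant model y_hat = theta_0
    return np.mean((y - theta_0) ** 2)

# Evaluate the cost on a fine grid of candidate theta_0 values
thetas = np.linspace(y.min() - 5, y.max() + 5, 2001)
costs = np.array([mse_constant(t, y) for t in thetas])

# The grid minimizer agrees (up to grid spacing) with the sample mean
print(thetas[np.argmin(costs)], np.mean(y))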
This minimum value of the cost function can be expressed:\n\\[R(\\hat{\\theta_0}) = \\min_{\\theta_0} R(\\theta_0)\\]\nTo restate the above in plain English: we are looking at the value of the cost function when it takes the best parameter as input. This optimal model parameter, \\(\\hat{\\theta_0}\\), is the value of \\(\\theta_0\\) that minimizes the cost \\(R\\).\nFor modeling purposes, we care less about the minimum value of cost, \\(R(\\hat{\\theta_0})\\), and more about the value of \\(\\theta\\) that results in this lowest average loss. In other words, we concern ourselves with finding the best parameter value such that:\n\\[\\hat{\\theta} = \\underset{\\theta}{\\operatorname{\\arg\\min}}\\:R(\\theta)\\]\nThat is, we want to find the argument \\(\\theta\\) that minimizes the cost function.\n\n\n11.2.2 Comparing Two Different Models, Both Fit with MSE\nNow that we’ve explored the constant model with an L2 loss, we can compare it to the SLR model that we learned last lecture. Consider the dataset below, which contains information about the ages and lengths of dugongs. Supposed we wanted to predict dugong ages:\n\n\n\n\n\n\n\n\n\nConstant Model\nSimple Linear Regression\n\n\n\n\nmodel\n\\(\\hat{y} = \\theta_0\\)\n\\(\\hat{y} = \\theta_0 + \\theta_1 x\\)\n\n\ndata\nsample of ages \\(D = \\{y_1, y_2, ..., y_n\\}\\)\nsample of ages \\(D = \\{(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)\\}\\)\n\n\ndimensions\n\\(\\hat{\\theta_0}\\) is 1-D\n\\(\\hat{\\theta} = [\\hat{\\theta_0}, \\hat{\\theta_1}]\\) is 2-D\n\n\nloss surface\n2-D \n3-D \n\n\nLoss Model\n\\(\\hat{R}(\\theta) = \\frac{1}{n}\\sum^{n}_{i=1} (y_i - \\theta_0)^2\\)\n\\(\\hat{R}(\\theta_0, \\theta_1) = \\frac{1}{n}\\sum^{n}_{i=1} (y_i - (\\theta_0 + \\theta_1 x))^2\\)\n\n\nRMSE\n7.72\n4.31\n\n\npredictions visualized\nrug plot \nscatter plot \n\n\n\n(Notice how the points for our SLR scatter plot are visually not a great linear fit. 
We’ll come back to this).\nThe code for generating the graphs and models is included below, but we won’t go over it in too much depth.\n\n\nCode\ndugongs = pd.read_csv(\"data/dugongs.csv\")\ndata_constant = dugongs[\"Age\"]\ndata_linear = dugongs[[\"Length\", \"Age\"]]\n\n\n\n\nCode\n# Constant Model + MSE\nplt.style.use('default') # Revert style to default mpl\nadjust_fontsize(size=16)\n%matplotlib inline\n\ndef mse_constant(theta, data):\n return np.mean(np.array([(y_obs - theta) ** 2 for y_obs in data]), axis=0)\n\nthetas = np.linspace(-20, 42, 1000)\nl2_loss_thetas = mse_constant(thetas, data_constant)\n\n# Plotting the loss surface\nplt.plot(thetas, l2_loss_thetas)\nplt.xlabel(r'$\\theta_0$')\nplt.ylabel(r'MSE')\n\n# Optimal point\nthetahat = np.mean(data_constant)\nplt.scatter([thetahat], [mse_constant(thetahat, data_constant)], s=50, label = r\"$\\hat{\\theta}_0$\")\nplt.legend();\n# plt.show()\n\n\n\n\n\n\n\nCode\n# SLR + MSE\ndef mse_linear(theta_0, theta_1, data_linear):\n data_x, data_y = data_linear.iloc[:, 0], data_linear.iloc[:, 1]\n return np.mean(\n np.array([(y - (theta_0 + theta_1 * x)) ** 2 for x, y in zip(data_x, data_y)]),\n axis=0,\n )\n\n\n# plotting the loss surface\ntheta_0_values = np.linspace(-80, 20, 80)\ntheta_1_values = np.linspace(-10, 30, 80)\nmse_values = np.array(\n [[mse_linear(x, y, data_linear) for x in theta_0_values] for y in theta_1_values]\n)\n\n# Optimal point\ndata_x, data_y = data_linear.iloc[:, 0], data_linear.iloc[:, 1]\ntheta_1_hat = np.corrcoef(data_x, data_y)[0, 1] * np.std(data_y) / np.std(data_x)\ntheta_0_hat = np.mean(data_y) - theta_1_hat * np.mean(data_x)\n\n# Create the 3D plot\nfig = plt.figure(figsize=(7, 5))\nax = fig.add_subplot(111, projection=\"3d\")\n\nX, Y = np.meshgrid(theta_0_values, theta_1_values)\nsurf = ax.plot_surface(\n X, Y, mse_values, cmap=\"viridis\", alpha=0.6\n) # Use alpha to make it slightly transparent\n\n# Scatter point using matplotlib\nsc = ax.scatter(\n [theta_0_hat],\n [theta_1_hat],\n [mse_linear(theta_0_hat, theta_1_hat, data_linear)],\n marker=\"o\",\n color=\"red\",\n s=100,\n label=\"theta hat\",\n)\n\n# Create a colorbar\ncbar = fig.colorbar(surf, ax=ax, shrink=0.5, aspect=10)\ncbar.set_label(\"Cost Value\")\n\nax.set_title(\"MSE for different $\\\\theta_0, \\\\theta_1$\")\nax.set_xlabel(\"$\\\\theta_0$\")\nax.set_ylabel(\"$\\\\theta_1$\")\nax.set_zlabel(\"MSE\")\n\n# plt.show()\n\n\nText(0.5, 0, 'MSE')\n\n\n\n\n\n\n\nCode\n# Predictions\nyobs = data_linear[\"Age\"] # The true observations y\nxs = data_linear[\"Length\"] # Needed for linear predictions\nn = len(yobs) # Predictions\n\nyhats_constant = [thetahat for i in range(n)] # Not used, but food for thought\nyhats_linear = [theta_0_hat + theta_1_hat * x for x in xs]\n\n\n\n\nCode\n# Constant Model Rug Plot\n# In case we're in a weird style state\nsns.set_theme()\nadjust_fontsize(size=16)\n%matplotlib inline\n\nfig = plt.figure(figsize=(8, 1.5))\nsns.rugplot(yobs, height=0.25, lw=2) ;\nplt.axvline(thetahat, color='red', lw=4, label=r\"$\\hat{\\theta}_0$\");\nplt.legend()\nplt.yticks([]);\n# plt.show()\n\n\n\n\n\n\n\nCode\n# SLR model scatter plot \n# In case we're in a weird style state\nsns.set_theme()\nadjust_fontsize(size=16)\n%matplotlib inline\n\nsns.scatterplot(x=xs, y=yobs)\nplt.plot(xs, yhats_linear, color='red', lw=4);\n# plt.savefig('dugong_line.png', bbox_inches = 'tight');\n# plt.show()\n\n\n\n\n\nInterpreting the RMSE (Root Mean Squared Error):\n\nThe constant error is HIGHER than the linear error.\n\nHence,\n\nThe constant model is 
WORSE than the linear model (at least for this metric)." + }, + { + "objectID": "constant_model_loss_transformations/loss_transformations.html#constant-model-mae", + "href": "constant_model_loss_transformations/loss_transformations.html#constant-model-mae", + "title": "11  Constant Model, Loss, and Transformations", + "section": "11.3 Constant Model + MAE", + "text": "11.3 Constant Model + MAE\nWe see now that changing the model used for prediction leads to a wildly different result for the optimal model parameter. What happens if we instead change the loss function used in model evaluation?\nThis time, we will consider the constant model with L1 (absolute loss) as the loss function. This means that the average loss will be expressed as the Mean Absolute Error (MAE).\n\nChoose a model: constant model\nChoose a loss function: L1 loss\nFit the model\nEvaluate model performance\n\n\n11.3.1 Deriving the optimal \\(\\theta_0\\)\nRecall that the MAE is average absolute loss (L1 loss) over the data \\(D = \\{y_1, y_2, ..., y_n\\}\\).\n\\[\\hat{R}(\\theta_0) = \\frac{1}{n}\\sum^{n}_{i=1} |y_i - \\hat{y_i}| \\]\nGiven the constant model \\(\\hat{y} = \\theta_0\\), we can write the MAE as:\n\\[\\hat{R}(\\theta_0) = \\frac{1}{n}\\sum^{n}_{i=1} |y_i - \\theta_0| \\]\nTo fit the model, we find the optimal parameter value \\(\\hat{\\theta_0}\\) that minimizes the MAE by differentiating using a calculus approach:\n\nDifferentiate with respect to \\(\\hat{\\theta_0}\\).\n\n\\[\\hat{R}(\\theta_0) = \\frac{1}{n}\\sum^{n}_{i=1} |y_i - \\theta_0| \\]\n\\[\\frac{d}{d\\theta_0} R(\\theta_0) = \\frac{d}{d\\theta_0} \\left(\\frac{1}{n} \\sum^{n}_{i=1} |y_i - \\theta_0| \\right)\\]\n\\[\n= \\frac{1}{n} \\sum^{n}_{i=1} \\frac{d}{d\\theta_0} |y_i - \\theta_0|\n\\]\n\nHere, we seem to have run into a problem: the derivative of an absolute value is undefined when the argument is 0 (i.e. when \\(y_i = \\theta_0\\)). For now, we’ll ignore this issue. It turns out that disregarding this case doesn’t influence our final result.\nTo perform the derivative, consider two cases. When \\(\\theta_0\\) is less than or equal to \\(y_i\\), the term \\(y_i - \\theta_0\\) will be positive and the absolute value has no impact. When \\(\\theta_0\\) is greater than \\(y_i\\), the term \\(y_i - \\theta_0\\) will be negative. Applying the absolute value will convert this to a positive value, which we can express by saying \\(-(y_i - \\theta_0) = \\theta_0 - y_i\\).\n\n\\[|y_i - \\theta_0| = \\begin{cases} y_i - \\theta_0 \\quad \\text{ if } \\theta_0 \\le y_i \\\\ \\theta_0 - y_i \\quad \\text{if }\\theta_0 > y_i \\end{cases}\\]\n\nTaking derivatives:\n\n\\[\\frac{d}{d\\theta_0} |y_i - \\theta_0| = \\begin{cases} \\frac{d}{d\\theta_0} (y_i - \\theta_0) = -1 \\quad \\text{if }\\theta_0 < y_i \\\\ \\frac{d}{d\\theta_0} (\\theta_0 - y_i) = 1 \\quad \\text{if }\\theta_0 > y_i \\end{cases}\\]\n\nThis means that we obtain a different value for the derivative for data points where \\(\\theta_0 < y_i\\) and where \\(\\theta_0 > y_i\\). 
We can summarize this by saying:\n\n\\[\n\\frac{d}{d\\theta_0} R(\\theta_0) = \\frac{1}{n} \\sum^{n}_{i=1} \\frac{d}{d\\theta_0} |y_i - \\theta_0| \\\\\n= \\frac{1}{n} \\left[\\sum_{\\theta_0 < y_i} (-1) + \\sum_{\\theta_0 > y_i} (+1) \\right]\n\\]\n\nIn other words, we take the sum of values for \\(i = 1, 2, ..., n\\):\n\n\\(-1\\) if our observation \\(y_i\\) is greater than our prediction \\(\\hat{\\theta_0}\\)\n\\(+1\\) if our observation \\(y_i\\) is smaller than our prediction \\(\\hat{\\theta_0}\\)\n\n\n\nSet equal to 0. \\[ 0 = \\frac{1}{n}\\sum_{\\hat{\\theta_0} < y_i} (-1) + \\frac{1}{n}\\sum_{\\hat{\\theta_0} > y_i} (+1) \\]\nSolve for \\(\\hat{\\theta_0}\\). \\[ 0 = -\\frac{1}{n}\\sum_{\\hat{\\theta_0} < y_i} (1) + \\frac{1}{n}\\sum_{\\hat{\\theta_0} > y_i} (1)\\]\n\n\\[\\sum_{\\hat{\\theta_0} < y_i} (1) = \\sum_{\\hat{\\theta_0} > y_i} (1) \\]\nThus, the constant model parameter \\(\\theta = \\hat{\\theta_0}\\) that minimizes MAE must satisfy:\n\\[ \\sum_{\\hat{\\theta_0} < y_i} (1) = \\sum_{\\hat{\\theta_0} > y_i} (1) \\]\nIn other words, the number of observations greater than \\(\\theta_0\\) must be equal to the number of observations less than \\(\\theta_0\\); there must be an equal number of points on the left and right sides of the equation. This is the definition of median, so our optimal value is \\[ \\hat{\\theta_0} = median(y) \\]" + }, + { + "objectID": "constant_model_loss_transformations/loss_transformations.html#summary-loss-optimization-calculus-and-critical-points", + "href": "constant_model_loss_transformations/loss_transformations.html#summary-loss-optimization-calculus-and-critical-points", + "title": "11  Constant Model, Loss, and Transformations", + "section": "11.4 Summary: Loss Optimization, Calculus, and Critical Points", + "text": "11.4 Summary: Loss Optimization, Calculus, and Critical Points\nFirst, define the objective function as average loss.\n\nPlug in L1 or L2 loss.\nPlug in the model so that the resulting expression is a function of \\(\\theta\\).\n\nThen, find the minimum of the objective function:\n\nDifferentiate with respect to \\(\\theta\\).\nSet equal to 0.\nSolve for \\(\\hat{\\theta}\\).\n(If we have multiple parameters) repeat steps 1-3 with partial derivatives.\n\nRecall critical points from calculus: \\(R(\\hat{\\theta})\\) could be a minimum, maximum, or saddle point!\n\nWe should technically also perform the second derivative test, i.e., show \\(R''(\\hat{\\theta}) > 0\\).\nMSE has a property—convexity—that guarantees that \\(R(\\hat{\\theta})\\) is a global minimum.\nThe proof of convexity for MAE is beyond this course." + }, + { + "objectID": "constant_model_loss_transformations/loss_transformations.html#comparing-loss-functions", + "href": "constant_model_loss_transformations/loss_transformations.html#comparing-loss-functions", + "title": "11  Constant Model, Loss, and Transformations", + "section": "11.5 Comparing Loss Functions", + "text": "11.5 Comparing Loss Functions\nWe’ve now tried our hand at fitting a model under both MSE and MAE cost functions. How do the two results compare?\nLet’s consider a dataset where each entry represents the number of drinks sold at a bubble tea store each day. We’ll fit a constant model to predict the number of drinks that will be sold tomorrow.\n\ndrinks = np.array([20, 21, 22, 29, 33])\ndrinks\n\narray([20, 21, 22, 29, 33])\n\n\nFrom our derivations above, we know that the optimal model parameter under MSE cost is the mean of the dataset. 
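The corresponding result for MAE is stated next. As a quick numerical check of both claims on the drinks data above, here is a small grid-search sketch (the 0.01 grid spacing is an arbitrary choice, and this is not part of the lecture code):

Code
import numpy as np

drinks = np.array([20, 21, 22, 29, 33])

def mse(theta, y):
    # Average squared (L2) loss of the constant model y_hat = theta
    return np.mean((y - theta) ** 2)

def mae(theta, y):
    # Average absolute (L1) loss of the constant model y_hat = theta
    return np.mean(np.abs(y - theta))

thetas = np.arange(15.0, 40.0, 0.01)  # grid of candidate constant predictions
mse_costs = np.array([mse(t, drinks) for t in thetas])
mae_costs = np.array([mae(t, drinks) for t in thetas])

print(thetas[np.argmin(mse_costs)], np.mean(drinks))    # both are (approximately) 25.0
print(thetas[np.argmin(mae_costs)], np.median(drinks))  # both are (approximately) 22.0

Plotting mse_costs and mae_costs against thetas reproduces the comparison of the two loss surfaces discussed below.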
Under MAE cost, the optimal parameter is the median of the dataset.\n\nnp.mean(drinks), np.median(drinks)\n\n(25.0, 22.0)\n\n\nIf we plot each empirical risk function across several possible values of \\(\\theta\\), we find that each \\(\\hat{\\theta}\\) does indeed correspond to the lowest value of error:\n\nNotice that the MSE above is a smooth function – it is differentiable at all points, making it easy to minimize using numerical methods. The MAE, in contrast, is not differentiable at each of its “kinks.” We’ll explore how the smoothness of the cost function can impact our ability to apply numerical optimization in a few weeks.\nHow do outliers affect each cost function? Imagine we replace the largest value in the dataset with 1000. The mean of the data increases substantially, while the median is nearly unaffected.\n\ndrinks_with_outlier = np.append(drinks, 1033)\ndisplay(drinks_with_outlier)\nnp.mean(drinks_with_outlier), np.median(drinks_with_outlier)\n\narray([ 20, 21, 22, 29, 33, 1033])\n\n\n(193.0, 25.5)\n\n\n\nThis means that under the MSE, the optimal model parameter \\(\\hat{\\theta}\\) is strongly affected by the presence of outliers. Under the MAE, the optimal parameter is not as influenced by outlying data. We can generalize this by saying that the MSE is sensitive to outliers, while the MAE is robust to outliers.\nLet’s try another experiment. This time, we’ll add an additional, non-outlying datapoint to the data.\n\ndrinks_with_additional_observation = np.append(drinks, 35)\ndrinks_with_additional_observation\n\narray([20, 21, 22, 29, 33, 35])\n\n\nWhen we again visualize the cost functions, we find that the MAE now plots a horizontal line between 22 and 29. This means that there are infinitely many optimal values for the model parameter: any value \\(\\hat{\\theta} \\in [22, 29]\\) will minimize the MAE. In contrast, the MSE still has a single best value for \\(\\hat{\\theta}\\). In other words, the MSE has a unique solution for \\(\\hat{\\theta}\\); the MAE is not guaranteed to have a single unique solution.\n \nTo summarize our example,\n\n\n\n\n\n\n\n\n\nMSE (Mean Squared Loss)\nMAE (Mean Absolute Loss)\n\n\n\n\nLoss Function\n\\(\\hat{R}(\\theta) = \\frac{1}{n}\\sum^{n}_{i=1} (y_i - \\theta_0)^2\\)\n\\(\\hat{R}(\\theta) = \\frac{1}{n}\\sum^{n}_{i=1} |y_i - \\theta_0|\\)\n\n\nOptimal \\(\\hat{\\theta_0}\\)\n\\(\\hat{\\theta_0} = mean(y) = \\bar{y}\\)\n\\(\\hat{\\theta_0} = median(y)\\)\n\n\nLoss Surface\n\n\n\n\nShape\nSmooth - easy to minimize using numerical methods (in a few weeks)\nPiecewise - at each of the “kinks,” it’s not differentiable. Harder to minimize.\n\n\nOutliers\nSensitive to outliers (since they change mean substantially). Sensitivity also depends on the dataset size.\nMore robust to outliers.\n\n\n\\(\\hat{\\theta_0}\\) Uniqueness\nUnique \\(\\hat{\\theta_0}\\)\nInfinitely many \\(\\hat{\\theta_0}\\)s" + }, + { + "objectID": "constant_model_loss_transformations/loss_transformations.html#transformations-to-fit-linear-models", + "href": "constant_model_loss_transformations/loss_transformations.html#transformations-to-fit-linear-models", + "title": "11  Constant Model, Loss, and Transformations", + "section": "11.6 Transformations to fit Linear Models", + "text": "11.6 Transformations to fit Linear Models\nAt this point, we have an effective method of fitting models to predict linear relationships. Given a feature variable and target, we can apply our four-step process to find the optimal model parameters.\nA key word above is linear. 
When we computed parameter estimates earlier, we assumed that \\(x_i\\) and \\(y_i\\) shared a roughly linear relationship. Data in the real world isn’t always so straightforward, but we can transform the data to try and obtain linearity.\nThe Tukey-Mosteller Bulge Diagram is a useful tool for summarizing what transformations can linearize the relationship between two variables. To determine what transformations might be appropriate, trace the shape of the “bulge” made by your data. Find the quadrant of the diagram that matches this bulge. The transformations shown on the vertical and horizontal axes of this quadrant can help improve the fit between the variables.\n\nNote that:\n\nThere are multiple solutions. Some will fit better than others.\nsqrt and log make a value “smaller.”\nRaising to a power makes a value “bigger.”\nEach of these transformations equates to increasing or decreasing the scale of an axis.\n\nOther goals in addition to linearity are possible, for example, making data appear more symmetric. Linearity allows us to fit lines to the transformed data.\nLet’s revisit our dugongs example. The lengths and ages are plotted below:\n\n\nCode\n# `corrcoef` computes the correlation coefficient between two variables\n# `std` finds the standard deviation\nx = dugongs[\"Length\"]\ny = dugongs[\"Age\"]\nr = np.corrcoef(x, y)[0, 1]\ntheta_1 = r * np.std(y) / np.std(x)\ntheta_0 = np.mean(y) - theta_1 * np.mean(x)\n\nfig, ax = plt.subplots(1, 2, dpi=200, figsize=(8, 3))\nax[0].scatter(x, y)\nax[0].set_xlabel(\"Length\")\nax[0].set_ylabel(\"Age\")\n\nax[1].scatter(x, y)\nax[1].plot(x, theta_0 + theta_1 * x, \"tab:red\")\nax[1].set_xlabel(\"Length\")\nax[1].set_ylabel(\"Age\")\n\n\nText(0, 0.5, 'Age')\n\n\n\n\n\nLooking at the plot on the left, we see that there is a slight curvature to the data points. Plotting the SLR curve on the right results in a poor fit.\nFor SLR to perform well, we’d like there to be a rough linear trend relating \"Age\" and \"Length\". What is making the raw data deviate from a linear relationship? Notice that the data points with \"Length\" greater than 2.6 have disproportionately high values of \"Age\" relative to the rest of the data. If we could manipulate these data points to have lower \"Age\" values, we’d “shift” these points downwards and reduce the curvature in the data. Applying a logarithmic transformation to \\(y_i\\) (that is, taking \\(\\log(\\) \"Age\" \\()\\) ) would achieve just that.\nAn important word on \\(\\log\\): in Data 100 (and most upper-division STEM courses), \\(\\log\\) denotes the natural logarithm with base \\(e\\). The base-10 logarithm, where relevant, is indicated by \\(\\log_{10}\\).\n\n\nCode\nz = np.log(y)\n\nr = np.corrcoef(x, z)[0, 1]\ntheta_1 = r * np.std(z) / np.std(x)\ntheta_0 = np.mean(z) - theta_1 * np.mean(x)\n\nfig, ax = plt.subplots(1, 2, dpi=200, figsize=(8, 3))\nax[0].scatter(x, z)\nax[0].set_xlabel(\"Length\")\nax[0].set_ylabel(r\"$\\log{(Age)}$\")\n\nax[1].scatter(x, z)\nax[1].plot(x, theta_0 + theta_1 * x, \"tab:red\")\nax[1].set_xlabel(\"Length\")\nax[1].set_ylabel(r\"$\\log{(Age)}$\")\n\nplt.subplots_adjust(wspace=0.3)\n\n\n\n\n\nOur SLR fit looks a lot better! We now have a new target variable: the SLR model is now trying to predict the log of \"Age\", rather than the untransformed \"Age\". In other words, we are applying the transformation \\(z_i = \\log{(y_i)}\\). Notice that the resulting model is still linear in the parameters \\(\\theta = [\\theta_0, \\theta_1]\\). 
The SLR model becomes:\n\\[\\hat{\\log{y}} = \\theta_0 + \\theta_1 x\\] \\[\\hat{z} = \\theta_0 + \\theta_1 x\\]\nIt turns out that this linearized relationship can help us understand the underlying relationship between \\(x\\) and \\(y\\). If we rearrange the relationship above, we find:\n\\[\n\\log{(y)} = \\theta_0 + \\theta_1 x \\\\\ny = e^{\\theta_0 + \\theta_1 x} \\\\\ny = (e^{\\theta_0})e^{\\theta_1 x} \\\\\ny_i = C e^{k x}\n\\]\nFor some constants \\(C\\) and \\(k\\).\n\\(y\\) is an exponential function of \\(x\\). Applying an exponential fit to the untransformed variables corroborates this finding.\n\n\nCode\nplt.figure(dpi=120, figsize=(4, 3))\n\nplt.scatter(x, y)\nplt.plot(x, np.exp(theta_0) * np.exp(theta_1 * x), \"tab:red\")\nplt.xlabel(\"Length\")\nplt.ylabel(\"Age\")\n\n\nText(0, 0.5, 'Age')\n\n\n\n\n\nYou may wonder: why did we choose to apply a log transformation specifically? Why not some other function to linearize the data?\nPractically, many other mathematical operations that modify the relative scales of \"Age\" and \"Length\" could have worked here." + }, + { + "objectID": "constant_model_loss_transformations/loss_transformations.html#terminology-for-multiple-linear-regression", + "href": "constant_model_loss_transformations/loss_transformations.html#terminology-for-multiple-linear-regression", + "title": "11  Constant Model, Loss, and Transformations", + "section": "11.7 Terminology for Multiple Linear Regression", + "text": "11.7 Terminology for Multiple Linear Regression\nThere are several equivalent terms in the context of regression. The ones we use most often for this course are bolded.\n\n\\(x\\) can be called a\n\nFeature(s)\nCovariate(s)\nIndependent variable(s)\nExplanatory variable(s)\nPredictor(s)\nInput(s)\nRegressor(s)\n\n\\(y\\) can be called an\n\nOutput\nOutcome\nResponse\nDependent variable\n\n\\(\\hat{y}\\) can be called a\n\nPrediction\nPredicted response\nEstimated value\n\n\\(\\theta\\) can be called a\n\nWeight(s)\nParameter(s)\nCoefficient(s)\n\n\\(\\hat{\\theta}\\) can be called a\n\nEstimator(s)\nOptimal parameter(s)\n\nA datapoint \\((x, y)\\) is also called an observation." + }, + { + "objectID": "constant_model_loss_transformations/loss_transformations.html#multiple-linear-regression", + "href": "constant_model_loss_transformations/loss_transformations.html#multiple-linear-regression", + "title": "11  Constant Model, Loss, and Transformations", + "section": "11.8 Multiple Linear Regression", + "text": "11.8 Multiple Linear Regression\nMultiple linear regression is an extension of simple linear regression that adds additional features to the model. The multiple linear regression model takes the form:\n\\[\\hat{y} = \\theta_0\\:+\\:\\theta_1x_{1}\\:+\\:\\theta_2 x_{2}\\:+\\:...\\:+\\:\\theta_p x_{p}\\]\nOur predicted value of \\(y\\), \\(\\hat{y}\\), is a linear combination of the single observations (features), \\(x_i\\), and the parameters, \\(\\theta_i\\).\nWe’ll dive deeper into Multiple Linear Regression in the next lecture." 
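As a small illustration of this form, the sketch below computes one prediction two ways: written out term by term, and as a dot product with the observation vector \\([1, x_1, x_2, x_3]\\). All numbers here are made up purely for illustration; nothing is taken from a real dataset.

Code
import numpy as np

# Hypothetical parameter values [theta_0, theta_1, theta_2, theta_3]
theta = np.array([2.0, 0.4, -1.5, 3.0])

# One observation's feature values x_1, x_2, x_3
x1, x2, x3 = 5.0, 2.0, 0.5

# The prediction written out term by term ...
y_hat_terms = theta[0] + theta[1] * x1 + theta[2] * x2 + theta[3] * x3

# ... equals the dot product of [1, x_1, x_2, x_3] with the parameter vector
y_hat_dot = np.array([1.0, x1, x2, x3]) @ theta

print(y_hat_terms, y_hat_dot)  # identical values (2.5 and 2.5)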
+  },\n  {\n    "objectID": "constant_model_loss_transformations/loss_transformations.html#bonus-calculating-constant-model-mse-using-an-algebraic-trick",\n    "href": "constant_model_loss_transformations/loss_transformations.html#bonus-calculating-constant-model-mse-using-an-algebraic-trick",\n    "title": "11  Constant Model, Loss, and Transformations",\n    "section": "11.9 Bonus: Calculating Constant Model MSE Using an Algebraic Trick",\n    "text": "11.9 Bonus: Calculating Constant Model MSE Using an Algebraic Trick\nEarlier, we calculated the constant model MSE using calculus. It turns out that there is a much more elegant way of performing this same minimization algebraically, without using calculus at all.\nIn this calculation, we use the fact that the sum of deviations from the mean is \\(0\\) or that \\(\\sum_{i=1}^{n} (y_i - \\bar{y}) = 0\\).\nLet’s quickly walk through the proof for this: \\[\\sum_{i=1}^{n} (y_i - \\bar{y}) = \\sum_{i=1}^{n} y_i - \\sum_{i=1}^{n} \\bar{y}\\]\n\\[ = \\sum_{i=1}^{n} y_i - n\\bar{y}\\]\n\\[ = \\sum_{i=1}^{n} y_i - n\\frac{1}{n}\\sum_{i=1}^{n}y_i\\]\n\\[ = \\sum_{i=1}^{n} y_i - \\sum_{i=1}^{n}y_i = 0\\]\nIn our calculations, we’ll also be using the definition of the variance as a sample. As a refresher:\n\\[\\sigma_y^2 = \\frac{1}{n}\\sum_{i=1}^{n} (y_i - \\bar{y})^2\\]\nGetting into our calculation for MSE minimization:\n\\[\n\\begin{align}\nR(\\theta) &= {\\frac{1}{n}}\\sum^{n}_{i=1} (y_i - \\theta)^2\n\\\\ &= \\frac{1}{n}\\sum^{n}_{i=1} [(y_i - \\bar{y}) + (\\bar{y} - \\theta)]^2\\quad \\quad \\text{using trick that a-b can be written as (a-c) + (c-b) where a, b, and c are any numbers}\n\\\\ &= \\frac{1}{n}\\sum^{n}_{i=1} [(y_i - \\bar{y})^2 + 2(y_i - \\bar{y})(\\bar{y} - \\theta) + (\\bar{y} - \\theta)^2]\n\\\\ &= \\frac{1}{n}[\\sum^{n}_{i=1}(y_i - \\bar{y})^2 + 2(\\bar{y} - \\theta)\\sum^{n}_{i=1}(y_i - \\bar{y}) + n(\\bar{y} - \\theta)^2] \\quad \\quad \\text{distribute the sum to individual terms}\n\\\\ &= \\frac{1}{n}\\sum^{n}_{i=1}(y_i - \\bar{y})^2 + \\frac{2}{n}(\\bar{y} - \\theta)\\cdot0 + (\\bar{y} - \\theta)^2 \\quad \\quad \\text{using property that sum of deviations from mean is 0}\n\\\\ &= \\sigma_y^2 + (\\bar{y} - \\theta)^2\n\\end{align}\n\\]\nSince variance can’t be negative, we know that our first term, \\(\\sigma_y^2\\), is greater than or equal to \\(0\\). Also note that the first term doesn’t involve \\(\\theta\\) at all, meaning changing our model won’t change this value. For the purposes of determining \\(\\hat{\\theta}\\), we can then essentially ignore this term.\nLooking at the second term, \\((\\bar{y} - \\theta)^2\\), since it is squared, we know it must be greater than or equal to \\(0\\). As this term does involve \\(\\theta\\), picking the value of \\(\\theta\\) that minimizes this term will allow us to minimize our average loss. For the second term to equal \\(0\\), \\(\\theta = \\bar{y}\\), or in other words, \\(\\hat{\\theta} = \\bar{y} = mean(y)\\).\n\n11.9.0.0.1 Note\nIn the derivation above, we decompose the expected loss, \\(R(\\theta)\\), into two key components: the variance of the data, \\(\\sigma_y^2\\), and the square of the bias, \\((\\bar{y} - \\theta)^2\\). This decomposition is insightful for understanding the behavior of estimators in statistical models.\n\nVariance, \\(\\sigma_y^2\\): This term represents the spread of the data points around their mean, \\(\\bar{y}\\), and is a measure of the data’s inherent variability. Importantly, it does not depend on the choice of \\(\\theta\\), meaning it’s a fixed property of the data. 
Variance serves as an indicator of the data’s dispersion and is crucial in understanding the dataset’s structure, but it remains constant regardless of how we adjust our model parameter \\(\\theta\\).\nBias Squared, \\((\\bar{y} - \\theta)^2\\): This term captures the bias of the estimator, defined as the square of the difference between the mean of the data points, \\(\\bar{y}\\), and the parameter \\(\\theta\\). The bias quantifies the systematic error introduced when estimating \\(\\theta\\). Minimizing this term is essential for improving the accuracy of the estimator. When \\(\\theta = \\bar{y}\\), the bias is \\(0\\), indicating that the estimator is unbiased for the parameter it estimates. This highlights a critical principle in statistical estimation: choosing \\(\\theta\\) to be the sample mean, \\(\\bar{y}\\), minimizes the average loss, rendering the estimator both efficient and unbiased for the population mean." + }, + { + "objectID": "ols/ols.html#ols-problem-formulation", + "href": "ols/ols.html#ols-problem-formulation", + "title": "12  Ordinary Least Squares", + "section": "12.1 OLS Problem Formulation", + "text": "12.1 OLS Problem Formulation\n\n12.1.1 Multiple Linear Regression\nMultiple linear regression is an extension of simple linear regression that adds additional features to the model. The multiple linear regression model takes the form:\n\\[\\hat{y} = \\theta_0\\:+\\:\\theta_1x_{1}\\:+\\:\\theta_2 x_{2}\\:+\\:...\\:+\\:\\theta_p x_{p}\\]\nOur predicted value of \\(y\\), \\(\\hat{y}\\), is a linear combination of the single observations (features), \\(x_i\\), and the parameters, \\(\\theta_i\\).\nWe can explore this idea further by looking at a dataset containing aggregate per-player data from the 2018-19 NBA season, downloaded from Kaggle.\n\n\nCode\nimport pandas as pd\nnba = pd.read_csv('data/nba18-19.csv', index_col=0)\nnba.index.name = None # Drops name of index (players are ordered by rank)\n\n\n\n\nCode\nnba.head(5)\n\n\n\n\n\n\n\n\n\nPlayer\nPos\nAge\nTm\nG\nGS\nMP\nFG\nFGA\nFG%\n...\nFT%\nORB\nDRB\nTRB\nAST\nSTL\nBLK\nTOV\nPF\nPTS\n\n\n\n\n1\nÁlex Abrines\\abrinal01\nSG\n25\nOKC\n31\n2\n19.0\n1.8\n5.1\n0.357\n...\n0.923\n0.2\n1.4\n1.5\n0.6\n0.5\n0.2\n0.5\n1.7\n5.3\n\n\n2\nQuincy Acy\\acyqu01\nPF\n28\nPHO\n10\n0\n12.3\n0.4\n1.8\n0.222\n...\n0.700\n0.3\n2.2\n2.5\n0.8\n0.1\n0.4\n0.4\n2.4\n1.7\n\n\n3\nJaylen Adams\\adamsja01\nPG\n22\nATL\n34\n1\n12.6\n1.1\n3.2\n0.345\n...\n0.778\n0.3\n1.4\n1.8\n1.9\n0.4\n0.1\n0.8\n1.3\n3.2\n\n\n4\nSteven Adams\\adamsst01\nC\n25\nOKC\n80\n80\n33.4\n6.0\n10.1\n0.595\n...\n0.500\n4.9\n4.6\n9.5\n1.6\n1.5\n1.0\n1.7\n2.6\n13.9\n\n\n5\nBam Adebayo\\adebaba01\nC\n21\nMIA\n82\n28\n23.3\n3.4\n5.9\n0.576\n...\n0.735\n2.0\n5.3\n7.3\n2.2\n0.9\n0.8\n1.5\n2.5\n8.9\n\n\n\n\n5 rows × 29 columns\n\n\n\nLet’s say we are interested in predicting the number of points (PTS) an athlete will score in a basketball game this season.\nSuppose we want to fit a linear model by using some characteristics, or features of a player. 
Specifically, we’ll focus on field goals, assists, and 3-point attempts.\n\nFG, the average number of (2-point) field goals per game\nAST, the average number of assists per game\n3PA, the average number of 3-point field goals attempted per game\n\n\n\nCode\nnba[['FG', 'AST', '3PA', 'PTS']].head()\n\n\n\n\n\n\n\n\n\nFG\nAST\n3PA\nPTS\n\n\n\n\n1\n1.8\n0.6\n4.1\n5.3\n\n\n2\n0.4\n0.8\n1.5\n1.7\n\n\n3\n1.1\n1.9\n2.2\n3.2\n\n\n4\n6.0\n1.6\n0.0\n13.9\n\n\n5\n3.4\n2.2\n0.2\n8.9\n\n\n\n\n\n\n\nBecause we are now dealing with many parameter values, we’ve collected them all into a parameter vector with dimensions \\((p+1) \\times 1\\) to keep things tidy. Remember that \\(p\\) represents the number of features we have (in this case, 3).\n\\[\\theta = \\begin{bmatrix}\n \\theta_{0} \\\\\n \\theta_{1} \\\\\n \\vdots \\\\\n \\theta_{p}\n \\end{bmatrix}\\]\nWe are working with two vectors here: a row vector representing the observed data, and a column vector containing the model parameters. The multiple linear regression model is equivalent to the dot (scalar) product of the observation vector and parameter vector.\n\\[[1,\\:x_{1},\\:x_{2},\\:x_{3},\\:...,\\:x_{p}] \\theta = [1,\\:x_{1},\\:x_{2},\\:x_{3},\\:...,\\:x_{p}] \\begin{bmatrix}\n \\theta_{0} \\\\\n \\theta_{1} \\\\\n \\vdots \\\\\n \\theta_{p}\n \\end{bmatrix} = \\theta_0\\:+\\:\\theta_1x_{1}\\:+\\:\\theta_2 x_{2}\\:+\\:...\\:+\\:\\theta_p x_{p}\\]\nNotice that we have inserted 1 as the first value in the observation vector. When the dot product is computed, this 1 will be multiplied with \\(\\theta_0\\) to give the intercept of the regression model. We call this 1 entry the intercept or bias term.\nGiven that we have three features here, we can express this model as: \\[\\hat{y} = \\theta_0\\:+\\:\\theta_1x_{1}\\:+\\:\\theta_2 x_{2}\\:+\\:\\theta_3 x_{3}\\]\nOur features are represented by \\(x_1\\) (FG), \\(x_2\\) (AST), and \\(x_3\\) (3PA) with each having correpsonding parameters, \\(\\theta_1\\), \\(\\theta_2\\), and \\(\\theta_3\\).\nIn statistics, this model + loss is called Ordinary Least Squares (OLS). The solution to OLS is the minimizing loss for parameters \\(\\hat{\\theta}\\), also called the least squares estimate.\n\n\n12.1.2 Linear Algebra Approach\n\n\n\n\n\n\nLinear Algebra Review: Vector Dot Product\n\n\n\n\n\nThe dot product (or inner product) is a vector operation that:\n\nCan only be carried out on two vectors of the same length\nSums up the products of the corresponding entries of the two vectors\nReturns a single number\n\nFor example, let \\[\n\\begin{align}\n\\vec{u} = \\begin{bmatrix}1 \\\\ 2 \\\\ 3\\end{bmatrix}, \\vec{v} = \\begin{bmatrix}1 \\\\ 1 \\\\ 1\\end{bmatrix}\n\\end{align}\n\\]\nThe dot product between \\(\\vec{u}\\) and \\(\\vec{v}\\) is \\[\n\\begin{align}\n\\vec{u} \\cdot \\vec{v} &= \\vec{u}^T \\vec{v} = \\vec{v}^T \\vec{u} \\\\\n &= 1 \\cdot 1 + 2 \\cdot 1 + 3 \\cdot 1 \\\\\n &= 6\n\\end{align}\n\\]\nWhile not in scope, note that we can also interpret the dot product geometrically:\n\nIt is the product of three things: the magnitude of both vectors, and the cosine of the angles between them: \\[\\vec{u} \\cdot \\vec{v} = ||\\vec{u}|| \\cdot ||\\vec{v}|| \\cdot {cos \\theta}\\]\n\n\n\n\nWe now know how to generate a single prediction from multiple observed features. Data scientists usually work at scale – that is, they want to build models that can produce many predictions, all at once. The vector notation we introduced above gives us a hint on how we can expedite multiple linear regression. 
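As a brief aside before scaling up, here is a tiny sketch checking the dot product example from the review box above in a few equivalent ways (this is just a numerical illustration, not course code):

Code
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 1.0, 1.0])

# Three equivalent ways to compute the dot product defined above
print(np.sum(u * v))  # sum of products of corresponding entries -> 6.0
print(u @ v, v @ u)   # the @ operator; the order does not matter -> 6.0 6.0
print(np.dot(u, v))   # numpy's dot function -> 6.0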
We want to use the tools of linear algebra.\nLet’s think about how we can apply what we did above. To accommodate for the fact that we’re considering several feature variables, we’ll adjust our notation slightly. Each observation can now be thought of as a row vector with an entry for each of \\(p\\) features.\n\n\n\n\n\n\n\n\n\n\n\nTo make a prediction from the first observation in the data, we take the dot product of the parameter vector and first observation vector. To make a prediction from the second observation, we would repeat this process to find the dot product of the parameter vector and the second observation vector. If we wanted to find the model predictions for each observation in the dataset, we’d repeat this process for all \\(n\\) observations in the data.\n\\[\\hat{y}_1 = \\theta_0 + \\theta_1 x_{11} + \\theta_2 x_{12} + ... + \\theta_p x_{1p} = [1,\\:x_{11},\\:x_{12},\\:x_{13},\\:...,\\:x_{1p}] \\theta\\] \\[\\hat{y}_2 = \\theta_0 + \\theta_1 x_{21} + \\theta_2 x_{22} + ... + \\theta_p x_{2p} = [1,\\:x_{21},\\:x_{22},\\:x_{23},\\:...,\\:x_{2p}] \\theta\\] \\[\\vdots\\] \\[\\hat{y}_n = \\theta_0 + \\theta_1 x_{n1} + \\theta_2 x_{n2} + ... + \\theta_p x_{np} = [1,\\:x_{n1},\\:x_{n2},\\:x_{n3},\\:...,\\:x_{np}] \\theta\\]\nOur observed data is represented by \\(n\\) row vectors, each with dimension \\((p+1)\\). We can collect them all into a single matrix, which we call \\(\\mathbb{X}\\).\n\n\n\n\n\n\n\n\n\n\n\nThe matrix \\(\\mathbb{X}\\) is known as the design matrix. It contains all observed data for each of our \\(p\\) features, where each row corresponds to one observation, and each column corresponds to a feature. It often (but not always) contains an additional column of all ones to represent the intercept or bias column.\nTo review what is happening in the design matrix: each row represents a single observation. For example, a student in Data 100. Each column represents a feature. For example, the ages of students in Data 100. This convention allows us to easily transfer our previous work in DataFrames over to this new linear algebra perspective.\n\n\n\n\n\n\n\n\n\n\n\nThe multiple linear regression model can then be restated in terms of matrices: \\[\n\\Large\n\\mathbb{\\hat{Y}} = \\mathbb{X} \\theta\n\\]\nHere, \\(\\mathbb{\\hat{Y}}\\) is the prediction vector with \\(n\\) elements (\\(\\mathbb{\\hat{Y}} \\in \\mathbb{R}^{n}\\)); it contains the prediction made by the model for each of the \\(n\\) input observations. \\(\\mathbb{X}\\) is the design matrix with dimensions \\(\\mathbb{X} \\in \\mathbb{R}^{n \\times (p + 1)}\\), and \\(\\theta\\) is the parameter vector with dimensions \\(\\theta \\in \\mathbb{R}^{(p + 1)}\\). Note that our true output \\(\\mathbb{Y}\\) is also a vector with \\(n\\) elements (\\(\\mathbb{Y} \\in \\mathbb{R}^{n}\\)).\n\n\n\n\n\n\nLinear Algebra Review: Linearity\n\n\n\nAn expression is linear in \\(\\theta\\) (a set of parameters) if it is a linear combination of the elements of the set. 
Checking if an expression can separate into a matrix product of two terms – a vector of \\(\\theta\\) s, and a matrix/vector not involving \\(\\theta\\) – is a good indicator of linearity.\nFor example, consider the vector \\(\\theta = [\\theta_0, \\theta_1, \\theta_2]\\)\n\n\\(\\hat{y} = \\theta_0 + 2\\theta_1 + 3\\theta_2\\) is linear in theta, and we can separate it into a matrix product of two terms:\n\n\\[\\hat{y} = \\begin{bmatrix} 1 \\space 2 \\space 3 \\end{bmatrix} \\begin{bmatrix} \\theta_0 \\\\ \\theta_1 \\\\ \\theta_2 \\end{bmatrix}\\]\n\n\\(\\hat{y} = \\theta_0\\theta_1 + 2\\theta_1^2 + 3log(\\theta_2)\\) is not linear in theta, as the \\(\\theta_1\\) term is squared, and the \\(\\theta_2\\) term is logged. We cannot separate it into a matrix product of two terms.\n\n\n\n\n\n12.1.3 Mean Squared Error\nWe now have a new approach to understanding models in terms of vectors and matrices. To accompany this new convention, we should update our understanding of risk functions and model fitting.\nRecall our definition of MSE: \\[R(\\theta) = \\frac{1}{n} \\sum_{i=1}^n (y_i - \\hat{y}_i)^2\\]\nAt its heart, the MSE is a measure of distance – it gives an indication of how “far away” the predictions are from the true values, on average.\n\n\n\n\n\n\nLinear Algebra: L2 Norm\n\n\n\nWhen working with vectors, this idea of “distance” or the vector’s size/length is represented by the norm. More precisely, the distance between two vectors \\(\\vec{a}\\) and \\(\\vec{b}\\) can be expressed as: \\[||\\vec{a} - \\vec{b}||_2 = \\sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2 + \\ldots + (a_n - b_n)^2} = \\sqrt{\\sum_{i=1}^n (a_i - b_i)^2}\\]\nThe double bars are mathematical notation for the norm. The subscript 2 indicates that we are computing the L2, or squared norm.\nThe two norms we need to know for Data 100 are the L1 and L2 norms (sound familiar?). In this note, we’ll focus on L2 norm. We’ll dive into L1 norm in future lectures.\nFor the n-dimensional vector \\[\\vec{x} = \\begin{bmatrix} x_1 \\\\ x_2 \\\\ \\vdots \\\\ x_n \\end{bmatrix}\\] its L2 vector norm is\n\\[||\\vec{x}||_2 = \\sqrt{(x_1)^2 + (x_2)^2 + \\ldots + (x_n)^2} = \\sqrt{\\sum_{i=1}^n (x_i)^2}\\]\nThe L2 vector norm is a generalization of the Pythagorean theorem in \\(n\\) dimensions. Thus, it can be used as a measure of the length of a vector or even as a measure of the distance between two vectors.\n\n\nWe can express the MSE as a squared L2 norm if we rewrite it in terms of the prediction vector, \\(\\hat{\\mathbb{Y}}\\), and true target vector, \\(\\mathbb{Y}\\):\n\\[R(\\theta) = \\frac{1}{n} \\sum_{i=1}^n (y_i - \\hat{y}_i)^2 = \\frac{1}{n} (||\\mathbb{Y} - \\hat{\\mathbb{Y}}||_2)^2\\]\nHere, the superscript 2 outside of the parentheses means that we are squaring the norm. If we plug in our linear model \\(\\hat{\\mathbb{Y}} = \\mathbb{X} \\theta\\), we find the MSE cost function in vector notation:\n\\[R(\\theta) = \\frac{1}{n} (||\\mathbb{Y} - \\mathbb{X} \\theta||_2)^2\\]\nUnder the linear algebra perspective, our new task is to fit the optimal parameter vector \\(\\theta\\) such that the cost function is minimized. 
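Before looking at how this minimization is carried out, here is a minimal sketch confirming that the vectorized cost above matches the familiar average-of-squared-residuals form. The design matrix, response vector, and (deliberately non-optimal) \\(\\theta\\) below are all made up for illustration.

Code
import numpy as np

# Made-up data: n = 4 observations, an intercept column plus two features
X = np.array([
    [1.0, 2.0, 0.0],
    [1.0, 1.0, 3.0],
    [1.0, 4.0, 1.0],
    [1.0, 3.0, 2.0],
])
Y = np.array([3.0, 6.0, 7.0, 8.0])
theta = np.array([0.5, 1.0, 1.5])  # an arbitrary, not-yet-optimized parameter vector

n = len(Y)
Y_hat = X @ theta  # prediction vector, one entry per observation

mse_as_average = np.mean((Y - Y_hat) ** 2)              # (1/n) * sum of squared residuals
mse_as_norm = (np.linalg.norm(Y - X @ theta) ** 2) / n  # (1/n) * squared L2 norm of the residual vector

print(mse_as_average, mse_as_norm)  # identical values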
Equivalently, we wish to minimize the norm \\[||\\mathbb{Y} - \\mathbb{X} \\theta||_2 = ||\\mathbb{Y} - \\hat{\\mathbb{Y}}||_2.\\]\nWe can restate this goal in two ways:\n\nMinimize the distance between the vector of true values, \\(\\mathbb{Y}\\), and the vector of predicted values, \\(\\mathbb{\\hat{Y}}\\)\nMinimize the length of the residual vector, defined as: \\[e = \\mathbb{Y} - \\mathbb{\\hat{Y}} = \\begin{bmatrix}\n y_1 - \\hat{y}_1 \\\\\n y_2 - \\hat{y}_2 \\\\\n \\vdots \\\\\n y_n - \\hat{y}_n\n \\end{bmatrix}\\]\n\n\n\n12.1.4 A Note on Terminology for Multiple Linear Regression\nThere are several equivalent terms in the context of regression. The ones we use most often for this course are bolded.\n\n\\(x\\) can be called a\n\nFeature(s)\nCovariate(s)\nIndependent variable(s)\nExplanatory variable(s)\nPredictor(s)\nInput(s)\nRegressor(s)\n\n\\(y\\) can be called an\n\nOutput\nOutcome\nResponse\nDependent variable\n\n\\(\\hat{y}\\) can be called a\n\nPrediction\nPredicted response\nEstimated value\n\n\\(\\theta\\) can be called a\n\nWeight(s)\nParameter(s)\nCoefficient(s)\n\n\\(\\hat{\\theta}\\) can be called a\n\nEstimator(s)\nOptimal parameter(s)\n\nA datapoint \\((x, y)\\) is also called an observation." + }, + { + "objectID": "ols/ols.html#geometric-derivation", + "href": "ols/ols.html#geometric-derivation", + "title": "12  Ordinary Least Squares", + "section": "12.2 Geometric Derivation", + "text": "12.2 Geometric Derivation\n\n\n\n\n\n\nLinear Algebra: Span\n\n\n\nRecall that the span or column space of a matrix \\(\\mathbb{X}\\) (denoted \\(span(\\mathbb{X})\\)) is the set of all possible linear combinations of the matrix’s columns. In other words, the span represents every point in space that could possibly be reached by adding and scaling some combination of the matrix columns. Additionally, if each column of \\(\\mathbb{X}\\) has length \\(n\\), \\(span(\\mathbb{X})\\) is a subspace of \\(\\mathbb{R}^{n}\\).\n\n\n\n\n\n\n\n\nLinear Algebra: Matrix-Vector Multiplication\n\n\n\nThere are 2 ways we can think about matrix-vector multiplication\n\nSo far, we’ve thought of our model as horizontally stacked predictions per datapoint\n\n\n\n\n\n\n\n\n\n\n\nHowever, it is helpful sometimes to think of matrix-vector multiplication as performed by columns. We can also think of \\(\\mathbb{Y}\\) as a linear combination of feature vectors, scaled by parameters.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nUp until now, we’ve mostly thought of our model as a scalar product between horizontally stacked observations and the parameter vector. We can also think of \\(\\hat{\\mathbb{Y}}\\) as a linear combination of feature vectors, scaled by the parameters. We use the notation \\(\\mathbb{X}_{:, i}\\) to denote the \\(i\\)th column of the design matrix. You can think of this as following the same convention as used when calling .iloc and .loc. 
“:” means that we are taking all entries in the \\(i\\)th column.\n\n\n\n\n\n\n\n\n\n\n\n\\[\n\\hat{\\mathbb{Y}} =\n\\theta_0 \\begin{bmatrix}\n 1 \\\\\n 1 \\\\\n \\vdots \\\\\n 1\n \\end{bmatrix} + \\theta_1 \\begin{bmatrix}\n x_{11} \\\\\n x_{21} \\\\\n \\vdots \\\\\n x_{n1}\n \\end{bmatrix} + \\ldots + \\theta_p \\begin{bmatrix}\n x_{1p} \\\\\n x_{2p} \\\\\n \\vdots \\\\\n x_{np}\n \\end{bmatrix}\n = \\theta_0 \\mathbb{X}_{:,\\:1} + \\theta_1 \\mathbb{X}_{:,\\:2} + \\ldots + \\theta_p \\mathbb{X}_{:,\\:p+1}\\]\nThis new approach is useful because it allows us to take advantage of the properties of linear combinations.\nBecause the prediction vector, \\(\\hat{\\mathbb{Y}} = \\mathbb{X} \\theta\\), is a linear combination of the columns of \\(\\mathbb{X}\\), we know that the predictions are contained in the span of \\(\\mathbb{X}\\). That is, we know that \\(\\mathbb{\\hat{Y}} \\in \\text{Span}(\\mathbb{X})\\).\nThe diagram below is a simplified view of \\(\\text{Span}(\\mathbb{X})\\), assuming that each column of \\(\\mathbb{X}\\) has length \\(n\\). Notice that the columns of \\(\\mathbb{X}\\) define a subspace of \\(\\mathbb{R}^n\\), where each point in the subspace can be reached by a linear combination of \\(\\mathbb{X}\\)’s columns. The prediction vector \\(\\mathbb{\\hat{Y}}\\) lies somewhere in this subspace.\n\n\n\n\n\n\n\n\n\n\n\nExamining this diagram, we find a problem. The vector of true values, \\(\\mathbb{Y}\\), could theoretically lie anywhere in \\(\\mathbb{R}^n\\) space – its exact location depends on the data we collect out in the real world. However, our multiple linear regression model can only make predictions in the subspace of \\(\\mathbb{R}^n\\) spanned by \\(\\mathbb{X}\\). Remember the model fitting goal we established in the previous section: we want to generate predictions such that the distance between the vector of true values, \\(\\mathbb{Y}\\), and the vector of predicted values, \\(\\mathbb{\\hat{Y}}\\), is minimized. This means that we want \\(\\mathbb{\\hat{Y}}\\) to be the vector in \\(\\text{Span}(\\mathbb{X})\\) that is closest to \\(\\mathbb{Y}\\).\nAnother way of rephrasing this goal is to say that we wish to minimize the length of the residual vector \\(e\\), as measured by its \\(L_2\\) norm.\n\n\n\n\n\n\n\n\n\n\n\nThe vector in \\(\\text{Span}(\\mathbb{X})\\) that is closest to \\(\\mathbb{Y}\\) is always the orthogonal projection of \\(\\mathbb{Y}\\) onto \\(\\text{Span}(\\mathbb{X}).\\) Thus, we should choose the parameter vector \\(\\theta\\) that makes the residual vector orthogonal to any vector in \\(\\text{Span}(\\mathbb{X})\\). You can visualize this as the vector created by dropping a perpendicular line from \\(\\mathbb{Y}\\) onto the span of \\(\\mathbb{X}\\).\n\n\n\n\n\n\nLinear Algebra: Orthogonality\n\n\n\nRecall that two vectors \\(\\vec{a}\\) and \\(\\vec{b}\\) are orthogonal if their dot product is zero: \\(\\vec{a}^{T}\\vec{b} = 0\\).\nA vector \\(v\\) is orthogonal to the span of a matrix \\(M\\) if and only if \\(v\\) is orthogonal to each column in \\(M\\). Put together, a vector \\(v\\) is orthogonal to \\(\\text{Span}(M)\\) if:\n\\[M^Tv = \\vec{0}\\]\nNote that \\(\\vec{0}\\) represents the zero vector, a \\(d\\)-length vector full of 0s.\n\n\nRemember our goal is to find \\(\\hat{\\theta}\\) such that we minimize the objective function \\(R(\\theta)\\). 
Equivalently, this is the \\(\\hat{\\theta}\\) such that the residual vector \\(e = \\mathbb{Y} - \\mathbb{X} \\hat{\\theta}\\) is orthogonal to \\(\\text{Span}(\\mathbb{X})\\).\nLooking at the definition of orthogonality of \\(\\mathbb{Y} - \\mathbb{X}\\hat{\\theta}\\) to \\(span(\\mathbb{X})\\), we can write: \\[\\mathbb{X}^T (\\mathbb{Y} - \\mathbb{X}\\hat{\\theta}) = \\vec{0}\\]\nLet’s then rearrange the terms: \\[\\mathbb{X}^T \\mathbb{Y} - \\mathbb{X}^T \\mathbb{X} \\hat{\\theta} = \\vec{0}\\]\nAnd finally, we end up with the normal equation: \\[\\mathbb{X}^T \\mathbb{X} \\hat{\\theta} = \\mathbb{X}^T \\mathbb{Y}\\]\nAny parameter vector \\(\\theta\\) that minimizes the MSE on a dataset must satisfy this equation.\nIf \\(\\mathbb{X}^T \\mathbb{X}\\) is invertible, we can conclude: \\[\\hat{\\theta} = (\\mathbb{X}^T \\mathbb{X})^{-1} \\mathbb{X}^T \\mathbb{Y}\\]\nThis is called the least squares estimate of \\(\\theta\\): it is the value of \\(\\theta\\) that minimizes the squared loss.\nNote that the least squares estimate was derived under the assumption that \\(\\mathbb{X}^T \\mathbb{X}\\) is invertible. This condition holds when \\(\\mathbb{X}^T \\mathbb{X}\\) is full rank, which, in turn, happens exactly when \\(\\mathbb{X}\\) is full column rank. The proof for why \\(\\mathbb{X}\\) needs to be full column rank is optional and in the Bonus section at the end."\n  },
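To make the derivation above concrete, here is a minimal numpy sketch, not taken from the course materials: the arrays `X` and `Y` are invented stand-ins for a design matrix (with an intercept column) and an observation vector. It solves the normal equation directly and checks the result against numpy's built-in least squares solver.

```python
import numpy as np

# Hypothetical data: a design matrix with an intercept column and two features.
rng = np.random.default_rng(42)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
Y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=n)

# Solve the normal equation X^T X theta_hat = X^T Y.
# Solving the linear system is preferred over explicitly inverting X^T X.
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# numpy's least squares routine should agree.
theta_lstsq, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(theta_hat)
print(np.allclose(theta_hat, theta_lstsq))  # True
```

In practice, `np.linalg.solve` or `np.linalg.lstsq` is used rather than `np.linalg.inv` because solving the system directly is faster and numerically more stable.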
\n  {\n    "objectID": "ols/ols.html#evaluating-model-performance",\n    "href": "ols/ols.html#evaluating-model-performance",\n    "title": "12  Ordinary Least Squares",\n    "section": "12.3 Evaluating Model Performance",\n    "text": "12.3 Evaluating Model Performance\nOur geometric view of multiple linear regression has taken us far! We have identified the optimal set of parameter values to minimize MSE in a model of multiple features. Now, we want to understand how well our fitted model performs.\n\n12.3.1 RMSE\nOne measure of model performance is the Root Mean Squared Error, or RMSE. The RMSE is simply the square root of MSE. Taking the square root converts the value back into the original, non-squared units of \\(y_i\\), which is useful for understanding the model’s performance. A low RMSE indicates more “accurate” predictions – that there is a lower average loss across the dataset.\n\\[\\text{RMSE} = \\sqrt{\\frac{1}{n} \\sum_{i=1}^n (y_i - \\hat{y}_i)^2}\\]\n\n\n12.3.2 Residual Plots\nWhen working with SLR, we generated plots of the residuals against a single feature to understand the behavior of residuals. When working with several features in multiple linear regression, it no longer makes sense to consider a single feature in our residual plots. Instead, multiple linear regression is evaluated by making plots of the residuals against the predicted values. As was the case with SLR, a multiple linear model performs well if its residual plot shows no patterns.\n\n\n\n\n\n\n\n\n\n\n\n\n\n12.3.3 Multiple \\(R^2\\)\nFor SLR, we used the correlation coefficient to capture the association between the target variable and a single feature variable. In a multiple linear model setting, we will need a performance metric that can account for multiple features at once. Multiple \\(R^2\\), also called the coefficient of determination, is the ratio of the variance of our fitted values (predictions) \\(\\hat{y}_i\\) to the variance of our true values \\(y_i\\). It ranges from 0 to 1 and is effectively the proportion of variance in the observations that the model explains.\n\\[R^2 = \\frac{\\text{variance of } \\hat{y}_i}{\\text{variance of } y_i} = \\frac{\\sigma^2_{\\hat{y}}}{\\sigma^2_y}\\]\nNote that for OLS with an intercept term, for example \\(\\hat{y} = \\theta_0 + \\theta_1x_1 + \\theta_2x_2 + \\cdots + \\theta_px_p\\), \\(R^2\\) is equal to the square of the correlation between \\(y\\) and \\(\\hat{y}\\). On the other hand, for SLR, \\(R^2\\) is equal to \\(r^2\\), the square of the correlation between \\(x\\) and \\(y\\). The proof of these last two properties is out of scope for this course.\nAdditionally, as we add more features, our fitted values tend to become closer and closer to our actual values. Thus, \\(R^2\\) increases.\nAdding more features doesn’t always mean our model is better though! We’ll see why later in the course."\n  },\n  {\n    "objectID": "ols/ols.html#ols-properties",\n    "href": "ols/ols.html#ols-properties",\n    "title": "12  Ordinary Least Squares",\n    "section": "12.4 OLS Properties",\n    "text": "12.4 OLS Properties\n\nWhen using the optimal parameter vector, our residuals \\(e = \\mathbb{Y} - \\hat{\\mathbb{Y}}\\) are orthogonal to \\(span(\\mathbb{X})\\).\n\n\\[\\mathbb{X}^Te = 0 \\]\n\n\n\n\n\n\nProof:\n\nThe optimal parameter vector, \\(\\hat{\\theta}\\), solves the normal equations \\(\\implies \\hat{\\theta} = (\\mathbb{X}^T\\mathbb{X})^{-1}\\mathbb{X}^T\\mathbb{Y}\\)\n\n\\[\\mathbb{X}^Te = \\mathbb{X}^T (\\mathbb{Y} - \\mathbb{\\hat{Y}}) \\]\n\\[\\mathbb{X}^T (\\mathbb{Y} - \\mathbb{X}\\hat{\\theta}) = \\mathbb{X}^T\\mathbb{Y} - \\mathbb{X}^T\\mathbb{X}\\hat{\\theta}\\]\n\nAny invertible matrix multiplied by its inverse is the identity matrix \\(\\mathbb{I}\\)\n\n\\[\\mathbb{X}^T\\mathbb{Y} - (\\mathbb{X}^T\\mathbb{X})(\\mathbb{X}^T\\mathbb{X})^{-1}\\mathbb{X}^T\\mathbb{Y} = \\mathbb{X}^T\\mathbb{Y} - \\mathbb{X}^T\\mathbb{Y} = 0\\]\n\n\n\n\nFor all linear models with an intercept term, the sum of residuals is zero.\n\n\\[\\sum_i^n e_i = 0\\]\n\n\n\n\n\n\nProof:\n\nFor all linear models with an intercept term, the average of the predicted \\(y\\) values is equal to the average of the true \\(y\\) values (a consequence of the orthogonality property above applied to the intercept column of \\(1\\)s). \\[\\bar{y} = \\bar{\\hat{y}}\\]\nRewriting the sum of residuals as two separate sums, \\[\\sum_i^n e_i = \\sum_i^n y_i - \\sum_i^n\\hat{y}_i\\]\nEach sum is \\(n\\) times the corresponding average. \\[\\sum_i^n e_i = n\\bar{y} - n\\bar{y} = n(\\bar{y} - \\bar{y}) = 0\\]\n\n\n\n\nTo summarize:\n\n\n\n\n\n\n\n\n\n\nModel\nEstimate\nUnique?\n\n\n\n\nConstant Model + MSE\n\\(\\hat{y} = \\theta_0\\)\n\\(\\hat{\\theta}_0 = mean(y) = \\bar{y}\\)\nYes. Any set of values has a unique mean.\n\n\nConstant Model + MAE\n\\(\\hat{y} = \\theta_0\\)\n\\(\\hat{\\theta}_0 = median(y)\\)\nYes, if \\(n\\) is odd. No, if \\(n\\) is even; by convention, we return the average of the two middle values.\n\n\nSimple Linear Regression + MSE\n\\(\\hat{y} = \\theta_0 + \\theta_1x\\)\n\\(\\hat{\\theta}_0 = \\bar{y} - \\hat{\\theta}_1\\bar{x}\\) \\(\\hat{\\theta}_1 = r\\frac{\\sigma_y}{\\sigma_x}\\)\nYes. Any set of non-constant* values has a unique mean, SD, and correlation coefficient.\n\n\nOLS (Linear Model + MSE)\n\\(\\mathbb{\\hat{Y}} = \\mathbb{X}\\mathbb{\\theta}\\)\n\\(\\hat{\\theta} = (\\mathbb{X}^T\\mathbb{X})^{-1}\\mathbb{X}^T\\mathbb{Y}\\)\nYes, if \\(\\mathbb{X}\\) is full column rank (all columns are linearly independent; this requires at least as many datapoints as features)."
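The two properties above, along with the R^2 identity mentioned earlier, are easy to verify numerically. The sketch below is illustrative only; the data and variable names are invented, not part of the course code.

```python
import numpy as np

# Hypothetical data: intercept column plus two features.
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
Y = 3.0 + 2.0 * X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=n)

theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
Y_hat = X @ theta_hat
e = Y - Y_hat

print(np.allclose(X.T @ e, 0))             # residuals are orthogonal to span(X)
print(np.isclose(e.sum(), 0))              # residuals sum to zero (intercept model)
print(np.isclose(Y.mean(), Y_hat.mean()))  # average prediction equals average y

# Multiple R^2: variance of the fitted values over variance of the observations.
# For OLS with an intercept, it equals the squared correlation between Y and Y_hat.
r2 = Y_hat.var() / Y.var()
print(np.isclose(r2, np.corrcoef(Y, Y_hat)[0, 1] ** 2))  # True
```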
+ }, + { + "objectID": "ols/ols.html#bonus-uniqueness-of-the-solution", + "href": "ols/ols.html#bonus-uniqueness-of-the-solution", + "title": "12  Ordinary Least Squares", + "section": "12.5 Bonus: Uniqueness of the Solution", + "text": "12.5 Bonus: Uniqueness of the Solution\nThe Least Squares estimate \\(\\hat{\\theta}\\) is unique if and only if \\(\\mathbb{X}\\) is full column rank.\n\n\n\n\n\n\nProof:\n\nWe know the solution to the normal equation \\(\\mathbb{X}^T\\mathbb{X}\\hat{\\theta} = \\mathbb{X}^T\\mathbb{Y}\\) is the least square estimate that minimizes the squared loss.\n\\(\\hat{\\theta}\\) has a unique solution \\(\\iff\\) the square matrix \\(\\mathbb{X}^T\\mathbb{X}\\) is invertible \\(\\iff\\) \\(\\mathbb{X}^T\\mathbb{X}\\) is full rank.\n\nThe column rank of a square matrix is the max number of linearly independent columns it contains.\nAn \\(n\\) x \\(n\\) square matrix is deemed full column rank when all of its columns are linearly independent. That is, its rank would be equal to \\(n\\).\n\\(\\mathbb{X}^T\\mathbb{X}\\) has shape \\((p + 1) \\times (p + 1)\\), and therefore has max rank \\(p + 1\\).\n\n\\(rank(\\mathbb{X}^T\\mathbb{X})\\) = \\(rank(\\mathbb{X})\\) (proof out of scope).\nTherefore, \\(\\mathbb{X}^T\\mathbb{X}\\) has rank \\(p + 1\\) \\(\\iff\\) \\(\\mathbb{X}\\) has rank \\(p + 1\\) \\(\\iff \\mathbb{X}\\) is full column rank.\n\n\n\n\nTherefore, if \\(\\mathbb{X}\\) is not full column rank, we will not have unique estimates. This can happen for two major reasons.\n\nIf our design matrix \\(\\mathbb{X}\\) is “wide”:\n\nIf n < p, then we have way more features (columns) than observations (rows).\nThen \\(rank(\\mathbb{X})\\) = min(n, p+1) < p+1, so \\(\\hat{\\theta}\\) is not unique.\nTypically we have n >> p so this is less of an issue.\n\nIf our design matrix \\(\\mathbb{X}\\) has features that are linear combinations of other features:\n\nBy definition, rank of \\(\\mathbb{X}\\) is number of linearly independent columns in \\(\\mathbb{X}\\).\nExample: If “Width”, “Height”, and “Perimeter” are all columns,\n\nPerimeter = 2 * Width + 2 * Height \\(\\rightarrow\\) \\(\\mathbb{X}\\) is not full rank.\n\nImportant with one-hot encoding (to discuss later)." + } +] \ No newline at end of file
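As a quick numerical illustration of the second failure case in the bonus section (a feature that is a linear combination of other features), the sketch below builds a hypothetical design matrix with Width, Height, and Perimeter = 2 * Width + 2 * Height columns, confirms that it is not full column rank, and exhibits two different parameter vectors with identical predictions, so the least squares estimate is not unique. All column names and data here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
width = rng.uniform(1, 10, size=n)
height = rng.uniform(1, 10, size=n)
perimeter = 2 * width + 2 * height  # exact linear combination of the other features

# Design matrix: intercept, Width, Height, Perimeter -> 4 columns, but only rank 3.
X = np.column_stack([np.ones(n), width, height, perimeter])
print(np.linalg.matrix_rank(X))        # 3, so X is not full column rank
print(np.linalg.matrix_rank(X.T @ X))  # also 3, so X^T X is not invertible

# lstsq still returns *a* minimizer, but it is not unique: adding any vector in the
# null space of X (here, 2*Width + 2*Height - Perimeter = 0) gives another solution.
Y = 5 + 2 * width + 3 * height + rng.normal(size=n)
theta_a, *_ = np.linalg.lstsq(X, Y, rcond=None)
theta_b = theta_a + np.array([0.0, 2.0, 2.0, -1.0])
print(np.allclose(X @ theta_a, X @ theta_b))  # True: same predictions, different parameters
```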