<!DOCTYPE html>
<html lang="en">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<title>Chapter 2 Introduction | Machine Learning for Factor Investing</title>
<meta name="author" content="Guillaume Coqueret and Tony Guida">
<meta name="generator" content="bookdown 0.24 with bs4_book()">
<meta property="og:title" content="Chapter 2 Introduction | Machine Learning for Factor Investing">
<meta property="og:type" content="book">
<meta name="twitter:card" content="summary">
<meta name="twitter:title" content="Chapter 2 Introduction | Machine Learning for Factor Investing">
<!-- JS --><script src="https://cdnjs.cloudflare.com/ajax/libs/clipboard.js/2.0.6/clipboard.min.js" integrity="sha256-inc5kl9MA1hkeYUt+EC3BhlIgyp/2jDIyBLS6k3UxPI=" crossorigin="anonymous"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/fuse.js/6.4.6/fuse.js" integrity="sha512-zv6Ywkjyktsohkbp9bb45V6tEMoWhzFzXis+LrMehmJZZSys19Yxf1dopHx7WzIKxr5tK2dVcYmaCk2uqdjF4A==" crossorigin="anonymous"></script><script src="https://kit.fontawesome.com/6ecbd6c532.js" crossorigin="anonymous"></script><script src="libs/header-attrs-2.11/header-attrs.js"></script><script src="libs/jquery-3.6.0/jquery-3.6.0.min.js"></script><meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<link href="libs/bootstrap-4.6.0/bootstrap.min.css" rel="stylesheet">
<script src="libs/bootstrap-4.6.0/bootstrap.bundle.min.js"></script><script src="libs/bs3compat-0.3.1/transition.js"></script><script src="libs/bs3compat-0.3.1/tabs.js"></script><script src="libs/bs3compat-0.3.1/bs3compat.js"></script><link href="libs/bs4_book-1.0.0/bs4_book.css" rel="stylesheet">
<script src="libs/bs4_book-1.0.0/bs4_book.js"></script><script src="libs/kePrint-0.0.1/kePrint.js"></script><link href="libs/lightable-0.0.1/lightable.css" rel="stylesheet">
<script src="https://cdnjs.cloudflare.com/ajax/libs/autocomplete.js/0.38.0/autocomplete.jquery.min.js" integrity="sha512-GU9ayf+66Xx2TmpxqJpliWbT5PiGYxpaG8rfnBEk1LL8l1KGkRShhngwdXK1UgqhAzWpZHSiYPc09/NwDQIGyg==" crossorigin="anonymous"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/mark.js/8.11.1/mark.min.js" integrity="sha512-5CYOlHXGh6QpOFA/TeTylKLWfB3ftPsde7AnmhuitiTX4K5SqCLBeKro6sPS8ilsz1Q4NRx3v8Ko2IBiszzdww==" crossorigin="anonymous"></script><!-- CSS --><meta name="description" content=".container-fluid main { max-width: 60rem; } Conclusions often echo introductions. This chapter was completed at the very end of the writing of the book. It outlines principles and ideas that are...">
<meta property="og:description" content=".container-fluid main { max-width: 60rem; } Conclusions often echo introductions. This chapter was completed at the very end of the writing of the book. It outlines principles and ideas that are...">
<meta name="twitter:description" content=".container-fluid main { max-width: 60rem; } Conclusions often echo introductions. This chapter was completed at the very end of the writing of the book. It outlines principles and ideas that are...">
</head>
<body data-spy="scroll" data-target="#toc">
<div class="container-fluid">
<div class="row">
<header class="col-sm-12 col-lg-3 sidebar sidebar-book"><a class="sr-only sr-only-focusable" href="#content">Skip to main content</a>
<div class="d-flex align-items-start justify-content-between">
<h1>
<a href="index.html" title="">Machine Learning for Factor Investing</a>
</h1>
<button class="btn btn-outline-primary d-lg-none ml-2 mt-1" type="button" data-toggle="collapse" data-target="#main-nav" aria-expanded="true" aria-controls="main-nav"><i class="fas fa-bars"></i><span class="sr-only">Show table of contents</span></button>
</div>
<div id="main-nav" class="collapse-lg">
<form role="search">
<input id="search" class="form-control" type="search" placeholder="Search" aria-label="Search">
</form>
<nav aria-label="Table of contents"><h2>Table of contents</h2>
<ul class="book-toc list-unstyled">
<li><a class="" href="index.html">Preface</a></li>
<li class="book-part">Introduction</li>
<li><a class="" href="notdata.html"><span class="header-section-number">1</span> Notations and data</a></li>
<li><a class="active" href="intro.html"><span class="header-section-number">2</span> Introduction</a></li>
<li><a class="" href="factor.html"><span class="header-section-number">3</span> Factor investing and asset pricing anomalies</a></li>
<li><a class="" href="Data.html"><span class="header-section-number">4</span> Data preprocessing</a></li>
<li class="book-part">Common supervised algorithms</li>
<li><a class="" href="lasso.html"><span class="header-section-number">5</span> Penalized regressions and sparse hedging for minimum variance portfolios</a></li>
<li><a class="" href="trees.html"><span class="header-section-number">6</span> Tree-based methods</a></li>
<li><a class="" href="NN.html"><span class="header-section-number">7</span> Neural networks</a></li>
<li><a class="" href="svm.html"><span class="header-section-number">8</span> Support vector machines</a></li>
<li><a class="" href="bayes.html"><span class="header-section-number">9</span> Bayesian methods</a></li>
<li class="book-part">From predictions to portfolios</li>
<li><a class="" href="valtune.html"><span class="header-section-number">10</span> Validating and tuning</a></li>
<li><a class="" href="ensemble.html"><span class="header-section-number">11</span> Ensemble models</a></li>
<li><a class="" href="backtest.html"><span class="header-section-number">12</span> Portfolio backtesting</a></li>
<li class="book-part">Further important topics</li>
<li><a class="" href="interp.html"><span class="header-section-number">13</span> Interpretability</a></li>
<li><a class="" href="causality.html"><span class="header-section-number">14</span> Two key concepts: causality and non-stationarity</a></li>
<li><a class="" href="unsup.html"><span class="header-section-number">15</span> Unsupervised learning</a></li>
<li><a class="" href="RL.html"><span class="header-section-number">16</span> Reinforcement learning</a></li>
<li class="book-part">Appendix</li>
<li><a class="" href="data-description.html"><span class="header-section-number">17</span> Data description</a></li>
<li><a class="" href="python.html"><span class="header-section-number">18</span> Python notebooks</a></li>
<li><a class="" href="solutions-to-exercises.html"><span class="header-section-number">19</span> Solutions to exercises</a></li>
</ul>
<div class="book-extra">
</div>
</nav>
</div>
</header><main class="col-sm-12 col-md-9 col-lg-7" id="content"><div id="intro" class="section level1" number="2">
<h1>
<span class="header-section-number">2</span> Introduction<a class="anchor" aria-label="anchor" href="#intro"><i class="fas fa-link"></i></a>
</h1>
<style>
.container-fluid main {
max-width: 60rem;
}
</style>
<p>Conclusions often echo introductions. This chapter was completed at the very end of the writing of the book. It outlines principles and ideas that are probably more relevant than the sum of technical details covered subsequently. When stuck with disappointing results, we advise the reader to take a step away from the algorithm and come back to this section to get a broader perspective of some of the issues in predictive modelling.</p>
<div id="context" class="section level2" number="2.1">
<h2>
<span class="header-section-number">2.1</span> Context<a class="anchor" aria-label="anchor" href="#context"><i class="fas fa-link"></i></a>
</h2>
<p>The blossoming of machine learning in factor investing has its source at the confluence of three favorable developments: data availability, computational capacity, and economic grounding.</p>
<p>First, the <strong>data</strong>. Nowadays, classical providers, such as Bloomberg and Reuters, have seen their playing field invaded by niche players and aggregation platforms.<a href="solutions-to-exercises.html#fn4" class="footnote-ref" id="fnref4"><sup>4</sup></a> In addition, high-frequency data and derivative quotes have become mainstream. Hence, firm-specific attributes are easy and often cheap to compile. This means that the size of <span class="math inline">\(\mathbf{X}\)</span> in <a href="intro.html#eq:ML">(2.1)</a> is now sufficiently large to be plugged into ML algorithms. The order of magnitude (in 2019) that can be reached is the following: a few hundred monthly observations over several thousand stocks (US-listed at least) covering a few hundred attributes. This makes a dataset of dozens of millions of points. While this is a reasonably high figure, we highlight that the chronological depth is probably the weak point, and it will remain so for decades to come because accounting figures are only released on a quarterly basis. Needless to say, this drawback does not hold for high-frequency strategies.</p>
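<p>As a quick back-of-the-envelope check of this order of magnitude (the figures below are illustrative assumptions, not actual provider counts):</p>
<pre><code class="r">n_months   <- 200   # a few hundred monthly observations
n_stocks   <- 3000  # several thousand (US-listed) stocks
n_features <- 100   # on the order of a hundred firm attributes
n_months * n_stocks * n_features  # 6e+07: dozens of millions of points</code></pre>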
<p>Second, <strong>computational power</strong>, both through hardware and software. Storage and processing speed are no longer technical hurdles, and models can even be run on the cloud thanks to services hosted by major actors (Amazon, Microsoft, IBM and Google) and by smaller players (Rackspace, Techila). On the software side, open source has become the norm, funded by corporations (TensorFlow & Keras by Google, PyTorch by Facebook, h2o, etc.), universities (Scikit-Learn by INRIA, CoreNLP by Stanford, NLTK by UPenn) and small groups of researchers (caret, xgboost, tidymodels, to list but a few frameworks). Consequently, ML is no longer the private turf of a handful of expert computer scientists, but is on the contrary <strong>accessible</strong> to anyone willing to learn and code.</p>
<p>Finally, <strong>economic framing</strong>. Machine learning applications in finance were initially introduced by computer scientists and information system experts (e.g., <span class="citation">Braun and Chandler (<a href="solutions-to-exercises.html#ref-braun1987predicting" role="doc-biblioref">1987</a>)</span>, <span class="citation">White (<a href="solutions-to-exercises.html#ref-white1988economic" role="doc-biblioref">1988</a>)</span>) and exploited shortly after by academics in financial economics (<span class="citation">Bansal and Viswanathan (<a href="solutions-to-exercises.html#ref-bansal1993no" role="doc-biblioref">1993</a>)</span>) and by hedge funds (see, e.g., <span class="citation">Zuckerman (<a href="solutions-to-exercises.html#ref-zuckerman2019man" role="doc-biblioref">2019</a>)</span>). Nonlinear relationships then became more mainstream in asset pricing (<span class="citation">Freeman and Tse (<a href="solutions-to-exercises.html#ref-freeman1992nonlinear" role="doc-biblioref">1992</a>)</span>, <span class="citation">Bansal, Hsieh, and Viswanathan (<a href="solutions-to-exercises.html#ref-bansal1993new" role="doc-biblioref">1993</a>)</span>). These contributions started to pave the way for the more brute-force approaches that have blossomed since the 2010s and which are mentioned throughout the book.</p>
<p>In the synthesis proposed by <span class="citation">R. Arnott, Harvey, and Markowitz (<a href="solutions-to-exercises.html#ref-arnott2019backtesting" role="doc-biblioref">2019</a>)</span>, the first piece of advice is to rely on a model that makes sense economically. We agree with this stance, and the only assumption that we make in this book is that future returns depend on firm characteristics. The relationship between these features and performance is largely unknown and probably time-varying. This is why ML can be useful: to detect some hidden patterns beyond the documented asset pricing anomalies. Moreover, dynamic training makes it possible to adapt to changing market conditions.</p>
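<p>To give a more concrete (though hedged) picture of what such dynamic training can look like, the sketch below re-estimates the model every month on a rolling window of past observations before predicting the next period. The panel layout and column names (<code>date</code>, <code>r_lead</code>, <code>mkt_cap</code>, <code>book_to_market</code>, <code>momentum</code>) are assumptions made for illustration, and a plain linear regression stands in for the prediction function.</p>
<pre><code class="r">library(dplyr)

# Minimal sketch of dynamic (rolling-window) training. Assumes a panel with one
# row per (date, stock), firm characteristics as columns, and r_lead holding
# the next month's return.
rolling_predict <- function(panel, window_months = 60) {
  dates <- sort(unique(panel$date))
  out <- list()
  for (t in seq_along(dates)) {
    if (t <= window_months) next                    # wait for enough history
    train <- filter(panel, date %in% dates[(t - window_months):(t - 1)])
    test  <- filter(panel, date == dates[t])
    fit   <- lm(r_lead ~ mkt_cap + book_to_market + momentum, data = train)
    out[[t]] <- mutate(test, pred = predict(fit, newdata = test))
  }
  bind_rows(out)   # predictions, each made with a freshly re-trained model
}</code></pre>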
</div>
<div id="portfolio-construction-the-workflow" class="section level2" number="2.2">
<h2>
<span class="header-section-number">2.2</span> Portfolio construction: the workflow<a class="anchor" aria-label="anchor" href="#portfolio-construction-the-workflow"><i class="fas fa-link"></i></a>
</h2>
<p>Building successful portfolio strategies requires many steps. This book covers many of them but focuses predominantly on the prediction part. Indeed, allocating to assets most of the time requires making bets, and thus anticipating which ones will do well and which ones will not. In this book, we mostly resort to supervised learning to forecast returns in the cross-section. The baseline equation in supervised learning,</p>
<p><span class="math display" id="eq:ML">\[\begin{equation}
\mathbf{y}=f(\mathbf{X})+\mathbf{\epsilon},
\tag{2.1}
\end{equation}\]</span></p>
<p>is translated in financial terms as</p>
<p><span class="math display" id="eq:MLfin">\[\begin{equation}
\mathbf{r}_{t+1,n}=f(\mathbf{x}_{t,n})+\mathbf{\epsilon}_{t+1,n},
\tag{2.2}
\end{equation}\]</span>
where <span class="math inline">\(f(\mathbf{x}_{t,n})\)</span> can be viewed as the <strong>expected return</strong> for time <span class="math inline">\(t+1\)</span> computed at time <span class="math inline">\(t\)</span>, that is, <span class="math inline">\(\mathbb{E}_t[r_{t+1,n}]\)</span>. Note that the model is <strong>common to all assets</strong> (<span class="math inline">\(f\)</span> is not indexed by <span class="math inline">\(n\)</span>), and thus shares similarities with panel approaches.</p>
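<p>To make this concrete, the minimal sketch below simulates a toy panel (purely synthetic numbers) and fits a single pooled model across all stocks, so that <span class="math inline">\(f\)</span> is indeed not indexed by <span class="math inline">\(n\)</span>. The characteristics and the linear form of <span class="math inline">\(f\)</span> are illustrative assumptions only.</p>
<pre><code class="r">library(dplyr)
set.seed(42)

# Toy panel: 100 stocks over 60 months, three firm characteristics (synthetic).
panel <- expand.grid(stock_id = 1:100, date = 1:60) %>%
  mutate(mkt_cap        = rnorm(n()),
         book_to_market = rnorm(n()),
         momentum       = rnorm(n()),
         return         = rnorm(n(), sd = 0.10)) %>%   # purely random returns
  group_by(stock_id) %>%
  arrange(date, .by_group = TRUE) %>%
  mutate(r_lead = lead(return)) %>%                    # r_{t+1,n}: next month's return
  ungroup() %>%
  filter(!is.na(r_lead))

# One model f for all assets (f is not indexed by n), as in equation (2.2)
fit <- lm(r_lead ~ mkt_cap + book_to_market + momentum, data = panel)
summary(fit)$coefficients</code></pre>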
<p>Building accurate predictions requires paying attention to <strong>all</strong> terms in the above equation. Chronologically, the first step is to gather data and to process it (see Chapter <a href="Data.html#Data">4</a>). To the best of our knowledge, the only consensus is that, on the <span class="math inline">\(\textbf{x}\)</span> side, the features should include classical predictors reported in the literature: market capitalization, accounting ratios, risk measures, and momentum proxies (see Chapter <a href="factor.html#factor">3</a>). For the dependent variable, many researchers and practitioners work with monthly returns, but other horizons may perform better out-of-sample.</p>
<p>While it is tempting to believe that the most crucial part is the choice of <span class="math inline">\(f\)</span> (it is the most sophisticated, mathematically), we believe that the choice and engineering of inputs, that is, the variables, are at least as important. The usual modelling families for <span class="math inline">\(f\)</span> are covered in Chapters <a href="lasso.html#lasso">5</a> to <a href="bayes.html#bayes">9</a>. Finally, the errors <span class="math inline">\(\mathbf{\epsilon}_{t+1,n}\)</span> are often overlooked. People consider that a vanilla quadratic loss is the best way to go (it is the most common choice, for sure!), and thus the mainstream objective is to minimize squared errors. In fact, other options may be wiser choices (see for instance Section <a href="NN.html#custloss">7.4.3</a>).</p>
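<p>As a small illustration of this last point, the sketch below fits the same linear <span class="math inline">\(f\)</span> under two loss functions: the mainstream quadratic loss and an absolute-value loss that is less sensitive to extreme returns. It reuses the toy <code>panel</code> simulated above; the optimizer and the loss choices are illustrative, not recommendations.</p>
<pre><code class="r"># Same linear f, two different loss functions (reuses the toy panel above).
X <- model.matrix(~ mkt_cap + book_to_market + momentum, data = panel)
y <- panel$r_lead

fit_with_loss <- function(loss) {
  objective <- function(beta) sum(loss(y - X %*% beta))
  optim(par = rep(0, ncol(X)), fn = objective)$par   # generic numerical minimization
}

beta_l2 <- fit_with_loss(function(e) e^2)     # mainstream: minimize squared errors
beta_l1 <- fit_with_loss(function(e) abs(e))  # alternative: minimize absolute errors
round(cbind(beta_l2, beta_l1), 4)</code></pre>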
<p>Even if the overall process, depicted in Figure <a href="intro.html#fig:figscheme2">2.1</a>, seems very sequential, it is more judicious to conceive it as <strong>integrated</strong>. All steps are intertwined and each part should not be dealt with independently from the others.<a href="solutions-to-exercises.html#fn5" class="footnote-ref" id="fnref5"><sup>5</sup></a> The global framing of the problem is essential, from the choice of predictors, to the family of algorithms, not to mention the portfolio weighting schemes (see Chapter <a href="backtest.html#backtest">12</a> for the latter).</p>
<div class="figure" style="text-align: center">
<span style="display:block;" id="fig:figscheme2"></span>
<img src="images/scheme2.png" alt="Simplified workflow in ML-based portfolio construction." width="794"><p class="caption">
FIGURE 2.1: Simplified workflow in ML-based portfolio construction.
</p>
</div>
</div>
<div id="machine-learning-is-no-magic-wand" class="section level2" number="2.3">
<h2>
<span class="header-section-number">2.3</span> Machine learning is no magic wand<a class="anchor" aria-label="anchor" href="#machine-learning-is-no-magic-wand"><i class="fas fa-link"></i></a>
</h2>
<p>By definition, the curse of predictions is that they rely on <strong>past</strong> data to infer patterns about <strong>subsequent</strong> fluctuations. The more or less explicit hope of any forecaster is that the past will turn out to be a good approximation of the future. Needless to say, this is often wishful thinking; in general, predictions fare poorly. Surprisingly, this does not depend much on the sophistication of the econometric tool. In fact, heuristic guesses are often hard to beat.</p>
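<p>A simple discipline that follows from this observation is to always benchmark a model against a naive heuristic, for instance a forecast equal to the historical average return. The sketch below does so out-of-sample on the toy panel simulated earlier in this chapter; the split point and the linear model are illustrative assumptions.</p>
<pre><code class="r"># Out-of-sample comparison against a naive heuristic (reuses the toy panel).
train <- filter(panel, date <= 48)   # first four years to train
test  <- filter(panel, date > 48)    # remaining months to test

fit       <- lm(r_lead ~ mkt_cap + book_to_market + momentum, data = train)
mse_model <- mean((test$r_lead - predict(fit, newdata = test))^2)
mse_naive <- mean((test$r_lead - mean(train$r_lead))^2)
c(model = mse_model, naive = mse_naive)   # the naive benchmark is often hard to beat</code></pre>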
<!--Hard to translate computer vision or textual ML into financial predictions. -->
<p>To illustrate this sad truth, the baseline algorithms that we detail in Chapters <a href="lasso.html#lasso">5</a> to <a href="NN.html#NN">7</a> yield at best mediocre results. This is done <strong>on purpose</strong>. This forces the reader to understand that blindly feeding data and parameters to a coded function will seldom suffice to reach satisfactory out-of-sample accuracy.</p>
<p>Below, we sum up some key points that we have learned through our exploratory journey in financial ML.</p>
<ul>
<li>The first point is that <strong>causality</strong> is key. If one is able to identify <span class="math inline">\(X \rightarrow y\)</span>, where <span class="math inline">\(y\)</span> denotes expected returns, then the problem is solved. Unfortunately, causality is incredibly hard to uncover.<br>
</li>
<li>Thus, researchers most of the time have to make do with simple <strong>correlation</strong> patterns, which are far less informative and robust.<br>
</li>
<li>Relatedly, financial datasets are extremely noisy. It is a daunting task to <strong>extract signals</strong> out of them. <strong>No-arbitrage</strong> arguments imply that if a simple pattern yielded durable profits, it would mechanically and rapidly vanish.<br>
</li>
<li>The no-free-lunch theorem of <span class="citation">Wolpert (<a href="solutions-to-exercises.html#ref-wolpert1992connection" role="doc-biblioref">1992a</a>)</span> requires that the analyst formulate views on the model. This is why economic or <strong>econometric framing</strong> is key. The assumptions and choices that are made regarding both the dependent variables and the explanatory features are decisive. As a corollary, data is key. The inputs given to the models are probably much more important than the choice of the model itself.<br>
</li>
<li>To maximize out-of-sample efficiency, the right question to ask is probably, to paraphrase Jeff Bezos: what’s not going to change? <strong>Persistent</strong> series are more likely to unveil enduring patterns.<br>
</li>
<li>Everybody makes mistakes. Errors in loops or variable indexing are part of the journey. What matters is to <strong>learn</strong> from those lapses.</li>
</ul>
<p>To conclude, we remind the reader of this obvious truth: nothing will ever replace <strong>practice</strong>. Gathering and cleaning data, coding backtests, tuning ML models, testing weighting schemes, debugging, starting all over again: these are all absolutely indispensable steps and tasks that must be repeated indefinitely. There is no substitute for experience.</p>
</div>
</div>
<div class="chapter-nav">
<div class="prev"><a href="notdata.html"><span class="header-section-number">1</span> Notations and data</a></div>
<div class="next"><a href="factor.html"><span class="header-section-number">3</span> Factor investing and asset pricing anomalies</a></div>
</div></main><div class="col-md-3 col-lg-2 d-none d-md-block sidebar sidebar-chapter">
<nav id="toc" data-toggle="toc" aria-label="On this page"><h2>On this page</h2>
<ul class="nav navbar-nav">
<li><a class="nav-link" href="#intro"><span class="header-section-number">2</span> Introduction</a></li>
<li><a class="nav-link" href="#context"><span class="header-section-number">2.1</span> Context</a></li>
<li><a class="nav-link" href="#portfolio-construction-the-workflow"><span class="header-section-number">2.2</span> Portfolio construction: the workflow</a></li>
<li><a class="nav-link" href="#machine-learning-is-no-magic-wand"><span class="header-section-number">2.3</span> Machine learning is no magic wand</a></li>
</ul>
<div class="book-extra">
<ul class="list-unstyled">
</ul>
</div>
</nav>
</div>
</div>
</div> <!-- .container -->
<footer class="bg-primary text-light mt-5"><div class="container"><div class="row">
<div class="col-12 col-md-6 mt-3">
<p>"<strong>Machine Learning for Factor Investing</strong>" was written by Guillaume Coqueret and Tony Guida. It was last built on 2022-10-18.</p>
</div>
<div class="col-12 col-md-6 mt-3">
<p>This book was built by the <a class="text-light" href="https://bookdown.org">bookdown</a> R package.</p>
</div>
</div></div>
</footer><!-- dynamically load mathjax for compatibility with self-contained --><script>
(function () {
var script = document.createElement("script");
script.type = "text/javascript";
var src = "true";
if (src === "" || src === "true") src = "https://mathjax.rstudio.com/latest/MathJax.js?config=TeX-MML-AM_CHTML";
if (location.protocol !== "file:")
if (/^https?:/.test(src))
src = src.replace(/^https?:/, '');
script.src = src;
document.getElementsByTagName("head")[0].appendChild(script);
})();
</script><script type="text/x-mathjax-config">const popovers = document.querySelectorAll('a.footnote-ref[data-toggle="popover"]');
for (let popover of popovers) {
const div = document.createElement('div');
div.setAttribute('style', 'position: absolute; top: 0, left:0; width:0, height:0, overflow: hidden; visibility: hidden;');
div.innerHTML = popover.getAttribute('data-content');
var has_math = div.querySelector("span.math");
if (has_math) {
document.body.appendChild(div);
MathJax.Hub.Queue(["Typeset", MathJax.Hub, div]);
MathJax.Hub.Queue(function() {
popover.setAttribute('data-content', div.innerHTML);
document.body.removeChild(div);
})
}
}
</script>
</body>
</html>