
Commit

Stop-gap fix.
ludgerpaehler committed Dec 15, 2023
1 parent 9de0c4f commit 14adc2d
Showing 2 changed files with 0 additions and 202 deletions.
123 changes: 0 additions & 123 deletions _site/index.html
@@ -47,129 +47,6 @@ <h2 id="abstract">Abstract</h2>

<p>The Bellman equation and its continuous form, the Hamilton-Jacobi-Bellman (HJB) equation, are ubiquitous in reinforcement learning (RL) and control theory contexts due, in part, to their guaranteed convergence towards a system’s optimal value function. However, this approach has severe limitations. This paper explores the connection between the data-driven Koopman operator and Bellman Markov Decision Processes, resulting in the development of two new RL algorithms to address these limitations. In particular, we focus on Koopman operator methods that reformulate a nonlinear system by lifting into new coordinates where the dynamics become linear, and where HJB-based methods are more tractable. These transformations enable the estimation, prediction, and control of strongly nonlinear dynamics. Viewing the Bellman equation as a controlled dynamical system, the Koopman operator is able to capture the expectation of the time evolution of the value function in the given systems via linear dynamics in the lifted coordinates. By parameterizing the Koopman operator with the control actions, we construct a new <em>Koopman tensor</em> that facilitates the estimation of the optimal value function. Then, a transformation of Bellman’s framework in terms of the Koopman tensor enables us to reformulate two max-entropy RL algorithms: soft-value iteration and soft actor-critic (SAC). This highly flexible framework can be used for deterministic or stochastic systems as well as for discrete or continuous-time dynamics. Finally, we show that these algorithms attain state-of-the-art (SOTA) performance with respect to traditional neural network-based SAC and linear quadratic regulator (LQR) baselines on three controlled dynamical systems: the Lorenz system, fluid flow past a cylinder, and a double-well potential with non-isotropic stochastic forcing. They do all of this while maintaining an interpretability that shows how inputs tend to affect outputs, which we call <em>input-output</em> interpretability.</p>

<h2 id="the-different-aspects-of-the-dataset-visualized">The Different Aspects of the Dataset Visualized</h2>

<h3 id="header-3">Header 3</h3>

<div class="language-js highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// Javascript code with syntax highlighting.</span>
<span class="kd">var</span> <span class="nx">fun</span> <span class="o">=</span> <span class="kd">function</span> <span class="nf">lang</span><span class="p">(</span><span class="nx">l</span><span class="p">)</span> <span class="p">{</span>
  <span class="nx">dateformat</span><span class="p">.</span><span class="nx">i18n</span> <span class="o">=</span> <span class="nf">require</span><span class="p">(</span><span class="dl">'</span><span class="s1">./lang/</span><span class="dl">'</span> <span class="o">+</span> <span class="nx">l</span><span class="p">)</span>
  <span class="k">return</span> <span class="kc">true</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>

<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1"># Ruby code with syntax highlighting</span>
<span class="no">GitHubPages</span><span class="o">::</span><span class="no">Dependencies</span><span class="p">.</span><span class="nf">gems</span><span class="p">.</span><span class="nf">each</span> <span class="k">do</span> <span class="o">|</span><span class="n">gem</span><span class="p">,</span> <span class="n">version</span><span class="o">|</span>
  <span class="n">s</span><span class="p">.</span><span class="nf">add_dependency</span><span class="p">(</span><span class="n">gem</span><span class="p">,</span> <span class="s2">"= </span><span class="si">#{</span><span class="n">version</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="k">end</span>
</code></pre></div></div>

<h4 id="header-4">Header 4</h4>

<ul>
<li>This is an unordered list following a header.</li>
<li>This is an unordered list following a header.</li>
<li>This is an unordered list following a header.</li>
</ul>

<h5 id="header-5">Header 5</h5>

<ol>
<li>This is an ordered list following a header.</li>
<li>This is an ordered list following a header.</li>
<li>This is an ordered list following a header.</li>
</ol>

<h6 id="header-6">Header 6</h6>

<table>
<thead>
<tr>
<th style="text-align: left">head1</th>
<th style="text-align: left">head two</th>
<th style="text-align: left">three</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left">ok</td>
<td style="text-align: left">good swedish fish</td>
<td style="text-align: left">nice</td>
</tr>
<tr>
<td style="text-align: left">out of stock</td>
<td style="text-align: left">good and plenty</td>
<td style="text-align: left">nice</td>
</tr>
<tr>
<td style="text-align: left">ok</td>
<td style="text-align: left">good <code class="language-plaintext highlighter-rouge">oreos</code></td>
<td style="text-align: left">hmm</td>
</tr>
<tr>
<td style="text-align: left">ok</td>
<td style="text-align: left">good <code class="language-plaintext highlighter-rouge">zoute</code> drop</td>
<td style="text-align: left">yumm</td>
</tr>
</tbody>
</table>

<h3 id="theres-a-horizontal-rule-below-this">There’s a horizontal rule below this.</h3>

<hr />

<h3 id="here-is-an-unordered-list">Here is an unordered list:</h3>

<ul>
<li>Item foo</li>
<li>Item bar</li>
<li>Item baz</li>
<li>Item zip</li>
</ul>

<h3 id="and-an-ordered-list">And an ordered list:</h3>

<ol>
<li>Item one</li>
<li>Item two</li>
<li>Item three</li>
<li>Item four</li>
</ol>

<h3 id="and-a-nested-list">And a nested list:</h3>

<ul>
<li>level 1 item
<ul>
<li>level 2 item</li>
<li>level 2 item
<ul>
<li>level 3 item</li>
<li>level 3 item</li>
</ul>
</li>
</ul>
</li>
<li>level 1 item
<ul>
<li>level 2 item</li>
<li>level 2 item</li>
<li>level 2 item</li>
</ul>
</li>
<li>level 1 item
<ul>
<li>level 2 item</li>
<li>level 2 item</li>
</ul>
</li>
<li>level 1 item</li>
</ul>

<h3 id="small-image">Small image</h3>

<p><img src="https://github.githubassets.com/images/icons/emoji/octocat.png" alt="Octocat" /></p>

<h2 id="authors">Authors</h2>

<center>
79 changes: 0 additions & 79 deletions index.md
@@ -8,85 +8,6 @@ title: Koopman-Assisted Reinforcement Learning

The Bellman equation and its continuous form, the Hamilton-Jacobi-Bellman (HJB) equation, are ubiquitous in reinforcement learning (RL) and control theory contexts due, in part, to their guaranteed convergence towards a system’s optimal value function. However, this approach has severe limitations. This paper explores the connection between the data-driven Koopman operator and Bellman Markov Decision Processes, resulting in the development of two new RL algorithms to address these limitations. In particular, we focus on Koopman operator methods that reformulate a nonlinear system by lifting into new coordinates where the dynamics become linear, and where HJB-based methods are more tractable. These transformations enable the estimation, prediction, and control of strongly nonlinear dynamics. Viewing the Bellman equation as a controlled dynamical system, the Koopman operator is able to capture the expectation of the time evolution of the value function in the given systems via linear dynamics in the lifted coordinates. By parameterizing the Koopman operator with the control actions, we construct a new _Koopman tensor_ that facilitates the estimation of the optimal value function. Then, a transformation of Bellman’s framework in terms of the Koopman tensor enables us to reformulate two max-entropy RL algorithms: soft-value iteration and soft actor-critic (SAC). This highly flexible framework can be used for deterministic or stochastic systems as well as for discrete or continuous-time dynamics. Finally, we show that these algorithms attain state-of-the-art (SOTA) performance with respect to traditional neural network-based SAC and linear quadratic regulator (LQR) baselines on three controlled dynamical systems: the Lorenz system, fluid flow past a cylinder, and a double-well potential with non-isotropic stochastic forcing. They do all of this while maintaining an interpretability that shows how inputs tend to affect outputs, which we call _input-output_ interpretability.
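
A rough sketch of the relationship described above, in assumed notation (the symbols $\mathcal{K}^u$, $V$, $r$, $\gamma$, $\phi$, $\psi$, and $T$ are chosen here for exposition and are not taken verbatim from the paper):

```latex
% Illustrative sketch only; all symbols are assumptions for exposition.
% K^u  : Koopman operator associated with action u
% V    : value function,  r : reward,  gamma : discount factor
% phi  : state dictionary (the lifting),  psi : action dictionary
% T    : Koopman tensor

% The Koopman operator captures the expected one-step evolution of an
% observable such as the value function:
\[
  (\mathcal{K}^{u} V)(x) \;=\; \mathbb{E}\!\left[\, V(x_{t+1}) \;\middle|\; x_t = x,\; u_t = u \,\right]
\]

% Substituting this into the Bellman equation replaces the expectation
% over next states with a linear operator acting on V in the lifted
% coordinates:
\[
  V(x) \;=\; \max_{u} \Big[\, r(x, u) \;+\; \gamma \,(\mathcal{K}^{u} V)(x) \,\Big]
\]

% Parameterizing the operator by the control action through dictionaries
% of state and action observables gives a tensor form of the kind the
% abstract calls the Koopman tensor:
\[
  \mathbb{E}\!\left[\, \phi(x_{t+1}) \;\middle|\; x_t,\, u_t \,\right] \;\approx\; K^{u_t}\, \phi(x_t),
  \qquad
  K^{u}_{ij} \;=\; \sum_{k} T_{ijk}\, \psi_k(u)
\]
```

In the max-entropy variants named in the abstract (soft-value iteration and SAC), the hard $\max_u$ above would be replaced by an entropy-regularized soft maximum over actions.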

## The Different Aspects of the Dataset Visualized

### Header 3

```js
// Javascript code with syntax highlighting.
var dateformat = require('dateformat'); // assumed import: 'dateformat' is used below but never required in the original snippet
var fun = function lang(l) {
  dateformat.i18n = require('./lang/' + l);
  return true;
};
```

```ruby
# Ruby code with syntax highlighting
GitHubPages::Dependencies.gems.each do |gem, version|
  s.add_dependency(gem, "= #{version}")
end
```

#### Header 4

* This is an unordered list following a header.
* This is an unordered list following a header.
* This is an unordered list following a header.

##### Header 5

1. This is an ordered list following a header.
2. This is an ordered list following a header.
3. This is an ordered list following a header.

###### Header 6

| head1 | head two | three |
|:-------------|:------------------|:------|
| ok | good swedish fish | nice |
| out of stock | good and plenty | nice |
| ok | good `oreos` | hmm |
| ok | good `zoute` drop | yumm |

### There's a horizontal rule below this.

* * *

### Here is an unordered list:

* Item foo
* Item bar
* Item baz
* Item zip

### And an ordered list:

1. Item one
1. Item two
1. Item three
1. Item four

### And a nested list:

- level 1 item
  - level 2 item
  - level 2 item
    - level 3 item
    - level 3 item
- level 1 item
  - level 2 item
  - level 2 item
  - level 2 item
- level 1 item
  - level 2 item
  - level 2 item
- level 1 item

### Small image

![Octocat](https://github.githubassets.com/images/icons/emoji/octocat.png)


## Authors

<center>
