[DOC] Correct argument for optimizer ranger in Temporal Fusion Transformer tutorial #1724

Open · wants to merge 4 commits into base: main · Changes from 1 commit

15 changes: 4 additions & 11 deletions docs/source/tutorials/stallion.ipynb
Member commented:

I'm unable to see this diff in GitHub (it says "unable to render rich diff"), and ReviewNB also seems to lack the notebook.

Member Author commented:

I'm not quite sure about that; I logged in with my GitHub account and was able to see the diff on ReviewNB.

[screenshot of the ReviewNB diff view]

Member commented:

Not sure why it's not working for me, but it's not a big deal. I checked it out from the branch directly.

I am not sure which of your changes is making the predictions worse. On the main branch, the predicted orange line and shaded area seemed to contain the blue actuals pretty well, but in the modified one the gaps seem to have increased notably. Do you agree with that observation? Any guess as to what's causing it (the seed remained the same at 42 in both cases)?
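
For reference, the plots being compared come from the tutorial's prediction cell, which looks roughly like the sketch below; `best_tft` and `val_dataloader` are names assumed from stallion.ipynb, not guaranteed to match the notebook exactly:

```python
# sketch of the tutorial's plotting cell; best_tft and val_dataloader
# are assumed to be defined earlier in stallion.ipynb
raw_predictions = best_tft.predict(val_dataloader, mode="raw", return_x=True)
for idx in range(10):  # plot the first ten examples
    best_tft.plot_prediction(
        raw_predictions.x, raw_predictions.output, idx=idx, add_loss_to_title=True
    )
```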

Member Author commented:

I'm not sure about the cause; I can see that even the learning_rate changed, from 0.041 to 0.009. I think I'll need to check the logs and see where it is going wrong.
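
For context, that learning rate comes from Lightning's learning-rate finder, which the tutorial runs roughly as in this sketch; `trainer`, `tft`, and the dataloaders are names assumed from the notebook:

```python
from lightning.pytorch.tuner import Tuner

# run Lightning's learning-rate finder on the TFT model; trainer, tft,
# train_dataloader, and val_dataloader are assumed to be defined
# earlier in the notebook
res = Tuner(trainer).lr_find(
    tft,
    train_dataloaders=train_dataloader,
    val_dataloaders=val_dataloader,
    max_lr=10.0,
    min_lr=1e-6,
)
print(f"suggested learning rate: {res.suggestion()}")
```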

```diff
@@ -924,7 +924,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": 7,
+ "execution_count": null,
  "metadata": {
   "collapsed": false,
   "jupyter": {
@@ -975,7 +975,7 @@
 " dropout=0.1, # between 0.1 and 0.3 are good values\n",
 " hidden_continuous_size=8, # set to <= hidden_size\n",
 " loss=QuantileLoss(),\n",
-" optimizer=\"Ranger\",\n",
+" optimizer=\"ranger\",\n",
 " # reduce learning rate if no improvement in validation loss after x epochs\n",
 " # reduce_on_plateau_patience=1000,\n",
 ")\n",
@@ -1088,7 +1088,7 @@
 },
 {
  "cell_type": "code",
- "execution_count": 9,
+ "execution_count": null,
  "metadata": {},
  "outputs": [
   {
@@ -1135,7 +1135,7 @@
 " hidden_continuous_size=8,\n",
 " loss=QuantileLoss(),\n",
 " log_interval=10, # uncomment for learning rate finder and otherwise, e.g. to 10 for logging every 10 batches\n",
-" optimizer=\"Ranger\",\n",
+" optimizer=\"ranger\",\n",
 " reduce_on_plateau_patience=4,\n",
 ")\n",
 "print(f\"Number of parameters in network: {tft.size() / 1e3:.1f}k\")"
@@ -2621,13 +2621,6 @@
 "ax = agg_dependency.plot(y=\"median\")\n",
 "ax.fill_between(agg_dependency.index, agg_dependency.q25, agg_dependency.q75, alpha=0.3)"
 ]
-},
-{
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
 }
 ],
 "metadata": {
```
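
For readers following along: after this fix, the model construction cell in the tutorial reads roughly as below. This is a sketch rather than the exact notebook cell; `training` is assumed to be the TimeSeriesDataSet built earlier in stallion.ipynb, and hyperparameter values not visible in the diff above are illustrative.

```python
from pytorch_forecasting import TemporalFusionTransformer
from pytorch_forecasting.metrics import QuantileLoss

# `training` is assumed to be the TimeSeriesDataSet built earlier in the
# tutorial; hyperparameters mirror the diff above where visible
tft = TemporalFusionTransformer.from_dataset(
    training,
    learning_rate=0.03,        # illustrative starting value
    hidden_size=16,            # illustrative value
    attention_head_size=2,     # illustrative value
    dropout=0.1,               # between 0.1 and 0.3 are good values
    hidden_continuous_size=8,  # set to <= hidden_size
    loss=QuantileLoss(),
    log_interval=10,
    optimizer="ranger",        # lowercase "ranger", as corrected by this PR
    reduce_on_plateau_patience=4,
)
print(f"Number of parameters in network: {tft.size() / 1e3:.1f}k")
```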