Releases · sktime/pytorch-forecasting
v1.2.0
What's Changed
Maintenance update, minor feature additions and bugfixes.
- support for `numpy 2.X`
- end of life for `python 3.8`
- fixed documentation build
- bugfixes
New Contributors
- @ewth made their first contribution in #1696
- @airookie17 made their first contribution in #1692
- @benHeid made their first contribution in #1704
- @eugenio-mercuriali made their first contribution in #1699
All Contributors
@airookie17,
@benHeid,
@eugenio-mercuriali,
@ewth,
@fkiraly,
@fnhirwa,
@XinyuWuu,
@yarnabrina
Full Changelog: v1.1.1...v1.2.0
v1.1.1
v1.1.0
What's Changed
Maintenance update widening compatibility ranges and consolidating dependencies:
- support for python 3.11 and 3.12, added CI testing
- support for MacOS, added CI testing
- core dependencies have been minimized to `numpy`, `torch`, `lightning`, `scipy`, `pandas`, and `scikit-learn`
- soft dependencies are available in soft dependency sets: `all_extras` for all soft dependencies, and `tuning` for `optuna` based optimization (see the install sketch after this list)
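As an illustration, a minimal install sketch for these dependency sets; the extras names follow the list above, and the import-guard pattern is an assumption about typical downstream usage, not part of the release:

```python
# Install options (shell commands shown as comments):
#   pip install pytorch-forecasting                 # core dependencies only
#   pip install "pytorch-forecasting[tuning]"       # + optuna-based optimization
#   pip install "pytorch-forecasting[all_extras]"   # + all soft dependencies

# With a core-only install, guard imports of soft dependencies:
try:
    import matplotlib  # optional since 1.1.0
except ImportError:
    matplotlib = None
```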
Dependency changes
- the following are no longer core dependencies and have been changed to optional dependencies: `optuna`, `statsmodels`, `pytorch-optimizer`, `matplotlib`. Environments relying on functionality requiring these dependencies need to be updated to install them explicitly.
- `optuna` bounds have been updated to `optuna >=3.1.0,<4.0.0`
- `optuna-integration` is now an additional soft dependency, in case of `optuna >=3.3.0`
Deprecations and removals
- from 1.2.0, the default optimizer will be changed from `"ranger"` to `"adam"` to avoid non-`torch` dependencies in defaults. `pytorch-optimizer` optimizers can still be used. Users should set the optimizer explicitly to continue using `"ranger"`, as in the sketch after this list.
- from 1.1.0, the loggers do not log figures if the soft dependency `matplotlib` is not present, but will raise no exceptions in this case. To log figures, ensure that `matplotlib` is installed.
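A minimal sketch of pinning the optimizer; `dataset` stands in for an existing `TimeSeriesDataSet`:

```python
from pytorch_forecasting import TemporalFusionTransformer

# Pin the optimizer explicitly so behavior is unchanged when the default
# flips from "ranger" to "adam" in 1.2.0; "ranger" requires the optional
# pytorch-optimizer package. `dataset` is a placeholder TimeSeriesDataSet.
tft = TemporalFusionTransformer.from_dataset(dataset, optimizer="ranger")
```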
All Contributors
@andre-marcos-perez,
@avirsaha,
@bendavidsteel,
@benHeid,
@bohdan-safoniuk,
@Borda,
@CahidArda,
@fkiraly,
@fnhirwa,
@germanKoch,
@jacktang,
@jdb78,
@jurgispods,
@maartensukel,
@MBelniak,
@orangehe,
@pavelzw,
@sfalkena,
@tmct,
@XinyuWuu,
@yarnabrina
New Contributors
- @jurgispods made their first contribution in #1366
- @jacktang made their first contribution in #1353
- @andre-marcos-perez made their first contribution in #1346
- @tmct made their first contribution in #1340
- @bohdan-safoniuk made their first contribution in #1318
- @MBelniak made their first contribution in #1230
- @CahidArda made their first contribution in #1175
- @bendavidsteel made their first contribution in #1359
- @Borda made their first contribution in #1498
- @fkiraly made their first contribution in #1598
- @XinyuWuu made their first contribution in #1599
- @pavelzw made their first contribution in #1407
- @yarnabrina made their first contribution in #1630
- @fnhirwa made their first contribution in #1646
- @avirsaha made their first contribution in #1649
Full Changelog: v1.0.0...v1.1.0
Update to pytorch 2.0
Breaking Changes
- Upgraded to pytorch 2.0 and lightning 2.0. This brings a couple of changes, such as the configuration of trainers; see the lightning upgrade guide. For PyTorch Forecasting, this particularly means that if you are developing your own models, the class method `epoch_end` has been renamed to `on_epoch_end`, `model.summarize()` is replaced by `ModelSummary(model, max_depth=-1)`, and `Tuner(trainer)` is now its own class, so `trainer.tuner` needs replacing (#1280). A migration sketch follows this list.
- Changed the `predict()` interface to return a named tuple - see tutorials.
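A minimal migration sketch, assuming the unified `lightning.pytorch` namespace; `TinyModel` is a hypothetical stand-in for any forecasting model:

```python
import torch
from lightning.pytorch import LightningModule, Trainer
from lightning.pytorch.tuner import Tuner
from lightning.pytorch.utilities.model_summary import ModelSummary

class TinyModel(LightningModule):
    """Hypothetical stand-in model, only to make the sketch self-contained."""

    def __init__(self):
        super().__init__()
        self.lr = 1e-3
        self.net = torch.nn.Linear(4, 1)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

model = TinyModel()
trainer = Trainer(max_epochs=1)

# before (lightning 1.x): model.summarize()
print(ModelSummary(model, max_depth=-1))

# before (lightning 1.x): trainer.tuner.lr_find(model)
tuner = Tuner(trainer)  # the tuner is now its own class wrapping a trainer
# tuner.lr_find(model, train_dataloaders=...)  # requires a dataloader
```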
Changes
- The predict method now uses the lightning predict functionality and allows writing results to disk (#1280). A sketch of the new interface follows.
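A minimal sketch of the named-tuple interface, using `Baseline` and toy data so the example is self-contained; the field names follow the tutorials and are worth verifying there:

```python
import pandas as pd
from pytorch_forecasting import Baseline, TimeSeriesDataSet

# Toy data, only to make the sketch runnable; any trained model works the same.
data = pd.DataFrame(
    {"time_idx": list(range(30)), "series": "a", "value": [float(i) for i in range(30)]}
)
dataset = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=10,
    max_prediction_length=5,
)
dataloader = dataset.to_dataloader(train=False, batch_size=4)

# predict() now returns a named tuple rather than a bare tensor
res = Baseline().predict(dataloader, mode="prediction", return_x=True)
print(res.output.shape)  # the predictions
print(res.x.keys())      # network inputs, included because return_x=True
```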
Fixed
- Fixed robust scaler when quantiles are 0.0 and 1.0, i.e. minimum and maximum (#1142)
Poetry update
Multivariate networks
Added
- DeepVar network (#923)
- Enable quantile loss for N-HiTS (#926) (see the sketch after this list)
- MQF2 loss (multivariate quantile loss) (#949)
- Non-causal attention for TFT (#949)
- Tweedie loss (#949)
- ImplicitQuantileNetworkDistributionLoss (#995)
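A minimal sketch of a quantile loss on N-HiTS; `dataset` is a placeholder for an existing `TimeSeriesDataSet`, and the quantile values are arbitrary:

```python
from pytorch_forecasting import NHiTS
from pytorch_forecasting.metrics import QuantileLoss

# Sketch: N-HiTS trained against quantiles instead of a point forecast (#926).
# `dataset` is a placeholder for an existing TimeSeriesDataSet.
model = NHiTS.from_dataset(dataset, loss=QuantileLoss(quantiles=[0.1, 0.5, 0.9]))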
Fixed
- Fix learning scale schedule (#912)
- Fix TFT list/tuple issue at interpretation (#924)
- Allowed encoder length down to zero for EncoderNormalizer if transformation is not needed (#949)
- Fix Aggregation and CompositeMetric resets (#949)
Changed
- Dropping Python 3.6 support, adding 3.10 support (#479)
- Refactored dataloader sampling - moved samplers to pytorch_forecasting.data.samplers module (#479)
- Changed transformation format for Encoders to dict from tuple (#949)
Contributors
- jdb78
Bugfixes
Adding N-HiTS network (N-BEATS successor)
Added
- Added new `N-HiTS` network that has consistently beaten `N-BEATS` (#890)
- Allow using torchmetrics as loss metrics (#776)
- Enable fitting `EncoderNormalizer()` with limited data history using `max_length` argument (#782) (see the sketch after this list)
- More flexible `MultiEmbedding()` with convenience `output_size` and `input_size` properties (#829)
- Fix concatenation of attention (#902)
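A minimal sketch of these two additions; the feature names and (cardinality, embedding size) pairs are arbitrary illustrations:

```python
from pytorch_forecasting.data import EncoderNormalizer
from pytorch_forecasting.models.nn import MultiEmbedding

# Fit the normalizer on at most the last 50 encoder time steps (#782).
normalizer = EncoderNormalizer(max_length=50)

# MultiEmbedding with convenience size properties (#829); names and sizes
# here are arbitrary examples.
embedding = MultiEmbedding(
    embedding_sizes={"weekday": (7, 4), "month": (12, 6)},
    x_categoricals=["weekday", "month"],
)
print(embedding.input_size, embedding.output_size)
```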
Fixed
- Fix pip install via github (#798)
Contributors
- jdb78
- christy
- lukemerrick
- Seon82
Maintenance Release
Maintenance Release (26/09/2021)
Added
- Use target name instead of target number for logging metrics (#588)
- Optimizer can be initialized by passing string, class or function (#602)
- Add support for multiple outputs in Baseline model (#603)
- Added Optuna pruner as optional parameter in `TemporalFusionTransformer.optimize_hyperparameters` (#619) (see the sketch after this list)
- Dropping support for Python 3.6 and starting support for Python 3.9 (#639)
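A minimal sketch of passing a pruner to the tuning helper; the dataloaders are placeholders and the keyword values are arbitrary:

```python
import optuna
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import (
    optimize_hyperparameters,
)

# `train_dataloader` and `val_dataloader` are placeholders for dataloaders
# built from a TimeSeriesDataSet; trial count and pruner are illustrative.
study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_tft",
    n_trials=20,
    pruner=optuna.pruners.SuccessiveHalvingPruner(),
)
print(study.best_trial.params)
```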
Fixed
- Initialization of TemporalFusionTransformer with multiple targets but loss for only one target (#550)
- Added missing transformation of prediction for MLP (#602)
- Fixed logging hyperparameters (#688)
- Ensure MultiNormalizer fit state is detected (#681)
- Fix infinite loop in TimeDistributedEmbeddingBag (#672)
Contributors
- jdb78
- TKlerx
- chefPony
- eavae
- L0Z1K