Cannot reproduce deterministic results #21

Open
HRHLALALA opened this issue Jun 7, 2022 · 7 comments
Comments

@HRHLALALA

HRHLALALA commented Jun 7, 2022

Hi, I have tried to reproduce your results by running SGNet.pytorch/tools/ethucy/train_deterministic.py and SGNet.pytorch/tools/ethucy/eval_deterministic.py. I didn't change anything except adding some model-saving code from your previous commits to train_deterministic.py.

Could you take a look and check whether anything is wrong here?

Here are my training arguments:
python tools/ethucy/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ${dset_name} --model SGNet ${args}
where
dset_name=ETH,args='--lr=0.0005 --dropout=0.5 --sigma=1.5'
dset_name=HOTEL,args='--lr=0.0001 --dropout=0.3'
dset_name=UNIV,args='--lr=0.0001'
dset_name=ZARA1,args='--lr=0.0001'
dset_name=ZARA2,args='--lr=0.0001'
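
(For convenience, a hypothetical Python driver that launches the five runs with the arguments above. The script path and flags come from this issue; the loop itself and the GPU id are just an illustrative wrapper, not part of the repository.)

```python
# Illustrative wrapper only: launches the five training runs listed above.
import subprocess

RUNS = {
    "ETH":   ["--lr=0.0005", "--dropout=0.5", "--sigma=1.5"],
    "HOTEL": ["--lr=0.0001", "--dropout=0.3"],
    "UNIV":  ["--lr=0.0001"],
    "ZARA1": ["--lr=0.0001"],
    "ZARA2": ["--lr=0.0001"],
}

for dset, extra in RUNS.items():
    cmd = ["python", "tools/ethucy/train_deterministic.py",
           "--gpu", "0", "--dataset", dset, "--model", "SGNet", *extra]
    subprocess.run(cmd, check=True)
```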

Here are my training results:

ETH: ADE_08: 0.543764; FDE_08: 0.981109; ADE_12: 0.816298; FDE_12: 1.603263;
HOTEL: ADE_08: 0.251949; FDE_08: 0.487048; ADE_12: 0.406558; FDE_12: 0.865508;
UNIV: ADE_08: 0.405024; FDE_08: 0.795781; ADE_12: 0.647388; FDE_12: 1.345341;
ZARA1: ADE_08: 0.235853; FDE_08: 0.470461; ADE_12: 0.381671; FDE_12: 0.803334;
ZARA2: ADE_08: 0.188649; FDE_08: 0.383418; ADE_12: 0.311853; FDE_12: 0.669926;
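
(For context, ADE_08/FDE_08 and ADE_12/FDE_12 denote average and final displacement errors over 8- and 12-step prediction horizons. Below is a minimal NumPy sketch of how these metrics are conventionally computed; it is illustrative only, not the repository's evaluation code.)

```python
import numpy as np

def ade_fde(pred, gt):
    """pred, gt: (num_trajectories, horizon, 2) arrays of predicted/true positions."""
    dist = np.linalg.norm(pred - gt, axis=-1)   # per-step Euclidean error
    return dist.mean(), dist[:, -1].mean()      # ADE over all steps, FDE at the last step

pred = np.random.rand(5, 12, 2)
gt = np.random.rand(5, 12, 2)
print(ade_fde(pred[:, :8], gt[:, :8]))  # 8-step metrics (ADE_08, FDE_08)
print(ade_fde(pred, gt))                # 12-step metrics (ADE_12, FDE_12)
```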

Here are the training outputs from my terminal (e.g. Zara1):

ZARA1
ZARA1
ZARA1
Number of validation samples: 41
Number of test samples: 19
Train Epoch: 1 	 Goal loss: 13.3936	 Decoder loss: 11.0987	 Total: 24.4923
ADE_08: 0.294679;  FDE_08: 0.551690;  ADE_12: 0.447580;   FDE_12: 0.887748

Saving checkpoints: metric_epoch_001_loss_0.4476.pth
Train Epoch: 2 	 Goal loss: 8.3851	 Decoder loss: 7.7700	 Total: 16.1551
ADE_08: 0.380591;  FDE_08: 0.738910;  ADE_12: 0.574689;   FDE_12: 1.084587

Train Epoch: 3 	 Goal loss: 7.4418	 Decoder loss: 7.2957	 Total: 14.7374
ADE_08: 0.273209;  FDE_08: 0.530974;  ADE_12: 0.424851;   FDE_12: 0.850812

Saving checkpoints: metric_epoch_003_loss_0.4249.pth
Train Epoch: 4 	 Goal loss: 7.1369	 Decoder loss: 7.0551	 Total: 14.1920
ADE_08: 0.286998;  FDE_08: 0.565734;  ADE_12: 0.450437;   FDE_12: 0.906875

Train Epoch: 5 	 Goal loss: 6.9545	 Decoder loss: 6.9175	 Total: 13.8720
ADE_08: 0.260329;  FDE_08: 0.512583;  ADE_12: 0.414379;   FDE_12: 0.856634

Saving checkpoints: metric_epoch_005_loss_0.4144.pth
Train Epoch: 6 	 Goal loss: 6.7755	 Decoder loss: 6.7289	 Total: 13.5045
ADE_08: 0.290368;  FDE_08: 0.585993;  ADE_12: 0.480131;   FDE_12: 1.037864

Train Epoch: 7 	 Goal loss: 6.6577	 Decoder loss: 6.6200	 Total: 13.2777
ADE_08: 0.258839;  FDE_08: 0.510076;  ADE_12: 0.415547;   FDE_12: 0.869290

Train Epoch: 8 	 Goal loss: 6.5346	 Decoder loss: 6.4946	 Total: 13.0292
ADE_08: 0.246034;  FDE_08: 0.491056;  ADE_12: 0.397619;   FDE_12: 0.835792

Saving checkpoints: metric_epoch_008_loss_0.3976.pth
Train Epoch: 9 	 Goal loss: 6.4372	 Decoder loss: 6.4017	 Total: 12.8388
ADE_08: 0.251146;  FDE_08: 0.509857;  ADE_12: 0.417938;   FDE_12: 0.909648

Train Epoch: 10 	 Goal loss: 6.3593	 Decoder loss: 6.3158	 Total: 12.6751
ADE_08: 0.253486;  FDE_08: 0.506422;  ADE_12: 0.410926;   FDE_12: 0.867227

Train Epoch: 11 	 Goal loss: 6.2701	 Decoder loss: 6.2177	 Total: 12.4878
ADE_08: 0.278813;  FDE_08: 0.553365;  ADE_12: 0.449940;   FDE_12: 0.942615

Train Epoch: 12 	 Goal loss: 6.2296	 Decoder loss: 6.2038	 Total: 12.4334
ADE_08: 0.245076;  FDE_08: 0.489589;  ADE_12: 0.398719;   FDE_12: 0.846358

Train Epoch: 13 	 Goal loss: 6.1766	 Decoder loss: 6.1378	 Total: 12.3144
ADE_08: 0.261642;  FDE_08: 0.502992;  ADE_12: 0.413555;   FDE_12: 0.856336

Train Epoch: 14 	 Goal loss: 6.1079	 Decoder loss: 6.0661	 Total: 12.1739
ADE_08: 0.253986;  FDE_08: 0.501157;  ADE_12: 0.408927;   FDE_12: 0.858842

Train Epoch: 15 	 Goal loss: 6.0482	 Decoder loss: 6.0074	 Total: 12.0556
ADE_08: 0.237412;  FDE_08: 0.472600;  ADE_12: 0.385304;   FDE_12: 0.816462

Saving checkpoints: metric_epoch_015_loss_0.3853.pth
Train Epoch: 16 	 Goal loss: 5.9789	 Decoder loss: 5.9207	 Total: 11.8996
ADE_08: 0.248604;  FDE_08: 0.494847;  ADE_12: 0.403149;   FDE_12: 0.851186

Train Epoch: 17 	 Goal loss: 5.9886	 Decoder loss: 5.9620	 Total: 11.9507
ADE_08: 0.264590;  FDE_08: 0.521281;  ADE_12: 0.425393;   FDE_12: 0.888515

Train Epoch: 18 	 Goal loss: 5.8995	 Decoder loss: 5.8390	 Total: 11.7384
ADE_08: 0.249374;  FDE_08: 0.483068;  ADE_12: 0.396050;   FDE_12: 0.820998

Train Epoch: 19 	 Goal loss: 5.8871	 Decoder loss: 5.8269	 Total: 11.7140
ADE_08: 0.241822;  FDE_08: 0.488735;  ADE_12: 0.399218;   FDE_12: 0.861516

Train Epoch: 20 	 Goal loss: 5.8296	 Decoder loss: 5.7692	 Total: 11.5988
ADE_08: 0.238986;  FDE_08: 0.467400;  ADE_12: 0.382569;   FDE_12: 0.800729

Saving checkpoints: metric_epoch_020_loss_0.3826.pth
Train Epoch: 21 	 Goal loss: 5.8035	 Decoder loss: 5.7227	 Total: 11.5262
ADE_08: 0.246278;  FDE_08: 0.498420;  ADE_12: 0.406117;   FDE_12: 0.873861

Train Epoch: 22 	 Goal loss: 5.7840	 Decoder loss: 5.7143	 Total: 11.4984
ADE_08: 0.245556;  FDE_08: 0.489192;  ADE_12: 0.399299;   FDE_12: 0.848396

Train Epoch: 23 	 Goal loss: 5.7622	 Decoder loss: 5.6884	 Total: 11.4506
ADE_08: 0.241803;  FDE_08: 0.473674;  ADE_12: 0.386678;   FDE_12: 0.807247

Train Epoch: 24 	 Goal loss: 5.7139	 Decoder loss: 5.6287	 Total: 11.3426
ADE_08: 0.237854;  FDE_08: 0.473140;  ADE_12: 0.384558;   FDE_12: 0.809680

Train Epoch: 25 	 Goal loss: 5.6809	 Decoder loss: 5.5965	 Total: 11.2774
ADE_08: 0.243205;  FDE_08: 0.483125;  ADE_12: 0.392962;   FDE_12: 0.827676

Train Epoch: 26 	 Goal loss: 5.6746	 Decoder loss: 5.5734	 Total: 11.2479
ADE_08: 0.241132;  FDE_08: 0.483847;  ADE_12: 0.393726;   FDE_12: 0.837666

Train Epoch: 27 	 Goal loss: 5.6426	 Decoder loss: 5.5449	 Total: 11.1875
ADE_08: 0.238708;  FDE_08: 0.473718;  ADE_12: 0.385182;   FDE_12: 0.809509

Train Epoch: 28 	 Goal loss: 5.5965	 Decoder loss: 5.4837	 Total: 11.0802
ADE_08: 0.245108;  FDE_08: 0.486398;  ADE_12: 0.396431;   FDE_12: 0.837428

Train Epoch: 29 	 Goal loss: 5.6095	 Decoder loss: 5.5072	 Total: 11.1167
ADE_08: 0.244368;  FDE_08: 0.483016;  ADE_12: 0.391393;   FDE_12: 0.814580

Train Epoch: 30 	 Goal loss: 5.5618	 Decoder loss: 5.4448	 Total: 11.0066
ADE_08: 0.242074;  FDE_08: 0.474495;  ADE_12: 0.387724;   FDE_12: 0.810403

Train Epoch: 31 	 Goal loss: 5.5415	 Decoder loss: 5.4163	 Total: 10.9578
ADE_08: 0.235853;  FDE_08: 0.470461;  ADE_12: 0.381671;   FDE_12: 0.803334

Saving checkpoints: metric_epoch_031_loss_0.3817.pth
Train Epoch: 32 	 Goal loss: 5.5443	 Decoder loss: 5.4107	 Total: 10.9550
ADE_08: 0.244977;  FDE_08: 0.486834;  ADE_12: 0.394192;   FDE_12: 0.823241

Train Epoch: 33 	 Goal loss: 5.4947	 Decoder loss: 5.3558	 Total: 10.8505
ADE_08: 0.240755;  FDE_08: 0.478803;  ADE_12: 0.389154;   FDE_12: 0.818785

Train Epoch: 34 	 Goal loss: 5.4762	 Decoder loss: 5.3309	 Total: 10.8071
ADE_08: 0.244053;  FDE_08: 0.485707;  ADE_12: 0.394861;   FDE_12: 0.832431

Train Epoch: 35 	 Goal loss: 5.4659	 Decoder loss: 5.3150	 Total: 10.7809
ADE_08: 0.245577;  FDE_08: 0.487264;  ADE_12: 0.395676;   FDE_12: 0.829373

Train Epoch: 36 	 Goal loss: 5.4495	 Decoder loss: 5.2897	 Total: 10.7392
ADE_08: 0.239127;  FDE_08: 0.477790;  ADE_12: 0.386703;   FDE_12: 0.811449

Train Epoch: 37 	 Goal loss: 5.4346	 Decoder loss: 5.2822	 Total: 10.7167
ADE_08: 0.248550;  FDE_08: 0.500743;  ADE_12: 0.406487;   FDE_12: 0.865554

Train Epoch: 38 	 Goal loss: 5.3894	 Decoder loss: 5.2266	 Total: 10.6161
ADE_08: 0.258613;  FDE_08: 0.504668;  ADE_12: 0.411209;   FDE_12: 0.851276

Train Epoch: 39 	 Goal loss: 5.3827	 Decoder loss: 5.2078	 Total: 10.5905
ADE_08: 0.251402;  FDE_08: 0.500842;  ADE_12: 0.406572;   FDE_12: 0.854904

Train Epoch: 40 	 Goal loss: 5.3578	 Decoder loss: 5.1803	 Total: 10.5381
ADE_08: 0.252681;  FDE_08: 0.500086;  ADE_12: 0.407249;   FDE_12: 0.855631

Train Epoch: 41 	 Goal loss: 5.3440	 Decoder loss: 5.1498	 Total: 10.4937
ADE_08: 0.244587;  FDE_08: 0.492789;  ADE_12: 0.399140;   FDE_12: 0.845556

Train Epoch: 42 	 Goal loss: 5.3255	 Decoder loss: 5.1329	 Total: 10.4584
ADE_08: 0.240022;  FDE_08: 0.479255;  ADE_12: 0.389297;   FDE_12: 0.821116

Train Epoch: 43 	 Goal loss: 5.3073	 Decoder loss: 5.1165	 Total: 10.4238
ADE_08: 0.243913;  FDE_08: 0.486718;  ADE_12: 0.395276;   FDE_12: 0.832791

Train Epoch: 44 	 Goal loss: 5.2892	 Decoder loss: 5.0951	 Total: 10.3842
ADE_08: 0.246676;  FDE_08: 0.489490;  ADE_12: 0.398171;   FDE_12: 0.835557

Train Epoch: 45 	 Goal loss: 5.2664	 Decoder loss: 5.0586	 Total: 10.3250
ADE_08: 0.249650;  FDE_08: 0.498006;  ADE_12: 0.402356;   FDE_12: 0.839605

Train Epoch: 46 	 Goal loss: 5.2504	 Decoder loss: 5.0400	 Total: 10.2904
ADE_08: 0.246008;  FDE_08: 0.496606;  ADE_12: 0.402466;   FDE_12: 0.855231

Train Epoch: 47 	 Goal loss: 5.2392	 Decoder loss: 5.0044	 Total: 10.2437
ADE_08: 0.248422;  FDE_08: 0.500811;  ADE_12: 0.406097;   FDE_12: 0.862268

Train Epoch: 48 	 Goal loss: 5.2152	 Decoder loss: 4.9837	 Total: 10.1989
ADE_08: 0.247908;  FDE_08: 0.505135;  ADE_12: 0.409776;   FDE_12: 0.880500

Train Epoch: 49 	 Goal loss: 5.1839	 Decoder loss: 4.9462	 Total: 10.1302
ADE_08: 0.243334;  FDE_08: 0.492921;  ADE_12: 0.399223;   FDE_12: 0.851002

Train Epoch: 50 	 Goal loss: 5.1815	 Decoder loss: 4.9328	 Total: 10.1143
ADE_08: 0.247995;  FDE_08: 0.501935;  ADE_12: 0.406349;   FDE_12: 0.864324

@ChuhuaW
Owner

ChuhuaW commented Jun 8, 2022

@HRHLALALA Thanks for your interest in our paper. The dropout rate was probably set to 0.5 in your experiments on UNIV, ZARA1, and ZARA2, since that is the default setting. Also, for the deterministic experiments you don't need to set sigma; it is the sampling standard deviation and only applies to the CVAE model. I'm attaching the checkpoints for the ETH/UCY datasets here; please let me know whether you get results similar to those in the paper. I also retrained on ZARA1 and attach my log below for reference.

Train Epoch: 1   Goal loss: 7.8752       Decoder loss: 7.4240    Total: 15.2992
ADE_08: 0.205366;  FDE_08: 0.427921;  ADE_12: 0.338283;   FDE_12: 0.712461

Train Epoch: 2   Goal loss: 4.6825       Decoder loss: 4.8405    Total: 9.5230
ADE_08: 0.288085;  FDE_08: 0.559531;  ADE_12: 0.463402;   FDE_12: 0.981222

Train Epoch: 3   Goal loss: 4.4858       Decoder loss: 4.6523    Total: 9.1381
ADE_08: 0.181439;  FDE_08: 0.392425;  ADE_12: 0.313678;   FDE_12: 0.704375

Train Epoch: 4   Goal loss: 4.2913       Decoder loss: 4.4210    Total: 8.7124
ADE_08: 0.230115;  FDE_08: 0.487011;  ADE_12: 0.389181;   FDE_12: 0.845964

Train Epoch: 5   Goal loss: 4.1755       Decoder loss: 4.2758    Total: 8.4513
ADE_08: 0.167939;  FDE_08: 0.377876;  ADE_12: 0.317006;   FDE_12: 0.785227

Train Epoch: 6   Goal loss: 4.1162       Decoder loss: 4.1888    Total: 8.3050
ADE_08: 0.166464;  FDE_08: 0.354999;  ADE_12: 0.301010;   FDE_12: 0.718141

Train Epoch: 7   Goal loss: 4.0531       Decoder loss: 4.1317    Total: 8.1847
ADE_08: 0.184430;  FDE_08: 0.414347;  ADE_12: 0.343615;   FDE_12: 0.837005

Train Epoch: 8   Goal loss: 3.9779       Decoder loss: 4.0690    Total: 8.0469
ADE_08: 0.138170;  FDE_08: 0.331704;  ADE_12: 0.265631;   FDE_12: 0.642644

Train Epoch: 9   Goal loss: 3.9018       Decoder loss: 3.9493    Total: 7.8510
ADE_08: 0.212174;  FDE_08: 0.439908;  ADE_12: 0.367158;   FDE_12: 0.835217

Train Epoch: 10          Goal loss: 3.7895       Decoder loss: 3.7699    Total: 7.5594
ADE_08: 0.165783;  FDE_08: 0.377216;  ADE_12: 0.311176;   FDE_12: 0.754264
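
(To summarize the flag behaviour described in this comment, a hypothetical argparse sketch: the flag names come from the commands in this thread, the dropout default of 0.5 is per the comment above, and sigma is treated as CVAE-only. The parser is illustrative, not the repository's actual code.)

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, required=True)
# A default of 0.5 would explain why UNIV/ZARA1/ZARA2 silently used
# dropout 0.5 when the flag was omitted.
parser.add_argument("--dropout", type=float, default=0.5)
# Standard deviation used only by the CVAE variant; the deterministic
# script can ignore it (the default here is arbitrary for this sketch).
parser.add_argument("--sigma", type=float, default=1.5)

args = parser.parse_args(["--lr=0.0001"])  # e.g. the original ZARA1 run
print(args)  # Namespace(dropout=0.5, lr=0.0001, sigma=1.5)
```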

@HRHLALALA
Author

HRHLALALA commented Jun 9, 2022

Hi, thanks for your reply. Having the checkpoints is really helpful; with them I see performance similar to the paper's.

I think the reason I got worse results is that I used data generated with the latest Trajectron++ version. As mentioned in #19 (comment), they have fixed the gradient calculations.

I post my reproduced results below. Here are my training arguments:
python tools/ethucy/train_deterministic.py --gpu $CUDA_VISIBLE_DEVICES --dataset ${dset_name} --model SGNet ${args}
where
dset_name=ETH,args='--lr=0.0005 --dropout=0.5 '
dset_name=HOTEL,args='--lr=0.0001 --dropout=0.3'
dset_name=UNIV,args='--lr=0.0001 --dropout=0.0'
dset_name=ZARA1,args='--lr=0.0001 --dropout=0.0'
dset_name=ZARA2,args='--lr=0.0001 --dropout=0.0'

Results (ADE/FDE):

| Run | ETH | HOTEL | UNIV | ZARA1 | ZARA2 |
| --- | --- | --- | --- | --- | --- |
| Paper | 0.63/1.38 | 0.27/0.63 | 0.40/0.96 | 0.26/0.64 | 0.21/0.53 |
| Provided checkpoints on data A | 0.63/1.38 | 0.28/0.64 | 0.42/0.99 | 0.26/0.64 | 0.21/0.53 |
| Provided checkpoints on data B | 0.97/1.81 | 0.55/1.07 | 0.66/1.37 | 0.69/1.36 | 0.50/1.02 |
| Weights trained on data A | 0.63/1.38 | 0.30/0.70 | 0.43/1.01 | 0.28/0.65 | 0.22/0.55 |
| Weights trained on data B | 0.81/1.60 | 0.41/0.87 | 0.58/1.24 | 0.37/0.79 | 0.31/0.68 |

where data A and data B are the data preprocessed by Trajectron++ without and with the gradient fix, respectively.
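
(For readers unfamiliar with the Trajectron++ issue referenced above, here is a minimal sketch of the kind of difference involved, assuming the bug is the commonly reported one where np.gradient's central differences let velocities in the observed history depend on future frames. The actual preprocessing code may differ.)

```python
import numpy as np

# Observed (x, y) positions at consecutive frames for one pedestrian.
positions = np.array([[0.0, 0.0], [1.0, 0.1], [2.1, 0.2], [3.0, 0.4]])

# np.gradient uses central differences for interior points, so the velocity
# at frame t depends on the position at frame t+1 (future information).
vel_central = np.gradient(positions, axis=0)

# A causal alternative uses only current and past frames (backward differences).
vel_backward = np.diff(positions, axis=0, prepend=positions[:1])

print(vel_central[-2])   # depends on positions[-1], i.e. a future frame
print(vel_backward[-2])  # depends only on frames up to t
```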

Could you confirm these new results are reasonable? Please let me know if my hyperparameters are not consistent with yours. Thanks!

@ChuhuaW
Owner

ChuhuaW commented Jun 9, 2022

Thank you so much for conducting new experiments! I haven't made any progress on the fixed Trajectron++ dataset yet. I will also post some numbers; hopefully they will be close to yours.

@haomengz

haomengz commented Nov 7, 2022

> [quoting @HRHLALALA's results comment above]

Hi @HRHLALALA, the results I reproduced are very similar to your last row, "Weights trained on data B", and I think it is a valid updated result.

@CrisCloseTheDoor

> [quoting @HRHLALALA's results comment above]

Hi, thanks for providing a comparison experiment. It looks like the results on data B are the correct ones, since the preprocessed data no longer use future trajectory information after the np.gradient bug was fixed. That would mean the reported results of every model based on the Trajectron++ preprocessing have to be revised to weaker numbers... Is my understanding correct?

@HRHLALALA
Author

> [quoting @HRHLALALA's results comment and @CrisCloseTheDoor's question above]

Yes

@CrisCloseTheDoor

> [quoting @HRHLALALA's results comment above]

Hi @HRHLALALA, I've tried several times but couldn't reproduce results similar to yours. Mine are always a little worse (on data A):

ETH: ADE_12: 0.643965; FDE_12: 1.457909;
HOTEL: ADE_12: 0.298510; FDE_12: 0.632796;
UNIV: ADE_12: 0.428731; FDE_12: 1.014463;
ZARA1: ADE_12: 0.285387; FDE_12: 0.700799;

I wonder whether you changed any of the original settings? My settings are:

optimizer: Adam
scheduler: no
batch_size=128, lr=5e-4 (ETH), 1e-4 (HOTEL through ZARA2)
dropout=0.5 (ETH), 0.3 (HOTEL), 0 (others)

Thanks a lot :D
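
(For reference, a minimal PyTorch sketch of the training setup listed above: Adam, no LR scheduler, batch size 128, per-dataset learning rate and dropout. The model class and the data here are placeholders, not the repository's real code.)

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

class TinyStub(nn.Module):
    """Placeholder stand-in for SGNet; only the dropout knob matters here."""
    def __init__(self, dropout=0.0):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                                 nn.Dropout(dropout), nn.Linear(64, 24))
    def forward(self, x):
        return self.net(x)

model = TinyStub(dropout=0.0)                              # 0.5 for ETH, 0.3 for HOTEL
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # 5e-4 for ETH
loader = DataLoader(TensorDataset(torch.randn(512, 16), torch.randn(512, 24)),
                    batch_size=128, shuffle=True)

for obs, target in loader:             # one epoch; no scheduler.step() anywhere
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(obs), target)  # stand-in for goal + decoder losses
    loss.backward()
    optimizer.step()
```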
