I reran the coldstart demo and found that the hrnn-coldstart precision@5 went up from 0.040 to 0.092. The baseline also went up, from 0.0044 to 0.0076. I understand that the item IDs are shuffled, which explains the change in the baseline, but I was wondering whether that alone would cause this much variance in the trained model as well (see the sketch below).
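To make the question concrete, here is roughly what I mean by pinning the shuffle (the dataframe and column names are placeholders of mine, not the notebook's actual variables): with a fixed seed the cold-start split should be identical across runs, so any remaining swing in precision@5 would point at training variance rather than the split.

```python
# Sketch only: pin the RNG before shuffling item IDs so repeated runs
# produce the same cold-start split. If precision@5 still moves this much
# with a fixed split, the variance comes from training, not the shuffle.
import numpy as np
import pandas as pd

# stand-in for the interactions dataframe loaded earlier in the notebook
interactions = pd.DataFrame({
    "USER_ID": [1, 1, 2, 2, 3, 3],
    "ITEM_ID": [10, 11, 10, 12, 13, 14],
})

rng = np.random.RandomState(42)                       # fixed seed
item_ids = np.array(sorted(interactions["ITEM_ID"].unique()))
rng.shuffle(item_ids)                                 # deterministic shuffle

n_cold = max(1, int(0.2 * len(item_ids)))             # assumed 20% cold-start holdout
cold_items = set(item_ids[:n_cold])

train_df = interactions[~interactions["ITEM_ID"].isin(cold_items)]
test_df = interactions[interactions["ITEM_ID"].isin(cold_items)]
print(len(train_df), len(test_df))
```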
Related: the demo notebook creates a fake_user. I wonder whether that is actually necessary.
Attaching the notebook with the new results:
personalize_coldstart_demo.ipynb.zip
Thanks.