Numpy version: 1.18.5 (not the same as the version 1.16 pinned in the requirements.txt file)
Using nanograv/enterprise git # f66648f, running the NANOGrav 9-yr data reduction tutorial notebook yields different results than the ones that come with the notebook.
The histogram looks less skewed toward lower values of `log10_A_gw` compared to the original (I had to use `density=True` instead of `normed=True` in the call to `PLT.hist()`). The estimated upper limit is correspondingly different (less tight than the original). See attached screenshots. Note that numpy's `percentile()` function changed between versions: in v1.18.5 it requires `q` to lie between 0 and 100 (a percent), whereas v1.16 expected a value between 0 and 1 (a fraction). So I set `q=95.0` instead of `q=0.95`. If we rule out the numpy change as the cause of these non-matching results, my guess is that there must be some difference in the pulsar selection/properties between the original run and mine. I did see some pulsar-related warnings, whose screenshot is also attached.
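For reference, here is a minimal sketch of the two API points above, using a synthetic stand-in for the `log10_A_gw` chain (the variable names and sample range are illustrative, not the real chain):

```python
import numpy as np

# Synthetic stand-in for the log10_A_gw MCMC samples (NOT the real chain).
rng = np.random.default_rng(0)
chain = rng.uniform(-18.0, -13.0, size=10_000)

# density=True replaces the removed normed=True keyword; the same keyword
# applies to matplotlib's plt.hist(). The resulting histogram integrates to 1.
dens, edges = np.histogram(chain, bins=40, density=True)

# np.percentile expects q as a percent in [0, 100], so the 95% upper
# limit is q=95.0, not q=0.95.
upper_limit = np.percentile(chain, 95.0)
```

With `q=0.95`, `np.percentile` would instead return a value near the very bottom of the chain, which would silently give a wildly wrong "upper limit".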
It was suggested that the clock files getting used may not have been identical.
I see the NG-9yr data paper lists TT(BIPM) as the clock standard, which matches what I found in the .par files. I assume that if a pulsar is initialized without explicitly setting `clk=`, it defaults to what's in the .par file. If that's the case, identical clock files should have been used in both my run and the tutorial, so I'm not sure why the difference arose.
Just to be sure, I re-ran the notebook with the clock set explicitly: `psr = Pulsar(p, t, ephem='DE421', clk='TT(BIPM)')`. But this doesn't seem to have changed the results substantially; they are consistent with my previous run, which differs from the NG-9yr results as before.
Although it is unlikely to be the reason for this difference, is it possible that a seed was set for the MCMC sampling in the NG 9-yr results that could be used here to reproduce identical results?
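Whether the original results used a fixed seed isn't stated anywhere I can see. As an illustration of the idea (assuming the sampler's proposals ultimately come from a NumPy RNG, which is an assumption about its internals):

```python
import numpy as np

def draw_chain(seed, n=1000):
    # Toy stand-in for a sampler whose randomness comes from a NumPy
    # Generator; the real MCMC run is far more involved.
    rng = np.random.default_rng(seed)
    return rng.normal(size=n)

# With the same seed, two runs produce bit-for-bit identical draws,
# which is what would be needed to reproduce a published chain exactly.
a = draw_chain(42)
b = draw_chain(42)
assert np.array_equal(a, b)
```

Of course, even with a fixed seed, differences in library versions or pulsar data would still break exact reproducibility.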
nithyanandan changed the title from "Running NANOGav 9-yr data tutorial notebook yields different results" to "Running NANOGav 9-yr GWB data tutorial notebook yields different results" on Jun 28, 2020.
See also issue #226
My operating system name, version, and system architecture:
Running Python 3.7
Tempo2 version below:
To be more specific, I am using the git # 8aca16c from https://bitbucket.org/psrsoft/tempo2/commits/