Running NANOGrav 9-yr GWB data tutorial notebook yields different results #227

Open
nithyanandan opened this issue Jun 28, 2020 · 1 comment

Comments


nithyanandan commented Jun 28, 2020

See also issue #226

My operating system, version, and system architecture:

$ lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.8.2003 (Core)
Release:	7.8.2003
Codename:	Core

$ hostnamectl
...
Operating System: CentOS Linux 7 (Core)
CPE OS Name: cpe:/o:centos:centos:7
Kernel: Linux 3.10.0-1127.10.1.el7.x86_64
Architecture: x86-64

Running Python 3.7
Tempo2 version below:

In [1]: import libstempo as T2
In [2]: T2.tempo2version()
Out[2]: StrictVersion ('2019.1.1')

To be more specific, I am using git commit 8aca16c from https://bitbucket.org/psrsoft/tempo2/commits/

NumPy version: 1.18.5 (which differs from the version 1.16 pinned in the requirements.txt file)

Using nanograv/enterprise git commit f66648f, running the NANOGrav 9-yr data reduction tutorial notebook yields different results from the ones that come with the notebook.

The histogram looks less skewed towards lower values of log10_A_gw compared to the original (I had to use density=True instead of normed=True in the call to PLT.hist()). The estimated upper limit is correspondingly different (less tight than the original). See the attached screenshots.

Note that the percentile() function in numpy has changed between versions: in v1.18.5 it requires q to lie between 0 and 100 (a percentage), whereas in v1.16 it expected a value between 0 and 1 (a fraction). So I set q=95.0 instead of q=0.95.

If we rule out the numpy change as the cause of these non-matching results, my guess is that there must be some difference in the pulsar selection/properties between the original run and mine. I did see some pulsar-related warnings, a screenshot of which is also attached.
[Screenshot: NG_9yr_plots_new_vs_original]
[Screenshot: NG_9yr_upper_limits_new_vs_original]
[Screenshot: NG_9yr_pulsar_warnings]
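
For reference, the two API updates described above amount to something like this (a minimal sketch; `samples` stands in for the post-burn-in log10_A_gw chain and is synthetic here, not a variable name from the notebook):

import numpy as np
import matplotlib.pyplot as PLT

# Stand-in for the post-burn-in log10_A_gw samples (synthetic, illustration only)
samples = np.random.uniform(-18.0, -13.0, size=10000)

PLT.hist(samples, bins=50, density=True)      # was normed=True in the notebook
upper_limit = np.percentile(samples, q=95.0)  # was q=0.95 in the notebook
print(upper_limit)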


nithyanandan commented Jun 28, 2020

It was suggested that the clock files getting used may not have been identical.

I see the NG-9yr data paper lists TT(BIPM) as the clock file, which is the same as what I found in the .par files. I assume if a psr is initialized without explicitly setting clk=, it defaults to what's in the .par file. If that's the case, I expect it to be using identical clock files between my run and the tutorial. So not sure why the difference arose.
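
To sanity-check that assumption, something like this could confirm the CLK entry in each .par file (a minimal sketch; the partim/ directory path is my guess at the data layout, not taken from the tutorial):

import glob

# Print the CLK line of every .par file; each should read "CLK TT(BIPM)"
for parfile in sorted(glob.glob('partim/*.par')):
    with open(parfile) as f:
        clk_lines = [line.strip() for line in f if line.strip().startswith('CLK')]
    print(parfile, clk_lines)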

Just to be sure, I ran the notebook again with the clock file set explicitly in the call:

psr = Pulsar(p, t, ephem='DE421', clk='TT(BIPM)')
[Screenshot: NG_9yr_psr_initialization]
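
(For context, loading the full set of pulsars this way would look roughly like the sketch below; the partim/ file layout is an assumption, not taken from the notebook:)

import glob
from enterprise.pulsar import Pulsar

parfiles = sorted(glob.glob('partim/*.par'))  # hypothetical data directory
timfiles = sorted(glob.glob('partim/*.tim'))

psrs = [Pulsar(p, t, ephem='DE421', clk='TT(BIPM)')
        for p, t in zip(parfiles, timfiles)]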

But this does not seem to have changed the results substantially: they are consistent with my previous run and, as before, differ from the NG-9yr results.

[Screenshot: NG_9yr_plot_clk_TT_BPIM_new_vs_original]
[Screenshot: NG_9yr_upper_limit_clk_TT_BIPM_new_vs_original]

Although unlikely to be the reason for this difference, is it possible that a seed was set for the MCMC sampling in the NG 9-yr analysis, which could be reused here to produce identical results?
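
For what it's worth, one way to pin the sampler's randomness would be to seed numpy's global RNG before sampling (a hedged sketch; this assumes the sampler draws from numpy's global random state, which I have not confirmed, and the seed value is arbitrary):

import numpy as np

np.random.seed(12345)  # arbitrary seed, set before building and running the sampler
# ...then construct the sampler and call its sample() method as in the tutorial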

nithyanandan changed the title from "Running NANOGrav 9-yr data tutorial notebook yields different results" to "Running NANOGrav 9-yr GWB data tutorial notebook yields different results" on Jun 28, 2020