diff --git a/404.html b/404.html index 4955af31..6c85c02d 100644 --- a/404.html +++ b/404.html @@ -31,6 +31,7 @@ + @@ -84,7 +85,7 @@ diff --git a/defs/beam.html b/defs/beam.html index 311ab540..a279d853 100644 --- a/defs/beam.html +++ b/defs/beam.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ diff --git a/defs/blocks.html b/defs/blocks.html index f6a08310..92f0d83f 100644 --- a/defs/blocks.html +++ b/defs/blocks.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ diff --git a/defs/cal_sols.html b/defs/cal_sols.html index d064231f..6a4cba6f 100644 --- a/defs/cal_sols.html +++ b/defs/cal_sols.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -149,7 +150,7 @@

mwa_hyperdrive documentation

Calibration solutions file formats

Calibration solutions are Jones matrices that, when applied to raw data, "calibrate" the visibilities.

hyperdrive can convert between supported formats (see solutions-convert). Soon it will also be able to apply them (but users can write out calibrated visibilities as part of di-calibrate).

diff --git a/defs/cal_sols_ao.html b/defs/cal_sols_ao.html index 8ec98dd8..e8440f21 100644 --- a/defs/cal_sols_ao.html +++ b/defs/cal_sols_ao.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -149,17 +150,17 @@

mwa_hyperdrive documentation

The André Offringa (ao) calibration solutions format

This format is output by calibrate and is documented in mwa-reduce as follows. Note that the startTime and endTime should be populated with "AIPS time", although calibrate appears to always write 0 for these. hyperdrive instead opts to write the centroid GPS times here (the end time is the last timestep, inclusive).

Tiles are ordered by antenna number, i.e. the second column in the observation's corresponding metafits file labelled "antenna". Times and frequencies are sorted in ascending order.

mwa-reduce documentation

| Bytes  |  Description |
 |-------:|:-------------|
|  0- 7  |  string intro ; 8-byte null terminated string "MWAOCAL" |
 |  8-11  |  int fileType ; always 0, reserved for indicating something other than complex Jones solutions |
 | 12-15  |  int structureType ; always 0, reserved for indicating different ordering |
 | 16-19  |  int intervalCount ; Number of solution intervals in file |
@@ -181,7 +182,7 @@ 

diff --git a/defs/cal_sols_hyp.html b/defs/cal_sols_hyp.html index 9e718fb1..633d1b8f 100644 --- a/defs/cal_sols_hyp.html +++ b/defs/cal_sols_hyp.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -148,8 +149,8 @@

mwa_hyperdrive documentation

The hyperdrive calibration solutions format

Jones matrices are stored in a fits file as an "image" with 4 dimensions (timeblock, tile, chanblock, float, in that order) in the "SOLUTIONS" HDU (which is the second HDU). An element of the solutions is a 64-bit float (a.k.a. double-precision float). The last dimension always has a length of 8; these correspond to the complex gains of the X dipoles (\( g_x \)), the leakage of the X dipoles (\( D_x \)), the leakage of the Y dipoles (\( D_y \)) and the gains of the Y dipoles (\( g_y \)), stored as real and imaginary pairs.

Note that in the context of the MWA, "antenna" and "tile" are used interchangeably.
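As a sketch of how the eight floats per solution can be unpacked, here is a small numpy example. It assumes the common layout of real/imaginary pairs arranged as [[g_x, D_x], [D_y, g_y]]; the synthetic array stands in for the "SOLUTIONS" HDU data.

```python
import numpy as np

# Synthetic stand-in for the "SOLUTIONS" HDU data: 1 timeblock, 2 tiles,
# 3 chanblocks, 8 floats per solution.
sols = np.zeros((1, 2, 3, 8))
sols[..., 0] = 1.0  # real part of g_x
sols[..., 6] = 1.0  # real part of g_y

# Pair consecutive floats into complex numbers, then reshape into
# 2x2 Jones matrices laid out as [[g_x, D_x], [D_y, g_y]].
jones = (sols[..., ::2] + 1j * sols[..., 1::2]).reshape(1, 2, 3, 2, 2)

print(jones[0, 0, 0])  # a 2x2 identity Jones matrix
```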

Metadata

@@ -186,17 +187,17 @@

Cal path to this file.

Raw MWA data corrections

PFB describes the PFB gains flavour applied to the raw MWA data. At the time of writing, this flavour is described as "jake", "cotter2014", "empirical", "levine", or "none".

D_GAINS is "Y" if the digital gains were applied to the raw MWA data, "N" if they were not.

CABLELEN is "Y" if the cable length corrections were applied to the raw MWA data, "N" if they were not.

GEOMETRY is "Y" if the geometric delay correction was applied to the raw MWA data, "N" if it was not.

Others

MODELLER describes what was used to generate model visibilities in calibration. This is either CPU or details on the CUDA device used, e.g. @@ -204,18 +205,18 @@


Extra HDUs

More metadata are contained in HDUs other than the first one (which contains the metadata keys described above). Other than the first HDU and the "SOLUTIONS" HDU (HDUs 1 and 2, respectively), all HDUs and their contents are optional.

TIMEBLOCKS

See blocks for an explanation of what timeblocks are.

The "TIMEBLOCKS" HDU is a FITS table with three columns:

  1. Start
  2. End
  3. Average

Each row represents a calibration timeblock, and there must be the same number of rows as there are timeblocks in the calibration solutions (in the "SOLUTIONS" HDU). Each of these times is a centroid GPS timestamp.

It is possible to have one or multiple columns without data; cfitsio will write zeros for values, but hyperdrive will ignore columns with all zeros.

@@ -226,7 +227,7 @@

TIMEBLOCKS

timesteps in that timeblock are used, then the average time could be 12.666 or 13.333.

TILES

The "TILES" HDU is a FITS table with up to five columns:

  1. Antenna
  2. Flag
  3. TileName

  4. DipoleDelays

Antenna is the 0-N antenna index (where N is the total number of antennas in the observation). These indices match the "Antenna" column of an MWA metafits file.

Flag is a boolean indicating whether an antenna was flagged for calibration (1) or not (0).

@@ -248,7 +249,7 @@

TILES

There are 16 values per tile.

CHANBLOCKS

See blocks for an explanation of what chanblocks are.

The "CHANBLOCKS" HDU is a FITS table with up to three columns:

  1. Index
  2. Flag
  3. Freq

If any of the frequencies is NaN, then hyperdrive will not use the Freq column.
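The rule above can be mirrored in a couple of lines of Python (a sketch, not hyperdrive's actual code):

```python
import numpy as np

# Chanblock frequencies as read from the Freq column; one is NaN.
freqs = np.array([1.665e8, 1.6658e8, np.nan])

# Mirror hyperdrive's rule: a single NaN invalidates the whole Freq column.
use_freq_column = not np.isnan(freqs).any()
print(use_freq_column)  # False
```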

    RESULTS (Calibration results)

The "RESULTS" HDU is a FITS image with two dimensions -- timeblock and chanblock, in that order -- that describe the precision to which a chanblock converged for that timeblock (as double-precision floats). If a chanblock was flagged, NaN is provided for its precision. NaN is also listed for chanblocks that completely failed to calibrate.

These calibration precisions must have the same number of timeblocks and chanblocks as described by the calibration solutions (in the "SOLUTIONS" HDU).

    BASELINES

The "BASELINES" HDU is a FITS image with one dimension. The values of the "image" (let's call it an array) are the double-precision float baseline weights used in calibration (controlled by UVW minimum and maximum cutoffs). The length of the array is the total number of baselines (i.e. flagged and unflagged). Flagged baselines have weights of NaN; e.g. baseline 0 is between antennas 0 and 1.

    BASELINES

    These baseline weights must have a non-NaN value for all tiles in the observation (e.g. if there are 128 tiles in the calibration solutions, then there must be 8128 baseline weights).
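As a check on the numbers above (128 tiles implies 8128 baseline weights), here is a small numpy sketch; the antenna1-major ordering shown is an assumption for illustration:

```python
import numpy as np

num_tiles = 128
# Cross-correlation baselines only (no autos): n * (n - 1) / 2.
num_baselines = num_tiles * (num_tiles - 1) // 2
print(num_baselines)  # 8128

# Baseline index -> (antenna1, antenna2) in antenna1-major order:
# (0,1), (0,2), ..., (0,127), (1,2), ...
ant1, ant2 = np.triu_indices(num_tiles, k=1)
print(ant1[0], ant2[0])    # 0 1
print(ant1[-1], ant2[-1])  # 126 127
```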

Python code for reading
A full example of reading and plotting solutions is available.

    BASELINES

from astropy.io import fits

f = fits.open("hyperdrive_solutions.fits")
sols = f["SOLUTIONS"].data
num_timeblocks, num_tiles, num_chanblocks, _ = sols.shape

obsid = f[0].header["OBSID"]
pfb_flavour = f[0].header["PFB"]
start_times = f[0].header["S_TIMES"]

tile_names = [tile["TileName"] for tile in f["TILES"].data]
tile_flags = [tile["Flag"] for tile in f["TILES"].data]

freqs = [chan["FREQ"] for chan in f["CHANBLOCKS"].data]

cal_precisions_for_timeblock_0 = f["RESULTS"].data[0]
    diff --git a/defs/cal_sols_rts.html b/defs/cal_sols_rts.html index 56fc5e4c..e3512dd5 100644 --- a/defs/cal_sols_rts.html +++ b/defs/cal_sols_rts.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -153,10 +154,12 @@

    hyperdrive solutions-convert /path/to/rts/solutions/ rts-as-hyp-solutions.fits -m /path/to/obs.metafits
    @@ -167,10 +170,12 @@ 


    I (CHJ) spent a very long time trying to make the writing of RTS solutions diff --git a/defs/coords.html b/defs/coords.html index 94c67111..14163012 100644 --- a/defs/coords.html +++ b/defs/coords.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    @@ -152,38 +153,38 @@

ITRF frame (internally we refer to this as "geocentric XYZ"). There's also a "geodetic XYZ" frame; an example of this is WGS 84 (which we assume everywhere when converting, as it's the current best ellipsoid). Finally, there's also an "East North Height" coordinate system.

    To calculate UVW baseline coordinates, geodetic XYZ coordinates are required1. Therefore, various coordinate conversions are required to obtain UVWs. The conversion between all of these systems is briefly described below. The relevant code lives within Marlu.

ITRF and "geocentric XYZ"

As the name implies, this coordinate system uses the centre of the Earth as a reference. To convert between geocentric and geodetic, an array position is required (i.e. the "average" location on the Earth of the instrument collecting visibilities). When all antenna positions are geocentric, the array position is given by the mean antenna position.

Measurement sets indicate the usage of ITRF with the "MEASURE_REFERENCE" keyword attached to the POSITION column of an ANTENNA table (value "ITRF").

The uvfits standard states that the only supported frame is "ITRF", and hyperdrive assumes that only ITRF is used. However, CASA/casacore seem to write out antenna positions incorrectly; the positions look like what you would find in an equivalent measurement set. This incorrect behaviour is detected and accounted for.

"Geodetic XYZ"

    This coordinate system is similar to geocentric, but uses an array position as its reference.

Measurement sets support the WGS 84 frame, again with the "MEASURE_REFERENCE" keyword attached to the POSITION column of an ANTENNA table (value "WGS84"). However, hyperdrive currently does not check if geodetic positions are used; it instead just assumes geocentric.

When read literally, the antenna positions in a uvfits file ("STABXYZ" column of the "AIPS AN" HDU) should be geodetic, not counting the aforementioned casacore bug.

    East North Height (ENH)

    MWA tiles positions are listed in metafits files with @@ -232,7 +233,7 @@

    UVWs

Note that this is a UVW coordinate for an antenna. To get the proper baseline UVW, a difference between two antennas' UVWs needs to be taken. The order of this subtraction is important; hyperdrive uses the "antenna1 - antenna2" convention. Software that reads data may need to conjugate visibilities if this convention is different.
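A toy numpy example of the subtraction convention (the per-antenna UVW values here are made up):

```python
import numpy as np

# Made-up per-antenna UVW coordinates (metres).
uvw = {1: np.array([10.0, 5.0, 1.0]),
       2: np.array([4.0, 2.0, 0.5])}

# hyperdrive's convention: baseline UVW = antenna1 - antenna2.
uvw_12 = uvw[1] - uvw[2]
print(uvw_12)  # [6.  3.  0.5]

# A reader using the opposite ("antenna2 - antenna1") convention must
# negate the UVWs and conjugate the visibilities.
vis = 3.0 - 4.0j
vis_flipped = np.conj(vis)
uvw_21 = -uvw_12
```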

    Further reading

    diff --git a/defs/dut1.html b/defs/dut1.html index 0006932b..47c222da 100644 --- a/defs/dut1.html +++ b/defs/dut1.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -164,7 +165,7 @@

More explanation

    A lot of good, easy-to-read information is here.

UTC keeps pace with TAI only through the aid of leap seconds (both are "atomic time frames"). UT1 is the "actual time", but the Earth's rate of rotation is difficult to measure and predict. DUT1 is not allowed to fall outside the range -0.9 to 0.9 seconds; a leap second is introduced before that threshold is reached.

    diff --git a/defs/fd_types.html b/defs/fd_types.html index a5a7ea67..15b68099 100644 --- a/defs/fd_types.html +++ b/defs/fd_types.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -151,10 +152,12 @@


    This page describes supported flux-density types within hyperdrive. The following pages detail their usage within sky-model source lists. This page details how each type is estimated in modelling.

Power laws and Curved power laws
    Most astrophysical sources are modelled as power laws. These are simply @@ -163,16 +166,18 @@

spectral index \( \alpha \).

Curved power laws are formalised in Section 4.1 of Callingham et al. 2017. These are the same as power laws but with an additional "spectral curvature" parameter \( q \).

    Both kinds of power law flux-density representations are preferred in hyperdrive.
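A sketch of both forms in Python, using the Section 4.1 parameterisation from Callingham et al. 2017 (the function names are illustrative, not hyperdrive's API):

```python
import numpy as np

def power_law(freq, ref_freq, ref_flux, alpha):
    """Flux density (Jy) of a power-law source at freq (Hz)."""
    return ref_flux * (freq / ref_freq) ** alpha

def curved_power_law(freq, ref_freq, ref_flux, alpha, q):
    """Power law with the extra spectral-curvature parameter q."""
    ratio = freq / ref_freq
    return ref_flux * ratio ** alpha * np.exp(q * np.log(ratio) ** 2)

# With q = 0 the curvature term vanishes and both forms agree.
s1 = power_law(200e6, 150e6, 10.0, -0.8)
s2 = curved_power_law(200e6, 150e6, 10.0, -0.8, 0.0)
```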

Flux density lists
    The list type is simply many instances of a Stokes \( \text{I} \), \( diff --git a/defs/modelling/estimating.html b/defs/modelling/estimating.html index f60bd4eb..66a9a755 100644 --- a/defs/modelling/estimating.html +++ b/defs/modelling/estimating.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    @@ -155,10 +156,12 @@


    Both power-law and curved-power-law sources have a spectral index (\( \alpha @@ -199,13 +202,15 @@

When estimating flux densities from a list, it is feared that the "jagged" shape of a component's spectral energy distribution introduces artefacts into an EoR power spectrum.

    It is relatively expensive to estimate flux densities from a list type. For all diff --git a/defs/modelling/intro.html b/defs/modelling/intro.html index 7129a089..ca482fa2 100644 --- a/defs/modelling/intro.html +++ b/defs/modelling/intro.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    diff --git a/defs/modelling/rime.html b/defs/modelling/rime.html index e2470068..e0799f3d 100644 --- a/defs/modelling/rime.html +++ b/defs/modelling/rime.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -148,10 +149,12 @@

    mwa_hyperdrive documentation

    Measurement equation

Note
    A lot of this content was taken from Jack Line's diff --git a/defs/mwa/corrections.html b/defs/mwa/corrections.html index 1f5a7233..2c7bd4ab 100644 --- a/defs/mwa/corrections.html +++ b/defs/mwa/corrections.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    @@ -148,16 +149,18 @@

    mwa_hyperdrive documentation

    Raw data corrections

A number of things can be done to "correct" or "pre-process" raw MWA data before it is ready for calibration (or other analysis). These tasks are handled by Birli, either as the Birli executable itself, or internally in hyperdrive. cotter used to perform these tasks but it has been superseded by Birli.

Geometric correction (a.k.a. phase tracking)
    Many MWA observations do not apply a geometric correction despite having a @@ -168,44 +171,50 @@

    Raw on calibration!

The poly-phase filter bank used by the MWA affects visibilities before they get saved to disk. Over time, a number of "flavours" of these gains have been used:

• "Jake Jones" (jake; 200 Hz)
• "cotter 2014" (cotter2014; 10 kHz)
• "RTS empirical" (empirical; 40 kHz)
• "Alan Levine" (levine; 40 kHz)

When correcting raw data, the "Jake Jones" gains are used by default. For each flavour, the first item in the parentheses (e.g. cotter2014) indicates what should be supplied to hyperdrive if you want to use those gains instead. There is also a "none" flavour if you want to disable PFB gain correction.

    In CHJ's experience, using different flavours have very little effect on calibration quality.

    Some more information on the PFB can be found here.

Cable lengths
    Each tile is connected by a cable, and that cable might have a different length to others. This correction aims to better align the signals of each tile.

Digital gains
    todo!()
    diff --git a/defs/mwa/dead_dipoles.html b/defs/mwa/dead_dipoles.html
    index 75e3453b..0c140dff 100644
    --- a/defs/mwa/dead_dipoles.html
    +++ b/defs/mwa/dead_dipoles.html
    @@ -30,6 +30,7 @@
     
             
             
    +        
     
             
             
    @@ -83,7 +84,7 @@
     
             
    @@ -148,16 +149,16 @@ 

    mwa_hyperdrive documentation

    Dead dipoles

Each MWA tile has 16 "bowties", and each bowtie is made up of two dipoles (one X, one Y). We refer to a "dead" dipole as one that is not functioning correctly (hopefully not receiving any power at all). This information is used in generating beam responses as part of modelling visibilities. The more accurate the visibilities, the better that calibration performs, so it is important to account for dead dipoles if possible.

Beam responses are generated with hyperbeam and dead dipole information is encoded as a "dipole gain" of 1 ("alive") or 0 ("dead"). It is possible to supply other values for dipole gains with a "DipAmps" column; see the metafits page.

    For the relevant functions, dead dipole information can be ignored by supplying a flag --unity-dipole-gains. This sets all dipole gains to 1.

    @@ -166,8 +167,8 @@

    Dead dipolesSee this page for more info on dipole ordering.

In the image below, you can see the 12th Y dipole is dead for "Tile022". All other dipoles are "alive".

    diff --git a/defs/mwa/delays.html b/defs/mwa/delays.html index 693c33da..d08c827b 100644 --- a/defs/mwa/delays.html +++ b/defs/mwa/delays.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -148,7 +149,7 @@

    mwa_hyperdrive documentation

    Dipole delays

A tile's dipole delays control where it is "pointing". Delays are provided as numbers, and these control how long a dipole's response is delayed before it is correlated with other dipoles. This effectively allows the MWA to be more sensitive in a particular direction without any physical movement.

    @@ -168,8 +169,8 @@

    Dipole delays

    would correspond to the example above. Note that these user-supplied delays will override delays that are otherwise provided.

Dipoles cannot be delayed by more than "31". "32" is code for "dead dipole", which means that these dipoles should not be used when modelling a tile's response.

    Ideal dipole delays

    Most (all?) MWA observations use a single set of delays for all tiles. Dipole @@ -178,10 +179,10 @@

  1. In the DELAYS key in HDU 1; and
  2. For each tile in HDU 2.

The delays in HDU 1 are referred to as "ideal" dipole delays. A set of delays are not ideal if any are "32" (i.e. dead).

However, the HDU 1 delays may all be "32". This is an indication from the observatory that this observation is "bad" and should not be used. hyperdrive will proceed with such observations but issue a warning. In this case, the ideal delays are obtained by iterating over all tile delays until each delay is not 32.
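The iteration described above can be sketched in Python (an illustration, not hyperdrive's implementation):

```python
import numpy as np

# Per-tile dipole delays (16 per tile); 32 marks a dead dipole.
tile_delays = np.array([
    [32, 6] + [6] * 14,  # tile with one dead dipole
    [6] * 16,            # fully functional tile
])

# Iterate over tiles, filling each of the 16 slots with the first
# value that isn't 32.
ideal = np.full(16, 32)
for delays in tile_delays:
    dead = ideal == 32
    ideal[dead] = delays[dead]
    if not (ideal == 32).any():
        break

print(ideal.tolist())  # sixteen 6s
```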

    diff --git a/defs/mwa/metafits.html b/defs/mwa/metafits.html index dfda8050..b019910a 100644 --- a/defs/mwa/metafits.html +++ b/defs/mwa/metafits.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@
    @@ -148,25 +149,29 @@

    mwa_hyperdrive documentation

    Metafits files

The MWA tracks observation metadata with "metafits" files. Often these accompany the raw visibilities in a download, but these could be old (such as the "PPD metafits" files). hyperdrive does not support PPD metafits files; only new metafits files should be used.

    This command downloads a new metafits file for the specified observation ID:

Download MWA metafits file

OBSID=1090008640; wget "http://ws.mwatelescope.org/metadata/fits?obs_id=${OBSID}" -O "${OBSID}".metafits

    Why should I use a metafits file?


    Measurement sets and uvfits files do not contain MWA-specific information, @@ -175,10 +180,12 @@

    Metafits files< uvfits file may also lack dipole delay information.


    Why are new metafits files better?


    The database of MWA metadata can change over time for observations conducted @@ -191,11 +198,11 @@

    Metafits files<

    Controlling dipole gains

If the "TILEDATA" HDU of a metafits contains a "DipAmps" column, each row containing 16 double-precision values for bowties in the M&C order, these are used as the dipole gains in beam calculations. If the "DipAmps" column isn't available, the default behaviour is to use gains of 1.0 for all dipoles, except those that have delays of 32 in the "Delays" column (they will have a gain of 0.0, and are considered dead).
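These rules can be sketched in Python (the function name is illustrative and the metafits values are made up):

```python
import numpy as np

def dipole_gains(delays, dip_amps=None):
    """Per-bowtie gains for one tile, following the rules above."""
    if dip_amps is not None:
        # A "DipAmps" row overrides the defaults entirely.
        return np.asarray(dip_amps, dtype=float)
    gains = np.ones(16)
    # Delay 32 marks a dead dipole: gain 0.0.
    gains[np.asarray(delays) == 32] = 0.0
    return gains

g = dipole_gains([0] * 15 + [32])
print(g[0], g[-1])  # 1.0 0.0
```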

    diff --git a/defs/mwa/mwaf.html b/defs/mwa/mwaf.html index 54eb2f70..c3e15aa5 100644 --- a/defs/mwa/mwaf.html +++ b/defs/mwa/mwaf.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -157,10 +158,12 @@

    mwaf flag fil

    At the time of writing, hyperdrive only utilises mwaf files when reading visibilities from raw data.


    cotter-produced mwaf files are unreliable because

    diff --git a/defs/mwa/mwalib.html b/defs/mwa/mwalib.html index ca12b618..ba400ada 100644 --- a/defs/mwa/mwalib.html +++ b/defs/mwa/mwalib.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -151,7 +152,7 @@

    mwalib

    mwalib is the official MWA raw-data-reading library. hyperdrive users usually don't need to concern themselves with it, but mwalib errors may arise.

mwalib can be quite noisy with log messages (particularly at the "trace" level); it is possible to suppress these messages by setting an environment variable:

    RUST_LOG=mwalib=error
    diff --git a/defs/mwa/picket_fence.html b/defs/mwa/picket_fence.html
    index 46b8fc38..619d33b5 100644
    --- a/defs/mwa/picket_fence.html
    +++ b/defs/mwa/picket_fence.html
    @@ -30,6 +30,7 @@
     
             
             
    +        
     
             
             
    @@ -83,7 +84,7 @@
     
             
    @@ -148,8 +149,8 @@ 

    mwa_hyperdrive documentation

    Picket fence observations

A "picket fence" observation contains more than one "spectral window" (or "SPW"). That is, not all the frequency channels in an observation are contiguous; there's at least one gap somewhere.
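A toy sketch of spotting a picket fence observation from channel frequencies (the 1.5x threshold is an arbitrary illustrative choice, not hyperdrive's logic):

```python
import numpy as np

# Fine-channel centre frequencies (Hz); the jump makes this a
# picket fence observation with two SPWs.
freqs = np.array([150.00e6, 150.04e6, 150.08e6, 167.00e6, 167.04e6])

gaps = np.diff(freqs)
res = gaps.min()  # native channel spacing
# Any jump much larger than the native spacing starts a new SPW.
num_spws = 1 + int((gaps > 1.5 * res).sum())
print(num_spws)  # 2
```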

    hyperdrive does not currently support picket fence observations, but it will eventually support them properly. However, it is possible to calibrate a diff --git a/defs/pols.html b/defs/pols.html index f7fc334a..2c72e000 100644 --- a/defs/pols.html +++ b/defs/pols.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    diff --git a/defs/source_list_ao.html b/defs/source_list_ao.html index b3d758d6..483e5510 100644 --- a/defs/source_list_ao.html +++ b/defs/source_list_ao.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -157,20 +158,22 @@

    Source names are allowed to have spaces inside them, because the names are surrounded by quotes. This is fine for reading, but when converting one of these sources to another format, the spaces need to be translated to underscores.

skymodel fileformat 1.1
source {
  name "J002549-260211"
       component {
         type point
         position 0h25m49.2s -26d02m13s
    @@ -185,7 +188,7 @@ 

    + @@ -83,7 +84,7 @@ @@ -161,10 +162,12 @@


    The following are the contents of a valid YAML file. super_sweet_source1 is a @@ -224,10 +227,12 @@


    The following are the contents of a valid JSON file. super_sweet_source1 is a @@ -235,79 +240,79 @@

{
  "super_sweet_source1": [
    {
      "ra": 10.0,
      "dec": -27.0,
      "comp_type": "point",
      "flux_type": {
        "list": [
          {
            "freq": 150000000.0,
            "i": 10.0
          },
          {
            "freq": 170000000.0,
            "i": 5.0,
            "q": 1.0,
            "u": 2.0,
            "v": 3.0
          }
        ]
      }
    }
  ],
  "super_sweet_source2": [
    {
      "ra": 0.0,
      "dec": -35.0,
      "comp_type": {
        "gaussian": {
          "maj": 20.0,
          "min": 10.0,
          "pa": 75.0
        }
      },
      "flux_type": {
        "power_law": {
          "si": -0.8,
          "fd": {
            "freq": 170000000.0,
            "i": 5.0,
            "q": 1.0,
            "u": 2.0,
            "v": 3.0
          }
        }
      }
    },
    {
      "ra": 155.0,
      "dec": -10.0,
      "comp_type": {
        "shapelet": {
          "maj": 20.0,
          "min": 10.0,
          "pa": 75.0,
          "coeffs": [
            {
              "n1": 0,
              "n2": 1,
              "value": 0.5
            }
          ]
        }
      },
      "flux_type": {
        "curved_power_law": {
          "si": -0.6,
          "fd": {
            "freq": 150000000.0,
            "i": 50.0,
            "q": 0.5,
            "u": 0.1
          },
          "q": 0.2
        }
      }
    }
  ]
}
diff --git a/defs/source_list_rts.html b/defs/source_list_rts.html index b8bf038f..e27111c8 100644 --- a/defs/source_list_rts.html +++ b/defs/source_list_rts.html @@ -155,16 +156,18 @@

    Keywords like SOURCE, COMPONENT, POINT etc. must be at the start of a line (i.e. no preceding space).

RTS sources always have a "base source", which can be thought of as a non-optional component or the first component in a collection of components.


    Taken from srclists, file diff --git a/defs/source_lists.html b/defs/source_lists.html index e8a7f824..0869d501 100644 --- a/defs/source_lists.html +++ b/defs/source_lists.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    @@ -160,13 +161,15 @@

hyperdrive can also convert between formats, although in a "lossy" way; non-hyperdrive formats cannot represent all component and/or flux-density types.

      @@ -176,10 +179,12 @@


    hyperdrive can convert (as best it can) between different source list formats. @@ -189,10 +194,12 @@


    hyperdrive can be given many source lists in order to test that they are @@ -201,10 +208,12 @@

    diff --git a/defs/vis_formats_read.html b/defs/vis_formats_read.html index 58b50cad..4d68cae4 100644 --- a/defs/vis_formats_read.html +++ b/defs/vis_formats_read.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -148,13 +149,15 @@

    mwa_hyperdrive documentation

    Supported visibility formats for reading

Raw MWA data

Raw "legacy" MWA data comes in "gpubox" files. "MWAX" data comes in a similar format, and *ch???*.fits is a useful glob to identify them. Raw data can be accessed from the ASVO.

    Here are examples of using each of these MWA formats with di-calibrate:

    @@ -167,10 +170,12 @@

    +
    hyperdrive di-calibrate -d *.ms *.metafits -s a_good_sky_model.yaml
    @@ -186,10 +191,12 @@ 

    below for more info.

    -
    +
    +

    uvfits

    -

    +
    +
    hyperdrive di-calibrate -d *.uvfits *.metafits -s a_good_sky_model.yaml
    @@ -203,10 +210,12 @@ 

    here.

    -
    +
    +

    When using a metafits

    -

    +
    +

    When using a metafits file with a uvfits/MS, the tile names in the metafits and diff --git a/defs/vis_formats_write.html b/defs/vis_formats_write.html index cb03f383..791676b5 100644 --- a/defs/vis_formats_write.html +++ b/defs/vis_formats_write.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    @@ -152,10 +153,12 @@

    +
    hyperdrive solutions-apply \
    @@ -165,10 +168,12 @@ 

    +
    hyperdrive solutions-apply \
    @@ -180,10 +185,12 @@ 

    here.

    -
    +
    +

    Visibility averaging

    -

    +
    +

    When writing out visibilities, they can be averaged in time and frequency. Units @@ -210,10 +217,12 @@
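Averaging in time and frequency amounts to a block-wise mean over the visibility array. A minimal numpy sketch; the averaging factors (4 timesteps, 2 channels) and the toy data are made-up illustrative values, not hyperdrive defaults:

```python
import numpy as np

# vis: (time, freq) complex visibilities for one baseline (toy data).
n_time, n_freq = 8, 6
vis = (np.arange(n_time * n_freq, dtype=float)
       .reshape(n_time, n_freq)
       .astype(complex))

# Average every 4 timesteps and every 2 fine channels.
t_avg, f_avg = 4, 2
avg = vis.reshape(n_time // t_avg, t_avg, n_freq // f_avg, f_avg).mean(axis=(1, 3))
print(avg.shape)  # (2, 3)
```

Real averaging would also propagate weights and flags; this shows only the shape arithmetic.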

    +

    All aspects of hyperdrive that can write visibilities can write to multiple diff --git a/dev/ndarray.html b/dev/ndarray.html index 200f6a88..bd24651f 100644 --- a/dev/ndarray.html +++ b/dev/ndarray.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    diff --git a/dev/vec1.html b/dev/vec1.html index 8c4d286f..4d92ab93 100644 --- a/dev/vec1.html +++ b/dev/vec1.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ diff --git a/index.html b/index.html index 333d1b6e..7a6cef95 100644 --- a/index.html +++ b/index.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ diff --git a/installation/from_source.html b/installation/from_source.html index 50c1b2ce..d3588bef 100644 --- a/installation/from_source.html +++ b/installation/from_source.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -150,10 +151,12 @@

    mwa_hyperdrive documentation

    Installing hyperdrive from source code

    Dependencies

    hyperdrive depends on these C libraries:

    -
    +
    +

    cfitsio

    -

    +
    +
    -
    +
    +

    hdf5

    -

    +
    +

    Optional dependencies

    -
    +
    +

    freetype2 (for calibration solutions plotting)

    -

    +
    +
      @@ -213,10 +220,12 @@

      O

    -
    + -
    +

    Installing Rust

    -
    +
    +

    TL;DR

    -

    +
    +
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    @@ -289,20 +302,24 @@ 

    +

    It is possible to compile with more optimisations if you give --profile production to the cargo install command. This may make things a few percent faster, but compilation will take much longer.

    -
    +
    +

    CUDA

    -

    +
    +

    Do you have a CUDA-capable NVIDIA GPU? Ensure you have installed @@ -316,8 +333,8 @@

    cargo install --path . --locked --features=cuda,gpu-single

    -

If you're using "datacentre" products (e.g. a V100 available on the Pawsey-hosted supercomputer "garrawarla"), you probably want double-precision floats:

    cargo install --path . --locked --features=cuda
     
    @@ -329,17 +346,19 @@

    +

    Do you have a HIP-capable AMD GPU? Ensure you have installed HIP (instructions are above), and compile with the hip feature (single-precision floats):

    cargo install --path . --locked --features=hip,gpu-single
     
    -

If you're using "datacentre" products (e.g. the GPUs on the "setonix" supercomputer), you probably want double-precision floats:

    cargo install --path . --locked --features=hip
     
    @@ -348,10 +367,12 @@

    +

    The aforementioned C libraries can each be compiled by cargo. all-static @@ -361,10 +382,12 @@

    +

    cargo features can be chained in a comma-separated list:

    @@ -372,10 +395,12 @@

    +

    If you're having problems compiling, it's possible you have an older Rust @@ -390,10 +415,12 @@

    +

    hyperdrive used to depend on the ERFA C diff --git a/installation/intro.html b/installation/intro.html index b0b865d9..a99a3ea0 100644 --- a/installation/intro.html +++ b/installation/intro.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    diff --git a/installation/post.html b/installation/post.html index 9189312c..63af0d1c 100644 --- a/installation/post.html +++ b/installation/post.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -148,10 +149,12 @@

    mwa_hyperdrive documentation

    Post installation instructions

    -
    +
    +

    Setting up the beam

    -

    +
    +

    Many hyperdrive functions require the beam code to function. The MWA FEE beam diff --git a/installation/pre_compiled.html b/installation/pre_compiled.html index 136226e5..d6f12e5c 100644 --- a/installation/pre_compiled.html +++ b/installation/pre_compiled.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    @@ -151,30 +152,32 @@

    Visit the GitHub releases page. You should see releases like the following:

    Release example

      -
    • Under "Assets", download one of the tar.gz files starting with +
    • Under "Assets", download one of the tar.gz files starting with mwa_hyperdrive;
    • Untar it (e.g. tar -xvf mwa_hyperdrive*.tar.gz); and
    • Run the binary (./hyperdrive).

If you intend on running hyperdrive on a desktop GPU, then you probably want the "CUDA-single" release. You can still use the double-precision version on a desktop GPU, but it will be much slower than single-precision. Instructions to install CUDA are on the next page.

    It is possible to run hyperdrive with HIP (i.e. the AMD equivalent to NVIDIA's CUDA), but HIP does not appear to offer static libraries, so no static feature is provided, and users will need to compile hyperdrive themselves with instructions on the next page.

    -
    +
    +

    Note

    -

    +
    +

    The pre-compiled binaries are made by GitHub actions using:

    cargo build --release --locked --no-default-features --features=hdf5-static,cfitsio-static
     

This means they cannot plot calibration solutions. "CUDA-double" binaries have the cuda feature and "CUDA-single" binaries have the cuda and gpu-single features. CUDA cannot legally be statically linked, so a local installation of CUDA is required.

    diff --git a/mdbook-admonish.css b/mdbook-admonish.css new file mode 100644 index 00000000..45aeff05 --- /dev/null +++ b/mdbook-admonish.css @@ -0,0 +1,348 @@ +@charset "UTF-8"; +:is(.admonition) { + display: flow-root; + margin: 1.5625em 0; + padding: 0 1.2rem; + color: var(--fg); + page-break-inside: avoid; + background-color: var(--bg); + border: 0 solid black; + border-inline-start-width: 0.4rem; + border-radius: 0.2rem; + box-shadow: 0 0.2rem 1rem rgba(0, 0, 0, 0.05), 0 0 0.1rem rgba(0, 0, 0, 0.1); +} +@media print { + :is(.admonition) { + box-shadow: none; + } +} +:is(.admonition) > * { + box-sizing: border-box; +} +:is(.admonition) :is(.admonition) { + margin-top: 1em; + margin-bottom: 1em; +} +:is(.admonition) > .tabbed-set:only-child { + margin-top: 0; +} +html :is(.admonition) > :last-child { + margin-bottom: 1.2rem; +} + +a.admonition-anchor-link { + display: none; + position: absolute; + left: -1.2rem; + padding-right: 1rem; +} +a.admonition-anchor-link:link, a.admonition-anchor-link:visited { + color: var(--fg); +} +a.admonition-anchor-link:link:hover, a.admonition-anchor-link:visited:hover { + text-decoration: none; +} +a.admonition-anchor-link::before { + content: "§"; +} + +:is(.admonition-title, summary.admonition-title) { + position: relative; + min-height: 4rem; + margin-block: 0; + margin-inline: -1.6rem -1.2rem; + padding-block: 0.8rem; + padding-inline: 4.4rem 1.2rem; + font-weight: 700; + background-color: rgba(68, 138, 255, 0.1); + print-color-adjust: exact; + -webkit-print-color-adjust: exact; + display: flex; +} +:is(.admonition-title, summary.admonition-title) p { + margin: 0; +} +html :is(.admonition-title, summary.admonition-title):last-child { + margin-bottom: 0; +} +:is(.admonition-title, summary.admonition-title)::before { + position: absolute; + top: 0.625em; + inset-inline-start: 1.6rem; + width: 2rem; + height: 2rem; + background-color: #448aff; + print-color-adjust: exact; + -webkit-print-color-adjust: exact; + mask-image: 
url('data:image/svg+xml;charset=utf-8,'); + -webkit-mask-image: url('data:image/svg+xml;charset=utf-8,'); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-size: contain; + content: ""; +} +:is(.admonition-title, summary.admonition-title):hover a.admonition-anchor-link { + display: initial; +} + +details.admonition > summary.admonition-title::after { + position: absolute; + top: 0.625em; + inset-inline-end: 1.6rem; + height: 2rem; + width: 2rem; + background-color: currentcolor; + mask-image: var(--md-details-icon); + -webkit-mask-image: var(--md-details-icon); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-size: contain; + content: ""; + transform: rotate(0deg); + transition: transform 0.25s; +} +details[open].admonition > summary.admonition-title::after { + transform: rotate(90deg); +} + +:root { + --md-details-icon: url("data:image/svg+xml;charset=utf-8,"); +} + +:root { + --md-admonition-icon--admonish-note: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-abstract: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-info: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-tip: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-success: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-question: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-warning: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-failure: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-danger: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-bug: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-example: url("data:image/svg+xml;charset=utf-8,"); + --md-admonition-icon--admonish-quote: url("data:image/svg+xml;charset=utf-8,"); +} + 
+:is(.admonition):is(.admonish-note) { + border-color: #448aff; +} + +:is(.admonish-note) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(68, 138, 255, 0.1); +} +:is(.admonish-note) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #448aff; + mask-image: var(--md-admonition-icon--admonish-note); + -webkit-mask-image: var(--md-admonition-icon--admonish-note); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-abstract, .admonish-summary, .admonish-tldr) { + border-color: #00b0ff; +} + +:is(.admonish-abstract, .admonish-summary, .admonish-tldr) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(0, 176, 255, 0.1); +} +:is(.admonish-abstract, .admonish-summary, .admonish-tldr) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #00b0ff; + mask-image: var(--md-admonition-icon--admonish-abstract); + -webkit-mask-image: var(--md-admonition-icon--admonish-abstract); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-info, .admonish-todo) { + border-color: #00b8d4; +} + +:is(.admonish-info, .admonish-todo) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(0, 184, 212, 0.1); +} +:is(.admonish-info, .admonish-todo) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #00b8d4; + mask-image: var(--md-admonition-icon--admonish-info); + -webkit-mask-image: var(--md-admonition-icon--admonish-info); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-tip, .admonish-hint, .admonish-important) { + border-color: #00bfa5; +} + +:is(.admonish-tip, .admonish-hint, .admonish-important) > :is(.admonition-title, 
summary.admonition-title) { + background-color: rgba(0, 191, 165, 0.1); +} +:is(.admonish-tip, .admonish-hint, .admonish-important) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #00bfa5; + mask-image: var(--md-admonition-icon--admonish-tip); + -webkit-mask-image: var(--md-admonition-icon--admonish-tip); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-success, .admonish-check, .admonish-done) { + border-color: #00c853; +} + +:is(.admonish-success, .admonish-check, .admonish-done) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(0, 200, 83, 0.1); +} +:is(.admonish-success, .admonish-check, .admonish-done) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #00c853; + mask-image: var(--md-admonition-icon--admonish-success); + -webkit-mask-image: var(--md-admonition-icon--admonish-success); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-question, .admonish-help, .admonish-faq) { + border-color: #64dd17; +} + +:is(.admonish-question, .admonish-help, .admonish-faq) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(100, 221, 23, 0.1); +} +:is(.admonish-question, .admonish-help, .admonish-faq) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #64dd17; + mask-image: var(--md-admonition-icon--admonish-question); + -webkit-mask-image: var(--md-admonition-icon--admonish-question); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-warning, .admonish-caution, .admonish-attention) { + border-color: #ff9100; +} + +:is(.admonish-warning, .admonish-caution, .admonish-attention) > :is(.admonition-title, summary.admonition-title) 
{ + background-color: rgba(255, 145, 0, 0.1); +} +:is(.admonish-warning, .admonish-caution, .admonish-attention) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #ff9100; + mask-image: var(--md-admonition-icon--admonish-warning); + -webkit-mask-image: var(--md-admonition-icon--admonish-warning); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-failure, .admonish-fail, .admonish-missing) { + border-color: #ff5252; +} + +:is(.admonish-failure, .admonish-fail, .admonish-missing) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(255, 82, 82, 0.1); +} +:is(.admonish-failure, .admonish-fail, .admonish-missing) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #ff5252; + mask-image: var(--md-admonition-icon--admonish-failure); + -webkit-mask-image: var(--md-admonition-icon--admonish-failure); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-danger, .admonish-error) { + border-color: #ff1744; +} + +:is(.admonish-danger, .admonish-error) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(255, 23, 68, 0.1); +} +:is(.admonish-danger, .admonish-error) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #ff1744; + mask-image: var(--md-admonition-icon--admonish-danger); + -webkit-mask-image: var(--md-admonition-icon--admonish-danger); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-bug) { + border-color: #f50057; +} + +:is(.admonish-bug) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(245, 0, 87, 0.1); +} +:is(.admonish-bug) > :is(.admonition-title, summary.admonition-title)::before { + 
background-color: #f50057; + mask-image: var(--md-admonition-icon--admonish-bug); + -webkit-mask-image: var(--md-admonition-icon--admonish-bug); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-example) { + border-color: #7c4dff; +} + +:is(.admonish-example) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(124, 77, 255, 0.1); +} +:is(.admonish-example) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #7c4dff; + mask-image: var(--md-admonition-icon--admonish-example); + -webkit-mask-image: var(--md-admonition-icon--admonish-example); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +:is(.admonition):is(.admonish-quote, .admonish-cite) { + border-color: #9e9e9e; +} + +:is(.admonish-quote, .admonish-cite) > :is(.admonition-title, summary.admonition-title) { + background-color: rgba(158, 158, 158, 0.1); +} +:is(.admonish-quote, .admonish-cite) > :is(.admonition-title, summary.admonition-title)::before { + background-color: #9e9e9e; + mask-image: var(--md-admonition-icon--admonish-quote); + -webkit-mask-image: var(--md-admonition-icon--admonish-quote); + mask-repeat: no-repeat; + -webkit-mask-repeat: no-repeat; + mask-size: contain; + -webkit-mask-repeat: no-repeat; +} + +.navy :is(.admonition) { + background-color: var(--sidebar-bg); +} + +.ayu :is(.admonition), +.coal :is(.admonition) { + background-color: var(--theme-hover); +} + +.rust :is(.admonition) { + background-color: var(--sidebar-bg); + color: var(--sidebar-fg); +} +.rust .admonition-anchor-link:link, .rust .admonition-anchor-link:visited { + color: var(--sidebar-fg); +} diff --git a/print.html b/print.html index 930def8f..f5e1e13d 100644 --- a/print.html +++ b/print.html @@ -31,6 +31,7 @@ + @@ -84,7 +85,7 @@ @@ -189,30 +190,32 @@

    IntroductionVisit the GitHub releases page. You should see releases like the following:

    Release example

      -
    • Under "Assets", download one of the tar.gz files starting with +
    • Under "Assets", download one of the tar.gz files starting with mwa_hyperdrive;
    • Untar it (e.g. tar -xvf mwa_hyperdrive*.tar.gz); and
    • Run the binary (./hyperdrive).

If you intend on running hyperdrive on a desktop GPU, then you probably want the "CUDA-single" release. You can still use the double-precision version on a desktop GPU, but it will be much slower than single-precision. Instructions to install CUDA are on the next page.

    It is possible to run hyperdrive with HIP (i.e. the AMD equivalent to NVIDIA's CUDA), but HIP does not appear to offer static libraries, so no static feature is provided, and users will need to compile hyperdrive themselves with instructions on the next page.

    -
    +
    +

    Note

    -

    +
    +

    The pre-compiled binaries are made by GitHub actions using:

    cargo build --release --locked --no-default-features --features=hdf5-static,cfitsio-static
     

This means they cannot plot calibration solutions. "CUDA-double" binaries have the cuda feature and "CUDA-single" binaries have the cuda and gpu-single features. CUDA cannot legally be statically linked, so a local installation of CUDA is required.

    @@ -220,10 +223,12 @@

    Introduction

    Installing hyperdrive from source code

    Dependencies

    hyperdrive depends on these C libraries:

    -
    +
    +

    cfitsio

    -

    +
    +
    -
    +
    +

    hdf5

    -

    +
    +

    Optional dependencies

    -
    +
    +

    freetype2 (for calibration solutions plotting)

    -

    +
    +
      @@ -283,10 +292,12 @@

      O

    -
    + -
    +

    Installing Rust

    -
    +
    +

    TL;DR

    -

    +
    +
    curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
    @@ -359,20 +374,24 @@ 

    +

    It is possible to compile with more optimisations if you give --profile production to the cargo install command. This may make things a few percent faster, but compilation will take much longer.

    -
    +
    +

    CUDA

    -

    +
    +

    Do you have a CUDA-capable NVIDIA GPU? Ensure you have installed @@ -386,8 +405,8 @@

    cargo install --path . --locked --features=cuda,gpu-single

    -

If you're using "datacentre" products (e.g. a V100 available on the Pawsey-hosted supercomputer "garrawarla"), you probably want double-precision floats:

    cargo install --path . --locked --features=cuda
     
    @@ -399,17 +418,19 @@

    +

    Do you have a HIP-capable AMD GPU? Ensure you have installed HIP (instructions are above), and compile with the hip feature (single-precision floats):

    cargo install --path . --locked --features=hip,gpu-single
     
    -

If you're using "datacentre" products (e.g. the GPUs on the "setonix" supercomputer), you probably want double-precision floats:

    cargo install --path . --locked --features=hip
     
    @@ -418,10 +439,12 @@

    +

    The aforementioned C libraries can each be compiled by cargo. all-static @@ -431,10 +454,12 @@

    +

    cargo features can be chained in a comma-separated list:

    @@ -442,10 +467,12 @@

    +

    If you're having problems compiling, it's possible you have an older Rust @@ -460,10 +487,12 @@

    +

    hyperdrive used to depend on the ERFA C @@ -471,10 +500,12 @@

    Post installation instructions

    -
    +
    +

    Setting up the beam

    -

    +
    +

    Many hyperdrive functions require the beam code to function. The MWA FEE beam @@ -512,10 +543,12 @@

    +

    hyperdrive itself is split into many subcommands. These are simple to list:

    @@ -537,7 +570,7 @@

hyperdrive-solutions-plot 0.2.0-alpha.11
Plot calibration solutions. Only available if compiled with the "plotting" feature.

USAGE:
    hyperdrive solutions-plot [OPTIONS] [SOLUTIONS_FILES]...
@@ -558,16 +591,18 @@

    +

    It's possible to save keystrokes when subcommands aren't ambiguous, e.g. use solutions-p as an alias for solutions-plot:

    hyperdrive solutions-p
<help text for "solutions-plot">
     

    This works because there is no other subcommand that solutions-p could refer to. On the other hand, solutions won't be accepted because both @@ -577,8 +612,8 @@

    DI calibration

    -

Direction-Independent (DI) calibration "corrects" raw telescope data. hyperdrive achieves this with "sky model calibration". This can work very well, but relies on two key assumptions:

    • The sky model is an accurate reflection of the input data; and
    • @@ -615,10 +650,12 @@

      Install hyperdrive if you haven't already.

      -
      +
      +

      Step 1: Obtain data

      -

      +
      +

      Feel free to try your own data, but test data is available in the hyperdrive @@ -633,10 +670,12 @@

      +

      It's very important to use a sky model that corresponds to the data you're @@ -646,10 +685,12 @@

      +

      We're going to run the di-calibrate subcommand of hyperdrive. If you look at @@ -659,10 +700,12 @@

      +

      The above command can be more neatly expressed as:

      @@ -675,10 +718,12 @@

      +

      The command we ran in step 3 should give us information on the input data, the @@ -700,16 +745,18 @@

      +
      -

      A "chanblock" is a frequency unit of calibration; +

      A "chanblock" is a frequency unit of calibration; it may correspond to one or many channels of the input data.

      -

Calibration is done iteratively; it iterates until the "stop threshold" is reached, or up to a set number of times. The "stop" and "minimum" thresholds are used during convergence. If the stop threshold is reached before the maximum number of iterations, we say that the chanblock has converged well enough that we can stop iterating. However, if we reach the maximum number of iterations, @@ -725,7 +772,7 @@
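The interplay of the stop threshold, minimum threshold, and iteration cap can be sketched as a small classification function. This is an illustration of the idea as described, not hyperdrive's exact logic, and all numeric values are made up:

```python
def classify_chanblock(final_precision, iterations, max_iterations,
                       stop_threshold, min_threshold):
    """Toy classification of a chanblock after iterative calibration."""
    if final_precision <= stop_threshold:
        return "converged"             # reached the stop threshold early
    if iterations >= max_iterations and final_precision <= min_threshold:
        return "converged (marginal)"  # ran out of iterations, but acceptable
    return "failed"                    # never reached the minimum threshold

print(classify_chanblock(1e-9, 20, 50, stop_threshold=1e-8, min_threshold=1e-4))
print(classify_chanblock(1e-6, 50, 50, stop_threshold=1e-8, min_threshold=1e-4))
print(classify_chanblock(1e-3, 50, 50, stop_threshold=1e-8, min_threshold=1e-4))
```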

    @@ -733,10 +780,12 @@

    +

    Don't assume that things will always work! A good indicator of how calibration @@ -748,10 +797,12 @@

    +

    First, we need to know where the solutions were written; this is also reported @@ -762,7 +813,7 @@

    hyperdrive solutions-plot hyperdrive_solutions.fits

    The command should give output like this:

INFO  Wrote ["hyperdrive_solutions_amps.png", "hyperdrive_solutions_phases.png"]
     

    These plots should look something like this:

    @@ -775,10 +826,12 @@

    +

    The solutions plots for the full 1090008640 observation look like this:

    @@ -790,13 +843,15 @@

    here.

    -
    +
    +

    Imaging calibrated data

    -

    +
    +
    -

We have calibration solutions, but not calibrated data. We need to "apply" the solutions to data to calibrate them:

    hyperdrive solutions-apply \
         -d test_files/1090008640/1090008640_20140721201027_gpubox01_00.fits \
    @@ -816,26 +871,30 @@ 

    +

When using the full 1090008640 observation, this is what the same image looks like (note that unlike the above image, "sqrt" scaling is used):

Many more sources are visible, and the noise is much lower. Depending on your science case, these visibilities might be "science ready".

    Simple usage of DI calibrate

    -
    +
    +

    Info

    -

    +
    +

    DI calibration is done with the di-calibrate subcommand, i.e.

    @@ -859,15 +918,17 @@

    Examples

    -
    +
    +

    Raw MWA data

    -

    +
    +

    A metafits file is always required when reading raw MWA data. mwaf files are optional.

    -

    For "legacy" MWA data:

    +

    For "legacy" MWA data:

    hyperdrive di-calibrate -d *gpubox*.fits *.metafits *.mwaf -s a_good_sky_model.yaml
     

    or for MWAX:

    @@ -875,10 +936,12 @@

    Examples

    -
    +
    +

    Measurement sets

    -

    +
    +

    Note that a metafits may not be required, but is generally a good idea.

    @@ -886,10 +949,12 @@

    Examples

    -
    +
    +

    uvfits

    -

    +
    +

    Note that a metafits may not be required, but is generally a good idea.

    @@ -901,10 +966,12 @@

    Examples

    di-calibrate does not write out calibrated data (visibilities); see solutions-apply. You will need calibration solutions, so refer to the previous pages on DI calibration to get those.

    -
    +
    +

    Tip

    -

    +
    +

    Calibrated visibilities are written out in one of the supported @@ -912,16 +979,18 @@

    Examples

    averaged.

    Varying solutions over time

    -
    +
    +

    Tip

    -

    +
    +

    See this page for information on timeblocks.

    -

By default, di-calibrate uses only one "timeblock", i.e. all data timesteps are averaged together during calibration. This provides good signal-to-noise, but it is possible that calibration is improved by taking time variations into account. This is done with --timesteps-per-timeblock (-t for short).
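The effect of --timesteps-per-timeblock can be pictured as chunking the timestep indices into groups. A hypothetical helper for illustration only, not hyperdrive's implementation:

```python
def timeblocks(timesteps, per_timeblock):
    # Group timestep indices into timeblocks of at most `per_timeblock`
    # steps each; a trailing partial timeblock is allowed.
    return [timesteps[i:i + per_timeblock]
            for i in range(0, len(timesteps), per_timeblock)]

# e.g. 10 timesteps with -t 4: two full timeblocks and one partial one.
blocks = timeblocks(list(range(10)), 4)
print(blocks)
```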

    @@ -936,7 +1005,7 @@

    Examples

    Implementation

When multiple timeblocks are to be made, hyperdrive will do a pass of calibration using all timesteps to provide each timeblock's calibration with a good "initial guess" of what their solutions should be.

    Usage on garrawarla

    garrawarla is a supercomputer dedicated to MWA activities hosted by the Pawsey @@ -945,7 +1014,7 @@

    Implementation< details how to use hyperdrive there.

    How does it work?

hyperdrive's direction-independent calibration is based off of a sky model. That is, data visibilities are compared against "sky model" visibilities, and the differences between the two are used to calculate antenna gains (a.k.a. calibration solutions).
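The idea of solving for per-antenna gains by comparing data against model visibilities can be illustrated with a toy alternating-least-squares solver in numpy. This is only a sketch of the general technique (a StefCal-flavoured iteration on made-up noiseless data), not hyperdrive's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
n_ant = 8

# Toy "true" antenna gains and model visibilities. The data are built from
# the model exactly, so the gains are recoverable (up to a global phase).
g_true = rng.normal(1.0, 0.2, n_ant) + 1j * rng.normal(0.0, 0.2, n_ant)
M = rng.normal(size=(n_ant, n_ant)) + 1j * rng.normal(size=(n_ant, n_ant))
M = (M + M.conj().T) / 2                 # Hermitian, like real visibilities
V = np.outer(g_true, g_true.conj()) * M  # "data": V_pq = g_p conj(g_q) M_pq

# Alternating least-squares gain updates: fix all other gains, solve for g_p.
g = np.ones(n_ant, dtype=complex)
for _ in range(200):
    z = g[None, :].conj() * M            # z_pq = conj(g_q) * M_pq
    g_new = (V * z.conj()).sum(axis=1) / (np.abs(z) ** 2).sum(axis=1)
    g = 0.5 * (g + g_new)                # damping stabilises the iteration

# The recovered gains reproduce the data regardless of the global-phase
# degeneracy, so check the relative residual rather than g itself.
residual = np.linalg.norm(V - np.outer(g, g.conj()) * M) / np.linalg.norm(V)
print(residual)
```

Real calibration also handles flags, weights, noise, and excludes autocorrelations; none of that is modelled here.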

    Here is the algorithm used to determine antenna gains in hyperdrive:

    @@ -954,8 +1023,8 @@

    Implementation<

    Simple usage of solutions apply

    -
    +
    +

    Info

    -

    +
    +

    Use the solutions-apply subcommand, i.e.

    @@ -1020,30 +1091,36 @@

    StefCal? Mi

    Examples

    -
    +
    +

    From raw MWA data

    -

    +
    +
    hyperdrive solutions-apply -d *gpubox*.fits *.metafits *.mwaf -s hyp_sols.fits -o hyp_cal.ms
     
    -
    +
    +

    From an uncalibrated measurement set

    -

    +
    +
    hyperdrive solutions-apply -d *.ms -s hyp_sols.fits -o hyp_cal.ms
     
    -
    +
    +

    From an uncalibrated uvfits

    -

    +
    +
    hyperdrive solutions-apply -d *.uvfits -s hyp_sols.fits -o hyp_cal.ms
    @@ -1052,10 +1129,12 @@ 

    Examples

    Generally the syntax is the same as di-calibrate.

    Plot solutions

    -
    +
    +

    Availability

    -

    +
    +

    Plotting calibration solutions is not available for GitHub-built releases of @@ -1085,7 +1164,7 @@

    Examples

    for each timeblock. Timeblock information is given at the top left, if available.

    Example plots

    -

    Amplitudes ("amps")

    +

    Amplitudes ("amps")

    Phases

    @@ -1093,10 +1172,12 @@

    Phases

    vis-convert reads in visibilities and writes them out, performing whatever transformations were requested on the way (e.g. ignore autos, average to a particular time resolution, flag some tiles, etc.).

    -
    +
    +

    Simple examples

    -

    +
    +
    hyperdrive vis-convert \
    @@ -1113,10 +1194,12 @@ 

    Phases

    Simulate visibilities

    vis-simulate effectively turns a sky-model source list into visibilities.

    -
    +
    +

    Simple example

    -

    +
    +
    hyperdrive vis-simulate \
    @@ -1144,7 +1227,7 @@ 

    Vetoing

vis-subtract can subtract the sky-model visibilities from calibrated data visibilities and write them out. This can be useful to see how well the sky model agrees with the input data, although direction-dependent effects (e.g. the ionosphere) may be present and produce "holes" in the visibilities, e.g.:

    A high-level overview of the steps in vis-subtract are below. Solid lines indicate actions that always happen, dashed lines are optional:

    @@ -1171,20 +1254,24 @@

    Vetoing

    in steps of --step, then for each of these zenith angles, moving from 0 to \( 2 \pi \) in steps of --step for the azimuth. Using a smaller --step will generate many more responses, so be aware that it might take a while.
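The sweep described above is a nested loop over zenith angle and azimuth. A small numpy illustration, where the --step value and the zenith-angle upper limit (taken here to be the horizon, pi/2) are assumptions:

```python
import numpy as np

step = np.radians(10.0)                # assumed value for the --step argument
zas = np.arange(0.0, np.pi / 2, step)  # zenith angles, 0 to the horizon (assumed)
azs = np.arange(0.0, 2 * np.pi, step)  # azimuths, 0 to 2*pi as described

# One beam response is evaluated per (azimuth, zenith angle) pair.
directions = [(az, za) for za in zas for az in azs]
print(len(directions))
```

Halving the step roughly quadruples the number of directions, which is why a smaller --step takes noticeably longer.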

CUDA/HIP

    If CUDA or HIP is available to you, the --gpu flag will generate the beam responses on the GPU, vastly decreasing the time taken.

Python example to plot beam responses
#!/usr/bin/env python3

import numpy as np
import matplotlib.pyplot as plt

data = np.genfromtxt(fname="beam_responses.tsv", delimiter="\t", skip_header=0)

fig, ax = plt.subplots(1, 2, subplot_kw=dict(projection="polar"))
p = ax[0].scatter(data[:, 0], data[:, 1], c=data[:, 2])
plt.colorbar(p)
p = ax[1].scatter(data[:, 0], data[:, 1], c=np.log10(data[:, 2]))


    where \( \text{I} \), \( \text{Q} \), \( \text{U} \), \( \text{V} \) are Stokes polarisations and \( i \) is the imaginary unit.
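For linear (X/Y) feeds, the usual mapping between instrumental polarisations and these Stokes parameters — a standard radio-astronomy convention, stated here for orientation rather than quoted from this document — is:

```latex
\begin{aligned}
\text{XX} &= \text{I} + \text{Q} \\
\text{XY} &= \text{U} + i\text{V} \\
\text{YX} &= \text{U} - i\text{V} \\
\text{YY} &= \text{I} - \text{Q}
\end{aligned}
```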

    Supported visibility formats for reading

Raw MWA data

Raw "legacy" MWA data comes in "gpubox" files. "MWAX" data comes in a similar format, and *ch???*.fits is a useful glob to identify them. Raw data can be accessed from the ASVO.

    Here are examples of using each of these MWA formats with di-calibrate:


Measurement sets
    hyperdrive di-calibrate -d *.ms *.metafits -s a_good_sky_model.yaml

uvfits
    hyperdrive di-calibrate -d *.uvfits *.metafits -s a_good_sky_model.yaml

When using a metafits

When using a metafits file with a uvfits/MS, the tile names in the metafits and

Visibilities can be written out in these file formats with solutions-apply, but other aspects of hyperdrive are also able to produce these file formats, and all aspects are able to perform averaging and write to multiple outputs.

hyperdrive solutions-apply \

hyperdrive solutions-apply \

Visibility averaging

When writing out visibilities, they can be averaged in time and frequency.

All aspects of hyperdrive that can write visibilities can write to multiple outputs.


    Metafits files

The MWA tracks observation metadata with "metafits" files. Often these accompany the raw visibilities in a download, but these could be old (such as the "PPD metafits" files). hyperdrive does not support PPD metafits files; only new metafits files should be used.

    This command downloads a new metafits file for the specified observation ID:

Download MWA metafits file

OBSID=1090008640; wget "http://ws.mwatelescope.org/metadata/fits?obs_id=${OBSID}" -O "${OBSID}".metafits
     
Why should I use a metafits file?
    +

Measurement sets and uvfits files do not contain MWA-specific information, and a uvfits file may also lack dipole delay information.

Why are new metafits files better?

The database of MWA metadata can change over time for observations conducted in the past.

    Controlling dipole gains

If the "TILEDATA" HDU of a metafits contains a "DipAmps" column, each row containing 16 double-precision values for bowties in the M&C order, these are used as the dipole gains in beam calculations. If the "DipAmps" column isn't available, the default behaviour is to use gains of 1.0 for all dipoles, except those that have delays of 32 in the "Delays" column (they will have a gain of 0.0, and are considered dead).

    Dipole delays

A tile's dipole delays control where it is "pointing". Delays are provided as numbers, and these control how long a dipole's response is delayed before it is correlated with the other dipoles' responses. This effectively allows the MWA to be more sensitive in a particular direction without any physical movement.

Dipoles cannot be delayed by more than "31". "32" is code for "dead dipole", which means that these dipoles should not be used when modelling a tile's response.
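A minimal sketch of the default gain rule described above (delay 32 means a dead dipole with gain 0.0, anything else gets gain 1.0). The delay values here are made up for illustration:

```python
# Per-bowtie delays for one tile, in M&C order (hypothetical values).
# A delay of 32 marks a dead dipole.
delays = [6, 6, 6, 6, 4, 4, 4, 4, 2, 2, 32, 2, 0, 0, 0, 0]

# Default behaviour when no "DipAmps" column is present: gain 1.0
# everywhere except dead dipoles, which get 0.0.
gains = [0.0 if d == 32 else 1.0 for d in delays]

print(gains.count(0.0))  # 1: one dead dipole in this example
```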

    Ideal dipole delays

Most (all?) MWA observations use a single set of delays for all tiles. Dipole delays are listed in two places in a metafits file:

1. In the DELAYS key in HDU 1; and
2. For each tile in HDU 2.
The delays in HDU 1 are referred to as "ideal" dipole delays. A set of delays are not ideal if any are "32" (i.e. dead).

However, the HDU 1 delays may all be "32". This is an indication from the observatory that this observation is "bad" and should not be used. hyperdrive will proceed with such observations but issue a warning. In this case, the ideal delays are obtained by iterating over all tile delays until each delay is not 32.

    Dead dipoles

Each MWA tile has 16 "bowties", and each bowtie is made up of two dipoles (one X, one Y). We refer to a "dead" dipole as one that is not functioning correctly (hopefully not receiving any power at all). This information is used in generating beam responses as part of modelling visibilities. The more accurate the visibilities, the better that calibration performs, so it is important to account for dead dipoles if possible.

Beam responses are generated with hyperbeam, and dead dipole information is encoded as a "dipole gain" of 1 ("alive") or 0 ("dead"). It is possible to supply other values for dipole gains with a "DipAmps" column; see the metafits page.

    For the relevant functions, dead dipole information can be ignored by supplying a flag --unity-dipole-gains. This sets all dipole gains to 1.


    See this page for more info on dipole ordering.

In the image below, you can see the 12th Y dipole is dead for "Tile022". All other dipoles are "alive".

    mwaf flag files

mwaf files indicate what visibilities should be flagged.


    At the time of writing, hyperdrive only utilises mwaf files when reading visibilities from raw data.


    cotter-produced mwaf files are unreliable because


    Raw data corrections

A number of things can be done to "correct" or "pre-process" raw MWA data before it is ready for calibration (or other analysis). These tasks are handled by Birli, either as the Birli executable itself, or internally in hyperdrive. cotter used to perform these tasks but it has been superseded by Birli.

Geometric correction (a.k.a. phase tracking)

Many MWA observations do not apply a geometric correction despite having a phase centre.


PFB gains

The poly-phase filter bank used by the MWA affects visibilities before they get saved to disk. Over time, a number of "flavours" of these gains have been used:

• "Jake Jones" (jake; 200 Hz)
• "cotter 2014" (cotter2014; 10 kHz)
• "RTS empirical" (empirical; 40 kHz)
• "Alan Levine" (levine; 40 kHz)

    When correcting raw data, the "Jake Jones" gains are used by default. For each +

    When correcting raw data, the "Jake Jones" gains are used by default. For each flavour, the first item in the parentheses (e.g. cotter2014) indicates what should be supplied to hyperdrive if you want to use those gains instead. There -is also a none "flavour" if you want to disable PFB gain correction.

    +is also a none "flavour" if you want to disable PFB gain correction.

    In CHJ's experience, using different flavours have very little effect on calibration quality.

    Some more information on the PFB can be found here.

Cable lengths

    Each tile is connected by a cable, and that cable might have a different length to others. This correction aims to better align the signals of each tile.

Digital gains
    todo!()

    Picket fence observations

A "picket fence" observation contains more than one "spectral window" (or "SPW"). That is, not all the frequency channels in an observation are continuous; there's at least one gap somewhere.

hyperdrive does not currently support picket fence observations, but it will eventually support them properly. However, it is possible to calibrate a picket fence observation one spectral window at a time.


    mwalib is the official MWA raw-data-reading library. hyperdrive users usually don't need to concern themselves with it, but mwalib errors may arise.

mwalib can be quite noisy with log messages (particularly at the "trace" level); it is possible to suppress these messages by setting an environment variable:

    RUST_LOG=mwalib=error

    presents its own preferred format (which has no limitations within this software). Each supported format is detailed on the following documentation pages.

hyperdrive can also convert between formats, although in a "lossy" way; non-hyperdrive formats cannot represent all component and/or flux-density types.

Supported formats

Conversion

hyperdrive can convert (as best it can) between different source list formats. The output source list type may need to be specified.

Verification

hyperdrive can be given many source lists in order to test that they are readable, reporting the file type detected as well as how many sources and components are within the file.

Component types

    Each component in a sky model is represented in one of three ways:


Point sources are the simplest. Gaussian sources could be considered the same as point sources, but have details on their structure (major- and minor-axes, position angle). Finally, shapelets are described the same way as Gaussians but additionally have multiple "shapelet components". Examples of each of these components can be found on the following documentation pages and in the examples directory.


    This page describes supported flux-density types within hyperdrive. The following pages detail their usage within sky-model source lists. This page details how each type is estimated in modelling.

Power laws and Curved power laws

Most astrophysical sources are modelled as power laws. These are simply described by a reference flux density at a particular frequency and a spectral index \( \alpha \).

Curved power laws are formalised in Section 4.1 of Callingham et al. 2017. These are the same as power laws but with an additional "spectral curvature" parameter \( q \).

    Both kinds of power law flux-density representations are preferred in hyperdrive.
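The two forms can be sketched as follows. This is a sketch of the standard formulae (power law \( S_0 (\nu/\nu_0)^\alpha \), and the curved form of Callingham et al. 2017 with an extra \( e^{q \ln^2(\nu/\nu_0)} \) factor), not a quote of hyperdrive's internals; the numbers are illustrative:

```python
import math

def power_law(freq_hz, ref_freq_hz, ref_i_jy, alpha):
    """Stokes I flux density at freq_hz for a power-law component."""
    return ref_i_jy * (freq_hz / ref_freq_hz) ** alpha

def curved_power_law(freq_hz, ref_freq_hz, ref_i_jy, alpha, q):
    """As power_law, with the "spectral curvature" term q of
    Callingham et al. 2017 (Section 4.1)."""
    ratio = freq_hz / ref_freq_hz
    return ref_i_jy * ratio ** alpha * math.exp(q * math.log(ratio) ** 2)

# At the reference frequency, both forms return the reference flux density.
print(power_law(150e6, 150e6, 5.0, -0.8))              # 5.0
print(curved_power_law(150e6, 150e6, 5.0, -0.8, 0.2))  # 5.0
```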

Flux density lists

The list type is simply many instances of a Stokes \( \text{I} \), \( \text{Q} \), \( \text{U} \) and \( \text{V} \) flux density at a particular frequency. Examples can be found in the examples directory.

    As most sky-models only include Stokes I, Stokes Q, U and V are not required to be specified. If they are not specified, they are assumed to have values of 0.

YAML example

The following are the contents of a valid YAML file. super_sweet_source1 is a point source, and super_sweet_source2 has two components: one Gaussian with a power law, and a shapelet with a curved power law.


    JSON example


The following are the contents of a valid JSON file. super_sweet_source1 is a point source with a "list" flux-density type, and super_sweet_source2 has two components: one Gaussian with a power law, and a shapelet with a curved power law.

{
  "super_sweet_source1": [
    {
      "ra": 10.0,
      "dec": -27.0,
      "comp_type": "point",
      "flux_type": {
        "list": [
          {
            "freq": 150000000.0,
            "i": 10.0
          },
          {
            "freq": 170000000.0,
            "i": 5.0,
            "q": 1.0,
            "u": 2.0,
            "v": 3.0
          }
        ]
      }
    }
  ],
  "super_sweet_source2": [
    {
      "ra": 0.0,
      "dec": -35.0,
      "comp_type": {
        "gaussian": {
          "maj": 20.0,
          "min": 10.0,
          "pa": 75.0
        }
      },
      "flux_type": {
        "power_law": {
          "si": -0.8,
          "fd": {
            "freq": 170000000.0,
            "i": 5.0,
            "q": 1.0,
            "u": 2.0,
            "v": 3.0
          }
        }
      }
    },
    {
      "ra": 155.0,
      "dec": -10.0,
      "comp_type": {
        "shapelet": {
          "maj": 20.0,
          "min": 10.0,
          "pa": 75.0,
          "coeffs": [
            {
              "n1": 0,
              "n2": 1,
              "value": 0.5
            }
          ]
        }
      },
      "flux_type": {
        "curved_power_law": {
          "si": -0.6,
          "fd": {
            "freq": 150000000.0,
            "i": 50.0,
            "q": 0.5,
            "u": 0.1
          },
          "q": 0.2
        }
      }
    }
  ]
}

    where RA increases from right to left (i.e. bigger RA values are on the left), position angles rotate counter clockwise. A position angle of 0 has the major axis aligned with the declination axis.

Flux densities must be specified in the power law or "list" style (i.e. curved power laws are not supported).

    Source names are allowed to have spaces inside them, because the names are surrounded by quotes. This is fine for reading, but when converting one of these sources to another format, the spaces need to be translated to underscores.

Example
    skymodel fileformat 1.1
     source {
  name "J002549-260211"
       component {
         type point
         position 0h25m49.2s -26d02m13s
  }
}
source {
  name "COM000338-1517"
  component {
    type gaussian
    position 0h03m38.7844s -15d17m09.7338s

    are in degrees. In an image space where RA increases from right to left (i.e. bigger RA values are on the left), position angles rotate counter clockwise. A position angle of 0 has the major axis aligned with the declination axis.

All flux densities are specified in the "list" style (i.e. power laws and curved power laws are not supported).

    Keywords like SOURCE, COMPONENT, POINT etc. must be at the start of a line (i.e. no preceding space).

RTS sources always have a "base source", which can be thought of as a non-optional component or the first component in a collection of components.

Example

Taken from srclists, file


    Calibration solutions file formats

Calibration solutions are Jones matrices that, when applied to raw data, "calibrate" the visibilities.

    hyperdrive can convert between supported formats (see solutions-convert). Soon it will also be able to apply them (but users can write out calibrated visibilities as part of di-calibrate).


• RTS format
• The hyperdrive calibration solutions format

Jones matrices are stored in a fits file as an "image" with 4 dimensions (timeblock, tile, chanblock, float, in that order) in the "SOLUTIONS" HDU (which is the second HDU). An element of the solutions is a 64-bit float (a.k.a. double-precision float). The last dimension always has a length of 8; these correspond to the real and imaginary parts of the complex gains of the X dipoles (\( g_x \)), the leakages of the X dipoles (\( D_x \)), the leakages of the Y dipoles (\( D_y \)), and the gains of the Y dipoles (\( g_y \)); these form a complex 2x2 Jones matrix:

    \[ \begin{pmatrix} g_x & D_x \\ D_y & g_y \end{pmatrix} \]

Tiles are ordered by antenna number, i.e. the second column in the observation's corresponding metafits files labelled "Antenna". Times and frequencies are sorted ascendingly.

Note that in the context of the MWA, "antenna" and "tile" are used interchangeably.

    Metadata


    Raw MWA data corrections

PFB describes the PFB gains flavour applied to the raw MWA data. At the time of writing, this flavour is described as "jake", "cotter2014", "empirical", "levine", or "none".

D_GAINS is "Y" if the digital gains were applied to the raw MWA data, "N" if they were not.

CABLELEN is "Y" if the cable length corrections were applied to the raw MWA data, "N" if they were not.

GEOMETRY is "Y" if the geometric delay correction was applied to the raw MWA data, "N" if it was not.


MODELLER describes what was used to generate model visibilities in calibration. This is either CPU or details on the CUDA device used.

    Extra HDUs

More metadata are contained in HDUs other than the first one (that which contains the metadata keys described above). Other than the first HDU and the "SOLUTIONS" HDU (HDUs 1 and 2, respectively), all HDUs and their contents are optional.

    TIMEBLOCKS

    See blocks for an explanation of what timeblocks are.

The "TIMEBLOCKS" HDU is a FITS table with three columns:

    1. Start
    2. End
    3. Average

Each row represents a calibration timeblock, and there must be the same number of rows as there are timeblocks in the calibration solutions (in the "SOLUTIONS" HDU). Each of these times is a centroid GPS timestamp.

    It is possible to have one or multiple columns without data; cfitsio will write zeros for values, but hyperdrive will ignore columns with all zeros.


    timesteps in that timeblock are used, then the average time could be 12.666 or 13.333.

    TILES

The "TILES" HDU is a FITS table with up to five columns:

    1. Antenna
    2. Flag
3. TileName
4. DipoleGains
5. DipoleDelays

Antenna is the 0-N antenna index (where N is the total number of antennas in the observation). These indices match the "Antenna" column of an MWA metafits file.

    Flag is a boolean indicating whether an antenna was flagged for calibration (1) or not (0).


    There are 16 values per tile.

    CHANBLOCKS

    See blocks for an explanation of what chanblocks are.

The "CHANBLOCKS" HDU is a FITS table with up to three columns:

    1. Index
    2. Flag
3. Freq


If any of the frequencies is NaN, then hyperdrive will not use the Freq column.

      RESULTS (Calibration results)

The "RESULTS" HDU is a FITS image with two dimensions -- timeblock and chanblock, in that order -- that describe the precision to which a chanblock converged for that timeblock (as double-precision floats). If a chanblock was flagged, NaN is provided for its precision. NaN is also listed for chanblocks that completely failed to calibrate.

These calibration precisions must have the same number of timeblocks and chanblocks described by the calibration solutions (in the "SOLUTIONS" HDU).

      BASELINES

The "BASELINES" HDU is a FITS image with one dimension. The values of the "image" (let's call it an array) are the double-precision float baseline weights used in calibration (controlled by UVW minimum and maximum cutoffs). The length of the array is the total number of baselines (i.e. flagged and unflagged). Flagged baselines have weights of NaN; e.g. baseline 0 is between antennas 0 and 1.

      These baseline weights must have a non-NaN value for all tiles in the observation (e.g. if there are 128 tiles in the calibration solutions, then there must be 8128 baseline weights).
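The 8128 figure follows from the usual n(n-1)/2 cross-correlation pairing; a quick sketch:

```python
def num_baselines(num_tiles: int) -> int:
    """Number of cross-correlation baselines between num_tiles antennas."""
    return num_tiles * (num_tiles - 1) // 2

print(num_baselines(128))  # 8128, matching the figure above
```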

Python code for reading

A full example of reading and plotting solutions is available.


from astropy.io import fits

f = fits.open("hyperdrive_solutions.fits")
sols = f["SOLUTIONS"].data
num_timeblocks, num_tiles, num_chanblocks, _ = sols.shape

obsid = f[0].header["OBSID"]
pfb_flavour = f[0].header["PFB"]
start_times = f[0].header["S_TIMES"]

tile_names = [tile["TileName"] for tile in f["TILES"].data]
tile_flags = [tile["Flag"] for tile in f["TILES"].data]

freqs = [chan["FREQ"] for chan in f["CHANBLOCKS"].data]

cal_precisions_for_timeblock_0 = f["RESULTS"].data[0]

    The André Offringa (ao) calibration solutions format

This format is output by calibrate and is documented in mwa-reduce as follows. Note that the startTime and endTime should be populated with "AIPS time", although calibrate appears to always write 0 for these. hyperdrive instead opts to write the centroid GPS times here (the end time is the last timestep inclusive).

Tiles are ordered by antenna number, i.e. the second column in the observation's corresponding metafits files labelled "antenna". Times and frequencies are sorted ascendingly.

    mwa-reduce documentation

| Bytes  |  Description |
|-------:|:-------------|
|  0- 7  |  string intro ; 8-byte null terminated string "MWAOCAL" |
|  8-11  |  int fileType ; always 0, reserved for indicating something other than complex Jones solutions |
| 12-15  |  int structureType ; always 0, reserved for indicating different ordering |
| 16-19  |  int intervalCount ; Number of solution intervals in file |

    The RTS calibration solutions format

    This format is extremely complicated and therefore its usage is discouraged. However, it is possible to convert RTS solutions to one of the other supported formats; a metafits file is required, and the directory containing the solutions (i.e. DI_JonesMatrices and BandpassCalibration files) is supplied:

Converting RTS solutions to another format
    hyperdrive solutions-convert /path/to/rts/solutions/ rts-as-hyp-solutions.fits -m /path/to/obs.metafits

I (CHJ) spent a very long time trying to make the writing of RTS solutions work, but gave up.


    The following pages go into further detail of how visibilities are modelled in hyperdrive.

    Measurement equation

Note

A lot of this content was taken from Jack Line's documentation.


    belongs to a source with multiple components, and that the overall flux density of that source at any frequency is positive. A source with a negative flux density is not physical.

Power laws and Curved power laws

Both power-law and curved-power-law sources have a spectral index (\( \alpha \)).


    No estimation is required when \( \nu \) is equal to any of the list frequencies \( \nu_i \).
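One common approach to the in-between case — a sketch only, not necessarily exactly what hyperdrive does — is to fit a power law between the two list entries that bracket the requested frequency:

```python
import math

def estimate_from_list(freq, entries):
    """Estimate a Stokes I flux density at `freq` from a sorted list of
    (frequency, flux_density) entries by fitting a power law between the
    two entries that bracket `freq`. Exact matches are returned directly."""
    for f, s in entries:
        if f == freq:
            return s
    (f1, s1), (f2, s2) = next(
        (a, b) for a, b in zip(entries, entries[1:]) if a[0] < freq < b[0]
    )
    alpha = math.log(s2 / s1) / math.log(f2 / f1)
    return s1 * (freq / f1) ** alpha

# Illustrative list entries (frequencies in Hz, flux densities in Jy).
entries = [(150e6, 10.0), (170e6, 5.0)]
print(estimate_from_list(150e6, entries))  # 10.0 (exact list frequency)
```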

Concerns on list types

When estimating flux densities from a list, it is feared that the "jagged" shape of a component's spectral energy distribution introduces artefacts into an EoR power spectrum.

It is relatively expensive to estimate flux densities from a list type.

Antenna positions are provided in the ITRF frame (internally we refer to this as "geocentric XYZ"). There's also a "geodetic XYZ" frame; an example of this is WGS 84 (which we assume everywhere when converting, as it's the current best ellipsoid). Finally, there's also an "East North Height" coordinate system.

    To calculate UVW baseline coordinates, geodetic XYZ coordinates are required1. Therefore, various coordinate conversions are required to obtain UVWs. The conversion between all of these systems is briefly described below. The relevant code lives within Marlu.

ITRF and "geocentric XYZ"

As the name implies, this coordinate system uses the centre of the Earth as a reference. To convert between geocentric and geodetic, an array position is required (i.e. the "average" location on the Earth of the instrument collecting visibilities). When all antenna positions are geocentric, the array position is given by the mean antenna position.

    -

Measurement sets indicate the usage of ITRF with the "MEASURE_REFERENCE" keyword attached to the POSITION column of an ANTENNA table (value "ITRF").

    +

    Measurement sets indicate the usage of ITRF with the "MEASURE_REFERENCE" keyword +attached to the POSITION column of an ANTENNA table (value "ITRF").

The uvfits standard states that the only supported frame is "ITRF", and hyperdrive assumes that only ITRF is used. However, CASA/casacore seem to write out antenna positions incorrectly; the positions look like what you would find in an equivalent measurement set. This incorrect behaviour is detected and accounted for.

    -

    "Geodetic XYZ"

    +

    "Geodetic XYZ"

    This coordinate system is similar to geocentric, but uses an array position as its reference.

    -

    Measurement sets support the WGS 84 frame, again with the "MEASURE_REFERENCE" -keyword attached to the POSITION column of an ANTENNA table (value "WGS84"). +

Measurement sets support the WGS 84 frame, again with the "MEASURE_REFERENCE" keyword attached to the POSITION column of an ANTENNA table (value "WGS84"). However, hyperdrive currently does not check if geodetic positions are used; it instead just assumes geocentric.

    -

    When read literally, the antenna positions in a uvfits file ("STABXYZ" column of -the "AIPS AN" HDU) should be geodetic, not counting the aforementioned +

When read literally, the antenna positions in a uvfits file ("STABXYZ" column of the "AIPS AN" HDU) should be geodetic, not counting the aforementioned casacore bug.

    East North Height (ENH)

MWA tile positions are listed in metafits files with East, North and Height coordinates.
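Assuming the standard East/North/Height to local-XYZ relations (a sketch of the conversion, not Marlu's API; the latitude constant is approximate), the conversion at latitude \( \phi \) looks like this:

```python
import math

MWA_LATITUDE_RAD = math.radians(-26.7033)  # approximate MWA array latitude

def enh_to_xyz(east, north, height, lat_rad=MWA_LATITUDE_RAD):
    """Convert East/North/Height to local XYZ, where Z points at the north
    celestial pole and X at the intersection of the local meridian with the
    celestial equator:
        x = h*cos(phi) - n*sin(phi)
        y = e
        z = h*sin(phi) + n*cos(phi)
    """
    s, c = math.sin(lat_rad), math.cos(lat_rad)
    return (height * c - north * s, east, height * s + north * c)

# A tile 1 m east of the reference moves purely along Y.
print(enh_to_xyz(1.0, 0.0, 0.0))  # (0.0, 1.0, 0.0)
```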

    UVWs

Note that this is a UVW coordinate for an antenna. To get the proper baseline UVW, a difference between two antennas' UVWs needs to be taken. The order of this subtraction is important; hyperdrive uses the "antenna1 - antenna2" convention. Software that reads data may need to conjugate visibilities if this convention is different.
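The convention can be sketched as follows (illustrative code, not hyperdrive's implementation):

```python
def baseline_uvws(antenna_uvws):
    """Form baseline UVWs from per-antenna (u, v, w) tuples using the
    "antenna1 - antenna2" convention, keyed by (ant1, ant2)."""
    out = {}
    n = len(antenna_uvws)
    for a1 in range(n):
        for a2 in range(a1 + 1, n):
            u1, v1, w1 = antenna_uvws[a1]
            u2, v2, w2 = antenna_uvws[a2]
            out[(a1, a2)] = (u1 - u2, v1 - v2, w1 - w2)
    return out

bls = baseline_uvws([(0.0, 0.0, 0.0), (10.0, -2.0, 1.0)])
print(bls[(0, 1)])  # (-10.0, 2.0, -1.0)
```

Swapping the subtraction order negates every baseline UVW, which is why visibilities may need conjugating when conventions differ.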

    Further reading


More explanation

    A lot of good, easy-to-read information is here.

UTC keeps pace with TAI, but only through the aid of leap seconds (both are "atomic time frames"). UT1 is the "actual time", but the Earth's rate of rotation is difficult to measure and predict. DUT1 (the difference UT1 - UTC) is not allowed to stray outside -0.9 to 0.9 seconds; a leap second is introduced before that threshold is reached.

    diff --git a/searcher.js b/searcher.js index d2b0aeed..dc03e0a0 100644 --- a/searcher.js +++ b/searcher.js @@ -316,7 +316,7 @@ window.search = window.search || {}; // Eventhandler for keyevents on `document` function globalKeyHandler(e) { - if (e.altKey || e.ctrlKey || e.metaKey || e.shiftKey || e.target.type === 'textarea' || e.target.type === 'text') { return; } + if (e.altKey || e.ctrlKey || e.metaKey || e.shiftKey || e.target.type === 'textarea' || e.target.type === 'text' || !hasFocus() && /^(?:input|select|textarea)$/i.test(e.target.nodeName)) { return; } if (e.keyCode === ESCAPE_KEYCODE) { e.preventDefault(); diff --git a/user/beam.html b/user/beam.html index ebfc82fd..b4a3c901 100644 --- a/user/beam.html +++ b/user/beam.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -155,20 +156,24 @@

Beam responses are generated for zenith angles in steps of --step; then, for each of these zenith angles, the azimuth moves from 0 to \( 2 \pi \) in steps of --step. Using a smaller --step will generate many more responses, so be aware that it might take a while.
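The sampling described above can be sketched in plain Python (an illustration, not hyperdrive's implementation; it assumes the zenith angle spans 0 to \( \pi/2 \)):

```python
import math

def beam_grid(step_rad):
    """Generate (azimuth, zenith angle) pairs: for each zenith angle from 0
    to pi/2 in steps of step_rad, sweep azimuth from 0 to 2*pi."""
    pairs = []
    za = 0.0
    while za <= math.pi / 2:
        az = 0.0
        while az < 2 * math.pi:
            pairs.append((az, za))
            az += step_rad
        za += step_rad
    return pairs

# Halving --step roughly quadruples the number of responses.
print(len(beam_grid(math.radians(10))), len(beam_grid(math.radians(5))))
```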

    -
    +

    If CUDA or HIP is available to you, the --gpu flag will generate the beam responses on the GPU, vastly decreasing the time taken.

    -
    +
    +

    Python example to plot beam responses

    -

    +
    +
#!/usr/bin/env python3

import numpy as np
import matplotlib.pyplot as plt

data = np.genfromtxt(fname="beam_responses.tsv", delimiter="\t", skip_header=0)

fig, ax = plt.subplots(1, 2, subplot_kw=dict(projection="polar"))
p = ax[0].scatter(data[:, 0], data[:, 1], c=data[:, 2])
plt.colorbar(p)
p = ax[1].scatter(data[:, 0], data[:, 1], c=np.log10(data[:, 2]))

    mwa_hyperdrive documentation

    Varying solutions over time

    -
    +
    +

    Tip

    -

    +
    +

    See this page for information on timeblocks.

    -

    By default, di-calibrate uses only one "timeblock", i.e. all data timesteps +

    By default, di-calibrate uses only one "timeblock", i.e. all data timesteps are averaged together during calibration. This provides good signal-to-noise, but it is possible that calibration is improved by taking time variations into account. This is done with --timesteps-per-timeblock (-t for short).
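The grouping implied by --timesteps-per-timeblock can be sketched as follows (illustrative only; within each timeblock the data timesteps are then averaged together for calibration, as described above):

```python
def timeblocks(timesteps, per_timeblock):
    """Group timesteps into timeblocks of size per_timeblock, as
    -t/--timesteps-per-timeblock would; the last block may be smaller."""
    return [timesteps[i:i + per_timeblock]
            for i in range(0, len(timesteps), per_timeblock)]

# 10 timesteps with -t 4 yields blocks of 4, 4 and 2 timesteps.
print(timeblocks(list(range(10)), 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```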

    @@ -172,7 +175,7 @@

    Implementation

When multiple timeblocks are to be made, hyperdrive will do a pass of calibration using all timesteps to provide each timeblock's calibration with a good "initial guess" of what their solutions should be.

    +good "initial guess" of what their solutions should be.

    diff --git a/user/di_cal/garrawarla.html b/user/di_cal/garrawarla.html index 46a06b39..b11ce96b 100644 --- a/user/di_cal/garrawarla.html +++ b/user/di_cal/garrawarla.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ diff --git a/user/di_cal/how_does_it_work.html b/user/di_cal/how_does_it_work.html index 1966cfc7..4cd38ef0 100644 --- a/user/di_cal/how_does_it_work.html +++ b/user/di_cal/how_does_it_work.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -149,7 +150,7 @@

    mwa_hyperdrive documentation

    How does it work?

hyperdrive's direction-independent calibration is based on a sky model. That is, data visibilities are compared against "sky model" visibilities, and the differences between the two are used to calculate antenna gains (a.k.a. calibration solutions).

    Here is the algorithm used to determine antenna gains in hyperdrive:

    @@ -158,8 +159,8 @@

    How does it

    diff --git a/user/di_cal/intro.html b/user/di_cal/intro.html index 25e0e079..49507964 100644 --- a/user/di_cal/intro.html +++ b/user/di_cal/intro.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -148,8 +149,8 @@

    mwa_hyperdrive documentation

    DI calibration

    -

    Direction-Independent (DI) calibration "corrects" raw telescope data. -hyperdrive achieves this with "sky model calibration". This can work very +

Direction-Independent (DI) calibration "corrects" raw telescope data. hyperdrive achieves this with "sky model calibration". This can work very well, but relies on two key assumptions:

    -
    +
    +

    Measurement sets

    -

    +
    +

    Note that a metafits may not be required, but is generally a good idea.

    @@ -202,10 +209,12 @@

    Examples

    -
    +
    +

    uvfits

    -

    +
    +

    Note that a metafits may not be required, but is generally a good idea.

    diff --git a/user/di_cal/tutorial.html b/user/di_cal/tutorial.html index f1542558..8bce8186 100644 --- a/user/di_cal/tutorial.html +++ b/user/di_cal/tutorial.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -152,10 +153,12 @@

    Install hyperdrive if you haven't already.

    -
    +
    +

    Step 1: Obtain data

    -

    +
    +

    Feel free to try your own data, but test data is available in the hyperdrive @@ -170,10 +173,12 @@

    +

    It's very important to use a sky model that corresponds to the data you're @@ -183,10 +188,12 @@

    +

    We're going to run the di-calibrate subcommand of hyperdrive. If you look at @@ -196,10 +203,12 @@

    +

    The above command can be more neatly expressed as:

    @@ -212,10 +221,12 @@

    +

    The command we ran in step 3 should give us information on the input data, the @@ -237,16 +248,18 @@

    +
    -

    A "chanblock" is a frequency unit of calibration; +

    A "chanblock" is a frequency unit of calibration; it may correspond to one or many channels of the input data.

    -

    Calibration is done iteratively; it iterates until the "stop threshold" is -reached, or up to a set number of times. The "stop" and "minimum" thresholds +

Calibration is done iteratively; it iterates until the "stop threshold" is reached, or up to a set number of times. The "stop" and "minimum" thresholds are used during convergence. If the stop threshold is reached before the maximum number of iterations, we say that the chanblock has converged well enough that we can stop iterating. However, if we reach the maximum number of iterations, the chanblock's solutions are judged against the minimum threshold instead.
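The per-chanblock iteration logic described above can be sketched as a toy loop (the step function, names, and the exact role of the minimum threshold are assumptions for illustration, not hyperdrive's code):

```python
def calibrate_chanblock(step, solutions, stop_threshold, min_threshold, max_iterations):
    """Iterate until the precision reaches the stop threshold or we run out
    of iterations; a non-converged result is then judged against the
    minimum threshold. `step` returns (new_solutions, precision)."""
    precision = float("inf")
    for _ in range(max_iterations):
        solutions, precision = step(solutions)
        if precision <= stop_threshold:
            return solutions, precision, True  # converged well; stop early
    # Ran out of iterations; accept only if the minimum threshold was met.
    return solutions, precision, precision <= min_threshold

# A toy step whose "precision" halves every iteration.
state = {"p": 1.0}
def step(sols):
    state["p"] /= 2
    return sols, state["p"]

result = calibrate_chanblock(step, None, 1e-3, 1e-1, 20)
print(result)
```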

    @@ -270,10 +283,12 @@

    +

    Don't assume that things will always work! A good indicator of how calibration @@ -285,10 +300,12 @@

    +

    First, we need to know where the solutions were written; this is also reported @@ -299,7 +316,7 @@

    hyperdrive solutions-plot hyperdrive_solutions.fits

    The command should give output like this:

INFO  Wrote ["hyperdrive_solutions_amps.png", "hyperdrive_solutions_phases.png"]
     

    These plots should look something like this:

    @@ -312,10 +329,12 @@

    +

    The solutions plots for the full 1090008640 observation look like this:

    @@ -327,13 +346,15 @@

    here.

    -
    +
    +

    Imaging calibrated data

    -

    +
    +
    -

    We have calibration solutions, but not calibrated data. We need to "apply" the +

    We have calibration solutions, but not calibrated data. We need to "apply" the solutions to data to calibrate them:

    hyperdrive solutions-apply \
         -d test_files/1090008640/1090008640_20140721201027_gpubox01_00.fits \
    @@ -353,19 +374,21 @@ 

    +

When using the full 1090008640 observation, this is what the same image looks like (note that unlike the above image, "sqrt" scaling is used):

Many more sources are visible, and the noise is much lower. Depending on your science case, these visibilities might be "science ready".

    +science case, these visibilities might be "science ready".

    diff --git a/user/help.html b/user/help.html index 48a7d897..bc7eac2c 100644 --- a/user/help.html +++ b/user/help.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -161,10 +162,12 @@

Subcommands are given after hyperdrive. Each subcommand accepts --help (as well as -h). Detailed usage information on each subcommand can be seen in the table of contents of this book. More information on subcommands as a concept is below.

    -
    +

    hyperdrive itself is split into many subcommands. These are simple to list:

    @@ -186,7 +189,7 @@

    Getting start hyperdrive solutions-plot --help

    hyperdrive-solutions-plot 0.2.0-alpha.11
Plot calibration solutions. Only available if compiled with the "plotting" feature.
     
     USAGE:
         hyperdrive solutions-plot [OPTIONS] [SOLUTIONS_FILES]...
    @@ -207,16 +210,18 @@ 

    Getting start

    -
    +

    It's possible to save keystrokes when subcommands aren't ambiguous, e.g. use solutions-p as an alias for solutions-plot:

    hyperdrive solutions-p
<help text for "solutions-plot">
     

    This works because there is no other subcommand that solutions-p could refer to. On the other hand, solutions won't be accepted because both diff --git a/user/intro.html b/user/intro.html index 7289142a..994e4199 100644 --- a/user/intro.html +++ b/user/intro.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@

    diff --git a/user/plotting.html b/user/plotting.html index 1073e5ac..67ba4005 100644 --- a/user/plotting.html +++ b/user/plotting.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -148,10 +149,12 @@

    mwa_hyperdrive documentation

    Plot solutions

    -
    +
    +

    Availability

    -

    +
    +

    Plotting calibration solutions is not available for GitHub-built releases of @@ -181,7 +184,7 @@

A plot is made for each timeblock. Timeblock information is given at the top left, if available.

    Example plots

    -

    Amplitudes ("amps")

    +

    Amplitudes ("amps")

    Phases

    diff --git a/user/solutions_apply/intro.html b/user/solutions_apply/intro.html index 0520bd69..9a55f9f9 100644 --- a/user/solutions_apply/intro.html +++ b/user/solutions_apply/intro.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ diff --git a/user/solutions_apply/simple.html b/user/solutions_apply/simple.html index 185a5239..65ff5b51 100644 --- a/user/solutions_apply/simple.html +++ b/user/solutions_apply/simple.html @@ -30,6 +30,7 @@ + @@ -83,7 +84,7 @@ @@ -148,10 +149,12 @@

    mwa_hyperdrive documentation

    Simple usage of solutions apply

    -
    +
    +

    Info

    -

    +
    +

    Use the solutions-apply subcommand, i.e.

    @@ -173,30 +176,36 @@

    Examples

    -
    +
    +

    From raw MWA data

    -

    +
    +
    hyperdrive solutions-apply -d *gpubox*.fits *.metafits *.mwaf -s hyp_sols.fits -o hyp_cal.ms
     
    -
    +
    +

    From an uncalibrated measurement set

    -

    +
    +
    hyperdrive solutions-apply -d *.ms -s hyp_sols.fits -o hyp_cal.ms
     
    -
    +
    +

    From an uncalibrated uvfits

    -

    +
    +
    hyperdrive solutions-apply -d *.uvfits -s hyp_sols.fits -o hyp_cal.ms
    diff --git a/user/vis_convert/intro.html b/user/vis_convert/intro.html
    index e118bd4e..572bc07f 100644
    --- a/user/vis_convert/intro.html
    +++ b/user/vis_convert/intro.html
    @@ -30,6 +30,7 @@
     
             
             
    +        
     
             
             
    @@ -83,7 +84,7 @@
     
             
    @@ -151,10 +152,12 @@ 

    Con

    vis-convert reads in visibilities and writes them out, performing whatever transformations were requested on the way (e.g. ignore autos, average to a particular time resolution, flag some tiles, etc.).

    -
    +
    hyperdrive vis-convert \
    diff --git a/user/vis_simulate/intro.html b/user/vis_simulate/intro.html
    index cc59df43..7cd1f4b7 100644
    --- a/user/vis_simulate/intro.html
    +++ b/user/vis_simulate/intro.html
    @@ -30,6 +30,7 @@
     
             
             
    +        
     
             
             
    @@ -83,7 +84,7 @@
     
             
    @@ -149,10 +150,12 @@ 

    mwa_hyperdrive documentation

    Simulate visibilities

    vis-simulate effectively turns a sky-model source list into visibilities.

    -
    +
    +

    Simple example

    -

    +
    +
    hyperdrive vis-simulate \
    diff --git a/user/vis_subtract/intro.html b/user/vis_subtract/intro.html
    index 037b00c4..dba3efb1 100644
    --- a/user/vis_subtract/intro.html
    +++ b/user/vis_subtract/intro.html
    @@ -30,6 +30,7 @@
     
             
             
    +        
     
             
             
    @@ -83,7 +84,7 @@
     
             
    @@ -151,7 +152,7 @@ 

    S

vis-subtract can subtract the sky-model visibilities from calibrated data visibilities and write them out. This can be useful to see how well the sky model agrees with the input data, although direction-dependent effects (e.g. the ionosphere) may be present and produce "holes" in the visibilities, e.g.:

    +ionosphere) may be present and produce "holes" in the visibilities, e.g.:

    A high-level overview of the steps in vis-subtract are below. Solid lines indicate actions that always happen, dashed lines are optional: