diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 86fb5589..448fa060 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.9.3","generation_timestamp":"2023-10-29T05:16:04","documenter_version":"1.1.2"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.9.3","generation_timestamp":"2023-10-29T15:06:46","documenter_version":"1.1.2"}}
\ No newline at end of file
diff --git a/dev/api/index.html b/dev/api/index.html
index 16c41297..bd42e62f 100644
--- a/dev/api/index.html
+++ b/dev/api/index.html
@@ -1,20 +1,20 @@
-Comrade API · Comrade.jl

Comrade API

Contents

Index

Model Definitions

Calibration Models

Comrade.corruptFunction
corrupt(vis, j1, j2)

Corrupts the model coherency matrices with the Jones matrices j1 for station 1 and j2 for station 2.

source
Comrade.CalTableType
struct CalTable{T, G<:(AbstractVecOrMat)}

A table of calibration quantities. The columns of the table are the telescope station codes. The rows are the calibration quantities at a specific time stamp. The user should not use this struct directly; instead, call caltable.

source
Comrade.caltableMethod
caltable(g::JonesCache, jterms::AbstractVector)

Convert the JonesCache g and recovered Jones/corruption elements jterms into a CalTable which satisfies the Tables.jl interface.

Example

ct = caltable(gcache, gains)
+Comrade API · Comrade.jl

Comrade API

Contents

Index

Model Definitions

Calibration Models

Comrade.corruptFunction
corrupt(vis, j1, j2)

Corrupts the model coherency matrices with the Jones matrices j1 for station 1 and j2 for station 2.

source
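As an illustration of what corrupt does on a single baseline, the standard RIME convention sandwiches the model coherency between the two station Jones matrices. The following base-Julia sketch uses hypothetical matrices and is not Comrade's internal implementation:

```julia
using LinearAlgebra

# Hypothetical model coherency matrix and station Jones matrices.
V  = [1.0 0.1; 0.1 1.0]    # model coherency on the baseline
J1 = [1.1 0.0; 0.0 0.9]    # Jones matrix for station 1 (e.g. gains)
J2 = [1.0 0.05; 0.05 1.0]  # Jones matrix for station 2 (e.g. leakage)

# RIME-style corruption: conjugate-transpose the second station's matrix.
Vobs = J1 * V * J2'
```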
Comrade.CalTableType
struct CalTable{T, G<:(AbstractVecOrMat)}

A table of calibration quantities. The columns of the table are the telescope station codes. The rows are the calibration quantities at a specific time stamp. The user should not use this struct directly; instead, call caltable.

source
Comrade.caltableMethod
caltable(g::JonesCache, jterms::AbstractVector)

Convert the JonesCache g and recovered Jones/corruption elements jterms into a CalTable which satisfies the Tables.jl interface.

Example

ct = caltable(gcache, gains)
 
 # Access a particular station (here ALMA)
 ct[:AA]
 ct.AA
 
 # Access the first row
-ct[1, :]
source
Comrade.caltableMethod
caltable(obs::EHTObservation, gains::AbstractVector)

Create a calibration table for the observations obs with gains. This returns a CalTable object that satisfies the Tables.jl interface. This table is very similar to the DataFrames interface.

Example

ct = caltable(obs, gains)
+ct[1, :]
source
Comrade.caltableMethod
caltable(obs::EHTObservation, gains::AbstractVector)

Create a calibration table for the observations obs with gains. This returns a CalTable object that satisfies the Tables.jl interface. This table is very similar to the DataFrames interface.

Example

ct = caltable(obs, gains)
 
 # Access a particular station (here ALMA)
 ct[:AA]
 ct.AA
 
 # Access the first row
-ct[1, :]
source
Comrade.DesignMatrixType
struct DesignMatrix{X, M<:AbstractArray{X, 2}, T, S} <: AbstractArray{X, 2}

Internal type that holds the gain design matrices for visibility corruption.

source
Comrade.JonesCacheType
struct JonesCache{D1, D2, S, Sc, R} <: Comrade.AbstractJonesCache

Holds the ancillary information for the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages measured at the telescope.

Fields

  • m1: Design matrix for the first station
  • m2: Design matrix for the second station
  • seg: Segmentation schemes for this cache
  • schema: Gain Schema
  • references: List of Reference stations
source
Comrade.ResponseCacheType
struct ResponseCache{M, B<:PolBasis} <: Comrade.AbstractJonesCache

Holds various transformations that move from the measured telescope basis to the chosen on-sky reference basis.

Fields

  • T1: Transform matrices for the first stations
  • T2: Transform matrices for the second stations
  • refbasis: Reference polarization basis
source
Comrade.JonesModelType
JonesModel(jones::JonesPairs, refbasis = CirBasis())
-JonesModel(jones::JonesPairs, tcache::ResponseCache)

Constructs the instrument corruption model using the pairs of Jones matrices jones and a reference basis.

source
Comrade.VLBIModelType
VLBIModel(skymodel, instrumentmodel)

Constructs a VLBIModel from the Jones pairs that describe the instrument model and the model that describes the on-sky polarized visibilities. The third argument can either be the tcache that converts from the model coherency basis to the instrumental basis, or just the refbasis that will be used when constructing the model coherency matrices.

source
Comrade.CalPriorType
CalPrior(dists, cache::JonesCache, reference=:none)

Creates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.

Example

For the 2017 observations of M87 a common CalPrior call is:

julia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),
+ct[1, :]
source
Comrade.DesignMatrixType
struct DesignMatrix{X, M<:AbstractArray{X, 2}, T, S} <: AbstractArray{X, 2}

Internal type that holds the gain design matrices for visibility corruption.

source
Comrade.JonesCacheType
struct JonesCache{D1, D2, S, Sc, R} <: Comrade.AbstractJonesCache

Holds the ancillary information for the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages measured at the telescope.

Fields

  • m1: Design matrix for the first station
  • m2: Design matrix for the second station
  • seg: Segmentation schemes for this cache
  • schema: Gain Schema
  • references: List of Reference stations
source
Comrade.ResponseCacheType
struct ResponseCache{M, B<:PolBasis} <: Comrade.AbstractJonesCache

Holds various transformations that move from the measured telescope basis to the chosen on-sky reference basis.

Fields

  • T1: Transform matrices for the first stations
  • T2: Transform matrices for the second stations
  • refbasis: Reference polarization basis
source
Comrade.JonesModelType
JonesModel(jones::JonesPairs, refbasis = CirBasis())
+JonesModel(jones::JonesPairs, tcache::ResponseCache)

Constructs the instrument corruption model using the pairs of Jones matrices jones and a reference basis.

source
Comrade.VLBIModelType
VLBIModel(skymodel, instrumentmodel)

Constructs a VLBIModel from the Jones pairs that describe the instrument model and the model that describes the on-sky polarized visibilities. The third argument can either be the tcache that converts from the model coherency basis to the instrumental basis, or just the refbasis that will be used when constructing the model coherency matrices.

source
Comrade.CalPriorType
CalPrior(dists, cache::JonesCache, reference=:none)

Creates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.

Example

For the 2017 observations of M87 a common CalPrior call is:

julia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),
                    AP = LogNormal(0.0, 0.1),
                    JC = LogNormal(0.0, 0.1),
                    SM = LogNormal(0.0, 0.1),
@@ -24,8 +24,8 @@
                 ), cache)
 
 julia> x = rand(gdist)
-julia> logdensityof(gdist, x)
source
CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)

Constructs a calibration prior in two steps. The first two arguments have to be a named tuple of distributions, where each name corresponds to a site. The first argument is the gain prior for the first time stamp; the second is the segmented gain prior for each subsequent time stamp. For instance, if we have

dist0 = (AA = Normal(0.0, 1.0), )
-distt = (AA = Normal(0.0, 0.1), )

then the gain prior for the first time stamp that AA observes will be Normal(0.0, 1.0). The gain at the next time stamp is then constructed from

g2 = g1 + ϵ1

where ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.

source
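The cumulative construction described above can be sketched in base Julia. The σ values mirror the dist0/distt example; this is an illustration of the random-walk prior, not Comrade's implementation:

```julia
using Random
Random.seed!(1)

σ0, σt = 1.0, 0.1                    # dist0 and distt standard deviations
ϵ = [σ0 * randn(); σt .* randn(4)]   # first draw from dist0, then offsets from distt
g = cumsum(ϵ)                        # g[i+1] = g[i] + ϵ[i+1], as in the text
```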
Comrade.CalPriorMethod
CalPrior(dists, cache::JonesCache, reference=:none)

Creates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.

Example

For the 2017 observations of M87 a common CalPrior call is:

julia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),
+julia> logdensityof(gdist, x)
source
CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)

Constructs a calibration prior in two steps. The first two arguments have to be a named tuple of distributions, where each name corresponds to a site. The first argument is the gain prior for the first time stamp; the second is the segmented gain prior for each subsequent time stamp. For instance, if we have

dist0 = (AA = Normal(0.0, 1.0), )
+distt = (AA = Normal(0.0, 0.1), )

then the gain prior for the first time stamp that AA observes will be Normal(0.0, 1.0). The gain at the next time stamp is then constructed from

g2 = g1 + ϵ1

where ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.

source
Comrade.CalPriorMethod
CalPrior(dists, cache::JonesCache, reference=:none)

Creates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.

Example

For the 2017 observations of M87 a common CalPrior call is:

julia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),
                    AP = LogNormal(0.0, 0.1),
                    JC = LogNormal(0.0, 0.1),
                    SM = LogNormal(0.0, 0.1),
@@ -35,26 +35,26 @@
                 ), cache)
 
 julia> x = rand(gdist)
-julia> logdensityof(gdist, x)
source
Comrade.CalPriorMethod
CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)

Constructs a calibration prior in two steps. The first two arguments have to be a named tuple of distributions, where each name corresponds to a site. The first argument is the gain prior for the first time stamp; the second is the segmented gain prior for each subsequent time stamp. For instance, if we have

dist0 = (AA = Normal(0.0, 1.0), )
-distt = (AA = Normal(0.0, 0.1), )

then the gain prior for the first time stamp that AA observes will be Normal(0.0, 1.0). The gain at the next time stamp is then constructed from

g2 = g1 + ϵ1

where ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.

source
Comrade.RIMEModelType
abstract type RIMEModel <: ComradeBase.AbstractModel

Abstract type that encompasses all RIME style corruptions.

source
Comrade.IntegSegType
struct IntegSeg{S} <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a correlation integration.

source
Comrade.ScanSegType
struct ScanSeg{S} <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a scan.

Warning

Currently we do not explicitly track the telescope scans. This will be fixed in a future version. Right now ScanSeg and TrackSeg are the same.

source
Comrade.TrackSegType
struct TrackSeg <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a track, i.e., the observation "night".

source
Comrade.FixedSegType
struct FixedSeg{T} <: Comrade.ObsSegmentation

Enforces that the station calibration quantity has a fixed value. This is most commonly used when enforcing a reference station for gain phases.

source
Comrade.jonescacheMethod
jonescache(obs::EHTObservation, segmentation::ObsSegmentation)
+julia> logdensityof(gdist, x)
source
Comrade.CalPriorMethod
CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)

Constructs a calibration prior in two steps. The first two arguments have to be a named tuple of distributions, where each name corresponds to a site. The first argument is the gain prior for the first time stamp; the second is the segmented gain prior for each subsequent time stamp. For instance, if we have

dist0 = (AA = Normal(0.0, 1.0), )
+distt = (AA = Normal(0.0, 0.1), )

then the gain prior for the first time stamp that AA observes will be Normal(0.0, 1.0). The gain at the next time stamp is then constructed from

g2 = g1 + ϵ1

where ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.

source
Comrade.RIMEModelType
abstract type RIMEModel <: ComradeBase.AbstractModel

Abstract type that encompasses all RIME style corruptions.

source
Comrade.IntegSegType
struct IntegSeg{S} <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a correlation integration.

source
Comrade.ScanSegType
struct ScanSeg{S} <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a scan.

Warning

Currently we do not explicitly track the telescope scans. This will be fixed in a future version. Right now ScanSeg and TrackSeg are the same.

source
Comrade.TrackSegType
struct TrackSeg <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a track, i.e., the observation "night".

source
Comrade.FixedSegType
struct FixedSeg{T} <: Comrade.ObsSegmentation

Enforces that the station calibration quantity has a fixed value. This is most commonly used when enforcing a reference station for gain phases.

source
Comrade.jonescacheMethod
jonescache(obs::EHTObservation, segmentation::ObsSegmentation)
 jonescache(obs::EHTObservation, segmentation::NamedTuple)

Constructs a JonesCache from a given observation obs using the segmentation scheme segmentation. If segmentation is a named tuple, each symbol in the named tuple is assumed to correspond to a segmentation for the sites in obs.

Example

# coh is a EHTObservation
 julia> jonescache(coh, ScanSeg())
 julia> segs = (AA = ScanSeg(), AP = TrackSeg(), AZ = FixedSeg())
-julia> jonescache(coh, segs)
source
Comrade.SingleReferenceType
SingleReference(site::Symbol, val::Number)

Use a single site as a reference. The station gain will be set equal to val.

source
Comrade.RandomReferenceType
RandomReference(val::Number)

For each timestamp select a random reference station whose station gain will be set to val.

Notes

This is useful when there isn't a single site available for all scans and you want to split up the choice of reference site. We recommend only using this option for Stokes I fitting.

source
Comrade.SEFDReferenceType
SEFDReference(val::Number, sefd_index = 1)

Selects the reference site based on the SEFD of each telescope, where the smallest SEFD is preferentially selected. The reference gain is set to val, and the user can select the site with the nth-lowest SEFD by passing sefd_index = n.

Notes

This is done on a per-scan basis so if a site is missing from a scan the next highest SEFD site will be used.

source
Comrade.jonesStokesFunction
jonesStokes(g1::AbstractArray, gcache::AbstractJonesCache)
-jonesStokes(f, g1::AbstractArray, gcache::AbstractJonesCache)

Construct the Jones pairs for the Stokes I image only. That is, we only need to pass a single vector corresponding to the gain for the Stokes I visibility. This is for when you only want to image Stokes I. The first argument is optional and denotes a function that is applied to every element of the Jones cache. For instance, if g1 is the log-gain, then f=exp will convert it into the gain.

Warning

In the future this functionality may be removed when stokes I fitting is replaced with the more correct trace(coherency), i.e. RR+LL for a circular basis.

source
Comrade.jonesGFunction
jonesG(g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)
+julia> jonescache(coh, segs)
source
Comrade.SingleReferenceType
SingleReference(site::Symbol, val::Number)

Use a single site as a reference. The station gain will be set equal to val.

source
Comrade.RandomReferenceType
RandomReference(val::Number)

For each timestamp select a random reference station whose station gain will be set to val.

Notes

This is useful when there isn't a single site available for all scans and you want to split up the choice of reference site. We recommend only using this option for Stokes I fitting.

source
Comrade.SEFDReferenceType
SEFDReference(val::Number, sefd_index = 1)

Selects the reference site based on the SEFD of each telescope, where the smallest SEFD is preferentially selected. The reference gain is set to val, and the user can select the site with the nth-lowest SEFD by passing sefd_index = n.

Notes

This is done on a per-scan basis so if a site is missing from a scan the next highest SEFD site will be used.

source
Comrade.jonesStokesFunction
jonesStokes(g1::AbstractArray, gcache::AbstractJonesCache)
+jonesStokes(f, g1::AbstractArray, gcache::AbstractJonesCache)

Construct the Jones pairs for the Stokes I image only. That is, we only need to pass a single vector corresponding to the gain for the Stokes I visibility. This is for when you only want to image Stokes I. The first argument is optional and denotes a function that is applied to every element of the Jones cache. For instance, if g1 is the log-gain, then f=exp will convert it into the gain.

Warning

In the future this functionality may be removed when stokes I fitting is replaced with the more correct trace(coherency), i.e. RR+LL for a circular basis.

source
Comrade.jonesGFunction
jonesG(g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)
 jonesG(f, g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)

Constructs the pairs of Jones G matrices for each pair of stations. g1 are the gains for the first polarization basis and g2 are the gains for the other polarization. The first argument is optional and denotes a function that is applied to every element of the Jones cache. For instance, if g1 and g2 are the log-gains, then f=exp will convert them into the gains.

The layout for each matrix is as follows:

    g1 0
-    0  g2
source
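For concrete (hypothetical) log-gain values, the G matrix with f = exp applied is the diagonal layout shown above; a base-Julia sketch:

```julia
using LinearAlgebra

lg1, lg2 = 0.05, -0.02           # hypothetical log-gains for the two polarizations
G = Diagonal(exp.([lg1, lg2]))   # the [g1 0; 0 g2] layout, with f = exp applied
```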
Comrade.jonesDFunction
jonesD(d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)
+    0  g2
source
Comrade.jonesDFunction
jonesD(d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)
 jonesD(f, d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)

Constructs the pairs of Jones D matrices for each pair of stations. d1 are the d-terms for the first polarization basis and d2 are the d-terms for the other polarization. The first argument is optional and denotes a function that is applied to every element of the Jones cache. For instance, if d1 and d2 are the log-d-terms, then f=exp will convert them into the d-terms.

The layout for each matrix is as follows:

    1  d1
-    d2 1
source
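A D matrix built from hypothetical complex leakage values follows the layout above:

```julia
d1 = 0.10 + 0.02im   # hypothetical d-term for the first polarization
d2 = -0.05 + 0.01im  # hypothetical d-term for the second polarization
D = [1 d1; d2 1]     # the [1 d1; d2 1] layout
```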
Comrade.jonesTFunction
jonesT(tcache::ResponseCache)

Returns a JonesPair of matrices that transform from the model coherency matrix basis to the on-sky coherency basis; this includes the feed rotation and choice of polarization feeds.

source
Base.mapMethod
map(f, args::JonesPairs...) -> JonesPairs

Maps over a set of JonesPairs applying the function f to each element. This returns a collected JonesPairs. This is useful for more advanced operations on Jones matrices.

Examples

map(G, D, F) do g, d, f
+    d2 1
source
Comrade.jonesTFunction
jonesT(tcache::ResponseCache)

Returns a JonesPair of matrices that transform from the model coherency matrix basis to the on-sky coherency basis; this includes the feed rotation and choice of polarization feeds.

source
Base.mapMethod
map(f, args::JonesPairs...) -> JonesPairs

Maps over a set of JonesPairs applying the function f to each element. This returns a collected JonesPairs. This is useful for more advanced operations on Jones matrices.

Examples

map(G, D, F) do g, d, f
     return f'*exp.(g)*d*f
-end
source
Comrade.caltableFunction
caltable(args...)

Creates a calibration table from a set of arguments. The specific arguments depend on what calibration you are applying.

source
Comrade.JonesPairsType
struct JonesPairs{T, M1<:AbstractArray{T, 1}, M2<:AbstractArray{T, 1}}

Holds the pairs of Jones matrices for the first and second station of a baseline.

Fields

  • m1: Vector of jones matrices for station 1
  • m2: Vector of jones matrices for station 2
source
Comrade.GainSchemaType
GainSchema(sites, times)

Constructs a schema for the gains of an observation. The sites and times correspond to the specific site and time for each gain that will be modeled.

source
Comrade.SegmentedJonesCacheType
struct SegmentedJonesCache{D, S<:Comrade.ObsSegmentation, ST, Ti} <: Comrade.AbstractJonesCache

Holds the ancillary information for the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages measured at the telescope. This uses a segmented decomposition so that the gain at a single timestamp is the sum of the previous gains. In this formulation the gain parameters are the segmented gain offsets from timestamp to timestamp.

Fields

  • m1: Design matrix for the first station
  • m2: Design matrix for the second station
  • seg: Segmentation scheme for this cache
  • stations: station codes
  • times: times
source

Models

For the description of the model API see VLBISkyModels.

Data Types

Comrade.extract_tableFunction
extract_table(obs, dataproducts::VLBIDataProducts)

Extract a Comrade.EHTObservation table of the data products dataproducts. To pass additional keywords for the data products, pass them as keyword arguments to the data product type. For a list of potential data products see subtypes(Comrade.VLBIDataProducts).

Example

julia> dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3.0), ClosurePhases(;snrcut=3.0, cut_trivial=true))
+end
source
Comrade.caltableFunction
caltable(args...)

Creates a calibration table from a set of arguments. The specific arguments depend on what calibration you are applying.

source
Comrade.JonesPairsType
struct JonesPairs{T, M1<:AbstractArray{T, 1}, M2<:AbstractArray{T, 1}}

Holds the pairs of Jones matrices for the first and second station of a baseline.

Fields

  • m1: Vector of jones matrices for station 1
  • m2: Vector of jones matrices for station 2
source
Comrade.GainSchemaType
GainSchema(sites, times)

Constructs a schema for the gains of an observation. The sites and times correspond to the specific site and time for each gain that will be modeled.

source
Comrade.SegmentedJonesCacheType
struct SegmentedJonesCache{D, S<:Comrade.ObsSegmentation, ST, Ti} <: Comrade.AbstractJonesCache

Holds the ancillary information for the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages measured at the telescope. This uses a segmented decomposition so that the gain at a single timestamp is the sum of the previous gains. In this formulation the gain parameters are the segmented gain offsets from timestamp to timestamp.

Fields

  • m1: Design matrix for the first station
  • m2: Design matrix for the second station
  • seg: Segmentation scheme for this cache
  • stations: station codes
  • times: times
source

Models

For the description of the model API see VLBISkyModels.

Data Types

Comrade.extract_tableFunction
extract_table(obs, dataproducts::VLBIDataProducts)

Extract a Comrade.EHTObservation table of the data products dataproducts. To pass additional keywords for the data products, pass them as keyword arguments to the data product type. For a list of potential data products see subtypes(Comrade.VLBIDataProducts).

Example

julia> dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3.0), ClosurePhases(;snrcut=3.0, cut_trivial=true))
 julia> dcoh = extract_table(obs, Coherencies())
-julia> dvis = extract_table(obs, VisibilityAmplitudes())
source
Comrade.ComplexVisibilitiesType
ComplexVisibilities(;kwargs...)

Type to specify to extract the complex visibilities table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

Any keyword arguments are ignored for now. Use eht-imaging directly to modify the data.

source
Comrade.VisibilityAmplitudesType
VisibilityAmplitudes(;kwargs...)

Type to specify to extract the visibility amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_amp command for obsdata.

source
Comrade.ClosurePhasesType
ClosurePhases(;kwargs...)

Type to specify to extract the closure phase table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:

  • count: How the closures are formed, the available options are "min-correct", "min", "max"

Warning

The count keyword argument is treated specially in Comrade. The default option is "min-correct" and should almost always be used. This option constructs a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count="min" option is broken and does not construct proper minimal sets of closure quantities when the array isn't fully connected.

source
Comrade.LogClosureAmplitudesType
LogClosureAmplitudes(;kwargs...)

Type to specify to extract the log closure amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:

  • count: How the closures are formed, the available options are "min-correct", "min", "max"

Returns an EHTObservation with log-closure amp. datums

Warning

The count keyword argument is treated specially in Comrade. The default option is "min-correct" and should almost always be used. This option constructs a minimal set of closure quantities that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count="min" option is broken and does not construct proper minimal sets of closure quantities when the array isn't fully connected.

source
Comrade.CoherenciesType
Coherencies(;kwargs...)

Type to specify to extract the coherency matrices table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

Any keyword arguments are ignored for now. Use eht-imaging directly to modify the data.

source
Comrade.baselinesFunction
baselines(CP::EHTClosurePhaseDatum)

Returns the baselines used for a single closure phase datum

source
baselines(CP::EHTLogClosureAmplitudeDatum)

Returns the baselines used for a single closure phase datum

source
baselines(scan::Scan)

Return the baselines for each datum in a scan

source
Comrade.ComplexVisibilitiesType
ComplexVisibilities(;kwargs...)

Type to specify to extract the complex visibilities table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

Any keyword arguments are ignored for now. Use eht-imaging directly to modify the data.

source
Comrade.VisibilityAmplitudesType
VisibilityAmplitudes(;kwargs...)

Type to specify to extract the visibility amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_amp command for obsdata.

source
Comrade.ClosurePhasesType
ClosurePhases(;kwargs...)

Type to specify to extract the closure phase table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:

  • count: How the closures are formed, the available options are "min-correct", "min", "max"

Warning

The count keyword argument is treated specially in Comrade. The default option is "min-correct" and should almost always be used. This option constructs a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count="min" option is broken and does not construct proper minimal sets of closure quantities when the array isn't fully connected.

source
Comrade.LogClosureAmplitudesType
LogClosureAmplitudes(;kwargs...)

Type to specify to extract the log closure amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:

  • count: How the closures are formed, the available options are "min-correct", "min", "max"

Returns an EHTObservation with log-closure amp. datums

Warning

The count keyword argument is treated specially in Comrade. The default option is "min-correct" and should almost always be used. This option constructs a minimal set of closure quantities that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count="min" option is broken and does not construct proper minimal sets of closure quantities when the array isn't fully connected.

source
Comrade.CoherenciesType
Coherencies(;kwargs...)

Type to specify to extract the coherency matrices table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

Any keyword arguments are ignored for now. Use eht-imaging directly to modify the data.

source
Comrade.baselinesFunction
baselines(CP::EHTClosurePhaseDatum)

Returns the baselines used for a single closure phase datum

source
baselines(CP::EHTLogClosureAmplitudeDatum)

Returns the baselines used for a single log closure amplitude datum

source
baselines(scan::Scan)

Return the baselines for each datum in a scan

source
ComradeBase.closure_phaseMethod
closure_phase(D1::EHTVisibilityDatum,
              D2::EHTVisibilityDatum,
              D3::EHTVisibilityDatum)

Computes the closure phase of the three visibility datums.

Notes

We currently use the high-SNR Gaussian error approximation for the closure phase. In the future we may use moment matching from Monte Carlo sampling.

source
Comrade.getdataFunction
getdata(obs::EHTObservation, s::Symbol)

Pass-through function that gets the array of s from the EHTObservation. For example, to get the times of all measurements:

getdata(obs, :time)
source
Comrade.scantableFunction
scantable(obs::EHTObservation)

Reorganizes the observation into a table of scans, where scans are defined by unique timestamps. To access the data you can use scalar indexing.

Example

st = scantable(obs)
# Grab the first scan
scan1 = st[1]

# Access the first datum in the scan
scan1[1]

# grab e.g. the baselines
scan1[:baseline]
source
Comrade.stationsFunction
stations(d::EHTObservation)

Get all the stations in an observation. The result is a vector of symbols.

source
stations(g::CalTable)

Return the stations in the calibration table

source
Comrade.uvpositionsFunction
uvpositions(datum::AbstractVisibilityDatum)

Get the uv positions of an interferometric datum.

source
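These accessors compose naturally. A short sketch (assuming `obs` is a hypothetical EHTObservation of visibility data) that collects the station list and uv coverage:

```julia
using Comrade

# `obs` is assumed to be an EHTObservation (hypothetical variable).
sts = stations(obs)     # vector of station symbols, e.g. [:AA, :AP, ...]
u   = getdata(obs, :U)  # u coordinates of each datum, in λ
v   = getdata(obs, :V)  # v coordinates of each datum, in λ
```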
Comrade.ArrayConfigurationType
abstract type ArrayConfiguration

This defines the abstract type for an array configuration, which stores information such as baseline times, SEFDs, bandwidth, and observation frequencies.

source
Comrade.ClosureConfigType
struct ClosureConfig{A, D} <: Comrade.ArrayConfiguration

Array configuration for closure quantities. This stores the design matrix designmat that transforms from visibilities to closure products.

Fields

  • ac: Array configuration for visibilities

  • designmat: Closure design matrix

source
Comrade.EHTObservationType
struct EHTObservation{F, T<:Comrade.AbstractInterferometryDatum{F}, S<:(StructArrays.StructArray{T<:Comrade.AbstractInterferometryDatum{F}}), A, N} <: Comrade.Observation{F}

The main data product type in Comrade. It stores the data, which can be a StructArray of any AbstractInterferometryDatum type.

Fields

  • data: StructArray of data products
  • config: Array config holding ancillary information about the array
  • mjd: modified Julian date of the observation
  • ra: RA of the observation in J2000 (deg)
  • dec: DEC of the observation in J2000 (deg)
  • bandwidth: bandwidth of the observation (Hz)
  • source: Common source name
  • timetype: Time zone used.
source
Comrade.EHTArrayConfigurationType
struct EHTArrayConfiguration{F, T, S, D<:AbstractArray} <: Comrade.ArrayConfiguration

Stores all the non-visibility data products for an EHT array. This is useful when evaluating model visibilities.

Fields

  • bandwidth: Observing bandwidth (Hz)
  • tarr: Telescope array file
  • scans: Scan times
  • data: A struct array of ArrayBaselineDatum holding time, freq, u, v, baselines.
source
Comrade.EHTCoherencyDatumType
struct EHTCoherencyDatum{S, B1, B2, M<:(StaticArraysCore.SArray{Tuple{2, 2}, Complex{S}, 2}), E<:(StaticArraysCore.SArray{Tuple{2, 2}, S, 2})} <: Comrade.AbstractInterferometryDatum{S}

A Datum for a single coherency matrix

Fields

  • measurement: coherency matrix, with entries in Jy
  • error: visibility uncertainty matrix, with entries in Jy
  • U: x-direction baseline length, in λ
  • V: y-direction baseline length, in λ
  • T: Timestamp, in hours
  • F: Frequency, in Hz
  • baseline: station baseline codes
  • polbasis: polarization basis for each station
source
Comrade.EHTVisibilityDatumType
struct EHTVisibilityDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}

A struct holding the information for a single measured complex visibility.

FIELDS

  • measurement: Complex Vis. measurement (Jy)
  • error: error of the complex vis (Jy)
  • U: u position of the data point in λ
  • V: v position of the data point in λ
  • T: time of the data point in (Hr)
  • F: frequency of the data point (Hz)
  • baseline: station baseline codes
source
Comrade.EHTVisibilityAmplitudeDatumType
struct EHTVisibilityAmplitudeDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}

A struct holding the information for a single measured visibility amplitude.

FIELDS

  • measurement: amplitude (Jy)
  • error: error of the visibility amplitude (Jy)
  • U: u position of the data point in λ
  • V: v position of the data point in λ
  • T: time of the data point in (Hr)
  • F: frequency of the data point (Hz)
  • baseline: station baseline codes
source
Comrade.EHTLogClosureAmplitudeDatumType
struct EHTLogClosureAmplitudeDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}

A Datum for a single log closure amplitude.

  • measurement: log-closure amplitude
  • error: log-closure amplitude error in the high-snr limit
  • U1: u (λ) of first station
  • V1: v (λ) of first station
  • U2: u (λ) of second station
  • V2: v (λ) of second station
  • U3: u (λ) of third station
  • V3: v (λ) of third station
  • U4: u (λ) of fourth station
  • V4: v (λ) of fourth station
  • T: Measured time of closure phase in hours
  • F: Measured frequency of closure phase in Hz
  • quadrangle: station codes for the quadrangle
source
Comrade.EHTClosurePhaseDatumType
struct EHTClosurePhaseDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}

A Datum for a single closure phase.

Fields

  • measurement: closure phase (rad)
  • error: error of the closure phase assuming the high-snr limit
  • U1: u (λ) of first station
  • V1: v (λ) of first station
  • U2: u (λ) of second station
  • V2: v (λ) of second station
  • U3: u (λ) of third station
  • V3: v (λ) of third station
  • T: Measured time of closure phase in hours
  • F: Measured frequency of closure phase in Hz
  • triangle: station baselines used
source
Comrade.ScanType
struct Scan{T, I, S}

Composite type that holds information for a single scan of the telescope.

Fields

  • time: Scan time
  • index: Scan indices which are (scan index, data start index, data end index)
  • scan: Scan data usually a StructArray of a <:AbstractVisibilityDatum
source
Comrade.ScanTableType
struct ScanTable{O<:Union{Comrade.ArrayConfiguration, Comrade.Observation}, T, S}

Wraps EHTObservation in a table that separates the observation into scans. This implements the table interface. You can access scans by directly indexing into the table. This will create a view into the table not copying the data.

Example

julia> st = scantable(obs)
+scan1[:baseline]
source
Comrade.stationsFunction
stations(d::EHTObservation)

Get all the stations in a observation. The result is a vector of symbols.

source
stations(g::CalTable)

Return the stations in the calibration table

source
Comrade.uvpositionsFunction
uvpositions(datum::AbstractVisibilityDatum)

Get the uvp positions of an inferometric datum.

source
Comrade.ArrayConfigurationType
abstract type ArrayConfiguration

This defined the abstract type for an array configuration. Namely, baseline times, SEFD's, bandwidth, observation frequencies, etc.

source
Comrade.ClosureConfigType
struct ClosureConfig{A, D} <: Comrade.ArrayConfiguration

Array config file for closure quantities. This stores the design matrix designmat that transforms from visibilties to closure products.

Fields

  • ac: Array configuration for visibilities

  • designmat: Closure design matrix

source
Comrade.EHTObservationType
struct EHTObservation{F, T<:Comrade.AbstractInterferometryDatum{F}, S<:(StructArrays.StructArray{T<:Comrade.AbstractInterferometryDatum{F}}), A, N} <: Comrade.Observation{F}

The main data product type in Comrade this stores the data which can be a StructArray of any AbstractInterferometryDatum type.

Fields

  • data: StructArray of data productts
  • config: Array config holds ancillary information about array
  • mjd: modified julia date of the observation
  • ra: RA of the observation in J2000 (deg)
  • dec: DEC of the observation in J2000 (deg)
  • bandwidth: bandwidth of the observation (Hz)
  • source: Common source name
  • timetype: Time zone used.
source
Comrade.EHTArrayConfigurationType
struct EHTArrayConfiguration{F, T, S, D<:AbstractArray} <: Comrade.ArrayConfiguration

Stores all the non-visibility data products for an EHT array. This is useful when evaluating model visibilities.

Fields

  • bandwidth: Observing bandwith (Hz)
  • tarr: Telescope array file
  • scans: Scan times
  • data: A struct array of ArrayBaselineDatum holding time, freq, u, v, baselines.
source
Comrade.EHTCoherencyDatumType
struct EHTCoherencyDatum{S, B1, B2, M<:(StaticArraysCore.SArray{Tuple{2, 2}, Complex{S}, 2}), E<:(StaticArraysCore.SArray{Tuple{2, 2}, S, 2})} <: Comrade.AbstractInterferometryDatum{S}

A Datum for a single coherency matrix

Fields

  • measurement: coherency matrix, with entries in Jy
  • error: visibility uncertainty matrix, with entries in Jy
  • U: x-direction baseline length, in λ
  • V: y-direction baseline length, in λ
  • T: Timestamp, in hours
  • F: Frequency, in Hz
  • baseline: station baseline codes
  • polbasis: polarization basis for each station
source
Comrade.EHTVisibilityDatumType
struct EHTVisibilityDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}

A struct holding the information for a single measured complex visibility.

FIELDS

  • measurement: Complex Vis. measurement (Jy)
  • error: error of the complex vis (Jy)
  • U: u position of the data point in λ
  • V: v position of the data point in λ
  • T: time of the data point in (Hr)
  • F: frequency of the data point (Hz)
  • baseline: station baseline codes
source
Comrade.EHTVisibilityAmplitudeDatumType
struct EHTVisibilityAmplitudeDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}

A struct holding the information for a single measured visibility amplitude.

FIELDS

  • measurement: amplitude (Jy)
  • error: error of the visibility amplitude (Jy)
  • U: u position of the data point in λ
  • V: v position of the data point in λ
  • T: time of the data point in (Hr)
  • F: frequency of the data point (Hz)
  • baseline: station baseline codes
source
Comrade.EHTLogClosureAmplitudeDatumType
struct EHTLogClosureAmplitudeDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}

A Datum for a single log closure amplitude.

  • measurement: log-closure amplitude
  • error: log-closure amplitude error in the high-snr limit
  • U1: u (λ) of first station
  • V1: v (λ) of first station
  • U2: u (λ) of second station
  • V2: v (λ) of second station
  • U3: u (λ) of third station
  • V3: v (λ) of third station
  • U4: u (λ) of fourth station
  • V4: v (λ) of fourth station
  • T: Measured time of closure phase in hours
  • F: Measured frequency of closure phase in Hz
  • quadrangle: station codes for the quadrangle
source
Comrade.EHTClosurePhaseDatumType
struct EHTClosurePhaseDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}

A Datum for a single closure phase.

Fields

  • measurement: closure phase (rad)
  • error: error of the closure phase assuming the high-snr limit
  • U1: u (λ) of first station
  • V1: v (λ) of first station
  • U2: u (λ) of second station
  • V2: v (λ) of second station
  • U3: u (λ) of third station
  • V3: v (λ) of third station
  • T: Measured time of closure phase in hours
  • F: Measured frequency of closure phase in Hz
  • triangle: station baselines used
source
Comrade.ScanType
struct Scan{T, I, S}

Composite type that holds information for a single scan of the telescope.

Fields

  • time: Scan time
  • index: Scan indices which are (scan index, data start index, data end index)
  • scan: Scan data usually a StructArray of a <:AbstractVisibilityDatum
source
Comrade.ScanTableType
struct ScanTable{O<:Union{Comrade.ArrayConfiguration, Comrade.Observation}, T, S}

Wraps EHTObservation in a table that separates the observation into scans. This implements the table interface. You can access scans by directly indexing into the table. This will create a view into the table not copying the data.

Example

julia> st = scantable(obs)
julia> st[begin] # grab first scan
julia> st[end]   # grab last scan
source

Model Cache

VLBISkyModels.NFFTAlgMethod
NFFTAlg(obs::EHTObservation; kwargs...)

Create an algorithm object using the non-uniform Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.

The possible optional arguments are given in the NFFTAlg struct.

source
VLBISkyModels.NFFTAlgMethod
NFFTAlg(ac::ArrayConfiguration; kwargs...)

Create an algorithm object using the non-uniform Fourier transform object from the array configuration ac. This will extract the uv positions from the observation to allow for a more efficient FT cache.

The optional arguments are: padfac specifies how much to pad the image by, and m is an internal variable for NFFT.jl.

source
VLBISkyModels.DFTAlgMethod
DFTAlg(obs::EHTObservation)

Create an algorithm object using the direct Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.

source
VLBISkyModels.DFTAlgMethod
DFTAlg(ac::ArrayConfiguration)

Create an algorithm object using the direct Fourier transform object from the array configuration ac. This will extract the uv positions from the observation to allow for a more efficient FT cache.

source

Bayesian Tools

Posterior Constructions

HypercubeTransform.ascubeFunction
ascube(post::Posterior)

Construct a flattened version of the posterior where the parameters are transformed to live in (0, 1), i.e. the unit hypercube.

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = ascube(post)
julia> x0 = prior_sample(tpost)
julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical NestedSampling methods, i.e. ComradeNested. For the transformation to unconstrained space see asflat

source
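A sketch of the full round trip (assuming `post` is a hypothetical Comrade Posterior): draw a point in the hypercube, map it back to parameter space, and evaluate the transformed density:

```julia
using Comrade

# `post` is assumed to be a Posterior (hypothetical variable).
tpost = ascube(post)
x0    = prior_sample(tpost)      # a point in the unit hypercube
θ     = transform(tpost, x0)     # back to a NamedTuple in parameter space
ℓ     = logdensityof(tpost, x0)  # includes the log-Jacobian of the transform
```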
HypercubeTransform.asflatFunction
asflat(post::Posterior)

Construct a flattened version of the posterior where the parameters are transformed to live in (-∞, ∞).

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = asflat(post)
julia> x0 = prior_sample(tpost)
julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube

source
ParameterHandling.flattenFunction
flatten(post::Posterior)

Construct a flattened version of the posterior but do not transform to any space, i.e. use the support specified by the prior.

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = flatten(post)
julia> x0 = prior_sample(tpost)
julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube

source
Comrade.prior_sampleFunction
prior_sample([rng::AbstractRandom], post::Posterior, args...)

Samples from the prior distribution of the posterior. The args... are forwarded to the Base.rand method.

source
prior_sample([rng::AbstractRandom], post::Posterior)

Returns a single sample from the prior distribution.

source
Comrade.likelihoodFunction
likelihood(d::ConditionedLikelihood, μ)

Returns the likelihood of the model, with parameters μ. That is, we return the distribution of the data given the model parameters μ. This is an actual probability distribution.

source
Comrade.simulate_observationFunction
simulate_observation([rng::Random.AbstractRNG], post::Posterior, θ)

Create a simulated observation using the posterior and its data post using the parameter values θ. In Bayesian terminology this is a draw from the posterior predictive distribution.

source
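For a posterior predictive check, one hedged sketch (assuming `post` is a hypothetical Posterior; θ could equally come from an MCMC chain):

```julia
using Comrade, Random

# `post` is assumed to be a Posterior (hypothetical variable).
rng     = Random.MersenneTwister(42)
θ       = prior_sample(rng, post)             # or a posterior sample from a chain
obs_sim = simulate_observation(rng, post, θ)  # draw from the predictive distribution
```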
Comrade.dataproductsFunction
dataproducts(d::RadioLikelihood)

Returns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.

source
dataproducts(d::Posterior)

Returns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.

source
Comrade.skymodelFunction
skymodel(post::RadioLikelihood, θ)

Returns the sky model or image of a posterior using the parameter values θ

source
skymodel(post::Posterior, θ)

Returns the sky model or image of a posterior using the parameter values θ

source
Comrade.instrumentmodelFunction
instrumentmodel(lklhd::RadioLikelihood, θ)

Returns the instrument model of the likelihood using the parameter values θ

source
instrumentmodel(post::Posterior, θ)

Returns the instrument model of a posterior using the parameter values θ

source
Comrade.vlbimodelFunction
vlbimodel(post::Posterior, θ)

Returns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ

source
vlbimodel(post::Posterior, θ)

Returns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ

source
StatsBase.sampleMethod
sample(post::Posterior, sampler::S, args...; init_params=nothing, kwargs...)

Sample a posterior post using the sampler. You can optionally pass the starting location of the sampler using init_params, otherwise a random draw from the prior will be used.

source
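A minimal sketch (the sampler object and its positional arguments are backend-specific, e.g. from ComradeAHMC or ComradeNested; `smplr` is hypothetical here):

```julia
using Comrade

# `post` and `smplr` are hypothetical; extra positional arguments depend on the backend.
x0    = prior_sample(post)                          # starting location from the prior
chain = sample(post, smplr, 1_000; init_params=x0)  # draw posterior samples
```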
TransformVariables.transformFunction
transform(posterior::TransformedPosterior, x)

Transforms the value x from the transformed space (e.g. unit hypercube if using ascube) to parameter space which is usually encoded as a NamedTuple.

For the inverse transform see inverse

source
Comrade.MultiRadioLikelihoodType
MultiRadioLikelihood(lklhd1, lklhd2, ...)

Combines multiple likelihoods into one object that is useful for fitting multiple days/frequencies.

julia> lklhd1 = RadioLikelihood(dcphase1, dlcamp1)
julia> lklhd2 = RadioLikelihood(dcphase2, dlcamp2)
julia> MultiRadioLikelihood(lklhd1, lklhd2)
source
Comrade.PosteriorType
Posterior(lklhd, prior)

Creates a Posterior density that obeys DensityInterface. The lklhd object is expected to be a VLBI likelihood object; for instance, one created using RadioLikelihood. The prior is the prior distribution over the model parameters.

Notes

Since this function obeys DensityInterface you can evaluate it with

julia> ℓ = logdensityof(post)
julia> ℓ(x)

or using the 2-argument version directly

julia> logdensityof(post, x)

where post::Posterior.

To generate random draws from the prior see the prior_sample function.

source
Comrade.TransformedPosteriorType
struct TransformedPosterior{P<:Posterior, T} <: Comrade.AbstractPosterior

A transformed version of a Posterior object. This is an internal type that an end user shouldn't have to directly construct. To construct a transformed posterior see the asflat, ascube, and flatten docstrings.

source
Comrade.RadioLikelihoodType
RadioLikelihood(skymodel, instrumentmodel, dataproducts::EHTObservation...;
                 skymeta=nothing,
                 instrumentmeta=nothing)

Creates a RadioLikelihood using the skymodel with its related metadata skymeta, and the instrumentmodel with its metadata instrumentmeta. Each model is a function that converts parameters θ into a Comrade AbstractModel, which can be used to compute visibilities; the metadata holds any additional information the model function needs to construct the model.

Warning

The model itself must be a two argument function where the first argument is the set of model parameters and the second is a container that holds all the additional information needed to construct the model. An example of this is when the model needs some precomputed cache to define the model.

Example

dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(), ClosurePhases())
 cache = create_cache(NFFTAlg(dlcamp), IntensityMap(zeros(128,128), μas2rad(100.0), μas2rad(100.0)))
 
 RadioLikelihood(skymodel, instrumentmodel, dataproducts::EHTObservation...;
                  skymeta=(;cache,),
                 instrumentmeta=(;gcache))
source
RadioLikelihood(skymodel, dataproducts::EHTObservation...; skymeta=nothing)

Forms a radio likelihood from a set of data products using only a sky model. This intrinsically assumes that the instrument model is not required since it is perfect. This is useful when fitting closure quantities which are independent of the instrument.

If you want to form a likelihood from multiple arrays such as when fitting different wavelengths or days, you can combine them using MultiRadioLikelihood

Example

julia> RadioLikelihood(skymodel, dcphase, dlcamp)
source
Comrade.IsFlatType
struct IsFlat

Specifies that the sampling algorithm usually expects an unconstrained transform

source
Comrade.IsCubeType
struct IsCube

Specifies that the sampling algorithm usually expects a hypercube transform

source

Sampler Tools

Comrade.samplertypeFunction
samplertype(::Type)

Sampler type specifies whether to use a unit hypercube or unconstrained transformation.

source

Misc

Comrade.station_tupleFunction
station_tuple(stations, default; reference=nothing, kwargs...)
 station_tuple(obs::EHTObservation, default; reference=nothing, kwargs...)

Convenience function that constructs a NamedTuple of objects whose names are the stations in the observation obs, or given explicitly in the stations argument. The NamedTuple is filled with default unless a keyword argument overrides it; each kwarg (key, value) pair sets the value for that station.

Optionally the user can specify a reference station that will be dropped from the tuple. This is useful for selecting a reference station for gain phases.

Examples

julia> stations = (:AA, :AP, :LM, :PV)
julia> station_tuple(stations, ScanSeg())
(AA = ScanSeg(), AP = ScanSeg(), LM = ScanSeg(), PV = ScanSeg())
julia> station_tuple(stations, ScanSeg(); AA = FixedSeg(1.0), PV = TrackSeg())
(AA = FixedSeg(1.0), AP = ScanSeg(), LM = ScanSeg(), PV = TrackSeg())
julia> station_tuple(stations, Normal(0.0, 0.1); reference=:AA, LM = Normal(0.0, 1.0))
(AP = Normal(0.0, 0.1), LM = Normal(0.0, 1.0), PV = Normal(0.0, 0.1))
source
Comrade.dirty_imageFunction
dirty_image(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T

Computes the dirty image of the complex visibilities assuming a field of view of fov and number of pixels npix using the complex visibilities found in the observation obs.

The dirty image is the inverse Fourier transform of the measured visibilities, assuming every other visibility is zero.

source
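Conceptually, the dirty image is just a direct inverse Fourier sum over the measured visibilities. A minimal self-contained sketch of that sum (not Comrade's actual implementation, which pulls the (u,v) points from the observation; the inputs here are hypothetical):

```julia
# Sketch of a dirty image as a direct inverse Fourier sum.
# `u`, `v` are baseline coordinates in wavelengths, `vis` the measured
# complex visibilities; `fov` is the field of view and `npix` the pixel count.
function dirty_image_sketch(fov, npix, u, v, vis)
    xs = range(-fov/2, fov/2; length=npix)
    ys = range(-fov/2, fov/2; length=npix)
    img = [real(sum(vis .* cis.(2π .* (u .* x .+ v .* y)))) / length(vis)
           for y in ys, x in xs]
    return img
end

# A point source at the phase center has unit visibilities, so the
# dirty image peaks at the central pixel with value 1.
u = collect(0.1:0.1:1.0)
v = collect(0.05:0.05:0.5)
vis = ones(ComplexF64, 10)
img = dirty_image_sketch(1.0, 11, u, v, vis)
```

With `npix = 11` the central pixel is `img[6, 6]`; every other pixel picks up decohered phases and is strictly smaller.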
Comrade.dirty_beamFunction
dirty_beam(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T

Computes the dirty beam of the complex visibilities assuming a field of view of fov and number of pixels npix using baseline coverage found in obs.

The dirty beam is the inverse Fourier transform of the (u,v) coverage, assuming every measured visibility is unity and all others are zero.

source
Comrade.beamsizeFunction
beamsize(ac::ArrayConfiguration)

Calculate the approximate beam size of the array ac as the inverse of the longest baseline distance.

source
beamsize(obs::EHTObservation)

Calculate the approximate beam size of the observation obs as the inverse of the longest baseline distance.

source
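The rule of thumb encoded here is just beam ≈ 1/b_max, with b_max the longest baseline in wavelengths. A hypothetical standalone version (Comrade extracts the (u,v) points from the ArrayConfiguration or EHTObservation itself):

```julia
# Approximate beam size (radians) as the inverse of the longest baseline.
# `u`, `v` are baseline coordinates in wavelengths (hypothetical inputs).
beamsize_sketch(u, v) = inv(maximum(hypot.(u, v)))

# An 8 Gλ longest baseline (roughly EHT-like) gives ~1.25e-10 rad ≈ 26 μas.
u = [2e9, 5e9, 8e9]
v = [1e9, 0.0, 0.0]
θ = beamsize_sketch(u, v)
```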

Internal (Not Public API)

Comrade.extract_FRsFunction
extract_FRs

Extracts the feed rotation Jones matrices (returned as a JonesPair) from an EHT observation obs.

Warning

eht-imaging can sometimes pre-rotate the coherency matrices. As a result, the field rotation can sometimes be applied twice. To compensate for this we have added an ehtim_fr_convention which will fix this.

source
ComradeBase._visibilities!Function
_visibilities!(model::AbstractModel, args...)

Internal method used for trait dispatch and unpacking of args arguments in visibilities!

Warn

Not part of the public API so it may change at any moment.

source
ComradeBase._visibilitiesFunction
_visibilities(model::AbstractModel, args...)

Internal method used for trait dispatch and unpacking of args arguments in visibilities

Warn

Not part of the public API so it may change at any moment.

source

eht-imaging interface (Internal)

Comrade.extract_ampFunction
extract_amp(obs; kwargs...)

Extracts the visibility amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_cphaseFunction
extract_cphase(obs; kwargs...)

Extracts the closure phases from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_lcampFunction
extract_lcamp(obs; kwargs...)

Extracts the log-closure amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_visFunction
extract_vis(obs; kwargs...)

Extracts the stokes I complex visibilities from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_coherencyFunction
extract_coherency(obs; kwargs...)

Extracts the full coherency matrix from an observation. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
diff --git a/dev/base_api/index.html b/dev/base_api/index.html index 8cb9e05f..7f17d407 100644 --- a/dev/base_api/index.html +++ b/dev/base_api/index.html @@ -1,17 +1,17 @@ ComradeBase API · Comrade.jl

ComradeBase API

Contents

Index

Model API

ComradeBase.fluxFunction
flux(im::IntensityMap)
-flux(img::StokesIntensityMap)

Computes the flux of an intensity map

source
ComradeBase.visibilityFunction
visibility(d::EHTVisibilityDatum)

Return the complex visibility of the visibility datum

source
visibility(mimg, p)

Computes the complex visibility of the model mimg at coordinates p. p corresponds to the coordinates of the model. These need to have the properties U, V, and sometimes Ti for time and Fr for frequency.

Notes

If you want to compute the visibilities at a large number of positions consider using the visibilities.

source
ComradeBase.visibilitiesFunction
visibilities(model::AbstractModel, args...)

Computes the complex visibilities at the locations given by args...

source
ComradeBase.visibilities!Function
visibilities!(vis::AbstractArray, model::AbstractModel, args...)

Computes the complex visibilities vis in place at the locations given by args...

source
ComradeBase.intensitymap!Function
intensitymap!(buffer::AbstractDimArray, model::AbstractModel)

Computes the intensity map of model by modifying the buffer

source
ComradeBase.IntensityMapType
IntensityMap(data::AbstractArray, dims::NamedTuple)
 IntensityMap(data::AbstractArray, grid::AbstractDims)

Constructs an intensity map using the image dimensions given by dims. This returns a KeyedArray with keys given by an ImageDimensions object.

dims = (X=range(-10.0, 10.0, length=100), Y = range(-10.0, 10.0, length=100),
         T = [0.1, 0.2, 0.5, 0.9, 1.0], F = [230e9, 345e9]
         )
-imgk = IntensityMap(rand(100,100,5,2), dims)
source
ComradeBase.amplitudeMethod
amplitude(model, p)

Computes the visibility amplitude of model m at the coordinate p. The coordinate p is expected to have the properties U, V, and sometimes Ti and Fr.

If you want to compute the amplitudes at a large number of positions consider using the amplitudes function.

source
ComradeBase.amplitudesFunction
amplitudes(m::AbstractModel, u::AbstractArray, v::AbstractArray)

Computes the visibility amplitudes of the model m at the coordinates p. The coordinates p are expected to have the properties U, V, and sometimes Ti and Fr.

source
ComradeBase.bispectrumFunction
bispectrum(d1::T, d2::T, d3::T) where {T<:EHTVisibilityDatum}

Finds the bispectrum of three visibilities. We assume these form a closed triangle, i.e., the phase of the bispectrum is a closure phase.

source
bispectrum(model, p1, p2, p3)

Computes the complex bispectrum of model m at the uv-triangle p1 -> p2 -> p3

If you want to compute the bispectrum over a number of triangles consider using the bispectra function.

source
ComradeBase.bispectraFunction
bispectra(m, p1, p2, p3)

Computes the bispectra of the model m at the triangles p1, p2, p3, where pi are coordinates.

source
ComradeBase.closure_phaseFunction
closure_phase(D1::EHTVisibilityDatum,
               D2::EHTVisibilityDatum,
               D3::EHTVisibilityDatum
-              )

Computes the closure phase of the three visibility datums.

Notes

We currently use the high SNR Gaussian error approximation for the closure phase. In the future we may use the moment matching from Monte Carlo sampling.

source
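The reason closure phases are useful is that station-based phase errors cancel around the triangle. A self-contained sketch with hypothetical numbers, using plain complex arithmetic rather than Comrade's datum types:

```julia
# True visibilities on the three baselines of a closed triangle (1-2, 2-3, 3-1).
V12, V23, V31 = cis(0.3), cis(-0.7), cis(0.5)

# Corrupt each baseline with station phase errors: V_ij -> g_i * conj(g_j) * V_ij.
g1, g2, g3 = cis(1.1), cis(-2.0), cis(0.4)
V12c = g1 * conj(g2) * V12
V23c = g2 * conj(g3) * V23
V31c = g3 * conj(g1) * V31

# The bispectrum phase (the closure phase) is unchanged by the corruption,
# since every g_i appears once with its conjugate around the loop.
cp_true = angle(V12 * V23 * V31)
cp_corr = angle(V12c * V23c * V31c)
```

Here the true closure phase is 0.3 - 0.7 + 0.5 = 0.1 rad, and the corrupted product recovers it exactly.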
closure_phase(model, p1, p2, p3, p4)

Computes the closure phase of model m at the uv-triangle u1,v1 -> u2,v2 -> u3,v3

If you want to compute closure phases over a number of triangles consider using the closure_phases function.

source
ComradeBase.closure_phasesFunction
closure_phases(m::AbstractModel, ac::ClosureConfig)

Computes the closure phases of the model m using the array configuration ac.

Notes

This is faster than the closure_phases(m, u1, v1, ...) method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]

source
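The design-matrix formalism maps baseline visibility phases to closure phases through a sparse matrix with one +1/-1 triple per triangle. A toy sketch for a hypothetical 4-station array (Comrade builds these matrices internally from the ClosureConfig; the baseline ordering below is an assumption for illustration):

```julia
# Toy closure design matrix for a hypothetical 4-station array with
# baselines ordered (1-2, 1-3, 1-4, 2-3, 2-4, 3-4). Each row encodes one
# independent triangle so that φ_cl = C * φ_bl.
C = [ 1.0 -1.0  0.0  1.0  0.0  0.0;   # triangle (1,2,3): φ12 + φ23 - φ13
      0.0  1.0 -1.0  0.0  0.0  1.0]   # triangle (1,3,4): φ13 + φ34 - φ14

φ_bl = [0.2, -0.4, 0.9, 0.3, -0.1, 0.6]  # baseline phases (radians)
φ_cl = C * φ_bl
```

This is why the method above is fast: one matrix product turns the minimal set of model visibilities into every closure phase.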
closure_phases(vis::AbstractArray, ac::ArrayConfiguration)

Compute the closure phases for a set of visibilities and an array configuration

Notes

This uses a closure design matrix for the computation.

source
closure_phases(m,
                p1::AbstractArray
                p2::AbstractArray
                p3::AbstractArray
-               )

Computes the closure phases of the model m at the triangles p1, p2, p3, where pi are coordinates.

source
ComradeBase.logclosure_amplitudeFunction
logclosure_amplitude(model, p1, p2, p3, p4)

Computes the log-closure amplitude of model m at the uv-quadrangle u1,v1 -> u2,v2 -> u3,v3 -> u4,v4 using the formula

\[C = \log\left|\frac{V(u1,v1)V(u2,v2)}{V(u3,v3)V(u4,v4)}\right|\]

If you want to compute log closure amplitudes over a number of triangles consider using the logclosure_amplitudes function.

source
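The formula above is easy to evaluate directly. A self-contained sketch with hypothetical visibilities on a quadrangle with baselines (1,2), (3,4), (1,3), (2,4), which also shows why station amplitude gains cancel:

```julia
# Direct evaluation of the log-closure amplitude C = log|V1 V2 / (V3 V4)|.
logclosure_amp(V1, V2, V3, V4) = log(abs(V1 * V2 / (V3 * V4)))

V1, V2, V3, V4 = 2.0 + 0im, 1.0im, 0.5 - 0.5im, 1.0 + 0im
C = logclosure_amp(V1, V2, V3, V4)

# Station amplitude gains cancel: corrupting V_ij -> a_i * a_j * V_ij on the
# quadrangle (1,2), (3,4), (1,3), (2,4) leaves C unchanged, since the product
# a1*a2*a3*a4 appears in both numerator and denominator.
a = [1.2, 0.8, 1.5, 0.9]
C_corr = logclosure_amp(a[1]*a[2]*V1, a[3]*a[4]*V2, a[1]*a[3]*V3, a[2]*a[4]*V4)
```

For these numbers |V1 V2| = 2 and |V3 V4| = √0.5, so C = log(2√2) = 1.5 log 2.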
ComradeBase.logclosure_amplitudesFunction
logclosure_amplitudes(m::AbstractModel, ac::ClosureConfig)

Computes the log closure amplitudes of the model m using the array configuration ac.

Notes

This is faster than the logclosure_amplitudes(m, u1, v1, ...) method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]

source
logclosure_amplitudes(vis::AbstractArray, ac::ArrayConfiguration)

Compute the log-closure amplitudes for a set of visibilities and an array configuration

Notes

This uses a closure design matrix for the computation.

source
logclosure_amplitudes(m::AbstractModel,
                       p1,
                       p2,
                       p3,
@@ -27,4 +27,4 @@
 imagepixels(img::IntensityMapTypes)

Returns an abstract spatial dimension with the image pixel locations X and Y.

source
ComradeBase.GriddedKeysType
struct GriddedKeys{N, G, Hd<:ComradeBase.AbstractHeader, T} <: ComradeBase.AbstractDims{N, T}

This struct holds the dimensions that the EHT expects. The first type parameter N defines the names of each dimension. These names are usually one of (:X, :Y, :T, :F), (:X, :Y, :F, :T), or (:X, :Y) (spatial only), where :X, :Y are the RA and DEC spatial dimensions respectively, :T is the time direction, and :F is the frequency direction.

Fieldnames

  • dims

  • header

Notes

Warning: it is rare that you need to access this constructor directly. Instead use the IntensityMap function.

source
ComradeBase.axisdimsFunction
axisdims(img::IntensityMap)

Returns the keys of the IntensityMap as the actual internal AbstractDims object.

source
ComradeBase.stokesFunction
stokes(m::AbstractPolarizedModel, p::Symbol)

Extract the specific stokes component p from the polarized model m

source
ComradeBase.imagegridFunction
imagegrid(k::IntensityMap)

Returns the grid the IntensityMap is defined on. Note that this is non-allocating since it lazily computes the grid. The grid is an example of a KeyedArray and works similarly. This is useful for broadcasting a model across an arbitrary grid.

source
ComradeBase.fieldofviewFunction
fieldofview(img::IntensityMap)
 fieldofview(img::IntensityMapTypes)

Returns a named tuple with the field of view of the image.

source
ComradeBase.pixelsizesFunction
pixelsizes(img::IntensityMap)
 pixelsizes(img::IntensityMapTypes)

Returns a named tuple with the spatial pixel sizes of the image.

source
ComradeBase.phasecenterFunction
phasecenter(img::IntensityMap)
-phasecenter(img::StokesIntensitymap)

Computes the phase center of an intensity map. Note this is the pixel that is in the middle of the image.

source
ComradeBase.centroidFunction
centroid(im::AbstractIntensityMap)

Computes the image centroid aka the center of light of the image.

For polarized maps we return the centroid for Stokes I only.

source
ComradeBase.second_momentFunction
second_moment(im::AbstractIntensityMap; center=true)

Computes the second moment tensor of the image. By default we actually return the second cumulant, or centered second moment, which is specified by the center argument.

For polarized maps we return the second moment for Stokes I only.

source
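The centroid and centered second moment are just flux-weighted sums over the pixel grid. A standalone sketch over a plain matrix (Comrade operates on IntensityMap objects, but the arithmetic is the same):

```julia
# Flux-weighted centroid and centered second-moment tensor of a 2D image
# sampled on coordinates `xs` (columns) and `ys` (rows); hypothetical helper.
function centroid_and_moment(img, xs, ys)
    F = sum(img)
    x0 = sum(img[j, i] * xs[i] for i in eachindex(xs), j in eachindex(ys)) / F
    y0 = sum(img[j, i] * ys[j] for i in eachindex(xs), j in eachindex(ys)) / F
    Σ = zeros(2, 2)
    for i in eachindex(xs), j in eachindex(ys)
        d = [xs[i] - x0, ys[j] - y0]
        Σ .+= img[j, i] .* (d * d')  # accumulate the outer product
    end
    return (x0, y0), Σ ./ F
end

xs = range(-1, 1; length=41); ys = range(-1, 1; length=41)
# A symmetric Gaussian with σ = 0.2 centered at the origin, so the centroid
# is (0, 0) and the centered second moment is ≈ σ² on the diagonal.
img = [exp(-(x^2 + y^2) / (2 * 0.2^2)) for y in ys, x in xs]
(c, Σ) = centroid_and_moment(img, xs, ys)
```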
second_moment(im::AbstractIntensityMap; center=true)

Computes the second moment tensor of the image. By default we actually return the second cumulant, or centered second moment, which is specified by the center argument.

source
ComradeBase.headerFunction
header(g::AbstractDims)

Returns the header information of the dimensions g

source
header(img::IntensityMap)

Retrieves the header of an IntensityMap

source
ComradeBase.MinimalHeaderType
MinimalHeader{T}

A minimal header type for ancillary image information.

Fields

  • source: Common source name
  • ra: Right ascension of the image in degrees (J2000)
  • dec: Declination of the image in degrees (J2000)
  • mjd: Modified Julian Date in days
  • frequency: Frequency of the image in Hz
source
ComradeBase.loadFunction
ComradeBase.load(fitsfile::String, IntensityMap)

This loads a FITS file in a manner robust to the various imaging algorithms in the EHT, i.e., it works with CLEAN, SMILI, and eht-imaging. The function returns a tuple with an intensity map and a second named tuple with ancillary information about the image, like the source name, location, mjd, and radio frequency.

source
ComradeBase.saveFunction
ComradeBase.save(file::String, img::IntensityMap, obs)

Saves an image to a fits file. You can optionally pass an EHTObservation so that ancillary information will be added.

source

Polarization

ComradeBase.AbstractPolarizedModelType
abstract type AbstractPolarizedModel <: ComradeBase.AbstractModel

Type that classifies a model as being intrinsically polarized. This means that any call to visibility must return a StokesParams to denote the full Stokes polarization of the model.

source
  • 1Blackburn L., et al "Closure Statistics in Interferometric Data" ApJ 2020
+phasecenter(img::StokesIntensitymap)

Computes the phase center of an intensity map. Note this is the pixel that is in the middle of the image.

source
ComradeBase.centroidFunction
centroid(im::AbstractIntensityMap)

Computes the image centroid aka the center of light of the image.

For polarized maps we return the centroid for Stokes I only.

source
ComradeBase.second_momentFunction
second_moment(im::AbstractIntensityMap; center=true)

Computes the second moment tensor of the image. By default we actually return the second cumulant, or centered second moment, which is specified by the center argument.

For polarized maps we return the second moment for Stokes I only.

source
second_moment(im::AbstractIntensityMap; center=true)

Computes the second moment tensor of the image. By default we actually return the second cumulant, or centered second moment, which is specified by the center argument.

source
ComradeBase.headerFunction
header(g::AbstractDims)

Returns the header information of the dimensions g

source
header(img::IntensityMap)

Retrieves the header of an IntensityMap

source
ComradeBase.NoHeaderType
NoHeader
source
ComradeBase.MinimalHeaderType
MinimalHeader{T}

A minimal header type for ancillary image information.

Fields

  • source: Common source name
  • ra: Right ascension of the image in degrees (J2000)
  • dec: Declination of the image in degrees (J2000)
  • mjd: Modified Julian Date in days
  • frequency: Frequency of the image in Hz
source
ComradeBase.loadFunction
ComradeBase.load(fitsfile::String, IntensityMap)

This loads a FITS file in a manner robust to the various imaging algorithms in the EHT, i.e., it works with CLEAN, SMILI, and eht-imaging. The function returns a tuple with an intensity map and a second named tuple with ancillary information about the image, like the source name, location, mjd, and radio frequency.

source
ComradeBase.saveFunction
ComradeBase.save(file::String, img::IntensityMap, obs)

Saves an image to a fits file. You can optionally pass an EHTObservation so that ancillary information will be added.

source

Polarization

ComradeBase.AbstractPolarizedModelType
abstract type AbstractPolarizedModel <: ComradeBase.AbstractModel

Type that classifies a model as being intrinsically polarized. This means that any call to visibility must return a StokesParams to denote the full Stokes polarization of the model.

source
diff --git a/dev/benchmarks/index.html b/dev/benchmarks/index.html index d046c2d8..2e47f25f 100644 --- a/dev/benchmarks/index.html +++ b/dev/benchmarks/index.html @@ -182,4 +182,4 @@ @benchmark fobj($pinit) # Now we benchmark the gradient -@benchmark gfobj($pinit)
+@benchmark gfobj($pinit)
diff --git a/dev/conventions/index.html b/dev/conventions/index.html index 0be40eea..b21c7809 100644 --- a/dev/conventions/index.html +++ b/dev/conventions/index.html @@ -29,4 +29,4 @@ \begin{pmatrix} \tilde{I} + \tilde{Q} & \tilde{U} + i\tilde{V}\\ \tilde{U} - i\tilde{V} & \tilde{I} - \tilde{Q} - \end{pmatrix}.\]

where e.g., $\left<XY^*\right> = 2\left<v_{pX}v^*_{pY}\right>$.

+ \end{pmatrix}.\]

where e.g., $\left<XY^*\right> = 2\left<v_{pX}v^*_{pY}\right>$.

diff --git a/dev/examples/data/index.html b/dev/examples/data/index.html index 10b37168..edf433cb 100644 --- a/dev/examples/data/index.html +++ b/dev/examples/data/index.html @@ -21,4 +21,4 @@ pcp = plot(cphase) plc = plot(lcamp) -plot(pv, pa, pcp, plc; layout=l)
<< @example-block not executed in draft mode >>

And also the coherency matrices

plot(coh)
<< @example-block not executed in draft mode >>

This page was generated using Literate.jl.

diff --git a/dev/examples/geometric_modeling/index.html b/dev/examples/geometric_modeling/index.html index ec10ecc7..17fe2d9a 100644 --- a/dev/examples/geometric_modeling/index.html +++ b/dev/examples/geometric_modeling/index.html @@ -61,4 +61,4 @@ compute_ess(x::AbstractVector{<:Number}) = ess_rhat(x) compute_ess(x::AbstractVector{<:Tuple}) = map(ess_rhat, Tables.columns(x)) compute_ess(x::Tuple) = map(compute_ess, x) -essrhat = compute_ess(Tables.columns(chain))
<< @example-block not executed in draft mode >>

Here, the first value is the ESS, and the second is the R̂. Note that we typically want R̂ < 1.01 for all parameters, but you should also be running the problem at least four times from four different starting locations. In the future we will write an extension that works with Arviz.jl.

In our example here, we see that we have an ESS > 100 for all parameters and the R̂ < 1.01 meaning that our MCMC chain is a reasonable approximation of the posterior. For more diagnostics, see MCMCDiagnosticTools.jl.


This page was generated using Literate.jl.

diff --git a/dev/examples/hybrid_imaging/index.html b/dev/examples/hybrid_imaging/index.html index 68cf33fd..13f0e57d 100644 --- a/dev/examples/hybrid_imaging/index.html +++ b/dev/examples/hybrid_imaging/index.html @@ -111,4 +111,4 @@ Threads: 1 on 32 virtual cores Environment: JULIA_EDITOR = code - JULIA_NUM_THREADS = 1

This page was generated using Literate.jl.

diff --git a/dev/examples/imaging_closures/index.html b/dev/examples/imaging_closures/index.html index 8da95528..b2d10bc3 100644 --- a/dev/examples/imaging_closures/index.html +++ b/dev/examples/imaging_closures/index.html @@ -74,4 +74,4 @@ residual!(p, vlbimodel(post, s), dcphase) end ylabel!("|Closure Phase Res.|"); -p
<< @example-block not executed in draft mode >>

And voilà, you have a quick and preliminary image of M87 fitting only closure products. For a publication-level version we would recommend:

  1. Running the chain longer and multiple times to properly assess things like ESS and R̂ (see Geometric Modeling of EHT Data)
  2. Fitting gains. Typically gain amplitudes are good to 10-20% for the EHT, not the infinite uncertainty that closures implicitly assume
  3. Making sure the posterior is unimodal (hint: for this example it isn't!). The EHT image posteriors can be pretty complicated, so typically you want to use a sampler that can deal with multi-modal posteriors. Check out Pigeons.jl, an in-development package that should easily enable this type of sampling.

This page was generated using Literate.jl.

diff --git a/dev/examples/imaging_pol/index.html b/dev/examples/imaging_pol/index.html index dfc5c116..adb85881 100644 --- a/dev/examples/imaging_pol/index.html +++ b/dev/examples/imaging_pol/index.html @@ -124,4 +124,4 @@ Threads: 1 on 32 virtual cores Environment: JULIA_EDITOR = code - JULIA_NUM_THREADS = 1

This page was generated using Literate.jl.

diff --git a/dev/examples/imaging_vis/index.html b/dev/examples/imaging_vis/index.html index 3583c878..3651d407 100644 --- a/dev/examples/imaging_vis/index.html +++ b/dev/examples/imaging_vis/index.html @@ -95,4 +95,4 @@ for s in sample(chain, 10) residual!(p, vlbimodel(post, s), dvis) end -p
<< @example-block not executed in draft mode >>

And voilà, you have just finished making a preliminary image and instrument model reconstruction. In reality, you should run the sample step for many more MCMC steps to get a reliable estimate of the reconstructed image and instrument model parameters.


This page was generated using Literate.jl.

diff --git a/dev/index.html b/dev/index.html index 8ebe00cc..382faf36 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · Comrade.jl

Comrade

Comrade is a Bayesian differentiable modular modeling framework for use with very long baseline interferometry. The goal is to allow the user to easily combine and modify a set of primitive models to construct complicated source structures. The benefit of this approach is that it is straightforward to construct different source models out of these primitives. Namely, an end-user does not have to create a separate source "model" every time they change the model specification. Additionally, most models currently implemented are differentiable with Zygote and sometimes ForwardDiff[2]. This allows gradient-accelerated optimization and sampling (e.g., HMC) to be used with little effort by the end user. To sample from the posterior, we provide a somewhat barebones interface since, most of the time, we don't require the additional features offered by most PPLs. Additionally, the overhead introduced by PPLs tends to be rather large. In the future, we may revisit this as Julia's PPL ecosystem matures.

Note

The primitives that Comrade defines, however, would allow it to be easily included in PPLs like Turing.

Our tutorial section currently has a large number of examples. The simplest example is fitting simple geometric models to the 2017 M87 data and is detailed in the Geometric Modeling of EHT Data tutorial. We also include "non-parametric" modeling or imaging examples in Imaging a Black Hole using only Closure Quantities, and Stokes I Simultaneous Image and Instrument Modeling. There is also an introduction to hybrid geometric and image modeling in Hybrid Imaging of a Black Hole, which combines physically motivated geometric modeling with the flexibility of image-based models.

As of 0.7, Comrade also can simultaneously reconstruct polarized image models and instrument corruptions through the RIME[1] formalism. A short example explaining these features can be found in Polarized Image and Instrumental Modeling.

Contributing

This repository has recently moved to ColPrac. If you would like to contribute please feel free to open an issue or pull request.

Requirements

The minimum Julia version we require is 1.7. In the future we may increase this as Julia advances.

References

  • 1Hamaker J.P., Bregman J.D., and Sault R.J. Understanding radio polarimetry. I. Mathematical foundations ADS.
  • 2As of 0.9 Comrade switched to using full covariance closures. As a result this requires a sparse Cholesky solve in the likelihood evaluation, which requires a Dual number overload. As a result we recommend using Zygote, which does work and is often similarly performant (reverse pass 3-6x slower compared to the forward pass).
diff --git a/dev/interface/index.html b/dev/interface/index.html index c300af21..efdbb67b 100644 --- a/dev/interface/index.html +++ b/dev/interface/index.html @@ -1,2 +1,2 @@ -Model Interface · Comrade.jl
diff --git a/dev/libs/adaptmcmc/index.html b/dev/libs/adaptmcmc/index.html index 25371d77..3a6ac68d 100644 --- a/dev/libs/adaptmcmc/index.html +++ b/dev/libs/adaptmcmc/index.html @@ -14,4 +14,4 @@ fulladapt = true, acc_sw = 0.234, all_levels = false - )

Create an AdaptMCMC.jl sampler. This sampler uses the AdaptiveMCMC.jl package to sample from the posterior. Namely, this is a parallel tempering algorithm with an adaptive exploration and tempering sampler. For more information please see https://github.com/mvihola/AdaptiveMCMC.jl.

The arguments of the function are:

source
StatsBase.sampleFunction
sample(post::Posterior, sampler::AdaptMCMC, nsamples, burnin=nsamples÷2, args...; init_params=nothing, kwargs...)

Sample the posterior post using the AdaptMCMC sampler. This will produce nsamples with the first burnin steps removed. The init_params argument indicates where to start the sampler from and is expected to be a NamedTuple of parameters.

Possible additional kwargs are:

  • thin::Int = 1: save only every thin-th sample to memory
  • rng: Specify a random number generator (default uses GLOBAL_RNG)

This returns a tuple where:

  • The first element is the chains from the sampler. If all_levels=false, only the unit temperature (posterior) chain is returned.
  • The second element is additional ancillary information about the samples, including the log-likelihood logl, sampler state state, average exploration kernel acceptance rate accexp for each tempering level, and average temperature swap acceptance rate accswp for each tempering level.
source
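Putting the pieces above together, a minimal usage sketch might look like the following. Here post is assumed to be an existing Comrade.Posterior, and the AdaptMCMC constructor keywords follow the defaults shown earlier on this page.

```julia
using Comrade, ComradeAdaptMCMC

# Parallel-tempered adaptive sampler with the defaults listed above.
smplr = AdaptMCMC(all_levels = false)

# Draw 500_000 samples (first half discarded as burnin by default),
# keeping every 10th sample in memory.
chain, stats = sample(post, smplr, 500_000; thin = 10)

# `chain` is the unit-temperature (posterior) chain since all_levels=false;
# `stats.logl` holds the log-likelihood of each saved sample.
```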
diff --git a/dev/libs/ahmc/index.html b/dev/libs/ahmc/index.html index fb55409b..23f0b38d 100644 --- a/dev/libs/ahmc/index.html +++ b/dev/libs/ahmc/index.html @@ -9,16 +9,16 @@ metric = DiagEuclideanMetric(dimension(post)) smplr = AHMC(metric=metric, autodiff=Val(:Zygote)) -samples, stats = sample(post, smplr, 2_000; nadapts=1_000)

API

ComradeAHMC.AHMCType
AHMC

Creates a sampler that uses the AdvancedHMC framework to construct a Hamiltonian Monte Carlo NUTS sampler.

The user must specify the metric they want to use. Typically we recommend DiagEuclideanMetric as a reasonable starting place. The other options are chosen to match the Stan language's defaults and should provide a good starting point. Please see the AdvancedHMC docs for more information.

Notes

For autodiff the user must provide a Val(::Symbol) that specifies the AD backend. Currently, we use LogDensityProblemsAD.

Fields

  • metric: AdvancedHMC metric to use
  • integrator: AdvancedHMC integrator. Defaults to AdvancedHMC.Leapfrog.
  • trajectory: HMC trajectory sampler. Defaults to AdvancedHMC.MultinomialTS.
  • termination: HMC termination condition. Defaults to AdvancedHMC.StrictGeneralisedNoUTurn.
  • adaptor: Adaptation strategy for mass matrix and step size. Defaults to AdvancedHMC.StanHMCAdaptor.
  • targetacc: Target acceptance rate for all trajectories on the tree. Defaults to 0.85.
  • init_buffer: The number of steps for the initial tuning phase. Defaults to 75, which is the Stan default.
  • term_buffer: The number of steps for the final fast step size adaptation. Defaults to 50, which is the Stan default.
  • window_size: The number of steps to tune the covariance before the first doubling. Defaults to 25, which is the Stan default.
  • autodiff: autodiff backend; see LogDensityProblemsAD.jl for possible backends. The default is Zygote, which is appropriate for high-dimensional problems.
source
ComradeAHMC.DiskStoreType
Disk

Type that specifies to save the HMC results to disk.

Fields

  • name: Path of the directory where the results will be saved. If the path does not exist it will be automatically created.
  • stride: The output stride, i.e. every stride steps the MCMC output will be dumped to disk.
source
ComradeAHMC.MemoryStoreType
Memory

Stores the HMC samples in memory (RAM).

source
ComradeAHMC.load_tableFunction
load_table(out::DiskOutput, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table="samples")
load_table(out::String, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table="samples")

Load the results from an HMC run saved to disk. To read in the output, the user can either pass the resulting out object or the path to the directory where the results were saved, i.e. the path specified in DiskStore.

Arguments

  • out::Union{String, DiskOutput}: If out is a string, it must point to the directory that the DiskStore pointed to. Otherwise it is what is directly returned from sample.
  • indices: The indices of the samples that you want to load into memory. The default is to load the entire table.

Keyword Arguments

  • table: A string specifying the table you wish to read in. There are two options: "samples", which corresponds to the actual MCMC chain, and "stats", which corresponds to additional information about the sampler, e.g., the log density of each sample and tree statistics.
source
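A short sketch of reading back a disk-stored chain with the load_table signatures documented above. Here out is assumed to be the DiskOutput returned by a previous sample call with saveto=DiskStore(...).

```julia
using ComradeAHMC

chain = load_table(out)                    # load the entire chain
tail  = load_table(out, 5_001:10_000)      # load only a slice of samples
stats = load_table(out; table = "stats")   # sampler statistics instead of samples
```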
StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior,
                     sampler::AHMC,
                     nsamples;
                     init_params=nothing,
                     saveto::Union{Memory, Disk}=Memory(),
                    kwargs...)

Samples the posterior post using the AdvancedHMC sampler specified by AHMC. This will run the sampler for nsamples.

To initialize the chain the user can set init_params to Vector{NamedTuple} whose elements are the starting locations for each of the nchains. If no starting location is specified nchains random samples from the prior will be chosen for the starting locations.

With saveto the user can optionally specify whether to store the samples in memory with MemoryStore or save directly to disk with DiskStore(filename, stride). The stride controls how often the samples are dumped to disk.

For possible kwargs please see the AdvancedHMC.jl docs

This returns a tuple where the first element is a TypedTable of the MCMC samples in parameter space and the second element is a set of ancillary information about the sampler.

Notes

This will automatically transform the posterior to the flattened unconstrained space.

source
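Combining the sampler and the saveto option described above, sampling straight to disk might look like this. post and metric are assumed to exist already; the directory name and strides are arbitrary.

```julia
using Comrade, ComradeAHMC

smplr = AHMC(metric = metric, autodiff = Val(:Zygote))

# Dump samples to the "results" directory every 100 steps instead of
# holding the whole chain in memory.
out = sample(post, smplr, 10_000;
             saveto = DiskStore("results", 100))

# The chain can later be recovered with load_table(out).
```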
StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior,
                     sampler::AHMC,
                     parallel::AbstractMCMC.AbstractMCMCEnsemble,
                     nsamples,
                    nchains;
                     init_params=nothing,
                    kwargs...)

Samples the posterior post using the AdvancedHMC sampler specified by AHMC. This will sample nchains copies of the posterior using the parallel scheme. Each chain will be sampled for nsamples.

To initialize the chain the user can set init_params to Vector{NamedTuple} whose elements are the starting locations for each of the nchains. If no starting location is specified nchains random samples from the prior will be chosen for the starting locations.

For possible kwargs please see the AdvancedHMC.jl docs

This returns a tuple where the first element is nchains TypedTables, each of which contains the MCMC samples of one of the parallel chains, and the second element is a set of ancillary information about each set of samples.

Notes

This will automatically transform the posterior to the flattened unconstrained space.

source
diff --git a/dev/libs/dynesty/index.html b/dev/libs/dynesty/index.html index 6eabf1bd..a3d59266 100644 --- a/dev/libs/dynesty/index.html +++ b/dev/libs/dynesty/index.html @@ -15,4 +15,4 @@ equal_weight_chain = ComradeDynesty.equalresample(samples, 10_000)

API

StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior, smplr::Dynesty.NestedSampler, args...; kwargs...)
 AbstractMCMC.sample(post::Comrade.Posterior, smplr::Dynesty.DynamicNestedSampler, args...; kwargs...)

Sample the posterior post using the Dynesty.jl NestedSampler/DynamicNestedSampler samplers. The args/kwargs are forwarded to Dynesty; for more information see its docs.

This returns a tuple where the first element is the weighted samples from dynesty in a TypedTable. The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights. The final element of the tuple is the original dynesty output file.

To create equally weighted samples the user can use

using StatsBase
 chain, stats = sample(post, NestedSampler(dimension(post), 1000))
equal_weighted_chain = sample(chain, Weights(stats.weights), 10_000)
source
diff --git a/dev/libs/nested/index.html b/dev/libs/nested/index.html index 0beee378..e79cb342 100644 --- a/dev/libs/nested/index.html +++ b/dev/libs/nested/index.html @@ -12,4 +12,4 @@ # Optionally resample the chain to create an equal weighted output using StatsBase -equal_weight_chain = ComradeNested.equalresample(samples, 10_000)

API

StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior, smplr::Nested, args...; kwargs...)

Sample the posterior post using the NestedSamplers.jl Nested sampler. The args/kwargs are forwarded to NestedSamplers; for more information see its docs.

This returns a tuple where the first element is the weighted samples from NestedSamplers in a TypedTable. The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights.

To create equally weighted samples the user can use

using StatsBase
 chain, stats = sample(post, NestedSampler(dimension(post), 1000))
equal_weighted_chain = sample(chain, Weights(stats.weights), 10_000)

source
diff --git a/dev/libs/optimization/index.html b/dev/libs/optimization/index.html index f2672089..b71a6f71 100644 --- a/dev/libs/optimization/index.html +++ b/dev/libs/optimization/index.html @@ -13,4 +13,4 @@ prob = OptimizationProblem(fflat, prior_sample(asflat(post)), nothing) # Now solve! Here we use LBFGS -sol = solve(prob, LBFGS(); g_tol=1e-2)

API

ComradeOptimization.laplaceMethod
laplace(prob, opt, args...; kwargs...)

Compute the Laplace or quadratic approximation to the prob or posterior. The args and kwargs are passed to the SciMLBase.solve function. This will return a Distributions.MvNormal object that approximates the posterior in the transformed space.

Note the quadratic approximation is in the space of the transformed posterior not the usual parameter space. This is better for constrained problems where we may run up against a boundary.

source
SciMLBase.OptimizationFunctionMethod
SciMLBase.OptimizationFunction(post::Posterior, args...; kwargs...)

Constructs a OptimizationFunction from a Comrade.TransformedPosterior object. Note that a user must transform the posterior first. This is so we know which space is most amenable to optimization.

source
diff --git a/dev/vlbi_imaging_problem/index.html b/dev/vlbi_imaging_problem/index.html index b059bd09..2012643b 100644 --- a/dev/vlbi_imaging_problem/index.html +++ b/dev/vlbi_imaging_problem/index.html @@ -1,2 +1,2 @@ -Introduction to the VLBI Imaging Problem · Comrade.jl

Introduction to the VLBI Imaging Problem

Very-long baseline interferometry (VLBI) is capable of taking the highest-resolution images in the world, achieving angular resolutions of ~20 μas. In 2019, the first-ever image of a black hole was produced by the Event Horizon Telescope (EHT). However, while the EHT has unprecedented resolution, it is also a sparse interferometer. As a result, the sampling in the uv or Fourier space of the image is incomplete. This incompleteness makes the imaging problem uncertain. Namely, infinitely many images are possible, given the data. Comrade is an imaging/modeling package that aims to quantify this uncertainty using Bayesian inference.

If we denote visibilities by V and the image structure/model by I, Comrade will then compute the posterior or the probability of an image given the visibility data or in an equation

\[p(I|V) = \frac{p(V|I)p(I)}{p(V)}.\]

Here $p(V|I)$ is known as the likelihood and describes the probability distribution of the data given some image I. The prior $p(I)$ encodes prior knowledge of the image structure. This prior includes distributions of model parameters and even the model itself. Finally, the denominator $p(V)$ is a normalization term and is known as the marginal likelihood or evidence and can be used to assess how well particular models fit the data.

Therefore, we must specify the likelihood and prior to construct our posterior. Below we provide a brief description of the likelihoods and models/priors that Comrade uses. However, if the user wants to see how everything works first, they should check out the Geometric Modeling of EHT Data tutorial.

Likelihood

Following TMS[TMS], we note that the likelihood for a single complex visibility at baseline $u_{ij}, v_{ij}$ is

\[p(V_{ij} | I) = (2\pi \sigma^2_{ij})^{-1/2}\exp\left(-\frac{| V_{ij} - g_ig_j^*\tilde{I}_{ij}(I)|^2}{2\sigma^2_{ij}}\right).\]

In this equation, $\tilde{I}$ is the Fourier transform of the image $I$, and $g_{i,j}$ are complex numbers known as gains. The gains arise due to atmospheric and telescope effects and corrupt the incoming signal. Therefore, if a user attempts to model the complex visibilities, they must also model the complex gains. An example showing how to model gains in Comrade can be found in Stokes I Simultaneous Image and Instrument Modeling.

Modeling the gains can be computationally expensive, especially if our image model is simple. For instance, in Comrade, we have a wide variety of geometric models. These models tend to have a small number of parameters and are simple to evaluate. Solving for gains then drastically increases the amount of time it takes to sample the posterior. As a result, part of the typical EHT analysis[M87P6][SgrAP4] instead uses closure products as its data. The two forms of closure products are:

  • Closure Phases,
  • Log-Closure Amplitudes.

Closure Phases $\psi$ are constructed by selecting three baselines $(i,j,k)$ and finding the argument of the bispectrum:

\[ \psi_{ijk} = \arg V_{ij}V_{jk}V_{ki}.\]

Similarly, log-closure amplitudes are found by selecting four baselines $(i,j,k,l)$ and forming the closure amplitudes:

\[ A_{ijkl} = \frac{ |V_{ij}||V_{kl}|}{|V_{jk}||V_{li}|}.\]

Instead of directly fitting closure amplitudes, it turns out that the statistically better-behaved data product is the log-closure amplitude.
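The two closure quantities defined above can be computed directly from complex visibilities. A minimal sketch (the visibility values are invented purely for illustration; this is not Comrade's internal implementation):

```julia
# Closure phase from three baselines: the argument of the bispectrum.
V_ij, V_jk, V_ki = 1.0 + 0.5im, 0.8 - 0.2im, 0.3 + 0.9im
ψ = angle(V_ij * V_jk * V_ki)

# Log-closure amplitude from four baselines.
V_kl, V_li = 0.6 + 0.1im, 0.4 - 0.7im
lnA = log(abs(V_ij) * abs(V_kl) / (abs(V_jk) * abs(V_li)))
```

Because the gains $g_i$ enter each visibility multiplicatively, they cancel in both products, which is why these quantities are gain-independent.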

The benefit of fitting closure products is that they are independent of complex gains, so we can leave them out when modeling the data. However, the downside is that they effectively put uniform improper priors on the gains[Blackburn], meaning that we often throw away information about the telescope's performance. On the other hand, we can then view closure fitting as a very conservative estimate about what image structures are consistent with the data. Another downside of using closure products is that their likelihoods are complex. In the high-signal-to-noise limit, however, they do reduce to Gaussian likelihoods, and this is the limit we are usually in for the EHT. For the explicit likelihood Comrade uses, we refer the reader to appendix F in paper IV of the first Sgr A* EHT publications[SgrAP4]. The computational implementation of these likelihoods can be found in VLBILikelihoods.jl.

Prior Model

Comrade has included a large number of possible models (see Comrade API for a list). These can be broken down into two categories:

  1. Parametric or geometric models
  2. Non-parametric or image models

Comrade's geometric model interface is built using VLBISkyModels and is different from other EHT modeling packages because we don't directly provide fully formed models. Instead, we offer simple geometric models, which we call primitives. These primitive models can then be modified and combined to form complicated image structures. For more information, we refer the reader to the VLBISkyModels docs.

Additionally, we include an interface to Bayesian imaging methods, where we directly fit a rasterized image to the data. These models are highly flexible and assume very little about the image structure. In that sense, these methods are an excellent way to explore the data first and see what kinds of image structures are consistent with observations. For an example of how to fit an image model to closure products, we refer the reader to the other tutorial included in the docs.

References

  • TMSThompson, A., Moran, J., Swenson, G. (2017). Interferometry and Synthesis in Radio Astronomy (Third). Springer Cham
  • M87P6Event Horizon Telescope Collaboration, (2019). First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole. ApJL 875 L6 doi
  • SgrAP4Event Horizon Telescope Collaboration, (2022). First Sagittarius A* Event Horizon Telescope Results. IV. Variability, Morphology, and Black Hole Mass. ApJL 930 L15 arXiv
  • BlackburnBlackburn, L., et al. (2020). Closure statistics in interferometric data. ApJ, 894(1), 31.
+Introduction to the VLBI Imaging Problem · Comrade.jl

Introduction to the VLBI Imaging Problem

Very-long baseline interferometry (VLBI) is capable of taking the highest resolution images in the world, achieving angular resolutions of ~20 μas. In 2019, the first-ever image of a black hole was produced by the Event Horizon Telescope (EHT). However, while the EHT has unprecedented resolution, it is also a sparse interferometer. As a result, the sampling in the uv or Fourier space of the image is incomplete. This incompleteness makes the imaging problem uncertain. Namely, infinitely many images are possible, given the data. Comrade is a imaging/modeling package that aims to quantify this uncertainty using Bayesian inference.

If we denote visibilities by V and the image structure/model by I, Comrade will then compute the posterior or the probability of an image given the visibility data or in an equation

\[p(I|V) = \frac{p(V|I)p(I)}{p(V)}.\]

Here $p(V|I)$ is known as the likelihood and describes the probability distribution of the data given some image I. The prior $p(I)$ encodes prior knowledge of the image structure. This prior includes distributions of model parameters and even the model itself. Finally, the denominator $p(V)$ is a normalization term and is known as the marginal likelihood or evidence and can be used to assess how well particular models fit the data.

Therefore, we must specify the likelihood and prior to construct our posterior. Below we provide a brief description of the likelihoods and models/priors that Comrade uses. However, if the user wants to see how everything works first, they should check out the Geometric Modeling of EHT Data tutorial.

Likelihood

Following TMS[TMS], we note that the likelihood for a single complex visibility at baseline $u_{ij}, v_{ij}$ is

\[p(V_{ij} | I) = (2\pi \sigma^2_{ij})^{-1/2}\exp\left(-\frac{| V_{ij} - g_ig_j^*\tilde{I}_{ij}(I)|^2}{2\sigma^2_{ij}}\right).\]

In this equation, $\tilde{I}$ is the Fourier transform of the image $I$, and $g_{i,j}$ are complex numbers known as gains. The gains arise due to atmospheric and telescope effects and corrupt the incoming signal. Therefore, if a user attempts to model the complex visibilities, they must also model the complex gains. An example showing how to model gains in Comrade can be found in Stokes I Simultaneous Image and Instrument Modeling.

Modeling the gains can be computationally expensive, especially if our image model is simple. For instance, in Comrade, we have a wide variety of geometric models. These models tend to have a small number of parameters and are simple to evaluate. Solving for gains then drastically increases the amount of time it takes to sample the posterior. As a result, part of the typical EHT analysis[M87P6][SgrAP4] instead uses closure products as its data. The two forms of closure products are:

  • Closure Phases,
  • Log-Closure Amplitudes.

Closure Phases $\psi$ are constructed by selecting three baselines $(i,j,k)$ and finding the argument of the bispectrum:

\[ \psi_{ijk} = \arg V_{ij}V_{jk}V_{ki}.\]

Similar log-closure amplitudes are found by selecting four baselines $(i,j,k,l)$ and forming the closure amplitudes:

\[ A_{ijkl} = \frac{ |V_{ij}||V_{kl}|}{|V_{jk}||V_{li}|}.\]

Rather than fitting closure amplitudes directly, it turns out that the log-closure amplitude is the statistically better-behaved data product.
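Like closure phases, log-closure amplitudes are insensitive to station gains: every gain amplitude appears once in the numerator and once in the denominator of $A_{ijkl}$ and cancels. A small illustrative helper (again, not part of the Comrade API) makes this concrete:

```julia
# Log-closure amplitude on the quadrangle (i,j,k,l):
#   log A = log(|V_ij| |V_kl|) - log(|V_jk| |V_li|)
# Each station's gain amplitude enters the numerator and denominator once,
# so all gain amplitudes cancel in the ratio. Illustrative helper only.
logclosure_amplitude(Vij, Vkl, Vjk, Vli) =
    log(abs(Vij)*abs(Vkl)) - log(abs(Vjk)*abs(Vli))
```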

The benefit of fitting closure products is that they are independent of complex gains, so we can leave them out when modeling the data. However, the downside is that they effectively put uniform improper priors on the gains[Blackburn], meaning that we often throw away information about the telescope's performance. On the other hand, we can then view closure fitting as a very conservative estimate of which image structures are consistent with the data. Another downside of using closure products is that their likelihoods are complicated and non-Gaussian in general. In the high signal-to-noise limit, however, they reduce to Gaussian likelihoods, and this is the limit we are usually in for the EHT. For the explicit likelihood Comrade uses, we refer the reader to appendix F in Paper IV of the first Sgr A* EHT publications[SgrAP4]. The computational implementation of these likelihoods can be found in VLBILikelihoods.jl.

Prior Model

Comrade has included a large number of possible models (see Comrade API for a list). These can be broken down into two categories:

  1. Parametric or geometric models
  2. Non-parametric or image models

Comrade's geometric model interface is built using VLBISkyModels and is different from other EHT modeling packages because we don't directly provide fully formed models. Instead, we offer simple geometric models, which we call primitives. These primitive models can then be modified and combined to form complicated image structures. For more information, we refer the reader to the VLBISkyModels docs.
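To give a flavor of the primitive-composition interface, the sketch below builds a simple ring-plus-Gaussian sky model. This is a hedged example: the primitives and modifiers shown (`Ring`, `Gaussian`, `stretched`, `shifted`, `μas2rad`) come from VLBISkyModels as re-exported by Comrade, and exact names or signatures may differ across versions, so consult the VLBISkyModels docs for the authoritative interface:

```julia
using Comrade

# Unit-flux circular ring stretched to a 20 μas radius
ring = stretched(Ring(), μas2rad(20.0), μas2rad(20.0))

# Small circular Gaussian component shifted 10 μas from the origin
blob = shifted(stretched(Gaussian(), μas2rad(5.0), μas2rad(5.0)),
               μas2rad(10.0), 0.0)

# Primitives combine by addition into a single composite sky model
model = ring + blob
```

The resulting `model` can then be used like any other sky model when forming the posterior.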

Additionally, we include an interface to Bayesian imaging methods, where we directly fit a rasterized image to the data. These models are highly flexible and assume very little about the image structure. In that sense, these methods are an excellent way to explore the data first and see what kinds of image structures are consistent with observations. For an example of how to fit an image model to closure products, we refer the reader to the other tutorial included in the docs.

References

  • TMSThompson, A., Moran, J., Swenson, G. (2017). Interferometry and Synthesis in Radio Astronomy (Third). Springer, Cham.
  • M87P6Event Horizon Telescope Collaboration, (2019). First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole. ApJL 875 L6 doi
  • SgrAP4Event Horizon Telescope Collaboration, (2022). First Sagittarius A* Event Horizon Telescope Results. IV. Variability, Morphology, and Black Hole Mass. ApJL 930 L15 arXiv
  • BlackburnBlackburn, L., et. al. (2020). Closure statistics in interferometric data. ApJ, 894(1), 31.