From f0bf83aabbb942e881d35a0ed2e8c816dfd4a416 Mon Sep 17 00:00:00 2001
From: "Documenter.jl"
Date: Sun, 29 Oct 2023 17:24:36 +0000
Subject: [PATCH] build based on 175e2c9

---
 dev/.documenter-siteinfo.json              |  2 +-
 dev/api/index.html                         | 52 +++++++++++-----------
 dev/base_api/index.html                    | 14 +++---
 dev/benchmarks/index.html                  |  2 +-
 dev/conventions/index.html                 |  2 +-
 dev/examples/data/index.html               |  2 +-
 dev/examples/geometric_modeling/index.html |  9 ++--
 dev/examples/hybrid_imaging/index.html     |  2 +-
 dev/examples/imaging_closures/index.html   |  2 +-
 dev/examples/imaging_pol/index.html        |  2 +-
 dev/examples/imaging_vis/index.html        |  2 +-
 dev/index.html                             |  2 +-
 dev/interface/index.html                   |  2 +-
 dev/libs/adaptmcmc/index.html              |  2 +-
 dev/libs/ahmc/index.html                   |  8 ++--
 dev/libs/dynesty/index.html                |  2 +-
 dev/libs/nested/index.html                 |  2 +-
 dev/libs/optimization/index.html           |  2 +-
 dev/search_index.js                        |  2 +-
 dev/vlbi_imaging_problem/index.html        |  2 +-
 20 files changed, 58 insertions(+), 57 deletions(-)

diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json
index 448fa060..9c03212e 100644
--- a/dev/.documenter-siteinfo.json
+++ b/dev/.documenter-siteinfo.json
@@ -1 +1 @@
-{"documenter":{"julia_version":"1.9.3","generation_timestamp":"2023-10-29T15:06:46","documenter_version":"1.1.2"}}
\ No newline at end of file
+{"documenter":{"julia_version":"1.9.3","generation_timestamp":"2023-10-29T17:24:14","documenter_version":"1.1.2"}}
\ No newline at end of file
diff --git a/dev/api/index.html b/dev/api/index.html
index bd42e62f..d789dbe6 100644
--- a/dev/api/index.html
+++ b/dev/api/index.html
@@ -1,20 +1,20 @@
-Comrade API · Comrade.jl

Comrade API

Contents

Index

Model Definitions

Calibration Models

Comrade.corruptFunction
corrupt(vis, j1, j2)

Corrupts the model coherency matrices with the Jones matrices j1 for station 1 and j2 for station 2.

source
Comrade.CalTableType
struct CalTable{T, G<:(AbstractVecOrMat)}

A table of calibration quantities. The columns of the table are the telescope station codes. The rows are the calibration quantities at a specific time stamp. Users should not use this struct directly; instead they should call caltable.

source
Comrade.caltableMethod
caltable(g::JonesCache, jterms::AbstractVector)

Convert the JonesCache g and recovered Jones/corruption elements jterms into a CalTable which satisfies the Tables.jl interface.

Example

ct = caltable(gcache, gains)
+Comrade API · Comrade.jl

Comrade API

Contents

Index

Model Definitions

Calibration Models

Comrade.corruptFunction
corrupt(vis, j1, j2)

Corrupts the model coherency matrices with the Jones matrices j1 for station 1 and j2 for station 2.

source
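As a rough usage sketch (vis, j1, and j2 are hypothetical names for a vector of model coherency matrices and the per-baseline Jones matrices taken from a JonesPairs object):

# corrupt the model coherencies with the station Jones matrices
vcorr = corrupt(vis, j1, j2)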
Comrade.CalTableType
struct CalTable{T, G<:(AbstractVecOrMat)}

A table of calibration quantities. The columns of the table are the telescope station codes. The rows are the calibration quantities at a specific time stamp. Users should not use this struct directly; instead they should call caltable.

source
Comrade.caltableMethod
caltable(g::JonesCache, jterms::AbstractVector)

Convert the JonesCache g and recovered Jones/corruption elements jterms into a CalTable which satisfies the Tables.jl interface.

Example

ct = caltable(gcache, gains)
 
 # Access a particular station (here ALMA)
 ct[:AA]
 ct.AA
 
 # Access the first row
-ct[1, :]
source
Comrade.caltableMethod
caltable(obs::EHTObservation, gains::AbstractVector)

Create a calibration table for the observations obs with gains. This returns a CalTable object that satisfies the Tables.jl interface. This table is very similar to the DataFrames interface.

Example

ct = caltable(obs, gains)
+ct[1, :]
source
Comrade.caltableMethod
caltable(obs::EHTObservation, gains::AbstractVector)

Create a calibration table for the observations obs with gains. This returns a CalTable object that satisfies the Tables.jl interface. This table is very similar to the DataFrames interface.

Example

ct = caltable(obs, gains)
 
 # Access a particular station (here ALMA)
 ct[:AA]
 ct.AA
 
 # Access the first row
-ct[1, :]
source
Comrade.DesignMatrixType
struct DesignMatrix{X, M<:AbstractArray{X, 2}, T, S} <: AbstractArray{X, 2}

Internal type that holds the gain design matrices for visibility corruption.

source
Comrade.JonesCacheType
struct JonesCache{D1, D2, S, Sc, R} <: Comrade.AbstractJonesCache

Holds the ancillary information for the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages that are measured by the telescope.

Fields

  • m1: Design matrix for the first station
  • m2: Design matrix for the second station
  • seg: Segmentation schemes for this cache
  • schema: Gain Schema
  • references: List of Reference stations
source
Comrade.ResponseCacheType
struct ResponseCache{M, B<:PolBasis} <: Comrade.AbstractJonesCache

Holds various transformations that move from the measured telescope basis to the chosen on sky reference basis.

Fields

  • T1: Transform matrices for the first stations
  • T2: Transform matrices for the second stations
  • refbasis: Reference polarization basis
source
Comrade.JonesModelType
JonesModel(jones::JonesPairs, refbasis = CirBasis())
-JonesModel(jones::JonesPairs, tcache::ResponseCache)

Constructs the instrument corruption model using the pairs of Jones matrices jones and a reference basis.

source
Comrade.VLBIModelType
VLBIModel(skymodel, instrumentmodel)

Constructs a VLBIModel from Jones pairs that describe the instrument model and the model which describes the on-sky polarized visibilities. The third argument can either be the tcache that converts from the model coherency basis to the instrumental basis, or just the refbasis that will be used when constructing the model coherency matrices.

source
Comrade.CalPriorType
CalPrior(dists, cache::JonesCache, reference=:none)

Creates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.

Example

For the 2017 observations of M87 a common CalPrior call is:

julia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),
+ct[1, :]
source
Comrade.DesignMatrixType
struct DesignMatrix{X, M<:AbstractArray{X, 2}, T, S} <: AbstractArray{X, 2}

Internal type that holds the gain design matrices for visibility corruption.

source
Comrade.JonesCacheType
struct JonesCache{D1, D2, S, Sc, R} <: Comrade.AbstractJonesCache

Holds the ancillary information for the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages that are measured by the telescope.

Fields

  • m1: Design matrix for the first station
  • m2: Design matrix for the second station
  • seg: Segmentation schemes for this cache
  • schema: Gain Schema
  • references: List of Reference stations
source
Comrade.ResponseCacheType
struct ResponseCache{M, B<:PolBasis} <: Comrade.AbstractJonesCache

Holds various transformations that move from the measured telescope basis to the chosen on sky reference basis.

Fields

  • T1: Transform matrices for the first stations
  • T2: Transform matrices for the second stations
  • refbasis: Reference polarization basis
source
Comrade.JonesModelType
JonesModel(jones::JonesPairs, refbasis = CirBasis())
+JonesModel(jones::JonesPairs, tcache::ResponseCache)

Constructs the instrument corruption model using the pairs of Jones matrices jones and a reference basis.

source
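A hedged sketch of assembling an instrument corruption model from gain and leakage terms; G and D are assumed to be JonesPairs built with jonesG/jonesD below, and tcache a ResponseCache (all hypothetical names):

T = jonesT(tcache)                         # basis transform / feed rotation terms
J = map((g, d, t) -> g * d * t, G, D, T)   # compose the per-station Jones chains
intmodel = JonesModel(J, tcache)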
Comrade.VLBIModelType
VLBIModel(skymodel, instrumentmodel)

Constructs a VLBIModel from Jones pairs that describe the instrument model and the model which describes the on-sky polarized visibilities. The third argument can either be the tcache that converts from the model coherency basis to the instrumental basis, or just the refbasis that will be used when constructing the model coherency matrices.

source
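A minimal sketch, assuming skym is a sky model built with VLBISkyModels and intmodel is the JonesModel from the previous sketch (hypothetical names):

m = VLBIModel(skym, intmodel)   # combined sky + instrument model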
Comrade.CalPriorType
CalPrior(dists, cache::JonesCache, reference=:none)

Creates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.

Example

For the 2017 observations of M87 a common CalPrior call is:

julia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),
                    AP = LogNormal(0.0, 0.1),
                    JC = LogNormal(0.0, 0.1),
                    SM = LogNormal(0.0, 0.1),
@@ -24,8 +24,8 @@
                 ), cache)
 
 julia> x = rand(gdist)
-julia> logdensityof(gdist, x)
source
CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)

Constructs a calibration prior in two steps. The first two arguments have to be named tuples of distributions, where each name corresponds to a site. The first argument is the gain prior for the first time stamp. The second argument is the segmented gain prior for each subsequent time stamp. For instance, if we have

dist0 = (AA = Normal(0.0, 1.0), )
-distt = (AA = Normal(0.0, 0.1), )

then the gain prior for the first time stamp that AA observes will be Normal(0.0, 1.0). The gain at the next time stamp is then constructed from

g2 = g1 + ϵ1

where ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.

source
Comrade.CalPriorMethod
CalPrior(dists, cache::JonesCache, reference=:none)

Creates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.

Example

For the 2017 observations of M87 a common CalPrior call is:

julia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),
+julia> logdensityof(gdist, x)
source
CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)

Constructs a calibration prior in two steps. The first two arguments have to be named tuples of distributions, where each name corresponds to a site. The first argument is the gain prior for the first time stamp. The second argument is the segmented gain prior for each subsequent time stamp. For instance, if we have

dist0 = (AA = Normal(0.0, 1.0), )
+distt = (AA = Normal(0.0, 0.1), )

then the gain prior for the first time stamp that AA observes will be Normal(0.0, 1.0). The gain at the next time stamp is then constructed from

g2 = g1 + ϵ1

where ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.

source
Comrade.CalPriorMethod
CalPrior(dists, cache::JonesCache, reference=:none)

Creates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.

Example

For the 2017 observations of M87 a common CalPrior call is:

julia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),
                    AP = LogNormal(0.0, 0.1),
                    JC = LogNormal(0.0, 0.1),
                    SM = LogNormal(0.0, 0.1),
@@ -35,26 +35,26 @@
                 ), cache)
 
 julia> x = rand(gdist)
-julia> logdensityof(gdist, x)
source
Comrade.CalPriorMethod
CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)

Constructs a calibration prior in two steps. The first two arguments have to be named tuples of distributions, where each name corresponds to a site. The first argument is the gain prior for the first time stamp. The second argument is the segmented gain prior for each subsequent time stamp. For instance, if we have

dist0 = (AA = Normal(0.0, 1.0), )
-distt = (AA = Normal(0.0, 0.1), )

then the gain prior for the first time stamp that AA observes will be Normal(0.0, 1.0). The gain at the next time stamp is then constructed from

g2 = g1 + ϵ1

where ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.

source
Comrade.RIMEModelType
abstract type RIMEModel <: ComradeBase.AbstractModel

Abstract type that encompasses all RIME style corruptions.

source
Comrade.IntegSegType
struct IntegSeg{S} <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a correlation integration.

source
Comrade.ScanSegType
struct ScanSeg{S} <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a scan.

Warning

Currently we do not explicitly track the telescope scans. This will be fixed in a future version. Right now ScanSeg and TrackSeg are the same.

source
Comrade.TrackSegType
struct TrackSeg <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a track, i.e., the observation "night".

source
Comrade.FixedSegType
struct FixedSeg{T} <: Comrade.ObsSegmentation

Enforces that the station calibration value will be held at a fixed value. This is most commonly used when enforcing a reference station for gain phases.

source
Comrade.jonescacheMethod
jonescache(obs::EHTObservation, segmentation::ObsSegmentation)
+julia> logdensityof(gdist, x)
source
Comrade.CalPriorMethod
CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)

Constructs a calibration prior in two steps. The first two arguments have to be named tuples of distributions, where each name corresponds to a site. The first argument is the gain prior for the first time stamp. The second argument is the segmented gain prior for each subsequent time stamp. For instance, if we have

dist0 = (AA = Normal(0.0, 1.0), )
+distt = (AA = Normal(0.0, 0.1), )

then the gain prior for the first time stamp that AA observes will be Normal(0.0, 1.0). The gain at the next time stamp is then constructed from

g2 = g1 + ϵ1

where ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.

source
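A short sketch of the two-step prior, assuming scancache is a SegmentedJonesCache built for the sites AA and AP and that Distributions.jl is loaded (hypothetical names):

dist0 = (AA = Normal(0.0, 1.0), AP = Normal(0.0, 1.0))   # prior on the first time stamp
distt = (AA = Normal(0.0, 0.1), AP = Normal(0.0, 0.1))   # prior on the offsets between time stamps
gprior = CalPrior(dist0, distt, scancache)
x = rand(gprior)                # draw segmented gain offsets
logdensityof(gprior, x)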
Comrade.RIMEModelType
abstract type RIMEModel <: ComradeBase.AbstractModel

Abstract type that encompasses all RIME style corruptions.

source
Comrade.IntegSegType
struct IntegSeg{S} <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a correlation integration.

source
Comrade.ScanSegType
struct ScanSeg{S} <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a scan.

Warning

Currently we do not explicitly track the telescope scans. This will be fixed in a future version. Right now ScanSeg and TrackSeg are the same.

source
Comrade.TrackSegType
struct TrackSeg <: Comrade.ObsSegmentation

Data segmentation such that the quantity is constant over a track, i.e., the observation "night".

source
Comrade.FixedSegType
struct FixedSeg{T} <: Comrade.ObsSegmentation

Enforces that the station calibration value will be held at a fixed value. This is most commonly used when enforcing a reference station for gain phases.

source
Comrade.jonescacheMethod
jonescache(obs::EHTObservation, segmentation::ObsSegmentation)
 jonescache(obs::EHTObservation, segmentation::NamedTuple)

Constructs a JonesCache from a given observation obs using the segmentation scheme segmentation. If segmentation is a named tuple, it is assumed that each symbol in the named tuple corresponds to a segmentation for the sites in obs.

Example

# coh is an EHTObservation
 julia> jonescache(coh, ScanSeg())
 julia> segs = (AA = ScanSeg(), AP = TrackSeg(), AZ = FixedSeg(1.0))
-julia> jonescache(coh, segs)
source
Comrade.SingleReferenceType
SingleReference(site::Symbol, val::Number)

Use a single site as a reference. The station gain will be set equal to val.

source
Comrade.RandomReferenceType
RandomReference(val::Number)

For each timestamp select a random reference station whose station gain will be set to val.

Notes

This is useful when there isn't a single site available for all scans and you want to split up the choice of reference site. We recommend only using this option for Stokes I fitting.

source
Comrade.SEFDReferenceType
SEFDReference(val::Number, sefd_index = 1)

Selects the reference site based on the SEFD of each telescope, where the smallest SEFD is preferentially selected. The reference gain is set to val, and the user can select the n-th lowest SEFD site by passing sefd_index = n.

Notes

This is done on a per-scan basis so if a site is missing from a scan the next highest SEFD site will be used.

source
Comrade.jonesStokesFunction
jonesStokes(g1::AbstractArray, gcache::AbstractJonesCache)
-jonesStokes(f, g1::AbstractArray, gcache::AbstractJonesCache)

Construct the Jones pairs for the Stokes I image only. That is, we only need to pass a single vector corresponding to the gain for the Stokes I visibility. This is for when you only want to image Stokes I. The first argument is optional and denotes a function that is applied to every element of the Jones cache. For instance, if g1 are the log-gains then f=exp will convert them into the gains.

Warning

In the future this functionality may be removed when Stokes I fitting is replaced with the more correct trace(coherency), i.e. RR+LL for a circular basis.

source
Comrade.jonesGFunction
jonesG(g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)
+julia> jonescache(coh, segs)
source
Comrade.SingleReferenceType
SingleReference(site::Symbol, val::Number)

Use a single site as a reference. The station gain will be set equal to val.

source
Comrade.RandomReferenceType
RandomReference(val::Number)

For each timestamp select a random reference station whose station gain will be set to val.

Notes

This is useful when there isn't a single site available for all scans and you want to split up the choice of reference site. We recommend only using this option for Stokes I fitting.

source
Comrade.SEFDReferenceType
SEFDReference(val::Number, sefd_index = 1)

Selects the reference site based on the SEFD of each telescope, where the smallest SEFD is preferentially selected. The reference gain is set to val, and the user can select the n-th lowest SEFD site by passing sefd_index = n.

Notes

This is done on a per-scan basis so if a site is missing from a scan the next highest SEFD site will be used.

source
Comrade.jonesStokesFunction
jonesStokes(g1::AbstractArray, gcache::AbstractJonesCache)
+jonesStokes(f, g1::AbstractArray, gcache::AbstractJonesCache)

Construct the Jones pairs for the Stokes I image only. That is, we only need to pass a single vector corresponding to the gain for the Stokes I visibility. This is for when you only want to image Stokes I. The first argument is optional and denotes a function that is applied to every element of the Jones cache. For instance, if g1 are the log-gains then f=exp will convert them into the gains.

Warning

In the future this functionality may be removed when Stokes I fitting is replaced with the more correct trace(coherency), i.e. RR+LL for a circular basis.

source
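A usage sketch, assuming lg is a vector of log-gain parameters whose layout matches gcache (hypothetical names):

gs = jonesStokes(exp, lg, gcache)   # exponentiate log-gains into Stokes I gain Jones pairs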
Comrade.jonesGFunction
jonesG(g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)
 jonesG(f, g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)

Constructs the pairs of Jones G matrices for each pair of stations. The g1 are the gains for the first polarization basis and g2 are the gains for the other polarization. The first argument is optional and denotes a function that is applied to every element of the Jones cache. For instance, if g1 and g2 are the log-gains then f=exp will convert them into the gains.

The layout for each matrix is as follows:

    g1 0
-    0  g2
source
Comrade.jonesDFunction
jonesD(d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)
+    0  g2
source
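A usage sketch, assuming lgR and lgL are vectors of log-gains for the two polarization feeds and gcache is the corresponding JonesCache (hypothetical names):

G = jonesG(exp, lgR, lgL, gcache)   # build the gain (G) Jones pairs from log-gains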
Comrade.jonesDFunction
jonesD(d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)
 jonesD(f, d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)

Constructs the pairs of Jones D matrices for each pair of stations. The d1 are the d-terms for the first polarization basis and d2 are the d-terms for the other polarization. The first argument is optional and denotes a function that is applied to every element of the Jones cache. For instance, if d1 and d2 are the log d-terms then f=exp will convert them into the d-terms.

The layout for each matrix is as follows:

    1  d1
-    d2 1
source
Comrade.jonesTFunction
jonesT(tcache::ResponseCache)

Returns a JonesPair of matrices that transform from the model coherency matrices basis to the on-sky coherency basis; this includes the feed rotation and choice of polarization feeds.

source
Base.mapMethod
map(f, args::JonesPairs...) -> JonesPairs

Maps over a set of JonesPairs applying the function f to each element. This returns a collected JonesPairs. This is useful for more advanced operations on Jones matrices.

Examples

map(G, D, F) do g, d, f
+    d2 1
source
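A usage sketch, assuming dR and dL are complex d-term vectors and dcache is a JonesCache built with TrackSeg segmentation (hypothetical names):

D = jonesD(dR, dL, dcache)   # build the leakage (D) Jones pairs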
Comrade.jonesTFunction
jonesT(tcache::ResponseCache)

Returns a JonesPair of matrices that transform from the model coherency matrices basis to the on-sky coherency basis; this includes the feed rotation and choice of polarization feeds.

source
Base.mapMethod
map(f, args::JonesPairs...) -> JonesPairs

Maps over a set of JonesPairs applying the function f to each element. This returns a collected JonesPairs. This is useful for more advanced operations on Jones matrices.

Examples

map(G, D, F) do g, d, f
     return f'*exp.(g)*d*f
-end
source
Comrade.caltableFunction
caltable(args...)

Creates a calibration table from a set of arguments. The specific arguments depend on what calibration you are applying.

source
Comrade.JonesPairsType
struct JonesPairs{T, M1<:AbstractArray{T, 1}, M2<:AbstractArray{T, 1}}

Holds the pairs of Jones matrices for the first and second station of a baseline.

Fields

  • m1: Vector of jones matrices for station 1
  • m2: Vector of jones matrices for station 2
source
Comrade.GainSchemaType
GainSchema(sites, times)

Constructs a schema for the gains of an observation. The sites and times correspond to the specific site and time for each gain that will be modeled.

source
Comrade.SegmentedJonesCacheType
struct SegmentedJonesCache{D, S<:Comrade.ObsSegmentation, ST, Ti} <: Comrade.AbstractJonesCache

Holds the ancillary information for the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages that are measured by the telescope. This uses a segmented decomposition so that the gain at a single timestamp is the sum of the previous gains. In this formulation the gain parameters are the segmented gain offsets from timestamp to timestamp.

Fields

  • m1: Design matrix for the first station
  • m2: Design matrix for the second station
  • seg: Segmentation scheme for this cache
  • stations: station codes
  • times: times
source

Models

For the description of the model API see VLBISkyModels.

Data Types

Comrade.extract_tableFunction
extract_table(obs, dataproducts::VLBIDataProducts)

Extract a Comrade.EHTObservation table of the data products dataproducts. To pass additional keywords for the data products you can pass them as keyword arguments to the data product type. For a list of potential data products see subtypes(Comrade.VLBIDataProducts).

Example

julia> dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3.0), ClosurePhases(;snrcut=3.0, cut_trivial=true))
+end
source
Comrade.caltableFunction
caltable(args...)

Creates a calibration table from a set of arguments. The specific arguments depend on what calibration you are applying.

source
Comrade.JonesPairsType
struct JonesPairs{T, M1<:AbstractArray{T, 1}, M2<:AbstractArray{T, 1}}

Holds the pairs of Jones matrices for the first and second station of a baseline.

Fields

  • m1: Vector of jones matrices for station 1
  • m2: Vector of jones matrices for station 2
source
Comrade.GainSchemaType
GainSchema(sites, times)

Constructs a schema for the gains of an observation. The sites and times correspond to the specific site and time for each gain that will be modeled.

source
Comrade.SegmentedJonesCacheType
struct SegmentedJonesCache{D, S<:Comrade.ObsSegmentation, ST, Ti} <: Comrade.AbstractJonesCache

Holds the ancillary information for the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages that are measured by the telescope. This uses a segmented decomposition so that the gain at a single timestamp is the sum of the previous gains. In this formulation the gain parameters are the segmented gain offsets from timestamp to timestamp.

Fields

  • m1: Design matrix for the first station
  • m2: Design matrix for the second station
  • seg: Segmentation scheme for this cache
  • stations: station codes
  • times: times
source

Models

For the description of the model API see VLBISkyModels.

Data Types

Comrade.extract_tableFunction
extract_table(obs, dataproducts::VLBIDataProducts)

Extract a Comrade.EHTObservation table of the data products dataproducts. To pass additional keywords for the data products you can pass them as keyword arguments to the data product type. For a list of potential data products see subtypes(Comrade.VLBIDataProducts).

Example

julia> dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3.0), ClosurePhases(;snrcut=3.0, cut_trivial=true))
 julia> dcoh = extract_table(obs, Coherencies())
-julia> dvis = extract_table(obs, VisibilityAmplitudes())
source
Comrade.ComplexVisibilitiesType
ComplexVisibilities(;kwargs...)

Type to specify to extract the complex visibilities table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

Any keyword arguments are ignored for now. Use eht-imaging directly to modify the data.

source
Comrade.VisibilityAmplitudesType
VisibilityAmplitudes(;kwargs...)

Type to specify to extract the visibility amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_amp command for obsdata.

source
Comrade.ClosurePhasesType
ClosurePhases(;kwargs...)

Type to specify to extract the closure phase table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:

  • count: How the closures are formed, the available options are "min-correct", "min", "max"

Warning

The count keyword argument is treated specially in Comrade. The default option is "min-correct" and should almost always be used. This option constructs a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count="min" option is broken and does not construct proper minimal sets of closure quantities if the array isn't fully connected.

source
Comrade.LogClosureAmplitudesType
LogClosureAmplitudes(;kwargs...)

Type to specify to extract the log closure amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:

  • count: How the closures are formed, the available options are "min-correct", "min", "max"

Returns an EHTObservation with log-closure amp. datums

Warning

The count keyword argument is treated specially in Comrade. The default option is "min-correct" and should almost always be used. This option constructs a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count="min" option is broken and does not construct proper minimal sets of closure quantities if the array isn't fully connected.

source
Comrade.CoherenciesType
Coherencies(;kwargs...)

Type to specify to extract the coherency matrices table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

Any keyword arguments are ignored for now. Use eht-imaging directly to modify the data.

source
Comrade.baselinesFunction
baselines(CP::EHTClosurePhaseDatum)

Returns the baselines used for a single closure phase datum

source
baselines(CP::EHTLogClosureAmplitudeDatum)

Returns the baselines used for a single log closure amplitude datum

source
baselines(scan::Scan)

Return the baselines for each datum in a scan

source
Comrade.ComplexVisibilitiesType
ComplexVisibilities(;kwargs...)

Type to specify to extract the complex visibilities table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

Any keyword arguments are ignored for now. Use eht-imaging directly to modify the data.

source
Comrade.VisibilityAmplitudesType
VisibilityAmplitudes(;kwargs...)

Type to specify to extract the visibility amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_amp command for obsdata.

source
Comrade.ClosurePhasesType
ClosurePhases(;kwargs...)

Type to specify to extract the closure phase table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:

  • count: How the closures are formed, the available options are "min-correct", "min", "max"

Warning

The count keyword argument is treated specially in Comrade. The default option is "min-correct" and should almost always be used. This option constructs a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count="min" option is broken and does not construct proper minimal sets of closure quantities if the array isn't fully connected.

source
Comrade.LogClosureAmplitudesType
LogClosureAmplitudes(;kwargs...)

Type to specify to extract the log closure amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

For a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:

  • count: How the closures are formed, the available options are "min-correct", "min", "max"

Returns an EHTObservation with log-closure amp. datums

Warning

The count keyword argument is treated specially in Comrade. The default option is "min-correct" and should almost always be used. This option constructs a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count="min" option is broken and does not construct proper minimal sets of closure quantities if the array isn't fully connected.

source
Comrade.CoherenciesType
Coherencies(;kwargs...)

Type to specify to extract the coherency matrices table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.

Special keywords for eht-imaging with Pyehtim.jl

Any keyword arguments are ignored for now. Use eht-imaging directly to modify the data.

source
Comrade.baselinesFunction
baselines(CP::EHTClosurePhaseDatum)

Returns the baselines used for a single closure phase datum

source
baselines(CP::EHTLogClosureAmplitudeDatum)

Returns the baselines used for a single log closure amplitude datum

source
baselines(scan::Scan)

Return the baselines for each datum in a scan

source
ComradeBase.closure_phaseMethod
closure_phase(D1::EHTVisibilityDatum,
               D2::EHTVisibilityDatum,
               D3::EHTVisibilityDatum
-              )

Computes the closure phase of the three visibility datums.

Notes

We currently use the high SNR Gaussian error approximation for the closure phase. In the future we may use the moment matching from Monte Carlo sampling.

source
Comrade.getdataFunction
getdata(obs::EHTObservation, s::Symbol)

Pass-through function that gets the array of s from the EHTObservation. For example, say you want the times of all measurements, then

getdata(obs, :time)
source
Comrade.scantableFunction
scantable(obs::EHTObservation)

Reorganizes the observation into a table of scans, where scans are defined by unique timestamps. To access the data you can use scalar indexing.

Example

st = scantable(obs)
+              )

Computes the closure phase of the three visibility datums.

Notes

We currently use the high SNR Gaussian error approximation for the closure phase. In the future we may use the moment matching from Monte Carlo sampling.

source
Comrade.getdataFunction
getdata(obs::EHTObservation, s::Symbol)

Pass-through function that gets the array of s from the EHTObservation. For example, say you want the times of all measurements, then

getdata(obs, :time)
source
Comrade.scantableFunction
scantable(obs::EHTObservation)

Reorganizes the observation into a table of scans, where scans are defined by unique timestamps. To access the data you can use scalar indexing.

Example

st = scantable(obs)
 # Grab the first scan
 scan1 = st[1]
 
@@ -62,18 +62,18 @@
 scan1[1]
 
 # grab e.g. the baselines
-scan1[:baseline]
source
Comrade.stationsFunction
stations(d::EHTObservation)

Get all the stations in an observation. The result is a vector of symbols.

source
stations(g::CalTable)

Return the stations in the calibration table

source
Comrade.uvpositionsFunction
uvpositions(datum::AbstractVisibilityDatum)

Get the uv positions of an interferometric datum.

source
Comrade.ArrayConfigurationType
abstract type ArrayConfiguration

This defines the abstract type for an array configuration, namely baseline times, SEFDs, bandwidth, observation frequencies, etc.

source
Comrade.ClosureConfigType
struct ClosureConfig{A, D} <: Comrade.ArrayConfiguration

Array config file for closure quantities. This stores the design matrix designmat that transforms from visibilities to closure products.

Fields

  • ac: Array configuration for visibilities

  • designmat: Closure design matrix

source
Comrade.EHTObservationType
struct EHTObservation{F, T<:Comrade.AbstractInterferometryDatum{F}, S<:(StructArrays.StructArray{T<:Comrade.AbstractInterferometryDatum{F}}), A, N} <: Comrade.Observation{F}

The main data product type in Comrade. This stores the data, which can be a StructArray of any AbstractInterferometryDatum type.

Fields

  • data: StructArray of data products
  • config: Array config that holds ancillary information about the array
  • mjd: modified Julian date of the observation
  • ra: RA of the observation in J2000 (deg)
  • dec: DEC of the observation in J2000 (deg)
  • bandwidth: bandwidth of the observation (Hz)
  • source: Common source name
  • timetype: Time zone used.
source
Comrade.EHTArrayConfigurationType
struct EHTArrayConfiguration{F, T, S, D<:AbstractArray} <: Comrade.ArrayConfiguration

Stores all the non-visibility data products for an EHT array. This is useful when evaluating model visibilities.

Fields

  • bandwidth: Observing bandwidth (Hz)
  • tarr: Telescope array file
  • scans: Scan times
  • data: A struct array of ArrayBaselineDatum holding time, freq, u, v, baselines.
source
Comrade.EHTCoherencyDatumType
struct EHTCoherencyDatum{S, B1, B2, M<:(StaticArraysCore.SArray{Tuple{2, 2}, Complex{S}, 2}), E<:(StaticArraysCore.SArray{Tuple{2, 2}, S, 2})} <: Comrade.AbstractInterferometryDatum{S}

A Datum for a single coherency matrix

Fields

  • measurement: coherency matrix, with entries in Jy
  • error: visibility uncertainty matrix, with entries in Jy
  • U: x-direction baseline length, in λ
  • V: y-direction baseline length, in λ
  • T: Timestamp, in hours
  • F: Frequency, in Hz
  • baseline: station baseline codes
  • polbasis: polarization basis for each station
source
Comrade.EHTVisibilityDatumType
struct EHTVisibilityDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}

A struct holding the information for a single measured complex visibility.

FIELDS

  • measurement: Complex Vis. measurement (Jy)
  • error: error of the complex vis (Jy)
  • U: u position of the data point in λ
  • V: v position of the data point in λ
  • T: time of the data point in (Hr)
  • F: frequency of the data point (Hz)
  • baseline: station baseline codes
source
Comrade.EHTVisibilityAmplitudeDatumType
struct EHTVisibilityAmplitudeDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}

A struct holding the information for a single measured visibility amplitude.

FIELDS

  • measurement: amplitude (Jy)
  • error: error of the visibility amplitude (Jy)
  • U: u position of the data point in λ
  • V: v position of the data point in λ
  • T: time of the data point in (Hr)
  • F: frequency of the data point (Hz)
  • baseline: station baseline codes
source
Comrade.EHTLogClosureAmplitudeDatumType
struct EHTLogClosureAmplitudeDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}

A Datum for a single log closure amplitude.

  • measurement: log-closure amplitude
  • error: log-closure amplitude error in the high-snr limit
  • U1: u (λ) of first station
  • V1: v (λ) of first station
  • U2: u (λ) of second station
  • V2: v (λ) of second station
  • U3: u (λ) of third station
  • V3: v (λ) of third station
  • U4: u (λ) of fourth station
  • V4: v (λ) of fourth station
  • T: Measured time of closure phase in hours
  • F: Measured frequency of closure phase in Hz
  • quadrangle: station codes for the quadrangle
source
Comrade.EHTClosurePhaseDatumType
struct EHTClosurePhaseDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}

A Datum for a single closure phase.

Fields

  • measurement: closure phase (rad)
  • error: error of the closure phase assuming the high-snr limit
  • U1: u (λ) of first station
  • V1: v (λ) of first station
  • U2: u (λ) of second station
  • V2: v (λ) of second station
  • U3: u (λ) of third station
  • V3: v (λ) of third station
  • T: Measured time of closure phase in hours
  • F: Measured frequency of closure phase in Hz
  • triangle: station baselines used
source
Comrade.ScanType
struct Scan{T, I, S}

Composite type that holds information for a single scan of the telescope.

Fields

  • time: Scan time
  • index: Scan indices which are (scan index, data start index, data end index)
  • scan: Scan data usually a StructArray of a <:AbstractVisibilityDatum
source
Comrade.ScanTableType
struct ScanTable{O<:Union{Comrade.ArrayConfiguration, Comrade.Observation}, T, S}

Wraps EHTObservation in a table that separates the observation into scans. This implements the table interface. You can access scans by directly indexing into the table. This will create a view into the table, not a copy of the data.

Example

julia> st = scantable(obs)
+scan1[:baseline]
source
Comrade.stationsFunction
stations(d::EHTObservation)

Get all the stations in an observation. The result is a vector of symbols.

source
stations(g::CalTable)

Return the stations in the calibration table

source
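A usage sketch, assuming obs is an EHTObservation loaded earlier (hypothetical name):

sites = stations(obs)   # Vector of station symbols, e.g. [:AA, :AP, :LM]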
Comrade.uvpositionsFunction
uvpositions(datum::AbstractVisibilityDatum)

Get the uv positions of an interferometric datum.

source
Comrade.ArrayConfigurationType
abstract type ArrayConfiguration

This defines the abstract type for an array configuration, namely baseline times, SEFDs, bandwidth, observation frequencies, etc.

source
Comrade.ClosureConfigType
struct ClosureConfig{A, D} <: Comrade.ArrayConfiguration

Array config file for closure quantities. This stores the design matrix designmat that transforms from visibilities to closure products.

Fields

  • ac: Array configuration for visibilities

  • designmat: Closure design matrix

source
Comrade.EHTObservationType
struct EHTObservation{F, T<:Comrade.AbstractInterferometryDatum{F}, S<:(StructArrays.StructArray{T<:Comrade.AbstractInterferometryDatum{F}}), A, N} <: Comrade.Observation{F}

The main data product type in Comrade. This stores the data, which can be a StructArray of any AbstractInterferometryDatum type.

Fields

  • data: StructArray of data products
  • config: Array config that holds ancillary information about the array
  • mjd: modified Julian date of the observation
  • ra: RA of the observation in J2000 (deg)
  • dec: DEC of the observation in J2000 (deg)
  • bandwidth: bandwidth of the observation (Hz)
  • source: Common source name
  • timetype: Time zone used.
source
Comrade.EHTArrayConfigurationType
struct EHTArrayConfiguration{F, T, S, D<:AbstractArray} <: Comrade.ArrayConfiguration

Stores all the non-visibility data products for an EHT array. This is useful when evaluating model visibilities.

Fields

  • bandwidth: Observing bandwidth (Hz)
  • tarr: Telescope array file
  • scans: Scan times
  • data: A struct array of ArrayBaselineDatum holding time, freq, u, v, baselines.
source
Comrade.EHTCoherencyDatumType
struct EHTCoherencyDatum{S, B1, B2, M<:(StaticArraysCore.SArray{Tuple{2, 2}, Complex{S}, 2}), E<:(StaticArraysCore.SArray{Tuple{2, 2}, S, 2})} <: Comrade.AbstractInterferometryDatum{S}

A Datum for a single coherency matrix

Fields

  • measurement: coherency matrix, with entries in Jy
  • error: visibility uncertainty matrix, with entries in Jy
  • U: x-direction baseline length, in λ
  • V: y-direction baseline length, in λ
  • T: Timestamp, in hours
  • F: Frequency, in Hz
  • baseline: station baseline codes
  • polbasis: polarization basis for each station
source
Comrade.EHTVisibilityDatumType
struct EHTVisibilityDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}

A struct holding the information for a single measured complex visibility.

FIELDS

  • measurement: Complex Vis. measurement (Jy)
  • error: error of the complex vis (Jy)
  • U: u position of the data point in λ
  • V: v position of the data point in λ
  • T: time of the data point in (Hr)
  • F: frequency of the data point (Hz)
  • baseline: station baseline codes
source
Comrade.EHTVisibilityAmplitudeDatumType
struct EHTVisibilityAmplitudeDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}

A struct holding the information for a single measured visibility amplitude.

FIELDS

  • measurement: amplitude (Jy)
  • error: error of the visibility amplitude (Jy)
  • U: u position of the data point in λ
  • V: v position of the data point in λ
  • T: time of the data point in (Hr)
  • F: frequency of the data point (Hz)
  • baseline: station baseline codes
source
Comrade.EHTLogClosureAmplitudeDatumType
struct EHTLogClosureAmplitudeDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}

A Datum for a single log closure amplitude.

  • measurement: log-closure amplitude
  • error: log-closure amplitude error in the high-snr limit
  • U1: u (λ) of first station
  • V1: v (λ) of first station
  • U2: u (λ) of second station
  • V2: v (λ) of second station
  • U3: u (λ) of third station
  • V3: v (λ) of third station
  • U4: u (λ) of fourth station
  • V4: v (λ) of fourth station
  • T: Measured time of closure phase in hours
  • F: Measured frequency of closure phase in Hz
  • quadrangle: station codes for the quadrangle
source
Comrade.EHTClosurePhaseDatumType
struct EHTClosurePhaseDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}

A Datum for a single closure phase.

Fields

  • measurement: closure phase (rad)
  • error: error of the closure phase assuming the high-snr limit
  • U1: u (λ) of first station
  • V1: v (λ) of first station
  • U2: u (λ) of second station
  • V2: v (λ) of second station
  • U3: u (λ) of third station
  • V3: v (λ) of third station
  • T: Measured time of closure phase in hours
  • F: Measured frequency of closure phase in Hz
  • triangle: station baselines used
source
Comrade.ScanType
struct Scan{T, I, S}

Composite type that holds information for a single scan of the telescope.

Fields

  • time: Scan time
  • index: Scan indices which are (scan index, data start index, data end index)
  • scan: Scan data usually a StructArray of a <:AbstractVisibilityDatum
source
Comrade.ScanTableType
struct ScanTable{O<:Union{Comrade.ArrayConfiguration, Comrade.Observation}, T, S}

Wraps EHTObservation in a table that separates the observation into scans. This implements the table interface. You can access scans by directly indexing into the table. This will create a view into the table, not a copy of the data.

Example

julia> st = scantable(obs)
 julia> st[begin] # grab first scan
-julia> st[end]   # grab last scan
source

Model Cache

VLBISkyModels.NFFTAlgMethod
NFFTAlg(obs::EHTObservation; kwargs...)

Create an algorithm object using the non-uniform Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.

The possible optional arguments are given in the NFFTAlg struct.

source
VLBISkyModels.NFFTAlgMethod
NFFTAlg(ac::ArrayConfiguration; kwargs...)

Create an algorithm object using the non-uniform Fourier transform object from the array configuration ac. This will extract the uv positions from the observation to allow for a more efficient FT cache.

The optional arguments are: padfac specifies how much to pad the image by, and m is an internal variable for NFFT.jl.

source
VLBISkyModels.DFTAlgMethod
DFTAlg(obs::EHTObservation)

Create an algorithm object using the direct Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.

source
VLBISkyModels.DFTAlgMethod
DFTAlg(ac::ArrayConfiguration)

Create an algorithm object using the direct Fourier transform object from the array configuration ac. This will extract the uv positions from the observation to allow for a more efficient FT cache.

source

Bayesian Tools

Posterior Constructions

HypercubeTransform.ascubeFunction
ascube(post::Posterior)

Construct a flattened version of the posterior where the parameters are transformed to live in (0, 1), i.e. the unit hypercube.

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = ascube(post)
+julia> st[end]   # grab last scan
source

Model Cache

VLBISkyModels.NFFTAlgMethod
NFFTAlg(obs::EHTObservation; kwargs...)

Create an algorithm object using the non-uniform Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.

The possible optional arguments are given in the NFFTAlg struct.

source
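A usage sketch, assuming dvis is an EHTObservation of complex visibilities (hypothetical name); the padfac keyword is documented below for the ArrayConfiguration method and is assumed to be forwarded here as well:

alg = NFFTAlg(dvis; padfac = 2)   # NFFT algorithm with uv points taken from the observation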
VLBISkyModels.NFFTAlgMethod
NFFTAlg(ac::ArrayConfiguration; kwargs...)

Create an algorithm object using the non-uniform Fourier transform object from the array configuration ac. This will extract the uv positions from the observation to allow for a more efficient FT cache.

The optional arguments are: padfac specifies how much to pad the image by, and m is an internal variable for NFFT.jl.

source
VLBISkyModels.DFTAlgMethod
DFTAlg(obs::EHTObservation)

Create an algorithm object using the direct Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.

source
VLBISkyModels.DFTAlgMethod
DFTAlg(ac::ArrayConfiguration)

Create an algorithm object using the direct Fourier transform object from the array configuration ac. This will extract the uv positions from the observation to allow for a more efficient FT cache.

source

Bayesian Tools

Posterior Constructions

HypercubeTransform.ascubeFunction
ascube(post::Posterior)

Construct a flattened version of the posterior where the parameters are transformed to live in (0, 1), i.e. the unit hypercube.

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = ascube(post)
 julia> x0 = prior_sample(tpost)
-julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical NestedSampling methods, i.e. ComradeNested. For the transformation to unconstrained space see asflat

source
HypercubeTransform.asflatFunction
asflat(post::Posterior)

Construct a flattened version of the posterior where the parameters are transformed to live in (-∞, ∞).

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = asflat(post)
+julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical NestedSampling methods, i.e. ComradeNested. For the transformation to unconstrained space see asflat

source
HypercubeTransform.asflatFunction
asflat(post::Posterior)

Construct a flattened version of the posterior where the parameters are transformed to live in (-∞, ∞).

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = asflat(post)
 julia> x0 = prior_sample(tpost)
-julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube

source
ParameterHandling.flattenFunction
flatten(post::Posterior)

Construct a flattened version of the posterior but do not transform to any space, i.e. use the support specified by the prior.

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = flatten(post)
+julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube

source
ParameterHandling.flattenFunction
flatten(post::Posterior)

Construct a flattened version of the posterior but do not transform to any space, i.e. use the support specified by the prior.

This returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the log-Jacobian terms of the transformation.

Example

julia> tpost = flatten(post)
 julia> x0 = prior_sample(tpost)
-julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube

source
Comrade.prior_sampleFunction
prior_sample([rng::AbstractRandom], post::Posterior, args...)

Samples from the prior distribution of the posterior. The args... are forwarded to the Base.rand method.

source
prior_sample([rng::AbstractRandom], post::Posterior)

Returns a single sample from the prior distribution.

source
Comrade.likelihoodFunction
likelihood(d::ConditionedLikelihood, μ)

Returns the likelihood of the model, with parameters μ. That is, we return the distribution of the data given the model parameters μ. This is an actual probability distribution.

source
Comrade.simulate_observationFunction
simulate_observation([rng::Random.AbstractRNG], post::Posterior, θ)

Create a simulated observation using the posterior and its data post using the parameter values θ. In Bayesian terminology this is a draw from the posterior predictive distribution.

source
Comrade.dataproductsFunction
dataproducts(d::RadioLikelihood)

Returns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.

source
dataproducts(d::Posterior)

Returns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.

source
Comrade.skymodelFunction
skymodel(post::RadioLikelihood, θ)

Returns the sky model or image of a posterior using the parameter values θ

source
skymodel(post::Posterior, θ)

Returns the sky model or image of a posterior using the parameter values θ

source
Comrade.instrumentmodelFunction
instrumentmodel(lklhd::RadioLikelihood, θ)

Returns the instrument model of the likelihood lklhd using the parameter values θ

source
instrumentmodel(post::Posterior, θ)

Returns the instrument model of a posterior using the parameter values θ

source
Comrade.vlbimodelFunction
vlbimodel(post::Posterior, θ)

Returns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ

source
vlbimodel(post::Posterior, θ)

Returns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ

source
StatsBase.sampleMethod
sample(post::Posterior, sampler::S, args...; init_params=nothing, kwargs...)

Sample a posterior post using the sampler. You can optionally pass the starting location of the sampler using init_params, otherwise a random draw from the prior will be used.

source
TransformVariables.transformFunction
transform(posterior::TransformedPosterior, x)

Transforms the value x from the transformed space (e.g. unit hypercube if using ascube) to parameter space which is usually encoded as a NamedTuple.

For the inverse transform see inverse

source
Comrade.MultiRadioLikelihoodType
MultiRadioLikelihood(lklhd1, lklhd2, ...)

Combines multiple likelihoods into one object that is useful for fitting multiple days/frequencies.

julia> lklhd1 = RadioLikelihood(dcphase1, dlcamp1)
+julia> logdensityof(tpost, x0)

Notes

This is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube

source
Comrade.prior_sampleFunction
prior_sample([rng::AbstractRandom], post::Posterior, args...)

Samples from the prior distribution of the posterior. The args... are forwarded to the Base.rand method.

source
prior_sample([rng::AbstractRandom], post::Posterior)

Returns a single sample from the prior distribution.

source
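A usage sketch, assuming post is a Posterior built from a RadioLikelihood and a prior (hypothetical name):

x0 = prior_sample(post)       # a single draw from the prior
xs = prior_sample(post, 10)   # extra arguments are forwarded to Base.rand; here 10 draws (assumed)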
Comrade.likelihoodFunction
likelihood(d::ConditionedLikelihood, μ)

Returns the likelihood of the model, with parameters μ. That is, we return the distribution of the data given the model parameters μ. This is an actual probability distribution.

source
Comrade.simulate_observationFunction
simulate_observation([rng::Random.AbstractRNG], post::Posterior, θ)

Create a simulated observation using the posterior and its data post using the parameter values θ. In Bayesian terminology this is a draw from the posterior predictive distribution.

source
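A usage sketch, assuming post is a Posterior and x is a NamedTuple of parameter values, e.g. from prior_sample (hypothetical names):

obs_sim = simulate_observation(post, x)   # synthetic data drawn from the posterior predictive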
Comrade.dataproductsFunction
dataproducts(d::RadioLikelihood)

Returns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.

source
dataproducts(d::Posterior)

Returns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.

source
Comrade.skymodelFunction
skymodel(post::RadioLikelihood, θ)

Returns the sky model or image of a posterior using the parameter values θ

source
skymodel(post::Posterior, θ)

Returns the sky model or image of a posterior using the parameter values θ

source
Comrade.instrumentmodelFunction
instrumentmodel(lklhd::RadioLikelihood, θ)

Returns the instrument model of the likelihood lklhd using the parameter values θ

source
instrumentmodel(post::Posterior, θ)

Returns the instrument model of a posterior using the parameter values θ

source
Comrade.vlbimodelFunction
vlbimodel(post::Posterior, θ)

Returns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ

source
vlbimodel(post::Posterior, θ)

Returns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ

source
StatsBase.sampleMethod
sample(post::Posterior, sampler::S, args...; init_params=nothing, kwargs...)

Sample a posterior post using the sampler. You can optionally pass the starting location of the sampler using init_params, otherwise a random draw from the prior will be used.

source
TransformVariables.transformFunction
transform(posterior::TransformedPosterior, x)

Transforms the value x from the transformed space (e.g. unit hypercube if using ascube) to parameter space which is usually encoded as a NamedTuple.

For the inverse transform see inverse

source
Comrade.MultiRadioLikelihoodType
MultiRadioLikelihood(lklhd1, lklhd2, ...)

Combines multiple likelihoods into one object that is useful for fitting multiple days/frequencies.

julia> lklhd1 = RadioLikelihood(dcphase1, dlcamp1)
 julia> lklhd2 = RadioLikelihood(dcphase2, dlcamp2)
-julia> MultiRadioLikelihood(lklhd1, lklhd2)
source
Comrade.PosteriorType
Posterior(lklhd, prior)

Creates a Posterior density that obeys DensityInterface. The lklhd object is expected to be a VLBI likelihood object; for instance, these can be created using RadioLikelihood. The prior is the prior probability distribution over the model parameters.

Notes

Since this function obeys DensityInterface you can evaluate it with

julia> ℓ = logdensityof(post)
-julia> ℓ(x)

or using the 2-argument version directly

julia> logdensityof(post, x)

where post::Posterior.

To generate random draws from the prior see the prior_sample function.

source
Comrade.TransformedPosteriorType
struct TransformedPosterior{P<:Posterior, T} <: Comrade.AbstractPosterior

A transformed version of a Posterior object. This is an internal type that an end user shouldn't have to directly construct. To construct a transformed posterior see the asflat, ascube, and flatten docstrings.

source
Comrade.RadioLikelihoodType
RadioLikelihood(skymodel, instrumentmodel, dataproducts::EHTObservation...;
+julia> MultiRadioLikelihood(lklhd1, lklhd2)
source
Comrade.PosteriorType
Posterior(lklhd, prior)

Creates a Posterior density that obeys DensityInterface. The lklhd object is expected to be a VLBI likelihood object; for instance, these can be created using RadioLikelihood. The prior is the prior probability distribution over the model parameters.

Notes

Since this function obeys DensityInterface you can evaluate it with

julia> ℓ = logdensityof(post)
+julia> ℓ(x)

or using the 2-argument version directly

julia> logdensityof(post, x)

where post::Posterior.

To generate random draws from the prior see the prior_sample function.

source
Comrade.TransformedPosteriorType
struct TransformedPosterior{P<:Posterior, T} <: Comrade.AbstractPosterior

A transformed version of a Posterior object. This is an internal type that an end user shouldn't have to directly construct. To construct a transformed posterior see the asflat, ascube, and flatten docstrings.

source
Comrade.RadioLikelihoodType
RadioLikelihood(skymodel, instrumentmodel, dataproducts::EHTObservation...;
                 skymeta=nothing,
                 instrumentmeta=nothing)

Creates a RadioLikelihood using the skymodel and its related metadata skymeta, together with the instrumentmodel and its metadata instrumentmeta. Each model is a function that converts the parameters θ into a Comrade AbstractModel, which can be used to compute visibilities; the metadata is a set of additional information the model uses when it is constructed.

Warning

The model itself must be a two argument function where the first argument is the set of model parameters and the second is a container that holds all the additional information needed to construct the model. An example of this is when the model needs some precomputed cache to define the model.

Example

dlcamp, dcphase = extract_table(obs, LogClosureAmplitude(), ClosurePhases())
 cache = create_cache(NFFTAlg(dlcamp), IntensityMap(zeros(128,128), μas2rad(100.0), μas2rad(100.0)))
@@ -98,7 +98,7 @@
 
 RadioLikelihood(skymodel, instrumentmodel, dataproducts::EHTObservation...;
                  skymeta=(;cache,),
-                 instrumentmeta=(;gcache))
source
RadioLikelihood(skymodel, dataproducts::EHTObservation...; skymeta=nothing)

Forms a radio likelihood from a set of data products using only a sky model. This intrinsically assumes that the instrument model is not required since it is perfect. This is useful when fitting closure quantities which are independent of the instrument.

If you want to form a likelihood from multiple arrays such as when fitting different wavelengths or days, you can combine them using MultiRadioLikelihood

Example

julia> RadioLikelihood(skymodel, dcphase, dlcamp)
source
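Putting the pieces together, a hypothetical closure-only setup might look like the following, where dlcamp, dcphase, skymodel, and prior are assumed to already be defined:

lklhd = RadioLikelihood(skymodel, dlcamp, dcphase)
post  = Posterior(lklhd, prior)
logdensityof(post, prior_sample(post))   # evaluate the log-posterior at a prior draw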
Comrade.IsFlatType
struct IsFlat

Specifies that the sampling algorithm usually expects an unconstrained transform

source
Comrade.IsCubeType
struct IsCube

Specifies that the sampling algorithm usually expects a hypercube transform

source

Sampler Tools

Comrade.samplertypeFunction
samplertype(::Type)

Sampler type specifies whether to use a unit hypercube or unconstrained transformation.

source
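A sketch of how a new (hypothetical) sampler would declare its expected transform:

struct MySampler end                                        # hypothetical sampler type
Comrade.samplertype(::Type{MySampler}) = Comrade.IsFlat()   # expects an unconstrained transform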

Misc

Comrade.station_tupleFunction
station_tuple(stations, default; reference=nothing, kwargs...)
+                 instrumentmeta=(;gcache))
source
RadioLikelihood(skymodel, dataproducts::EHTObservation...; skymeta=nothing)

Forms a radio likelihood from a set of data products using only a sky model. This intrinsically assumes that the instrument model is not required since it is perfect. This is useful when fitting closure quantities which are independent of the instrument.

If you want to form a likelihood from multiple arrays such as when fitting different wavelengths or days, you can combine them using MultiRadioLikelihood

Example

julia> RadioLikelihood(skymodel, dcphase, dlcamp)
source
Comrade.IsFlatType
struct IsFlat

Specifies that the sampling algorithm usually expects an unconstrained transform

source
Comrade.IsCubeType
struct IsCube

Specifies that the sampling algorithm usually expects a hypercube transform

source

Sampler Tools

Comrade.samplertypeFunction
samplertype(::Type)

Sampler type specifies whether to use a unit hypercube or unconstrained transformation.

source

Misc

Comrade.station_tupleFunction
station_tuple(stations, default; reference=nothing, kwargs...)
 station_tuple(obs::EHTObservation, default; reference=nothing, kwargs...)

Convenience function that will construct a NamedTuple of objects whose names are the stations in the observation obs or explicitly given in the argument stations. The NamedTuple will be filled with default if no kwargs are defined; otherwise each kwarg (key, value) pair denotes a station and its value.

Optionally the user can specify a reference station that will be dropped from the tuple. This is useful for selecting a reference station for gain phases.

Examples

julia> stations = (:AA, :AP, :LM, :PV)
 julia> station_tuple(stations, ScanSeg())
 (AA = ScanSeg(), AP = ScanSeg(), LM = ScanSeg(), PV = ScanSeg())
@@ -107,4 +107,4 @@
 julia> station_tuple(stations, ScanSeg(); AA = FixedSeg(1.0), PV = TrackSeg())
 (AA = FixedSeg(1.0), AP = ScanSeg(), LM = ScanSeg(), PV = TrackSeg())
 julia> station_tuple(stations, Normal(0.0, 0.1); reference=:AA, LM = Normal(0.0, 1.0))
-(AP = Normal(0.0, 0.1), LM = Normal(0.0, 1.0), PV = Normal(0.0, 0.1))
source
Comrade.dirty_imageFunction
dirty_image(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T

Computes the dirty image of the complex visibilities assuming a field of view of fov and number of pixels npix using the complex visibilities found in the observation obs.

The dirty image is the inverse Fourier transform of the measured visibilities, assuming every unsampled visibility is zero.

source
Comrade.dirty_beamFunction
dirty_beam(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T

Computes the dirty beam of the complex visibilities assuming a field of view of fov and number of pixels npix using baseline coverage found in obs.

The dirty beam is the inverse Fourier transform of the (u,v) coverage, assuming every sampled visibility is unity and everything else is zero.

source
Comrade.beamsizeFunction
beamsize(ac::ArrayConfiguration)

Calculate the approximate beam size of the array ac as the inverse of the longest baseline distance.

source
beamsize(obs::EHTObservation)

Calculate the approximate beam size of the observation obs as the inverse of the longest baseline distance.

source
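A short sketch of the three helpers above, assuming dvis is an EHTObservation of complex visibilities:

fov   = μas2rad(200.0)
dimg  = dirty_image(fov, 256, dvis)
dbeam = dirty_beam(fov, 256, dvis)
beamsize(dvis)                      # ≈ inverse of the longest baseline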

Internal (Not Public API)

Comrade.extract_FRsFunction
extract_FRs

Extracts the feed rotation Jones matrices (returned as a JonesPair) from an EHT observation obs.

Warning

eht-imaging can sometimes pre-rotate the coherency matrices. As a result the field rotation can sometimes be applied twice. To compensate for this we have added an ehtim_fr_convention option which will fix this.

source
ComradeBase._visibilities!Function
_visibilities!(model::AbstractModel, args...)

Internal method used for trait dispatch and unpacking of args arguments in visibilities!

Warn

Not part of the public API so it may change at any moment.

source
ComradeBase._visibilitiesFunction
_visibilities(model::AbstractModel, args...)

Internal method used for trait dispatch and unpacking of args arguments in visibilities

Warn

Not part of the public API so it may change at any moment.

source

eht-imaging interface (Internal)

Comrade.extract_ampFunction
extract_amp(obs; kwargs...)

Extracts the visibility amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_cphaseFunction
extract_cphase(obs; kwargs...)

Extracts the closure phases from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_lcampFunction
extract_lcamp(obs; kwargs...)

Extracts the log-closure amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_visFunction
extract_vis(obs; kwargs...)

Extracts the Stokes I complex visibilities from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_coherencyFunction
extract_coherency(obs; kwargs...)

Extracts the full coherency matrix from an observation. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
+(AP = Normal(0.0, 0.1), LM = Normal(0.0, 1.0), PV = Normal(0.0, 0.1))
source
Comrade.dirty_imageFunction
dirty_image(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T

Computes the dirty image of the complex visibilities assuming a field of view of fov and number of pixels npix using the complex visibilities found in the observation obs.

The dirty image is the inverse Fourier transform of the measured visibilities, assuming every unsampled visibility is zero.

source
Comrade.dirty_beamFunction
dirty_beam(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T

Computes the dirty beam of the complex visibilities assuming a field of view of fov and number of pixels npix using baseline coverage found in obs.

The dirty beam is the inverse Fourier transform of the (u,v) coverage, assuming every sampled visibility is unity and everything else is zero.

source
Comrade.beamsizeFunction
beamsize(ac::ArrayConfiguration)

Calculate the approximate beam size of the array ac as the inverse of the longest baseline distance.

source
beamsize(obs::EHTObservation)

Calculate the approximate beam size of the observation obs as the inverse of the longest baseline distance.

source

Internal (Not Public API)

Comrade.extract_FRsFunction
extract_FRs

Extracts the feed rotation Jones matrices (returned as a JonesPair) from an EHT observation obs.

Warning

eht-imaging can sometimes pre-rotate the coherency matrices. As a result the field rotation can sometimes be applied twice. To compensate for this we have added an ehtim_fr_convention option which will fix this.

source
ComradeBase._visibilities!Function
_visibilities!(model::AbstractModel, args...)

Internal method used for trait dispatch and unpacking of args arguments in visibilities!

Warn

Not part of the public API so it may change at any moment.

source
ComradeBase._visibilitiesFunction
_visibilities(model::AbstractModel, args...)

Internal method used for trait dispatch and unpacking of args arguments in visibilities

Warn

Not part of the public API so it may change at any moment.

source

eht-imaging interface (Internal)

Comrade.extract_ampFunction
extract_amp(obs; kwargs...)

Extracts the visibility amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_cphaseFunction
extract_cphase(obs; kwargs...)

Extracts the closure phases from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_lcampFunction
extract_lcamp(obs; kwargs...)

Extracts the log-closure amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_visFunction
extract_vis(obs; kwargs...)

Extracts the Stokes I complex visibilities from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
Comrade.extract_coherencyFunction
extract_coherency(obs; kwargs...)

Extracts the full coherency matrix from an observation. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.

source
diff --git a/dev/base_api/index.html b/dev/base_api/index.html index 7f17d407..38aa2aa3 100644 --- a/dev/base_api/index.html +++ b/dev/base_api/index.html @@ -1,22 +1,22 @@ -ComradeBase API · Comrade.jl

ComradeBase API

Contents

Index

Model API

ComradeBase.fluxFunction
flux(im::IntensityMap)
-flux(img::StokesIntensityMap)

Computes the flux of an intensity map

source
ComradeBase.visibilityFunction
visibility(d::EHTVisibilityDatum)

Return the complex visibility of the visibility datum

source
visibility(mimg, p)

Computes the complex visibility of model m at coordinates p. p corresponds to the coordinates of the model. These need to have the properties U, V and sometimes Ti for time and Fr for frequency.

Notes

If you want to compute the visibilities at a large number of positions consider using the visibilities function.

source
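A sketch of a single pointwise evaluation, assuming m is a Comrade model; the coordinates only need the U and V properties (and optionally Ti and Fr):

p = (U = 1e9, V = 2e9)   # a single (u, v) point in units of λ
visibility(m, p)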
ComradeBase.visibilitiesFunction
visibilities(model::AbstractModel, args...)

Computes the complex visibilities at the locations given by args...

source
ComradeBase.visibilities!Function
visibilities!(vis::AbstractArray, model::AbstractModel, args...)

Computes the complex visibilities vis in place at the locations given by args...

source
ComradeBase.intensitymap!Function
intensitymap!(buffer::AbstractDimArray, model::AbstractModel)

Computes the intensity map of model by modifying the buffer

source
ComradeBase.IntensityMapType
IntensityMap(data::AbstractArray, dims::NamedTuple)
+ComradeBase API · Comrade.jl

ComradeBase API

Contents

Index

Model API

ComradeBase.fluxFunction
flux(im::IntensityMap)
+flux(img::StokesIntensityMap)

Computes the flux of an intensity map

source
ComradeBase.visibilityFunction
visibility(mimg, p)

Computes the complex visibility of model m at coordinates p. p corresponds to the coordinates of the model. These need to have the properties U, V and sometimes Ti for time and Fr for frequency.

Notes

If you want to compute the visibilities at a large number of positions consider using the visibilities function.

source
visibility(d::EHTVisibilityDatum)

Return the complex visibility of the visibility datum

source
ComradeBase.visibilitiesFunction
visibilities(model::AbstractModel, args...)

Computes the complex visibilities at the locations given by args...

source
ComradeBase.visibilities!Function
visibilities!(vis::AbstractArray, model::AbstractModel, args...)

Computes the complex visibilities vis in place at the locations given by args...

source
ComradeBase.intensitymap!Function
intensitymap!(buffer::AbstractDimArray, model::AbstractModel)

Computes the intensity map of model by modifying the buffer

source
ComradeBase.IntensityMapType
IntensityMap(data::AbstractArray, dims::NamedTuple)
 IntensityMap(data::AbstractArray, grid::AbstractDims)

Constructs an intensitymap using the image dimensions given by dims. This returns a KeyedArray with keys given by an ImageDimensions object.

dims = (X=range(-10.0, 10.0, length=100), Y = range(-10.0, 10.0, length=100),
         T = [0.1, 0.2, 0.5, 0.9, 1.0], F = [230e9, 345e9]
         )
-imgk = IntensityMap(rand(100,100,5,1), dims)
source
ComradeBase.amplitudeMethod
amplitude(model, p)

Computes the visibility amplitude of model m at the coordinate p. The coordinate p is expected to have the properties U, V, and sometimes Ti and Fr.

If you want to compute the amplitudes at a large number of positions consider using the amplitudes function.

source
ComradeBase.amplitudesFunction
amplitudes(m::AbstractModel, u::AbstractArray, v::AbstractArray)

Computes the visibility amplitudes of the model m at the coordinates p. The coordinates p are expected to have the properties U, V, and sometimes Ti and Fr.

source
ComradeBase.bispectrumFunction
bispectrum(d1::T, d2::T, d3::T) where {T<:EHTVisibilityDatum}

Finds the bispectrum of three visibilities. We will assume these form closed triangles, i.e. the phase of the bispectrum is a closure phase.

source
bispectrum(model, p1, p2, p3)

Computes the complex bispectrum of model m at the uv-triangle p1 -> p2 -> p3

If you want to compute the bispectrum over a number of triangles consider using the bispectra function.

source
ComradeBase.bispectraFunction
bispectra(m, p1, p2, p3)

Computes the bispectra of the model m at the triangles p1, p2, p3, where pi are coordinates.

source
ComradeBase.amplitudeMethod
amplitude(model, p)

Computes the visibility amplitude of model m at the coordinate p. The coordinate p is expected to have the properties U, V, and sometimes Ti and Fr.

If you want to compute the amplitudes at a large number of positions consider using the amplitudes function.

source
ComradeBase.amplitudesFunction
amplitudes(m::AbstractModel, u::AbstractArray, v::AbstractArray)

Computes the visibility amplitudes of the model m at the coordinates p. The coordinates p are expected to have the properties U, V, and sometimes Ti and Fr.

source
ComradeBase.bispectrumFunction
bispectrum(model, p1, p2, p3)

Computes the complex bispectrum of model m at the uv-triangle p1 -> p2 -> p3

If you want to compute the bispectrum over a number of triangles consider using the bispectra function.

source
bispectrum(d1::T, d2::T, d3::T) where {T<:EHTVisibilityDatum}

Finds the bispectrum of three visibilities. We will assume these form closed triangles, i.e. the phase of the bispectrum is a closure phase.

source
ComradeBase.bispectraFunction
bispectra(m, p1, p2, p3)

Computes the bispectra of the model m at the triangles p1, p2, p3, where pi are coordinates.

source
ComradeBase.closure_phaseFunction
closure_phase(model, p1, p2, p3, p4)

Computes the closure phase of model m at the uv-triangle u1,v1 -> u2,v2 -> u3,v3

If you want to compute closure phases over a number of triangles consider using the closure_phases function.

source
closure_phase(D1::EHTVisibilityDatum,
               D2::EHTVisibilityDatum,
               D3::EHTVisibilityDatum
-              )

Computes the closure phase of the three visibility datums.

Notes

We currently use the high SNR Gaussian error approximation for the closure phase. In the future we may use the moment matching from Monte Carlo sampling.

source
closure_phase(model, p1, p2, p3, p4)

Computes the closure phase of model m at the uv-triangle u1,v1 -> u2,v2 -> u3,v3

If you want to compute closure phases over a number of triangles consider using the closure_phases function.

source
ComradeBase.closure_phasesFunction
closure_phases(m::AbstractModel, ac::ClosureConfig)

Computes the closure phases of the model m using the array configuration ac.

Notes

This is faster than the closure_phases(m, u1, v1, ...) method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]

source
closure_phases(vis::AbstractArray, ac::ArrayConfiguration)

Compute the closure phases for a set of visibilities and an array configuration

Notes

This uses a closure design matrix for the computation.

source
closure_phases(m,
+              )

Computes the closure phase of the three visibility datums.

Notes

We currently use the high SNR Gaussian error approximation for the closure phase. In the future we may use the moment matching from Monte Carlo sampling.

source
ComradeBase.closure_phasesFunction
closure_phases(m,
                p1::AbstractArray
                p2::AbstractArray
                p3::AbstractArray
-               )

Computes the closure phases of the model m at the triangles p1, p2, p3, where pi are coordinates.

source
ComradeBase.logclosure_amplitudeFunction
logclosure_amplitude(model, p1, p2, p3, p4)

Computes the log-closure amplitude of model m at the uv-quadrangle u1,v1 -> u2,v2 -> u3,v3 -> u4,v4 using the formula

\[C = \log\left|\frac{V(u1,v1)V(u2,v2)}{V(u3,v3)V(u4,v4)}\right|\]

If you want to compute log closure amplitudes over a number of quadrangles consider using the logclosure_amplitudes function.

source
ComradeBase.logclosure_amplitudesFunction
logclosure_amplitudes(m::AbstractModel, ac::ClosureConfig)

Computes the log closure amplitudes of the model m using the array configuration ac.

Notes

This is faster than the logclosure_amplitudes(m, u1, v1, ...) method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]

source
logclosure_amplitudes(vis::AbstractArray, ac::ArrayConfiguration)

Compute the log-closure amplitudes for a set of visibilities and an array configuration

Notes

This uses a closure design matrix for the computation.

source
logclosure_amplitudes(m::AbstractModel,
+               )

Computes the closure phases of the model m at the triangles p1, p2, p3, where pi are coordinates.

source
closure_phases(m::AbstractModel, ac::ClosureConfig)

Computes the closure phases of the model m using the array configuration ac.

Notes

This is faster than the closure_phases(m, u1, v1, ...) method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]

source
closure_phases(vis::AbstractArray, ac::ArrayConfiguration)

Compute the closure phases for a set of visibilities and an array configuration

Notes

This uses a closure design matrix for the computation.

source
ComradeBase.logclosure_amplitudeFunction
logclosure_amplitude(model, p1, p2, p3, p4)

Computes the log-closure amplitude of model m at the uv-quadrangle u1,v1 -> u2,v2 -> u3,v3 -> u4,v4 using the formula

\[C = \log\left|\frac{V(u1,v1)V(u2,v2)}{V(u3,v3)V(u4,v4)}\right|\]

If you want to compute log closure amplitudes over a number of quadrangles consider using the logclosure_amplitudes function.

source
ComradeBase.logclosure_amplitudesFunction
logclosure_amplitudes(m::AbstractModel,
                       p1,
                       p2,
                       p3,
                       p4
-                     )

Computes the log closure amplitudes of the model m at the quadrangles p1, p2, p3, p4.

source

Model Interface

ComradeBase.AbstractModelType
AbstractModel

The Comrade abstract model type. To instantiate your own model type you should subtype from this model. Additionally you need to implement the following methods to satisfy the interface:

Mandatory Methods

  • isprimitive: defines whether a model is standalone or is defined in terms of other models. If the model is primitive then this should return IsPrimitive(); otherwise it returns NotPrimitive()
  • visanalytic: defines whether the model visibilities can be computed analytically. If yes then this should return IsAnalytic() and the user must define visibility_point. If not analytic then visanalytic should return NotAnalytic().
  • imanalytic: defines whether the model intensities can be computed pointwise. If yes then this should return IsAnalytic() and the user must define intensity_point. If not analytic then imanalytic should return NotAnalytic().
  • radialextent: Provides an estimate of the radial extent of the model in the image domain. This is used for estimating the size of the image, and for plotting.
  • flux: Returns the total flux of the model.
  • intensity_point: Defines how to compute model intensities pointwise. Note this must be defined if imanalytic(::Type{YourModel})==IsAnalytic().
  • visibility_point: Defines how to compute model visibilities pointwise. Note this must be defined if visanalytic(::Type{YourModel})==IsAnalytic().

Optional Methods:

  • ispolarized: Specifies whether a model is intrinsically polarized (returns IsPolarized()) or is not (returns NotPolarized()); by default a model is NotPolarized()
  • visibilities_analytic: Vectorized version of visibility_point for models where visanalytic returns IsAnalytic()
  • visibilities_numeric: Vectorized version of visibility_point for models where visanalytic returns NotAnalytic(); typically these are numerical FTs
  • intensitymap_analytic: Computes the entire image for models where imanalytic returns IsAnalytic()
  • intensitymap_numeric: Computes the entire image for models where imanalytic returns NotAnalytic()
  • intensitymap_analytic!: Inplace version of intensitymap
  • intensitymap_numeric!: Inplace version of intensitymap
source
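As a sketch of this interface, a hypothetical point-source model that is analytic in the Fourier domain (but not in the image domain) could be declared as follows:

using ComradeBase

# Hypothetical example model: a point source with a given total flux
struct MyPoint{T} <: ComradeBase.AbstractModel
    flux::T
end

ComradeBase.isprimitive(::Type{<:MyPoint}) = ComradeBase.IsPrimitive()
ComradeBase.visanalytic(::Type{<:MyPoint}) = ComradeBase.IsAnalytic()
ComradeBase.imanalytic(::Type{<:MyPoint})  = ComradeBase.NotAnalytic()
ComradeBase.radialextent(::MyPoint) = 1.0
ComradeBase.flux(m::MyPoint) = m.flux

# A point source at the origin has a constant (flat) Fourier transform
ComradeBase.visibility_point(m::MyPoint, p) = complex(m.flux)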
ComradeBase.isprimitiveFunction
isprimitive(::Type)

Dispatch function that specifies whether a type is a primitive Comrade model. This function is used for dispatch purposes when composing models.

Notes

If a user is specifying their own primitive model outside of Comrade they need to specify if it is primitive

struct MyPrimitiveModel end
+                     )

Computes the log closure amplitudes of the model m at the quadrangles p1, p2, p3, p4.

source
logclosure_amplitudes(m::AbstractModel, ac::ClosureConfig)

Computes the log closure amplitudes of the model m using the array configuration ac.

Notes

This is faster than the logclosure_amplitudes(m, u1, v1, ...) method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]

source
logclosure_amplitudes(vis::AbstractArray, ac::ArrayConfiguration)

Compute the log-closure amplitudes for a set of visibilities and an array configuration

Notes

This uses a closure design matrix for the computation.

source

Model Interface

ComradeBase.AbstractModelType
AbstractModel

The Comrade abstract model type. To instantiate your own model type you should subtype from this model. Additionally you need to implement the following methods to satisfy the interface:

Mandatory Methods

  • isprimitive: defines whether a model is standalone or is defined in terms of other models. If the model is primitive then this should return IsPrimitive(); otherwise it returns NotPrimitive()
  • visanalytic: defines whether the model visibilities can be computed analytically. If yes then this should return IsAnalytic() and the user must define visibility_point. If not analytic then visanalytic should return NotAnalytic().
  • imanalytic: defines whether the model intensities can be computed pointwise. If yes then this should return IsAnalytic() and the user must define intensity_point. If not analytic then imanalytic should return NotAnalytic().
  • radialextent: Provides an estimate of the radial extent of the model in the image domain. This is used for estimating the size of the image, and for plotting.
  • flux: Returns the total flux of the model.
  • intensity_point: Defines how to compute model intensities pointwise. Note this must be defined if imanalytic(::Type{YourModel})==IsAnalytic().
  • visibility_point: Defines how to compute model visibilities pointwise. Note this must be defined if visanalytic(::Type{YourModel})==IsAnalytic().

Optional Methods:

  • ispolarized: Specifies whether a model is intrinsically polarized (returns IsPolarized()) or is not (returns NotPolarized()); by default a model is NotPolarized()
  • visibilities_analytic: Vectorized version of visibility_point for models where visanalytic returns IsAnalytic()
  • visibilities_numeric: Vectorized version of visibility_point for models where visanalytic returns NotAnalytic(); typically these are numerical FTs
  • intensitymap_analytic: Computes the entire image for models where imanalytic returns IsAnalytic()
  • intensitymap_numeric: Computes the entire image for models where imanalytic returns NotAnalytic()
  • intensitymap_analytic!: Inplace version of intensitymap
  • intensitymap_numeric!: Inplace version of intensitymap
source
ComradeBase.isprimitiveFunction
isprimitive(::Type)

Dispatch function that specifies whether a type is a primitive Comrade model. This function is used for dispatch purposes when composing models.

Notes

If a user is specifying their own primitive model outside of Comrade they need to specify if it is primitive

struct MyPrimitiveModel end
ComradeBase.isprimitive(::Type{MyPrimitiveModel}) = ComradeBase.IsPrimitive()
source
ComradeBase.visanalyticFunction
visanalytic(::Type{<:AbstractModel})

Determines whether the model is pointwise analytic in the Fourier domain, i.e. we can evaluate its Fourier transform at an arbitrary point.

If IsAnalytic() then it will try to call visibility_point to calculate the complex visibilities. Otherwise it falls back to using the FFT, which works for all models that can compute an image.

source
ComradeBase.imanalyticFunction
imanalytic(::Type{<:AbstractModel})

Determines whether the model is pointwise analytic in the image domain, i.e. we can evaluate its intensity at an arbitrary point.

If IsAnalytic() then it will try to call intensity_point to calculate the intensity.

source
ComradeBase.radialextentFunction
radialextent(model::AbstractModel)

Provides an estimate of the radial size/extent of the model. This is used internally to estimate the image size when plotting and when using modelimage.

source
ComradeBase.PrimitiveTraitType
abstract type PrimitiveTrait

This trait specifies whether the model is a primitive

Notes

This will likely turn into a trait in the future so people can inject their models into Comrade more easily.

source
ComradeBase.DensityAnalyticType
DensityAnalytic

Internal type for specifying the nature of the model functions. Whether they can be easily evaluated pointwise analytic. This is an internal type that may change.

source
ComradeBase.IsAnalyticType
struct IsAnalytic <: ComradeBase.DensityAnalytic

Defines a trait that states that a model is analytic. This is usually used with an abstract model, where we use it to specify whether a model has an analytic Fourier transform and/or image.

source
ComradeBase.NotAnalyticType
struct NotAnalytic <: ComradeBase.DensityAnalytic

Defines a trait that states that a model is not analytic. This is usually used with an abstract model, where we use it to specify that a model does not have an easy analytic Fourier transform and/or intensity function.

source
ComradeBase.visibility_pointFunction
visibility_point(model::AbstractModel, p)

Function that computes the pointwise visibility. This must be implemented in the model interface if visanalytic(::Type{MyModel}) == IsAnalytic()

source
ComradeBase.visibilities_analyticFunction
visibilities_analytic(model, u, v, time, freq)

Computes the visibilities of a model using the analytic visibility expression given by visibility_point.

source
ComradeBase.visibilities_analytic!Function
visibilities_analytic!(vis, model, u, v, time, freq)

Computes the visibilities of a model in-place, using the analytic visibility expression given by visibility_point.

source
ComradeBase.visibilities_numericFunction
visibilities_numeric(model, u, v, time, freq)

Computes the visibilities of a model using a numerical Fourier transform. Note that none of these are implemented in ComradeBase. For implementations please see Comrade.

source
ComradeBase.visibilities_numeric!Function
visibilities_numeric!(vis, model, u, v, time, freq)

Computes the visibilities of a model in-place using a numerical Fourier transform. Note that none of these are implemented in ComradeBase. For implementations please see Comrade.

source
ComradeBase.intensity_pointFunction
intensity_point(model::AbstractModel, p)

Function that computes the pointwise intensity if the model has the IsAnalytic() trait in the image domain. Otherwise it will construct the image in the visibility domain and invert it.

source
ComradeBase.intensitymap_numericFunction
intensitymap_numeric(m::AbstractModel, p::AbstractDims)

Computes the IntensityMap of a model m at the image positions p using a numerical method. This has to be specified uniquely for every model m if imanalytic(typeof(m)) === NotAnalytic(). See Comrade.jl for example implementations.

source
ComradeBase.intensitymap_numeric!Function
intensitymap_numeric!(img::IntensityMap, m::AbstractModel)
 intensitymap_numeric!(img::StokesIntensityMap, m::AbstractModel)

Updates the img using the model m using a numerical method. This has to be specified uniquely for every model m if imanalytic(typeof(m)) === NotAnalytic(). See Comrade.jl for example implementations.

source

Image Types

ComradeBase.IntensityMapMethod
IntensityMap(data::AbstractArray, dims::NamedTuple)
@@ -27,4 +27,4 @@
 imagepixels(img::IntensityMapTypes)

Returns a abstract spatial dimension with the image pixels locations X and Y.

source
ComradeBase.GriddedKeysType
struct GriddedKeys{N, G, Hd<:ComradeBase.AbstractHeader, T} <: ComradeBase.AbstractDims{N, T}

This struct holds the dimensions that the EHT expects. The first type parameter N defines the names of each dimension. These names are usually one of
  • (:X, :Y, :T, :F)
  • (:X, :Y, :F, :T)
  • (:X, :Y) (spatial only)
where :X, :Y are the RA and DEC spatial dimensions respectively, :T is the time direction and :F is the frequency direction.

Fieldnames

  • dims

  • header

Notes

Warning it is rare you need to access this constructor directly. Instead use the direct IntensityMap function.

source
ComradeBase.axisdimsFunction
axisdims(img::IntensityMap)

Returns the keys of the IntensityMap as the actual internal AbstractDims object.

source
ComradeBase.stokesFunction
stokes(m::AbstractPolarizedModel, p::Symbol)

Extract the specific stokes component p from the polarized model m

source
ComradeBase.imagegridFunction
imagegrid(k::IntensityMap)

Returns the grid that the IntensityMap is defined on. Note that this is non-allocating since it lazily computes the grid. The grid is an example of a KeyedArray and works similarly. This is useful for broadcasting a model across an arbitrary grid.

source
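A sketch of the broadcasting pattern mentioned above, assuming img is an IntensityMap and m is a model that is pointwise analytic in the image domain:

g = imagegrid(img)                        # lazy grid of image coordinates
I = map(p -> intensity_point(m, p), g)    # evaluate the model on each grid point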
ComradeBase.fieldofviewFunction
fieldofview(img::IntensityMap)
 fieldofview(img::IntensityMapTypes)

Returns a named tuple with the field of view of the image.

source
ComradeBase.pixelsizesFunction
pixelsizes(img::IntensityMap)
 pixelsizes(img::IntensityMapTypes)

Returns a named tuple with the spatial pixel sizes of the image.

source
ComradeBase.phasecenterFunction
phasecenter(img::IntensityMap)
-phasecenter(img::StokesIntensitymap)

Computes the phase center of an intensity map. Note this is the pixel that is in the middle of the image.

source
ComradeBase.centroidFunction
centroid(im::AbstractIntensityMap)

Computes the image centroid aka the center of light of the image.

For polarized maps we return the centroid for Stokes I only.

source
ComradeBase.second_momentFunction
second_moment(im::AbstractIntensityMap; center=true)

Computes the image second moment tensor of the image. By default we return the second cumulant, i.e. the centered second moment, which is controlled by the center argument.

For polarized maps we return the second moment for Stokes I only.

source
second_moment(im::AbstractIntensityMap; center=true)

Computes the image second moment tensor of the image. By default we return the second cumulant, i.e. the centered second moment, which is controlled by the center argument.

source
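Taken together, the image summary functions above can be used like this (a sketch, assuming img is an IntensityMap):

fieldofview(img)     # total spatial extent (X, Y)
pixelsizes(img)      # spatial pixel sizes
flux(img)            # total flux
centroid(img)        # center of light
second_moment(img)   # centered second-moment tensor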
ComradeBase.headerFunction
header(g::AbstractDims)

Returns the header information of the dimensions g

source
header(img::IntensityMap)

Retrieves the header of an IntensityMap

source
ComradeBase.MinimalHeaderType
MinimalHeader{T}

A minimal header type for ancillary image information.

Fields

  • source: Common source name
  • ra: Right ascension of the image in degrees (J2000)
  • dec: Declination of the image in degrees (J2000)
  • mjd: Modified Julian Date in days
  • frequency: Frequency of the image in Hz
source
ComradeBase.loadFunction
ComradeBase.load(fitsfile::String, IntensityMap)

This loads in a FITS file in a way that is robust to the various imaging algorithms in the EHT, i.e. it works with clean, smili, and eht-imaging. The function returns a tuple with an intensity map and a second named tuple with ancillary information about the image, such as the source name, location, mjd, and radio frequency.

source
ComradeBase.saveFunction
ComradeBase.save(file::String, img::IntensityMap, obs)

Saves an image to a fits file. You can optionally pass an EHTObservation so that ancillary information will be added.

source
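A sketch of a load/save round trip; the file names are hypothetical and obs is an (optional) EHTObservation used to fill in ancillary information:

img, meta = ComradeBase.load("m87.fits", IntensityMap)
ComradeBase.save("m87_processed.fits", img, obs)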

Polarization

ComradeBase.AbstractPolarizedModelType
abstract type AbstractPolarizedModel <: ComradeBase.AbstractModel

Type that classifies a model as being intrinsically polarized. This means that any call to visibility must return a StokesParams to denote the full Stokes polarization of the model.

source
  • 1Blackburn L., et al "Closure Statistics in Interferometric Data" ApJ 2020
+phasecenter(img::StokesIntensitymap)

Computes the phase center of an intensity map. Note this is the pixel that is in the middle of the image.

source
ComradeBase.centroidFunction
centroid(im::AbstractIntensityMap)

Computes the image centroid aka the center of light of the image.

For polarized maps we return the centroid for Stokes I only.

source
ComradeBase.second_momentFunction
second_moment(im::AbstractIntensityMap; center=true)

Computes the image second moment tensor of the image. By default we return the second cumulant, i.e. the centered second moment, which is controlled by the center argument.

For polarized maps we return the second moment for Stokes I only.

source
second_moment(im::AbstractIntensityMap; center=true)

Computes the image second moment tensor of the image. By default we return the second cumulant, i.e. the centered second moment, which is controlled by the center argument.

source
ComradeBase.headerFunction
header(g::AbstractDims)

Returns the header information of the dimensions g

source
header(img::IntensityMap)

Retrieves the header of an IntensityMap

source
ComradeBase.MinimalHeaderType
MinimalHeader{T}

A minimal header type for ancillary image information.

Fields

  • source: Common source name
  • ra: Right ascension of the image in degrees (J2000)
  • dec: Declination of the image in degrees (J2000)
  • mjd: Modified Julian Date in days
  • frequency: Frequency of the image in Hz
source
ComradeBase.loadFunction
ComradeBase.load(fitsfile::String, IntensityMap)

This loads in a FITS file in a way that is robust to the various imaging algorithms in the EHT, i.e. it works with clean, smili, and eht-imaging. The function returns a tuple with an intensity map and a second named tuple with ancillary information about the image, such as the source name, location, mjd, and radio frequency.

source
ComradeBase.saveFunction
ComradeBase.save(file::String, img::IntensityMap, obs)

Saves an image to a fits file. You can optionally pass an EHTObservation so that ancillary information will be added.

source

Polarization

ComradeBase.AbstractPolarizedModelType
abstract type AbstractPolarizedModel <: ComradeBase.AbstractModel

Type that classifies a model as being intrinsically polarized. This means that any call to visibility must return a StokesParams to denote the full Stokes polarization of the model.

source
  • 1Blackburn L., et al "Closure Statistics in Interferometric Data" ApJ 2020
diff --git a/dev/benchmarks/index.html b/dev/benchmarks/index.html index 2e47f25f..39015d1f 100644 --- a/dev/benchmarks/index.html +++ b/dev/benchmarks/index.html @@ -182,4 +182,4 @@ @benchmark fobj($pinit) # Now we benchmark the gradient -@benchmark gfobj($pinit)
+@benchmark gfobj($pinit)
diff --git a/dev/conventions/index.html b/dev/conventions/index.html index b21c7809..22780e78 100644 --- a/dev/conventions/index.html +++ b/dev/conventions/index.html @@ -29,4 +29,4 @@ \begin{pmatrix} \tilde{I} + \tilde{Q} & \tilde{U} + i\tilde{V}\\ \tilde{U} - i\tilde{V} & \tilde{I} - \tilde{Q} - \end{pmatrix}.\]

where e.g., $\left<XY^*\right> = 2\left<v_{pX}v^*_{pY}\right>$.

+ \end{pmatrix}.\]

where e.g., $\left<XY^*\right> = 2\left<v_{pX}v^*_{pY}\right>$.

diff --git a/dev/examples/data/index.html b/dev/examples/data/index.html index edf433cb..0d9417c5 100644 --- a/dev/examples/data/index.html +++ b/dev/examples/data/index.html @@ -21,4 +21,4 @@ pcp = plot(cphase) plc = plot(lcamp) -plot(pv, pa, pcp, plc; layout=l)
<< @example-block not executed in draft mode >>

And also the coherency matrices

plot(coh)
<< @example-block not executed in draft mode >>

This page was generated using Literate.jl.

+plot(pv, pa, pcp, plc; layout=l)
<< @example-block not executed in draft mode >>

And also the coherency matrices

plot(coh)
<< @example-block not executed in draft mode >>

This page was generated using Literate.jl.

diff --git a/dev/examples/geometric_modeling/index.html b/dev/examples/geometric_modeling/index.html index 17fe2d9a..3c8ea5e4 100644 --- a/dev/examples/geometric_modeling/index.html +++ b/dev/examples/geometric_modeling/index.html @@ -47,18 +47,19 @@ prob = Optimization.OptimizationProblem(f, randn(rng, ndim), nothing, lb=fill(-5.0, ndim), ub=fill(5.0, ndim))
<< @example-block not executed in draft mode >>

Now we solve for our optimal image.

sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(); maxiters=50_000);
 nothing #hide
<< @example-block not executed in draft mode >>

The sol vector is in the transformed space, so first we need to transform back to parameter space so that we can interpret the solution.

xopt = transform(fpost, sol)
<< @example-block not executed in draft mode >>

Given this we can now plot the optimal image or the maximum a posteriori (MAP) image.

import CairoMakie as CM
 g = imagepixels(μas2rad(200.0), μas2rad(200.0), 256, 256)
-fig, ax, plt = CM.image(g, model(xopt); axis=(xreversed=true, aspect=1, xlabel="RA (μas)", ylabel="Dec (μas)"), figure=(;resolution=(650,500),) ,colormap=:afmhot)
<< @example-block not executed in draft mode >>

Quantifying the Uncertainty of the Reconstruction

While finding the optimal image is often helpful, in science, the most important thing is to quantify the certainty of our inferences. This is the goal of Comrade. In the language of Bayesian statistics, we want to find a representation of the posterior of possible image reconstructions given our choice of model and the data.

Comrade provides several sampling and other posterior approximation tools. To see the list, please see the Libraries section of the docs. For this example, we will be using AdvancedHMC.jl, which uses an adaptive Hamiltonian Monte Carlo sampler called NUTS to approximate the posterior. Most of Comrade's external libraries follow a similar interface. To use AdvancedHMC do the following:

using ComradeAHMC, Zygote
-chain, stats = sample(rng, post, AHMC(metric=DiagEuclideanMetric(ndim), autodiff=Val(:Zygote)), 2000; nadapts=1000, init_params=xopt)
<< @example-block not executed in draft mode >>

That's it! To finish it up we can then plot some simple visual fit diagnostics.

First to plot the image we call

imgs = intensitymap.(skymodel.(Ref(post), sample(chain[1000:end], 100)), μas2rad(200.0), μas2rad(200.0), 128, 128)
+fig, ax, plt = CM.image(g, model(xopt); axis=(xreversed=true, aspect=1, xlabel="RA (μas)", ylabel="Dec (μas)"), figure=(;resolution=(650,500),) ,colormap=:afmhot)
<< @example-block not executed in draft mode >>

Quantifying the Uncertainty of the Reconstruction

While finding the optimal image is often helpful, in science, the most important thing is to quantify the certainty of our inferences. This is the goal of Comrade. In the language of Bayesian statistics, we want to find a representation of the posterior of possible image reconstructions given our choice of model and the data.

Comrade provides several sampling and other posterior approximation tools. To see the list, please see the Libraries section of the docs. For this example, we will be using Pigeons.jl which is a state-of-the-art parallel tempering sampler that enables global exploration of the posterior. For smaller dimension problems (< 100) we recommend using this sampler especially if you have access to > 1 thread/core.

using Pigeons
+pt = pigeons(target=cpost, explorer=SliceSampler(), record=[traces, round_trip, log_sum_ratio], n_chains=18, n_rounds=9)
+chain = sample_array(cpost, pt)
<< @example-block not executed in draft mode >>

That's it! To finish it up we can then plot some simple visual fit diagnostics.

First to plot the image we call

imgs = intensitymap.(skymodel.(Ref(post), sample(chain, 100)), μas2rad(200.0), μas2rad(200.0), 128, 128)
 imageviz(imgs[end], colormap=:afmhot)
<< @example-block not executed in draft mode >>

What about the mean image? Well let's grab 100 images from the chain, where we first remove the adaptation steps since they don't sample from the correct posterior distribution

meanimg = mean(imgs)
 imageviz(meanimg, colormap=:afmhot)
<< @example-block not executed in draft mode >>

That looks similar to the EHTC VI, and it took us no time at all! To see how well the model is fitting the data we can plot the model and data products

using Plots
 plot(model(xopt), dlcamp, label="MAP")
<< @example-block not executed in draft mode >>

We can also plot random draws from the posterior predictive distribution. The posterior predictive distribution creates a number of synthetic observations that are marginalized over the posterior.

p = plot(dlcamp);
 uva = [sqrt.(uvarea(dlcamp[i])) for i in 1:length(dlcamp)]
 for i in 1:10
-    m = simulate_observation(post, chain[rand(rng, 1000:2000)])[1]
+    m = simulate_observation(post, sample(chain, 1)[1])[1]
     scatter!(uva, m, color=:grey, label=:none, alpha=0.1)
 end
 p
<< @example-block not executed in draft mode >>

Finally, we can also put everything onto a common scale and plot the normalized residuals. The normalized residuals are the difference between the data and the model, divided by the data's error:

residual(model(xopt), dlcamp)
<< @example-block not executed in draft mode >>

All diagnostic plots suggest that the model is missing some emission sources. In fact, this model is too simple to explain the data. Check out EHTC VI 2019 for some ideas about what features need to be added to the model to get a better fit!

For a real run we should also check that the MCMC chain has converged. For this we can use MCMCDiagnosticTools

using MCMCDiagnosticTools, Tables
<< @example-block not executed in draft mode >>

First, let's look at the effective sample size (ESS) and R̂. This is important since the Monte Carlo standard error for MCMC estimates is proportional to 1/√ESS (for some problems) and R̂ is a measure of chain convergence. To find both, we can use:

compute_ess(x::NamedTuple) = map(compute_ess, x)
 compute_ess(x::AbstractVector{<:Number}) = ess_rhat(x)
 compute_ess(x::AbstractVector{<:Tuple}) = map(ess_rhat, Tables.columns(x))
 compute_ess(x::Tuple) = map(compute_ess, x)
-essrhat = compute_ess(Tables.columns(chain))
<< @example-block not executed in draft mode >>

Here, the first value is the ESS, and the second is the R̂. Note that we typically want R̂ < 1.01 for all parameters, but you should also be running the problem at least four times from four different starting locations. In the future we will write an extension that works with Arviz.jl.

In our example here, we see that we have an ESS > 100 for all parameters and the R̂ < 1.01 meaning that our MCMC chain is a reasonable approximation of the posterior. For more diagnostics, see MCMCDiagnosticTools.jl.


This page was generated using Literate.jl.

+essrhat = compute_ess(Tables.columns(chain))
<< @example-block not executed in draft mode >>

Here, the first value is the ESS, and the second is the R̂. Note that we typically want R̂ < 1.01 for all parameters, but you should also be running the problem at least four times from four different starting locations. In the future we will write an extension that works with Arviz.jl.

In our example here, we see that we have an ESS > 100 for all parameters and the R̂ < 1.01 meaning that our MCMC chain is a reasonable approximation of the posterior. For more diagnostics, see MCMCDiagnosticTools.jl.


This page was generated using Literate.jl.

diff --git a/dev/examples/hybrid_imaging/index.html b/dev/examples/hybrid_imaging/index.html index 13f0e57d..6b24c3c6 100644 --- a/dev/examples/hybrid_imaging/index.html +++ b/dev/examples/hybrid_imaging/index.html @@ -111,4 +111,4 @@ Threads: 1 on 32 virtual cores Environment: JULIA_EDITOR = code - JULIA_NUM_THREADS = 1

This page was generated using Literate.jl.

+ JULIA_NUM_THREADS = 1

This page was generated using Literate.jl.

diff --git a/dev/examples/imaging_closures/index.html b/dev/examples/imaging_closures/index.html index b2d10bc3..51cdcd84 100644 --- a/dev/examples/imaging_closures/index.html +++ b/dev/examples/imaging_closures/index.html @@ -74,4 +74,4 @@ residual!(p, vlbimodel(post, s), dcphase) end ylabel!("|Closure Phase Res.|"); -p
<< @example-block not executed in draft mode >>

And voilà, you have a quick and preliminary image of M87 fitting only closure products. For a publication-level version we would recommend

  1. Running the chain longer and multiple times to properly assess things like ESS and R̂ (see Geometric Modeling of EHT Data)
  2. Fitting gains. Typically gain amplitudes are good to 10-20% for the EHT, not the infinite uncertainty that closures implicitly assume
  3. Making sure the posterior is unimodal (hint for this example it isn't!). The EHT image posteriors can be pretty complicated, so typically you want to use a sampler that can deal with multi-modal posteriors. Check out the package Pigeons.jl for an in-development package that should easily enable this type of sampling.

This page was generated using Literate.jl.

+p
<< @example-block not executed in draft mode >>

And voilà, you have a quick and preliminary image of M87 fitting only closure products. For a publication-level version we would recommend

  1. Running the chain longer and multiple times to properly assess things like ESS and R̂ (see Geometric Modeling of EHT Data)
  2. Fitting gains. Typically gain amplitudes are good to 10-20% for the EHT, not the infinite uncertainty that closures implicitly assume
  3. Making sure the posterior is unimodal (hint for this example it isn't!). The EHT image posteriors can be pretty complicated, so typically you want to use a sampler that can deal with multi-modal posteriors. Check out the package Pigeons.jl for an in-development package that should easily enable this type of sampling.

This page was generated using Literate.jl.

diff --git a/dev/examples/imaging_pol/index.html b/dev/examples/imaging_pol/index.html index adb85881..b6dccada 100644 --- a/dev/examples/imaging_pol/index.html +++ b/dev/examples/imaging_pol/index.html @@ -124,4 +124,4 @@ Threads: 1 on 32 virtual cores Environment: JULIA_EDITOR = code - JULIA_NUM_THREADS = 1

This page was generated using Literate.jl.

+ JULIA_NUM_THREADS = 1

This page was generated using Literate.jl.

diff --git a/dev/examples/imaging_vis/index.html b/dev/examples/imaging_vis/index.html index 3651d407..6056895f 100644 --- a/dev/examples/imaging_vis/index.html +++ b/dev/examples/imaging_vis/index.html @@ -95,4 +95,4 @@ for s in sample(chain, 10) residual!(p, vlbimodel(post, s), dvis) end -p
<< @example-block not executed in draft mode >>

And voilà, you have just finished making a preliminary image and instrument model reconstruction. In reality, you should run the sample step for many more MCMC steps to get a reliable estimate for the reconstructed image and instrument model parameters.


This page was generated using Literate.jl.

+p
<< @example-block not executed in draft mode >>

And voilà, you have just finished making a preliminary image and instrument model reconstruction. In reality, you should run the sample step for many more MCMC steps to get a reliable estimate for the reconstructed image and instrument model parameters.


This page was generated using Literate.jl.

diff --git a/dev/index.html b/dev/index.html index 382faf36..51c4f797 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · Comrade.jl

Comrade

Comrade is a Bayesian differentiable modular modeling framework for use with very long baseline interferometry. The goal is to allow the user to easily combine and modify a set of primitive models to construct complicated source structures. The benefit of this approach is that it is straightforward to construct different source models out of these primitives. Namely, an end-user does not have to create a separate source "model" every time they change the model specification. Additionally, most models currently implemented are differentiable with Zygote and sometimes ForwardDiff[2]. This allows gradient-accelerated optimization and sampling (e.g., HMC) to be used with little effort by the end user. To sample from the posterior, we provide a somewhat barebones interface since, most of the time, we don't require the additional features offered by most PPLs. Additionally, the overhead introduced by PPLs tends to be rather large. In the future, we may revisit this as Julia's PPL ecosystem matures.

Note

The primitives the Comrade defines, however, would allow for it to be easily included in PPLs like Turing.

Our tutorial section currently has a large number of examples. The simplest example is fitting simple geometric models to the 2017 M87 data and is detailed in the Geometric Modeling of EHT Data tutorial. We also include "non-parametric" modeling or imaging examples in Imaging a Black Hole using only Closure Quantities, and Stokes I Simultaneous Image and Instrument Modeling. There is also an introduction to hybrid geometric and image modeling in Hybrid Imaging of a Black Hole, which combines physically motivated geometric modeling with the flexibility of image-based models.

As of 0.7, Comrade also can simultaneously reconstruct polarized image models and instrument corruptions through the RIME[1] formalism. A short example explaining these features can be found in Polarized Image and Instrumental Modeling.

Contributing

This repository has recently moved to ColPrac. If you would like to contribute please feel free to open an issue or pull request.

a Dual number overload. As a result we recommend using Zygote which does work and often is similarly performant (reverse 3-6x slower compared to the forward pass).

Requirements

The minimum Julia version we require is 1.7. In the future we may increase this as Julia advances.

References

  • 2As of 0.9 Comrade switched to using full covariance closures. As a result this requires a sparse cholesky solve in the likelihood evaluation which requires
  • [1] Hamaker J.P., Bregman J.D., and Sault R.J., "Understanding radio polarimetry. I. Mathematical foundations", ADS.
+Home · Comrade.jl

Comrade

Comrade is a Bayesian differentiable modular modeling framework for use with very long baseline interferometry. The goal is to allow the user to easily combine and modify a set of primitive models to construct complicated source structures. The benefit of this approach is that it is straightforward to construct different source models out of these primitives. Namely, an end-user does not have to create a separate source "model" every time they change the model specification. Additionally, most models currently implemented are differentiable with Zygote and sometimes ForwardDiff[2]. This allows gradient-accelerated optimization and sampling (e.g., HMC) to be used with little effort by the end user. To sample from the posterior, we provide a somewhat barebones interface since, most of the time, we don't require the additional features offered by most PPLs. Additionally, the overhead introduced by PPLs tends to be rather large. In the future, we may revisit this as Julia's PPL ecosystem matures.
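As a quick, hedged sketch of what this composition looks like in practice (the component sizes and fluxes below are made up for illustration; modify, Gaussian, Stretch, Renormalize, and μas2rad are Comrade functions used elsewhere in these docs):

using Comrade

# An extended elliptical Gaussian built from the circular Gaussian primitive
g1 = modify(Gaussian(), Stretch(μas2rad(25.0), μas2rad(10.0)))

# A small, faint circular Gaussian carrying 10% of the total flux
g2 = modify(Gaussian(), Stretch(μas2rad(5.0)), Renormalize(0.1))

# Composite source model: no new "model" type has to be written
m = g1 + g2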

Note

The primitives that Comrade defines, however, would allow it to be easily included in PPLs like Turing.

Our tutorial section currently has a large number of examples. The simplest example is fitting simple geometric models to the 2017 M87 data and is detailed in the Geometric Modeling of EHT Data tutorial. We also include "non-parametric" modeling or imaging examples in Imaging a Black Hole using only Closure Quantities, and Stokes I Simultaneous Image and Instrument Modeling. There is also an introduction to hybrid geometric and image modeling in Hybrid Imaging of a Black Hole, which combines physically motivated geometric modeling with the flexibility of image-based models.

As of 0.7, Comrade can also simultaneously reconstruct polarized image models and instrument corruptions through the RIME[1] formalism. A short example explaining these features can be found in Polarized Image and Instrumental Modeling.

Contributing

This repository has recently moved to ColPrac. If you would like to contribute, please feel free to open an issue or pull request.


Requirements

The minimum Julia version we require is 1.7. In the future we may increase this as Julia advances.

References

  • [2] As of 0.9, Comrade switched to using full covariance closures. As a result, the likelihood evaluation requires a sparse Cholesky solve, which in turn requires a Dual number overload. We therefore recommend using Zygote, which does work and is often similarly performant (the reverse pass is 3-6x slower than the forward pass).
  • [1] Hamaker J.P., Bregman J.D., and Sault R.J., "Understanding radio polarimetry. I. Mathematical foundations", ADS.
diff --git a/dev/interface/index.html b/dev/interface/index.html index efdbb67b..cffdcf73 100644 --- a/dev/interface/index.html +++ b/dev/interface/index.html @@ -1,2 +1,2 @@ -Model Interface · Comrade.jl
+Model Interface · Comrade.jl
diff --git a/dev/libs/adaptmcmc/index.html b/dev/libs/adaptmcmc/index.html index 3a6ac68d..68210bde 100644 --- a/dev/libs/adaptmcmc/index.html +++ b/dev/libs/adaptmcmc/index.html @@ -14,4 +14,4 @@ fulladapt = true, acc_sw = 0.234, all_levels = false - )

Create an AdaptMCMC.jl sampler. This sampler uses the AdaptiveMCMC.jl package to sample from the posterior. Namely, this is a parallel tempering algorithm with an adaptive exploration and tempering sampler. For more information please see [https://github.com/mvihola/AdaptiveMCMC.jl].

The arguments of the function are:

source
StatsBase.sampleFunction
sample(post::Posterior, sampler::AdaptMCMC, nsamples, burnin=nsamples÷2, args...; init_params=nothing, kwargs...)

Sample the posterior post using the AdaptMCMC sampler. This will produce nsamples with the first burnin steps removed. The init_params argument indicates where to start the sampler from and is expected to be a NamedTuple of parameters.

Possible additional kwargs are:

  • thin::Int = 1: save only every thin-th sample to memory
  • rng: Specify a random number generator (default uses GLOBAL_RNG)

This returns a tuple where:

  • The first element contains the chains from the sampler. If all_levels=false, only the unit-temperature (posterior) chain is returned.
  • The second element is additional ancillary information about the samples, including the log-likelihood logl, sampler state state, average exploration kernel acceptance rate accexp for each tempering level, and average temperature swap acceptance rate accswp for each tempering level.
source
+ )

Create an AdaptMCMC.jl sampler. This sampler uses the AdaptiveMCMC.jl package to sample from the posterior. Namely, this is a parallel tempering algorithm with an adaptive exploration and tempering sampler. For more information please see [https://github.com/mvihola/AdaptiveMCMC.jl].

The arguments of the function are:

source
StatsBase.sampleFunction
sample(post::Posterior, sampler::AdaptMCMC, nsamples, burnin=nsamples÷2, args...; init_params=nothing, kwargs...)

Sample the posterior post using the AdaptMCMC sampler. This will produce nsamples with the first burnin steps removed. The init_params argument indicates where to start the sampler from and is expected to be a NamedTuple of parameters.

Possible additional kwargs are:

  • thin::Int = 1: save only every thin-th sample to memory
  • rng: Specify a random number generator (default uses GLOBAL_RNG)

This returns a tuple where:

  • The first element contains the chains from the sampler. If all_levels=false, only the unit-temperature (posterior) chain is returned.
  • The second element is additional ancillary information about the samples, including the log-likelihood logl, sampler state state, average exploration kernel acceptance rate accexp for each tempering level, and average temperature swap acceptance rate accswp for each tempering level.
source
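A minimal usage sketch of the call described above. The AdaptMCMC constructor keyword ntemp (number of tempering levels) is an assumption here and not taken from this page; check the constructor signature above for the exact keywords.

using Comrade, ComradeAdaptMCMC

# post is a Comrade.Posterior constructed elsewhere.
# ntemp is assumed to set the number of parallel-tempering levels.
smplr = AdaptMCMC(ntemp = 5, all_levels = false)

# 20_000 samples, discarding the first 10_000 by default (burnin = nsamples ÷ 2),
# saving every 2nd sample to memory
chain, stats = sample(post, smplr, 20_000; thin = 2)

# stats.accexp and stats.accswp hold the per-level acceptance rates described above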
diff --git a/dev/libs/ahmc/index.html b/dev/libs/ahmc/index.html index 23f0b38d..88ba12bf 100644 --- a/dev/libs/ahmc/index.html +++ b/dev/libs/ahmc/index.html @@ -9,16 +9,16 @@ metric = DiagEuclideanMetric(dimension(post)) smplr = AHMC(metric=metric, autodiff=Val(:Zygote)) -samples, stats = sample(post, smplr, 2_000; nadapts=1_000)

API

ComradeAHMC.AHMCType
AHMC

Creates a sampler that uses the AdvancedHMC framework to construct a Hamiltonian Monte Carlo NUTS sampler.

The user must specify the metric they want to use. Typically we recommend DiagEuclideanMetric as a reasonable starting place. The other options are chosen to match the Stan language's defaults and should provide a good starting point. Please see the AdvancedHMC docs for more information.

Notes

For autodiff the user must provide a Val(::Symbol) that specifies the AD backend. Currently, we use LogDensityProblemsAD.

Fields

  • metric: AdvancedHMC metric to use
  • integrator: AdvancedHMC integrator. Defaults to AdvancedHMC.Leapfrog.
  • trajectory: HMC trajectory sampler. Defaults to AdvancedHMC.MultinomialTS.
  • termination: HMC termination condition. Defaults to AdvancedHMC.StrictGeneralisedNoUTurn.
  • adaptor: Adaptation strategy for the mass matrix and step size. Defaults to AdvancedHMC.StanHMCAdaptor.
  • targetacc: Target acceptance rate for all trajectories on the tree. Defaults to 0.85.
  • init_buffer: The number of steps for the initial tuning phase. Defaults to 75, which is the Stan default.
  • term_buffer: The number of steps for the final fast step-size adaptation. Defaults to 50, which is the Stan default.
  • window_size: The number of steps to tune the covariance before the first doubling. Defaults to 25, which is the Stan default.
  • autodiff: autodiff backend; see LogDensityProblemsAD.jl for possible backends. The default is Zygote, which is appropriate for high-dimensional problems.
source
ComradeAHMC.DiskStoreType
Disk

Type that specifies to save the HMC results to disk.

Fields

  • name: Path of the directory where the results will be saved. If the path does not exist it will be automatically created.
  • stride: The output stride, i.e. every stride steps the MCMC output will be dumped to disk.
source
ComradeAHMC.MemoryStoreType
Memory

Stores the HMC samples in memory (RAM).

source
ComradeAHMC.load_tableFunction
load_table(out::DiskOutput, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table="samples")
-load_table(out::String, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table="samples")

Loads the results from an HMC run saved to disk. To read in the output, the user can either pass the resulting out object or the path to the directory where the results were saved, i.e., the path specified in DiskStore.

Arguments

  • out::Union{String, DiskOutput}: If out is a string, it must point to the directory that the DiskStore pointed to. Otherwise, it is what is directly returned from sample.
  • indices: The indices that you want to load into memory. The default is to load the entire table.

Keyword Arguments

  • table: A string specifying the table you wish to read in. There are two options: "samples", which corresponds to the actual MCMC chain, and "stats", which corresponds to additional information about the sampler, e.g., the log density of each sample and tree statistics.
source
StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior,
+samples, stats = sample(post, smplr, 2_000; nadapts=1_000)

API

ComradeAHMC.AHMCType
AHMC

Creates a sampler that uses the AdvancedHMC framework to construct a Hamiltonian Monte Carlo NUTS sampler.

The user must specify the metric they want to use. Typically we recommend DiagEuclideanMetric as a reasonable starting place. The other options are chosen to match the Stan language's defaults and should provide a good starting point. Please see the AdvancedHMC docs for more information.

Notes

For autodiff the user must provide a Val(::Symbol) that specifies the AD backend. Currently, we use LogDensityProblemsAD.

Fields

  • metric: AdvancedHMC metric to use
  • integrator: AdvancedHMC integrator. Defaults to AdvancedHMC.Leapfrog.
  • trajectory: HMC trajectory sampler. Defaults to AdvancedHMC.MultinomialTS.
  • termination: HMC termination condition. Defaults to AdvancedHMC.StrictGeneralisedNoUTurn.
  • adaptor: Adaptation strategy for the mass matrix and step size. Defaults to AdvancedHMC.StanHMCAdaptor.
  • targetacc: Target acceptance rate for all trajectories on the tree. Defaults to 0.85.
  • init_buffer: The number of steps for the initial tuning phase. Defaults to 75, which is the Stan default.
  • term_buffer: The number of steps for the final fast step-size adaptation. Defaults to 50, which is the Stan default.
  • window_size: The number of steps to tune the covariance before the first doubling. Defaults to 25, which is the Stan default.
  • autodiff: autodiff backend; see LogDensityProblemsAD.jl for possible backends. The default is Zygote, which is appropriate for high-dimensional problems.
source
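For instance, one might tighten the target acceptance rate while keeping the Stan-like defaults for everything else. This is a sketch only, assuming the fields above are exposed as constructor keywords, as in the example at the top of this page:

using Comrade, ComradeAHMC

# Same metric/autodiff choice as the page example, but with a higher target acceptance rate
metric = DiagEuclideanMetric(dimension(post))
smplr  = AHMC(metric = metric, autodiff = Val(:Zygote), targetacc = 0.9)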
ComradeAHMC.DiskStoreType
Disk

Type that specifies to save the HMC results to disk.

Fields

  • name: Path of the directory where the results will be saved. If the path does not exist it will be automatically created.
  • stride: The output stride, i.e. every stride steps the MCMC output will be dumped to disk.
source
ComradeAHMC.load_tableFunction
load_table(out::DiskOutput, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table="samples")
+load_table(out::String, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table="samples")

Loads the results from an HMC run saved to disk. To read in the output, the user can either pass the resulting out object or the path to the directory where the results were saved, i.e., the path specified in DiskStore.

Arguments

  • out::Union{String, DiskOutput}: If out is a string, it must point to the directory that the DiskStore pointed to. Otherwise, it is what is directly returned from sample.
  • indices: The indices that you want to load into memory. The default is to load the entire table.

Keyword Arguments

  • table: A string specifying the table you wish to read in. There are two options: "samples", which corresponds to the actual MCMC chain, and "stats", which corresponds to additional information about the sampler, e.g., the log density of each sample and tree statistics.
source
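Putting DiskStore and load_table together, a sketch of a disk-backed run might look like the following. The directory name and stride are arbitrary; the DiskStore(filename, stride) form is the one quoted in the sample docstring below.

using Comrade, ComradeAHMC

# Dump the chain to disk every 500 steps instead of holding everything in memory
out = sample(post, smplr, 10_000; saveto = DiskStore("hmc_results", 500))

# Later, read back only every 10th sample of the chain and its sampler statistics
chain = load_table(out, 1:10:10_000; table = "samples")
stats = load_table(out, 1:10:10_000; table = "stats")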
StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior,
                     sampler::AHMC,
                     nsamples;
                     init_params=nothing,
                     saveto::Union{Memory, Disk}=Memory(),
-                    kwargs...)

Samples the posterior post using the AdvancedHMC sampler specified by AHMC. This will run the sampler for nsamples.

To initialize the chain the user can set init_params to Vector{NamedTuple} whose elements are the starting locations for each of the nchains. If no starting location is specified nchains random samples from the prior will be chosen for the starting locations.

With saveto the user can optionally specify whether to store the samples in memory with MemoryStore or save them directly to disk with DiskStore(filename, stride). The stride controls how often the samples are dumped to disk.

For possible kwargs please see the AdvancedHMC.jl docs

This returns a tuple where the first element is a TypedTable of the MCMC samples in parameter space and the second element is a set of ancillary information about the sampler.

Notes

This will automatically transform the posterior to the flattened unconstrained space.

source
StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior,
+                    kwargs...)

Samples the posterior post using the AdvancedHMC sampler specified by AHMC. This will run the sampler for nsamples.

To initialize the chain the user can set init_params to Vector{NamedTuple} whose elements are the starting locations for each of the nchains. If no starting location is specified nchains random samples from the prior will be chosen for the starting locations.

With saveto the user can optionally specify whether to store the samples in memory with MemoryStore or save them directly to disk with DiskStore(filename, stride). The stride controls how often the samples are dumped to disk.

For possible kwargs please see the AdvancedHMC.jl docs

This returns a tuple where the first element is a TypedTable of the MCMC samples in parameter space and the second element is a set of ancillary information about the sampler.

Notes

This will automatically transform the posterior to the flattened unconstrained space.

source
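A short sketch of starting the chain from a draw from the prior. prior_sample appears elsewhere in these docs; treating its output as a valid init_params value here is an assumption, and depending on the method a vector of starting points may be expected instead of a single NamedTuple.

using Comrade, ComradeAHMC

# Draw a starting location from the prior and hand it to the sampler
x0 = prior_sample(post)
chain, stats = sample(post, smplr, 5_000; nadapts = 2_500, init_params = x0)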
StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior,
                     sampler::AHMC,
                     parallel::AbstractMCMC.AbstractMCMCEnsemble,
                     nsamples,
                     nchains;
                     init_params=nothing,
-                    kwargs...)

Samples the posterior post using the AdvancedHMC sampler specified by AHMC. This will sample nchains copies of the posterior using the parallel scheme. Each chain will be sampled for nsamples.

To initialize the chain the user can set init_params to Vector{NamedTuple} whose elements are the starting locations for each of the nchains. If no starting location is specified nchains random samples from the prior will be chosen for the starting locations.

For possible kwargs please see the AdvancedHMC.jl docs

This returns a tuple where the first element is nchains TypedTables, each of which contains the MCMC samples of one of the parallel chains, and the second element is a set of ancillary information about each set of samples.

Notes

This will automatically transform the posterior to the flattened unconstrained space.

source
+ kwargs...)

Samples the posterior post using the AdvancedHMC sampler specified by AHMC. This will sample nchains copies of the posterior using the parallel scheme. Each chain will be sampled for nsamples.

To initialize the chain the user can set init_params to Vector{NamedTuple} whose elements are the starting locations for each of the nchains. If no starting location is specified nchains random samples from the prior will be chosen for the starting locations.

For possible kwargs please see the AdvancedHMC.jl docs

This returns a tuple where the first element is nchains TypedTables, each of which contains the MCMC samples of one of the parallel chains, and the second element is a set of ancillary information about each set of samples.

Notes

This will automatically transform the posterior to the flattened unconstrained space.
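A sketch of the parallel form documented above, using the MCMCThreads ensemble from AbstractMCMC; the choice of ensemble and the chain/sample counts are illustrative only.

using Comrade, ComradeAHMC
using AbstractMCMC: MCMCThreads

# Four chains sampled in parallel on threads, 5_000 samples each,
# each started from an independent draw from the prior
chains, stats = sample(post, smplr, MCMCThreads(), 5_000, 4)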

source diff --git a/dev/libs/dynesty/index.html b/dev/libs/dynesty/index.html index a3d59266..7323243e 100644 --- a/dev/libs/dynesty/index.html +++ b/dev/libs/dynesty/index.html @@ -15,4 +15,4 @@ equal_weight_chain = ComradeDynesty.equalresample(samples, 10_000)

API

StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior, smplr::Dynesty.NestedSampler, args...; kwargs...)
 AbstractMCMC.sample(post::Comrade.Posterior, smplr::Dynesty.DynamicNestedSampler, args...; kwargs...)

Sample the posterior post using the Dynesty.jl NestedSampler/DynamicNestedSampler samplers. The args/kwargs are forwarded to Dynesty; for more information, see its docs.

This returns a tuple where the first element contains the weighted samples from dynesty in a TypedTable. The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights. The final element of the tuple is the original dynesty output file.

To create equally weighted samples the user can use

using StatsBase
 chain, stats = sample(post, NestedSampler(dimension(post), 1000))
-equal_weighted_chain = sample(chain, Weights(stats.weights), 10_000)
source
+equal_weighted_chain = sample(chain, Weights(stats.weights), 10_000)source diff --git a/dev/libs/nested/index.html b/dev/libs/nested/index.html index e79cb342..8b5ff26f 100644 --- a/dev/libs/nested/index.html +++ b/dev/libs/nested/index.html @@ -12,4 +12,4 @@ # Optionally resample the chain to create an equal weighted output using StatsBase -equal_weight_chain = ComradeNested.equalresample(samples, 10_000)

API

StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior, smplr::Nested, args...; kwargs...)

Sample the posterior post using the NestedSamplers.jl Nested sampler. The args/kwargs are forwarded to NestedSampler; for more information, see its docs.

This returns a tuple where the first element contains the weighted samples from NestedSamplers in a TypedTable. The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights.

To create equally weighted samples the user can use

using StatsBase
chain, stats = sample(post, NestedSampler(dimension(post), 1000))
equal_weighted_chain = sample(chain, Weights(stats.weights), 10_000)

source
+equal_weight_chain = ComradeNested.equalresample(samples, 10_000)

API

StatsBase.sampleMethod
AbstractMCMC.sample(post::Comrade.Posterior, smplr::Nested, args...; kwargs...)

Sample the posterior post using the NestedSamplers.jl Nested sampler. The args/kwargs are forwarded to NestedSampler; for more information, see its docs.

This returns a tuple where the first element contains the weighted samples from NestedSamplers in a TypedTable. The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights.

To create equally weighted samples the user can use

using StatsBase
chain, stats = sample(post, NestedSampler(dimension(post), 1000))
equal_weighted_chain = sample(chain, Weights(stats.weights), 10_000)

source
diff --git a/dev/libs/optimization/index.html b/dev/libs/optimization/index.html index b71a6f71..0afa3aa3 100644 --- a/dev/libs/optimization/index.html +++ b/dev/libs/optimization/index.html @@ -13,4 +13,4 @@ prob = OptimizationProblem(fflat, prior_sample(asflat(post)), nothing) # Now solve! Here we use LBFGS -sol = solve(prob, LBFGS(); g_tol=1e-2)

API

ComradeOptimization.laplaceMethod
laplace(prob, opt, args...; kwargs...)

Compute the Laplace or quadratic approximation to the prob or posterior. The args and kwargs are passed to the SciMLBase.solve function. This will return a Distributions.MvNormal object that approximates the posterior in the transformed space.

Note the quadratic approximation is in the space of the transformed posterior, not the usual parameter space. This is better for constrained problems where we may run up against a boundary.

source
SciMLBase.OptimizationFunctionMethod
SciMLBase.OptimizationFunction(post::Posterior, args...; kwargs...)

Constructs an OptimizationFunction from a Comrade.TransformedPosterior object. Note that a user must transform the posterior first. This is so we know which space is most amenable to optimization.

source
+sol = solve(prob, LBFGS(); g_tol=1e-2)

API

ComradeOptimization.laplaceMethod
laplace(prob, opt, args...; kwargs...)

Compute the Laplace or quadratic approximation to the prob or posterior. The args and kwargs are passed to the SciMLBase.solve function. This will return a Distributions.MvNormal object that approximates the posterior in the transformed space.

Note the quadratic approximation is in the space of the transformed posterior, not the usual parameter space. This is better for constrained problems where we may run up against a boundary.

source
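Continuing the example at the top of this page, a minimal sketch of calling laplace; prob and LBFGS are taken from that example, and drawing from the returned MvNormal is standard Distributions.jl usage.

using Distributions

# Quadratic approximation to the posterior around the optimum,
# defined in the transformed (unconstrained) space
approx = laplace(prob, LBFGS(); g_tol = 1e-2)

# Draws live in the transformed space; map them back with the inverse
# transform of the transformed posterior before interpreting them
x0 = rand(approx)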
SciMLBase.OptimizationFunctionMethod
SciMLBase.OptimizationFunction(post::Posterior, args...; kwargs...)

Constructs an OptimizationFunction from a Comrade.TransformedPosterior object. Note that a user must transform the posterior first. This is so we know which space is most amenable to optimization.

source
diff --git a/dev/search_index.js b/dev/search_index.js index 1f527237..2ddd1aa3 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"base_api/#ComradeBase-API","page":"ComradeBase API","title":"ComradeBase API","text":"","category":"section"},{"location":"base_api/#Contents","page":"ComradeBase API","title":"Contents","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"Pages = [\"base_api.md\"]","category":"page"},{"location":"base_api/#Index","page":"ComradeBase API","title":"Index","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"Pages = [\"base_api.md\"]","category":"page"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"CurrentModule = ComradeBase","category":"page"},{"location":"base_api/#Model-API","page":"ComradeBase API","title":"Model API","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"ComradeBase.flux\nComradeBase.visibility\nComradeBase.visibilities\nComradeBase.visibilities!\nComradeBase.intensitymap\nComradeBase.intensitymap!\nComradeBase.IntensityMap\nComradeBase.amplitude(::Any, ::Any)\nComradeBase.amplitudes\nComradeBase.bispectrum\nComradeBase.bispectra\nComradeBase.closure_phase\nComradeBase.closure_phases\nComradeBase.logclosure_amplitude\nComradeBase.logclosure_amplitudes","category":"page"},{"location":"base_api/#ComradeBase.flux","page":"ComradeBase API","title":"ComradeBase.flux","text":"flux(im::IntensityMap)\nflux(img::StokesIntensityMap)\n\nComputes the flux of a intensity map\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibility","page":"ComradeBase API","title":"ComradeBase.visibility","text":"visibility(d::EHTVisibilityDatum)\n\nReturn the complex visibility of the visibility datum\n\n\n\n\n\nvisibility(mimg, p)\n\nComputes the complex visibility of model m at coordinates p. p corresponds to the coordinates of the model. These need to have the properties U, V and sometimes Ti for time and Fr for frequency.\n\nNotes\n\nIf you want to compute the visibilities at a large number of positions consider using the visibilities.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities","page":"ComradeBase API","title":"ComradeBase.visibilities","text":"visibilities(model::AbstractModel, args...)\n\nComputes the complex visibilities at the locations given by args...\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities!","page":"ComradeBase API","title":"ComradeBase.visibilities!","text":"visibilities!(vis::AbstractArray, model::AbstractModel, args...)\n\nComputes the complex visibilities vis in place at the locations given by args...\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap","page":"ComradeBase API","title":"ComradeBase.intensitymap","text":"intensitymap(model::AbstractModel, p::AbstractDims)\n\nComputes the intensity map of model. 
For the inplace version see intensitymap!\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap!","page":"ComradeBase API","title":"ComradeBase.intensitymap!","text":"intensitymap!(buffer::AbstractDimArray, model::AbstractModel)\n\nComputes the intensity map of model by modifying the buffer\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.IntensityMap","page":"ComradeBase API","title":"ComradeBase.IntensityMap","text":"IntensityMap(data::AbstractArray, dims::NamedTuple)\nIntensityMap(data::AbstractArray, grid::AbstractDims)\n\nConstructs an intensitymap using the image dimensions given by dims. This returns a KeyedArray with keys given by an ImageDimensions object.\n\ndims = (X=range(-10.0, 10.0, length=100), Y = range(-10.0, 10.0, length=100),\n T = [0.1, 0.2, 0.5, 0.9, 1.0], F = [230e9, 345e9]\n )\nimgk = IntensityMap(rand(100,100,5,1), dims)\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.amplitude-Tuple{Any, Any}","page":"ComradeBase API","title":"ComradeBase.amplitude","text":"amplitude(model, p)\n\nComputes the visibility amplitude of model m at the coordinate p. The coordinate p is expected to have the properties U, V, and sometimes Ti and Fr.\n\nIf you want to compute the amplitudes at a large number of positions consider using the amplitudes function.\n\n\n\n\n\n","category":"method"},{"location":"base_api/#ComradeBase.amplitudes","page":"ComradeBase API","title":"ComradeBase.amplitudes","text":"amplitudes(m::AbstractModel, u::AbstractArray, v::AbstractArray)\n\nComputes the visibility amplitudes of the model m at the coordinates p. The coordinates p are expected to have the properties U, V, and sometimes Ti and Fr.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.bispectrum","page":"ComradeBase API","title":"ComradeBase.bispectrum","text":"bispectrum(d1::T, d2::T, d3::T) where {T<:EHTVisibilityDatum}\n\nFinds the bispectrum of three visibilities. We will assume these form closed triangles, i.e. the phase of the bispectrum is a closure phase.\n\n\n\n\n\nbispectrum(model, p1, p2, p3)\n\nComputes the complex bispectrum of model m at the uv-triangle p1 -> p2 -> p3\n\nIf you want to compute the bispectrum over a number of triangles consider using the bispectra function.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.bispectra","page":"ComradeBase API","title":"ComradeBase.bispectra","text":"bispectra(m, p1, p2, p3)\n\nComputes the closure phases of the model m at the triangles p1, p2, p3, where pi are coordinates.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.closure_phase","page":"ComradeBase API","title":"ComradeBase.closure_phase","text":"closure_phase(D1::EHTVisibilityDatum,\n D2::EHTVisibilityDatum,\n D3::EHTVisibilityDatum\n )\n\nComputes the closure phase of the three visibility datums.\n\nNotes\n\nWe currently use the high SNR Gaussian error approximation for the closure phase. 
In the future we may use the moment matching from Monte Carlo sampling.\n\n\n\n\n\nclosure_phase(model, p1, p2, p3, p4)\n\nComputes the closure phase of model m at the uv-triangle u1,v1 -> u2,v2 -> u3,v3\n\nIf you want to compute closure phases over a number of triangles consider using the closure_phases function.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.closure_phases","page":"ComradeBase API","title":"ComradeBase.closure_phases","text":"closure_phases(m::AbstractModel, ac::ClosureConfig)\n\nComputes the closure phases of the model m using the array configuration ac.\n\nNotes\n\nThis is faster than the closure_phases(m, u1, v1, ...) method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]\n\n[1]: Blackburn L., et al \"Closure Statistics in Interferometric Data\" ApJ 2020\n\n\n\n\n\nclosure_phases(vis::AbstractArray, ac::ArrayConfiguration)\n\nCompute the closure phases for a set of visibilities and an array configuration\n\nNotes\n\nThis uses a closure design matrix for the computation.\n\n\n\n\n\nclosure_phases(m,\n p1::AbstractArray\n p2::AbstractArray\n p3::AbstractArray\n )\n\nComputes the closure phases of the model m at the triangles p1, p2, p3, where pi are coordinates.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.logclosure_amplitude","page":"ComradeBase API","title":"ComradeBase.logclosure_amplitude","text":"logclosure_amplitude(model, p1, p2, p3, p4)\n\nComputes the log-closure amplitude of model m at the uv-quadrangle u1,v1 -> u2,v2 -> u3,v3 -> u4,v4 using the formula\n\nC = logleftfracV(u1v1)V(u2v2)V(u3v3)V(u4v4)right\n\nIf you want to compute log closure amplitudes over a number of triangles consider using the logclosure_amplitudes function.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.logclosure_amplitudes","page":"ComradeBase API","title":"ComradeBase.logclosure_amplitudes","text":"logclosure_amplitudes(m::AbstractModel, ac::ClosureConfig)\n\nComputes the log closure amplitudes of the model m using the array configuration ac.\n\nNotes\n\nThis is faster than the logclosure_amplitudes(m, u1, v1, ...) 
method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]\n\n[1]: Blackburn L., et al \"Closure Statistics in Interferometric Data\" ApJ 2020\n\n\n\n\n\nlogclosure_amplitudes(vis::AbstractArray, ac::ArrayConfiguration)\n\nCompute the log-closure amplitudes for a set of visibilities and an array configuration\n\nNotes\n\nThis uses a closure design matrix for the computation.\n\n\n\n\n\nlogclosure_amplitudes(m::AbstractModel,\n p1,\n p2,\n p3,\n p4\n )\n\nComputes the log closure amplitudes of the model m at the quadrangles p1, p2, p3, p4.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#Model-Interface","page":"ComradeBase API","title":"Model Interface","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"ComradeBase.AbstractModel\nComradeBase.isprimitive\nComradeBase.visanalytic\nComradeBase.imanalytic\nComradeBase.ispolarized\nComradeBase.radialextent\nComradeBase.PrimitiveTrait\nComradeBase.IsPrimitive\nComradeBase.NotPrimitive\nComradeBase.DensityAnalytic\nComradeBase.IsAnalytic\nComradeBase.NotAnalytic\nComradeBase.visibility_point\nComradeBase.visibilities_analytic\nComradeBase.visibilities_analytic!\nComradeBase.visibilities_numeric\nComradeBase.visibilities_numeric!\nComradeBase.intensity_point\nComradeBase.intensitymap_analytic\nComradeBase.intensitymap_analytic!\nComradeBase.intensitymap_numeric\nComradeBase.intensitymap_numeric!","category":"page"},{"location":"base_api/#ComradeBase.AbstractModel","page":"ComradeBase API","title":"ComradeBase.AbstractModel","text":"AbstractModel\n\nThe Comrade abstract model type. To instantiate your own model type you should subtybe from this model. Additionally you need to implement the following methods to satify the interface:\n\nMandatory Methods\n\nisprimitive: defines whether a model is standalone or is defined in terms of other models. is the model is primitive then this should return IsPrimitive() otherwise it returns NotPrimitive()\nvisanalytic: defines whether the model visibilities can be computed analytically. If yes then this should return IsAnalytic() and the user must to define visibility_point. If not analytic then visanalytic should return NotAnalytic().\nimanalytic: defines whether the model intensities can be computed pointwise. If yes then this should return IsAnalytic() and the user must to define intensity_point. If not analytic then imanalytic should return NotAnalytic().\nradialextent: Provides a estimate of the radial extent of the model in the image domain. This is used for estimating the size of the image, and for plotting.\nflux: Returns the total flux of the model.\nintensity_point: Defines how to compute model intensities pointwise. Note this is must be defined if imanalytic(::Type{YourModel})==IsAnalytic().\nvisibility_point: Defines how to compute model visibilties pointwise. 
Note this is must be defined if visanalytic(::Type{YourModel})==IsAnalytic().\n\nOptional Methods:\n\nispolarized: Specified whether a model is intrinsically polarized (returns IsPolarized()) or is not (returns NotPolarized()), by default a model is NotPolarized()\nvisibilities_analytic: Vectorized version of visibility_point for models where visanalytic returns IsAnalytic()\nvisibilities_numeric: Vectorized version of visibility_point for models where visanalytic returns NotAnalytic() typically these are numerical FT's\nintensitymap_analytic: Computes the entire image for models where imanalytic returns IsAnalytic()\nintensitymap_numeric: Computes the entire image for models where imanalytic returns NotAnalytic()\nintensitymap_analytic!: Inplace version of intensitymap\nintensitymap_numeric!: Inplace version of intensitymap\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.isprimitive","page":"ComradeBase API","title":"ComradeBase.isprimitive","text":"isprimitive(::Type)\n\nDispatch function that specifies whether a type is a primitive Comrade model. This function is used for dispatch purposes when composing models.\n\nNotes\n\nIf a user is specifying their own model primitive model outside of Comrade they need to specify if it is primitive\n\nstruct MyPrimitiveModel end\nComradeBase.isprimitive(::Type{MyModel}) = ComradeBase.IsPrimitive()\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visanalytic","page":"ComradeBase API","title":"ComradeBase.visanalytic","text":"visanalytic(::Type{<:AbstractModel})\n\nDetermines whether the model is pointwise analytic in Fourier domain, i.e. we can evaluate its fourier transform at an arbritrary point.\n\nIf IsAnalytic() then it will try to call visibility_point to calculate the complex visibilities. Otherwise it fallback to using the FFT that works for all models that can compute an image.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.imanalytic","page":"ComradeBase API","title":"ComradeBase.imanalytic","text":"imanalytic(::Type{<:AbstractModel})\n\nDetermines whether the model is pointwise analytic in the image domain, i.e. we can evaluate its intensity at an arbritrary point.\n\nIf IsAnalytic() then it will try to call intensity_point to calculate the intensity.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.ispolarized","page":"ComradeBase API","title":"ComradeBase.ispolarized","text":"ispolarized(::Type)\n\nTrait function that defines whether a model is polarized or not.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.radialextent","page":"ComradeBase API","title":"ComradeBase.radialextent","text":"radialextent(model::AbstractModel)\n\nProvides an estimate of the radial size/extent of the model. 
This is used internally to estimate image size when plotting and using modelimage\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.PrimitiveTrait","page":"ComradeBase API","title":"ComradeBase.PrimitiveTrait","text":"abstract type PrimitiveTrait\n\nThis trait specifies whether the model is a primitive\n\nNotes\n\nThis will likely turn into a trait in the future so people can inject their models into Comrade more easily.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.IsPrimitive","page":"ComradeBase API","title":"ComradeBase.IsPrimitive","text":"struct IsPrimitive\n\nTrait for primitive model\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.NotPrimitive","page":"ComradeBase API","title":"ComradeBase.NotPrimitive","text":"struct NotPrimitive\n\nTrait for not-primitive model\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.DensityAnalytic","page":"ComradeBase API","title":"ComradeBase.DensityAnalytic","text":"DensityAnalytic\n\nInternal type for specifying the nature of the model functions. Whether they can be easily evaluated pointwise analytic. This is an internal type that may change.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.IsAnalytic","page":"ComradeBase API","title":"ComradeBase.IsAnalytic","text":"struct IsAnalytic <: ComradeBase.DensityAnalytic\n\nDefines a trait that a states that a model is analytic. This is usually used with an abstract model where we use it to specify whether a model has a analytic fourier transform and/or image.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.NotAnalytic","page":"ComradeBase API","title":"ComradeBase.NotAnalytic","text":"struct NotAnalytic <: ComradeBase.DensityAnalytic\n\nDefines a trait that a states that a model is analytic. This is usually used with an abstract model where we use it to specify whether a model has does not have a easy analytic fourier transform and/or intensity function.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.visibility_point","page":"ComradeBase API","title":"ComradeBase.visibility_point","text":"visibility_point(model::AbstractModel, p)\n\nFunction that computes the pointwise visibility. This must be implemented in the model interface if visanalytic(::Type{MyModel}) == IsAnalytic()\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities_analytic","page":"ComradeBase API","title":"ComradeBase.visibilities_analytic","text":"visibilties_analytic(model, u, v, time, freq)\n\nComputes the visibilties of a model using using the analytic visibility expression given by visibility_point.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities_analytic!","page":"ComradeBase API","title":"ComradeBase.visibilities_analytic!","text":"visibilties_analytic!(vis, model, u, v, time, freq)\n\nComputes the visibilties of a model in-place, using using the analytic visibility expression given by visibility_point.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities_numeric","page":"ComradeBase API","title":"ComradeBase.visibilities_numeric","text":"visibilties_numeric(model, u, v, time, freq)\n\nComputes the visibilties of a model using a numerical fourier transform. Note that none of these are implemented in ComradeBase. 
For implementations please see Comrade.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities_numeric!","page":"ComradeBase API","title":"ComradeBase.visibilities_numeric!","text":"visibilties_numeric!(vis, model, u, v, time, freq)\n\nComputes the visibilties of a model in-place using a numerical fourier transform. Note that none of these are implemented in ComradeBase. For implementations please see Comrade.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensity_point","page":"ComradeBase API","title":"ComradeBase.intensity_point","text":"intensity_point(model::AbstractModel, p)\n\nFunction that computes the pointwise intensity if the model has the trait in the image domain IsAnalytic(). Otherwise it will use construct the image in visibility space and invert it.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap_analytic","page":"ComradeBase API","title":"ComradeBase.intensitymap_analytic","text":"intensitymap_analytic(m::AbstractModel, p::AbstractDims)\n\nComputes the IntensityMap of a model m using the image dimensions p by broadcasting over the analytic intensity_point method.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap_analytic!","page":"ComradeBase API","title":"ComradeBase.intensitymap_analytic!","text":"intensitymap_analytic!(img::IntensityMap, m::AbstractModel)\nintensitymap_analytic!(img::StokesIntensityMap, m::AbstractModel)\n\nUpdates the img using the model m by broadcasting over the analytic intensity_point method.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap_numeric","page":"ComradeBase API","title":"ComradeBase.intensitymap_numeric","text":"intensitymap_numeric(m::AbstractModel, p::AbstractDims)\n\nComputes the IntensityMap of a model m at the image positions p using a numerical method. This has to be specified uniquely for every model m if imanalytic(typeof(m)) === NotAnalytic(). See Comrade.jl for example implementations.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap_numeric!","page":"ComradeBase API","title":"ComradeBase.intensitymap_numeric!","text":"intensitymap_numeric!(img::IntensityMap, m::AbstractModel)\nintensitymap_numeric!(img::StokesIntensityMap, m::AbstractModel)\n\nUpdates the img using the model m using a numerical method. This has to be specified uniquely for every model m if imanalytic(typeof(m)) === NotAnalytic(). 
See Comrade.jl for example implementations.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#Image-Types","page":"ComradeBase API","title":"Image Types","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"ComradeBase.IntensityMap(::AbstractArray, ::AbstractDims)\nComradeBase.StokesIntensityMap\nComradeBase.imagepixels\nComradeBase.GriddedKeys\nComradeBase.dims\nComradeBase.named_dims\nComradeBase.axisdims\nComradeBase.stokes\nComradeBase.imagegrid\nComradeBase.fieldofview\nComradeBase.pixelsizes\nComradeBase.phasecenter\nComradeBase.centroid\nComradeBase.second_moment\nComradeBase.header\nComradeBase.NoHeader\nComradeBase.MinimalHeader\nComradeBase.load\nComradeBase.save","category":"page"},{"location":"base_api/#ComradeBase.IntensityMap-Tuple{AbstractArray, ComradeBase.AbstractDims}","page":"ComradeBase API","title":"ComradeBase.IntensityMap","text":"IntensityMap(data::AbstractArray, dims::NamedTuple)\nIntensityMap(data::AbstractArray, grid::AbstractDims)\n\nConstructs an intensitymap using the image dimensions given by dims. This returns a KeyedArray with keys given by an ImageDimensions object.\n\ndims = (X=range(-10.0, 10.0, length=100), Y = range(-10.0, 10.0, length=100),\n T = [0.1, 0.2, 0.5, 0.9, 1.0], F = [230e9, 345e9]\n )\nimgk = IntensityMap(rand(100,100,5,1), dims)\n\n\n\n\n\n","category":"method"},{"location":"base_api/#ComradeBase.StokesIntensityMap","page":"ComradeBase API","title":"ComradeBase.StokesIntensityMap","text":"struct StokesIntensityMap{T, N, SI, SQ, SU, SV}\n\nGeneral struct that holds intensity maps for each stokes parameter. Each image I, Q, U, V must share the same axis dimensions. This type also obeys much of the usual array interface in Julia. The following methods have been implemented:\n\nsize\neltype (returns StokesParams)\nndims\ngetindex\nsetindex!\npixelsizes\nfieldofview\nimagepixels\nimagegrid\nstokes\n\nwarning: Warning\nThis may eventually be phased out for IntensityMaps whose base types are StokesParams, but currently we use this for speed reasons with Zygote.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.imagepixels","page":"ComradeBase API","title":"ComradeBase.imagepixels","text":"imagepixels(img::IntensityMap)\nimagepixels(img::IntensityMapTypes)\n\nReturns a abstract spatial dimension with the image pixels locations X and Y.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.GriddedKeys","page":"ComradeBase API","title":"ComradeBase.GriddedKeys","text":"struct GriddedKeys{N, G, Hd<:ComradeBase.AbstractHeader, T} <: ComradeBase.AbstractDims{N, T}\n\nThis struct holds the dimensions that the EHT expect. The first type parameter N defines the names of each dimension. These names are usually one of - (:X, :Y, :T, :F) - (:X, :Y, :F, :T) - (:X, :Y) # spatial only where :X,:Y are the RA and DEC spatial dimensions respectively, :T is the the time direction and :F is the frequency direction.\n\nFieldnames\n\ndims\nheader\n\nNotes\n\nWarning it is rare you need to access this constructor directly. Instead use the direct IntensityMap function.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.dims","page":"ComradeBase API","title":"ComradeBase.dims","text":"dims(g::AbstractDims)\n\nReturns a tuple containing the dimensions of g. 
For a named version see ComradeBase.named_dims\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.named_dims","page":"ComradeBase API","title":"ComradeBase.named_dims","text":"named_dims(g::AbstractDims)\n\nReturns a named tuple containing the dimensions of g. For a unnamed version see dims\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.axisdims","page":"ComradeBase API","title":"ComradeBase.axisdims","text":"axisdims(img::IntensityMap)\n\nReturns the keys of the IntensityMap as the actual internal AbstractDims object.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.stokes","page":"ComradeBase API","title":"ComradeBase.stokes","text":"stokes(m::AbstractPolarizedModel, p::Symbol)\n\nExtract the specific stokes component p from the polarized model m\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.imagegrid","page":"ComradeBase API","title":"ComradeBase.imagegrid","text":"imagegrid(k::IntensityMap)\n\nReturns the grid the IntensityMap is defined as. Note that this is unallocating since it lazily computes the grid. The grid is an example of a KeyedArray and works similarly. This is useful for broadcasting a model across an abritrary grid.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.fieldofview","page":"ComradeBase API","title":"ComradeBase.fieldofview","text":"fieldofview(img::IntensityMap)\nfieldofview(img::IntensityMapTypes)\n\nReturns a named tuple with the field of view of the image.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.pixelsizes","page":"ComradeBase API","title":"ComradeBase.pixelsizes","text":"pixelsizes(img::IntensityMap)\npixelsizes(img::IntensityMapTypes)\n\nReturns a named tuple with the spatial pixel sizes of the image.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.phasecenter","page":"ComradeBase API","title":"ComradeBase.phasecenter","text":"phasecenter(img::IntensityMap)\nphasecenter(img::StokesIntensitymap)\n\nComputes the phase center of an intensity map. Note this is the pixels that is in the middle of the image.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.centroid","page":"ComradeBase API","title":"ComradeBase.centroid","text":"centroid(im::AbstractIntensityMap)\n\nComputes the image centroid aka the center of light of the image.\n\nFor polarized maps we return the centroid for Stokes I only.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.second_moment","page":"ComradeBase API","title":"ComradeBase.second_moment","text":"second_moment(im::AbstractIntensityMap; center=true)\n\nComputes the image second moment tensor of the image. By default we really return the second cumulant or centered second moment, which is specified by the center argument.\n\nFor polarized maps we return the second moment for Stokes I only.\n\n\n\n\n\nsecond_moment(im::AbstractIntensityMap; center=true)\n\nComputes the image second moment tensor of the image. 
By default we really return the second cumulant or centered second moment, which is specified by the center argument.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.header","page":"ComradeBase API","title":"ComradeBase.header","text":"header(g::AbstractDims)\n\nReturns the headerinformation of the dimensions g\n\n\n\n\n\nheader(img::IntensityMap)\n\nRetrieves the header of an IntensityMap\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.NoHeader","page":"ComradeBase API","title":"ComradeBase.NoHeader","text":"NoHeader\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.MinimalHeader","page":"ComradeBase API","title":"ComradeBase.MinimalHeader","text":"MinimalHeader{T}\n\nA minimal header type for ancillary image information.\n\nFields\n\nsource: Common source name\n\nra: Right ascension of the image in degrees (J2000)\n\ndec: Declination of the image in degrees (J2000)\n\nmjd: Modified Julian Date in days\n\nfrequency: Frequency of the image in Hz\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.load","page":"ComradeBase API","title":"ComradeBase.load","text":"ComradeBase.load(fitsfile::String, IntensityMap)\n\nThis loads in a fits file that is more robust to the various imaging algorithms in the EHT, i.e. is works with clean, smili, eht-imaging. The function returns an tuple with an intensitymap and a second named tuple with ancillary information about the image, like the source name, location, mjd, and radio frequency.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.save","page":"ComradeBase API","title":"ComradeBase.save","text":"ComradeBase.save(file::String, img::IntensityMap, obs)\n\nSaves an image to a fits file. You can optionally pass an EHTObservation so that ancillary information will be added.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#Polarization","page":"ComradeBase API","title":"Polarization","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"ComradeBase.AbstractPolarizedModel","category":"page"},{"location":"base_api/#ComradeBase.AbstractPolarizedModel","page":"ComradeBase API","title":"ComradeBase.AbstractPolarizedModel","text":"abstract type AbstractPolarizedModel <: ComradeBase.AbstractModel\n\nType the classifies a model as being intrinsically polarized. This means that any call to visibility must return a StokesParams to denote the full stokes polarization of the model.\n\n\n\n\n\n","category":"type"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"EditURL = \"../../../examples/imaging_closures.jl\"","category":"page"},{"location":"examples/imaging_closures/#Imaging-a-Black-Hole-using-only-Closure-Quantities","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"In this tutorial, we will create a preliminary reconstruction of the 2017 M87 data on April 6 using closure-only imaging. This tutorial is a general introduction to closure-only imaging in Comrade. 
For an introduction to simultaneous image and instrument modeling, see Stokes I Simultaneous Image and Instrument Modeling","category":"page"},{"location":"examples/imaging_closures/#Introduction-to-Closure-Imaging","page":"Imaging a Black Hole using only Closure Quantities","title":"Introduction to Closure Imaging","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"The EHT is the highest-resolution telescope ever created. Its resolution is equivalent to roughly tracking a hockey puck on the moon when viewing it from the earth. However, the EHT is also a unique interferometer. For one, the data it produces is incredibly sparse. The array is formed from only eight geographic locations around the planet, each with its unique telescope. Additionally, the EHT observes at a much higher frequency than typical interferometers. As a result, it is often difficult to directly provide calibrated data since the source model can be complicated. This implies there can be large instrumental effects often called gains that can corrupt our signal. One way to deal with this is to fit quantities that are independent of gains. These are often called closure quantities. The types of closure quantities are briefly described in Introduction to the VLBI Imaging Problem.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"In this tutorial, we will do closure-only modeling of M87 to produce preliminary images of M87.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To get started, we will load Comrade","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using Comrade\n\n\nusing Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\n\nusing Pyehtim","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"For reproducibility we use a stable random number genreator","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using StableRNGs\nrng = StableRNG(123)","category":"page"},{"location":"examples/imaging_closures/#Load-the-Data","page":"Imaging a Black Hole using only Closure Quantities","title":"Load the Data","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To download the data visit https://doi.org/10.25739/g85n-f134 To load the eht-imaging obsdata object we do:","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"obs = ehtim.obsdata.load_uvfits(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", 
\"SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now we do some minor preprocessing:","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Scan average the data since the data have been preprocessed so that the gain phases are coherent.\nAdd 1% systematic noise to deal with calibration issues that cause 1% non-closing errors.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"obs = scan_average(obs).add_fractional_noise(0.015)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now, we extract our closure quantities from the EHT data set.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3), ClosurePhases(;snrcut=3))","category":"page"},{"location":"examples/imaging_closures/#Build-the-Model/Posterior","page":"Imaging a Black Hole using only Closure Quantities","title":"Build the Model/Posterior","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"For our model, we will be using an image model that consists of a raster of point sources, convolved with some pulse or kernel to make a ContinuousImage object with it Comrade's. generic image model.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"function sky(θ, metadata)\n (;fg, c, σimg) = θ\n (;K, meanpr, grid, cache) = metadata\n # Construct the image model we fix the flux to 0.6 Jy in this case\n cp = meanpr .+ σimg.*c.params\n rast = ((1-fg))*K(to_simplex(CenteredLR(), cp))\n img = IntensityMap(rast, grid)\n m = ContinuousImage(img, cache)\n # Add a large-scale gaussian to deal with the over-resolved mas flux\n g = modify(Gaussian(), Stretch(μas2rad(250.0), μas2rad(250.0)), Renormalize(fg))\n return m + g\nend","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now, let's set up our image model. The EHT's nominal resolution is 20-25 μas. Additionally, the EHT is not very sensitive to a larger field of views; typically, 60-80 μas is enough to describe the compact flux of M87. 
Given this, we only need to use a small number of pixels to describe our image.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"npix = 32\nfovxy = μas2rad(150.0)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now, we can feed in the array information to form the cache","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"grid = imagepixels(fovxy, fovxy, npix, npix)\nbuffer = IntensityMap(zeros(npix,npix), grid)\ncache = create_cache(NFFTAlg(dlcamp), buffer, BSplinePulse{3}())","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now we need to specify our image prior. For this work we will use a Gaussian Markov Random field prior","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using VLBIImagePriors, Distributions, DistributionsAD","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Since we are using a Gaussian Markov random field prior we need to first specify our mean image. For this work we will use a symmetric Gaussian with a FWHM of 50 μas","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"fwhmfac = 2*sqrt(2*log(2))\nmpr = modify(Gaussian(), Stretch(μas2rad(50.0)./fwhmfac))\nimgpr = intensitymap(mpr, grid)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now since we are actually modeling our image on the simplex we need to ensure that our mean image has unit flux","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"imgpr ./= flux(imgpr)\n\nmeanpr = to_real(CenteredLR(), Comrade.baseimage(imgpr))\nmetadata = (;meanpr,K=CenterImage(imgpr), grid, cache)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"In addition we want a reasonable guess for what the resolution of our image should be. For radio astronomy this is given by roughly the longest baseline in the image. 
To put this into pixel space we then divide by the pixel size.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"beam = beamsize(dlcamp)\nrat = (beam/(step(grid.X)))","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To make the Gaussian Markov random field efficient we first precompute a number of quantities that allow us to scale things linearly with the number of image pixels. This drastically improves upon the usual N^3 scaling you get from generic Gaussian processes.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"crcache = MarkovRandomFieldCache(meanpr)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"One of the benefits of the Bayesian approach is that we can fit for the hyperparameters of our prior/regularizers, unlike traditional RML approaches. To construct this hierarchical prior we will first make a map that takes in our regularizer hyperparameters and returns the image prior given those hyperparameters.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"fmap = let crcache=crcache\n x->GaussMarkovRandomField(x, 1.0, crcache)\nend","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now we can finally form our image prior. For this we use a hierarchical prior where the correlation length is given by an inverse gamma prior to prevent overfitting. Gaussian Markov random fields are extremely flexible models, so it is common to use priors that penalize complexity. Therefore, we want priors that enforce similarity to our mean image and prefer smoothness.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"cprior = HierarchicalPrior(fmap, InverseGamma(1.0, -log(0.01*rat)))\n\nprior = NamedDist(c = cprior, σimg = truncated(Normal(0.0, 1.0); lower=0.01), fg=Uniform(0.0, 1.0))\n\nlklhd = RadioLikelihood(sky, dlcamp, dcphase;\n skymeta = metadata)\npost = Posterior(lklhd, prior)","category":"page"},{"location":"examples/imaging_closures/#Reconstructing-the-Image","page":"Imaging a Black Hole using only Closure Quantities","title":"Reconstructing the Image","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To sample from this posterior, it is convenient to first move from our constrained parameter space to an unconstrained one (i.e., the support of the transformed posterior is (-∞, ∞)). 
This is done using the asflat function.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"tpost = asflat(post)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"We can now also find the dimension of our posterior or the number of parameters we will sample.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"warning: Warning\nThis can often be different from what you would expect. This is especially true when using angular variables, where we often artificially increase the dimension of the parameter space to make sampling easier.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"ndim = dimension(tpost)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now we optimize using LBFGS","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using ComradeOptimization\nusing OptimizationOptimJL\nusing Zygote\nf = OptimizationFunction(tpost, Optimization.AutoZygote())\nprob = Optimization.OptimizationProblem(f, prior_sample(rng, tpost), nothing)\nsol = solve(prob, LBFGS(); maxiters=5_00);\nnothing #hide","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Before we analyze our solution we first need to transform back to parameter space.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"xopt = transform(tpost, sol)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"First we will evaluate our fit by plotting the residuals","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using Plots\nresidual(skymodel(post, xopt), dlcamp, ylabel=\"Log Closure Amplitude Res.\")","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"and now closure phases","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"residual(skymodel(post, xopt), dcphase, ylabel=\"|Closure Phase Res.|\")","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now let's plot the MAP 
estimate.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"import CairoMakie as CM\nimg = intensitymap(skymodel(post, xopt), μas2rad(150.0), μas2rad(150.0), 100, 100)\nCM.image(img, axis=(xreversed=true, aspect=1, title=\"MAP Image\"), colormap=:afmhot)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To sample from the posterior we will use HMC and more specifically the NUTS algorithm. For information about NUTS see Michael Betancourt's notes.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"note: Note\nFor our metric we use a diagonal matrix due to easier tuning.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using ComradeAHMC\nusing Zygote\nmetric = DiagEuclideanMetric(ndim)\nchain, stats = sample(post, AHMC(;metric, autodiff=Val(:Zygote)), 700; nadapts=500, init_params=xopt)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"warning: Warning\nThis should be run for longer!","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now that we have our posterior, we can assess which parts of the image are strongly inferred by the data. 
This ability is rather unique to Comrade; more traditional imaging algorithms, like CLEAN and RML, are inherently unable to assess uncertainty in their reconstructions.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To explore our posterior, let's first create images from a number of draws from the posterior","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"msamples = skymodel.(Ref(post), chain[501:2:end]);\nnothing #hide","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"The mean image is then given by","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using StatsBase\nimgs = intensitymap.(msamples, μas2rad(150.0), μas2rad(150.0), 128, 128)\nmimg = mean(imgs)\nsimg = std(imgs)\nfig = CM.Figure(;resolution=(800, 800))\nCM.image(fig[1,1], mimg,\n axis=(xreversed=true, aspect=1, title=\"Mean Image\"),\n colormap=:afmhot)\nCM.image(fig[1,2], simg./(max.(mimg, 1e-5)),\n axis=(xreversed=true, aspect=1, title=\"1/SNR\",), colorrange=(0.0, 2.0),\n colormap=:afmhot)\nCM.image(fig[2,1], imgs[1],\n axis=(xreversed=true, aspect=1,title=\"Draw 1\"),\n colormap=:afmhot)\nCM.image(fig[2,2], imgs[end],\n axis=(xreversed=true, aspect=1,title=\"Draw 2\"),\n colormap=:afmhot)\nfig","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now let's see whether our residuals look better.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"p = plot();\nfor s in sample(chain[501:end], 10)\n residual!(p, vlbimodel(post, s), dlcamp)\nend\nylabel!(\"Log-Closure Amplitude Res.\");\np","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"p = plot();\nfor s in sample(chain[501:end], 10)\n residual!(p, vlbimodel(post, s), dcphase)\nend\nylabel!(\"|Closure Phase Res.|\");\np","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"And voilà, you have a quick and preliminary image of M87 fitting only closure products. For a publication-level version we would recommend","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Running the chain longer and multiple times to properly assess things like ESS and R̂ (see Geometric Modeling of EHT Data)\nFitting gains. Typically gain amplitudes are only good to 10-20% for the EHT, not the infinite uncertainty that closures implicitly assume\nMaking sure the posterior is unimodal (hint: for this example it isn't!). 
The EHT image posteriors can be pretty complicated, so typically you want to use a sampler that can deal with multi-modal posteriors. Check out the package Pigeons.jl for an in-development package that should easily enable this type of sampling.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"This page was generated using Literate.jl.","category":"page"},{"location":"libs/adaptmcmc/#ComradeAdaptMCMC","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"","category":"section"},{"location":"libs/adaptmcmc/","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"Interface to the `AdaptiveMCMC.jl MCMC package. This uses parallel tempering to sample from the posterior. We typically recommend using one of the nested sampling packages. This interface follows Comrade's usual sampling interface for uniformity.","category":"page"},{"location":"libs/adaptmcmc/#Example","page":"ComradeAdaptMCMC","title":"Example","text":"","category":"section"},{"location":"libs/adaptmcmc/","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"using Comrade\nusing ComradeAdaptMCMC\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\n\nsmplr = AdaptMCMC(ntemp=5) # use 5 tempering levels\n\nsamples, endstate = sample(post, smplr, 500_000, 300_000)","category":"page"},{"location":"libs/adaptmcmc/#API","page":"ComradeAdaptMCMC","title":"API","text":"","category":"section"},{"location":"libs/adaptmcmc/","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"CurrentModule = ComradeAdaptMCMC","category":"page"},{"location":"libs/adaptmcmc/","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"Modules = [ComradeAdaptMCMC]","category":"page"},{"location":"libs/adaptmcmc/#ComradeAdaptMCMC.AdaptMCMC","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC.AdaptMCMC","text":"AdaptMCMC(;ntemp,\n swap=:nonrev,\n algorithm = :ram,\n fulladapt = true,\n acc_sw = 0.234,\n all_levels = false\n )\n\nCreate an AdaptMCMC.jl sampler. This sampler uses the AdaptiveMCMC.jl package to sample from the posterior. Namely, this is a parallel tempering algorithm with an adaptive exploration and tempering sampler. For more information please see [https://github.com/mvihola/AdaptiveMCMC.jl].\n\nThe arguments of the function are:\n\nntemp: Number of temperature to run in parallel tempering\nswap: Which temperature swapping strategy to use, options are:\n:norev (default) uses a non-reversible tempering scheme (still ergodic)\n:single single randomly picked swap\n:randperm swap in random order\n:sweep upward or downward sweeps picked at random\nalgorithm: exploration MCMC algorithm (default is :ram which uses robust adaptive metropolis-hastings) options are:\n:ram (default) Robust adaptive metropolis\n:am Adaptive metropolis\n:asm Adaptive scaling metropolis\n:aswam Adaptive scaling within adaptive metropolis\nfulladapt: whether we adapt both the tempering ladder and the exploration kernel (default is true, i.e. 
adapt everything)\nacc_sw: The target acceptance rate for temperature swaps\nall_levels: Store all tempering levels to memory (warning: this can use a lot of memory)\n\n\n\n\n\n","category":"type"},{"location":"libs/adaptmcmc/#StatsBase.sample","page":"ComradeAdaptMCMC","title":"StatsBase.sample","text":"sample(post::Posterior, sampler::AdaptMCMC, nsamples, burnin=nsamples÷2, args...; init_params=nothing, kwargs...)\n\nSample the posterior post using the AdaptMCMC sampler. This will produce nsamples with the first burnin steps removed. The init_params indicate where to start the sampler from and it is expected to be a NamedTuple of parameters.\n\nPossible additional kwargs are:\n\nthin::Int = 1: which says to save only every thin sample to memory\nrng: Specify a random number generator (default uses GLOBAL_RNG)\n\nThis returns a tuple where:\n\nThe first element contains the chains from the sampler. If all_levels=false only the unit temperature (posterior) chain is returned\nThe second element is additional ancillary information about the samples including the loglikelihood logl, sampler state state, average exploration kernel acceptance rate accexp for each tempering level, and average temperature swap acceptance rates accswp for each tempering level.\n\n\n\n\n\n","category":"function"},{"location":"benchmarks/#Benchmarks","page":"Benchmarks","title":"Benchmarks","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Comrade was partially designed with performance in mind. Solving imaging inverse problems is traditionally very computationally expensive, especially since Comrade uses Bayesian inference. To benchmark Comrade we will compare it to two of the most common modeling or imaging packages within the EHT:","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"eht-imaging\nThemis","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"eht-imaging[1] or ehtim is a Python package that is widely used within the EHT for its imaging and modeling interfaces, and it is easy to use. However, to specify a model, the user must provide how to calculate the model's complex visibilities and its gradients, which is how eht-imaging's modeling package achieves acceptable speeds.","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Themis is a C++ package focused on providing Bayesian estimates of the image structure. In fact, Comrade took some design cues from Themis. Themis has been used in various EHT publications and is the standard Bayesian modeling tool used in the EHT. However, Themis is quite challenging to use and requires a high level of knowledge from its users, requiring them to understand makefiles, C++, and the MPI standard. Additionally, Themis provides no infrastructure to compute gradients, instead relying on finite differencing, which scales poorly for large numbers of model parameters. ","category":"page"},{"location":"benchmarks/#Benchmarking-Problem","page":"Benchmarks","title":"Benchmarking Problem","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"For our benchmarking problem, we analyze a situation very similar to the one explained in Geometric Modeling of EHT Data. Namely, we will consider fitting 2017 M87 April 6 data using an m-ring and a single Gaussian component. 
Please see the end of this page to see the code we used for Comrade and eht-imaging.","category":"page"},{"location":"benchmarks/#Results","page":"Benchmarks","title":"Results","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"All tests were run using the following system","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Julia Version 1.7.3\nPython Version 3.10.5\nComrade Version 0.4.0\neht-imaging Version 1.2.4\nCommit 742b9abb4d (2022-05-06 12:58 UTC)\nPlatform Info:\n OS: Linux (x86_64-pc-linux-gnu)\n CPU: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz\n WORD_SIZE: 64\n LIBM: libopenlibm\n LLVM: libLLVM-12.0.1 (ORCJIT, tigerlake)","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Our benchmark results are the following:","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":" Comrade (micro sec) eht-imaging (micro sec) Themis (micro sec)\nposterior eval (min) 31 445 55\nposterior eval (mean) 36 476 60\ngrad posterior eval (min) 105 (ForwardDiff) 1898 1809\ngrad posterior eval (mean) 119 (ForwardDiff) 1971 1866","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Therefore, for this test we found that Comrade was the fastest method in all tests. For the posterior evaluation we found that Comrade is > 10x faster than eht-imaging, and 2x faster then Themis. For gradient evaluations we have Comrade is > 15x faster than both eht-imaging and Themis.","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"[1]: Chael A, et al. Inteferometric Imaging Directly with Closure Phases 2018 ApJ 857 1 arXiv:1803/07088","category":"page"},{"location":"benchmarks/#Code","page":"Benchmarks","title":"Code","text":"","category":"section"},{"location":"benchmarks/#Julia-Code","page":"Benchmarks","title":"Julia Code","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"using Pyehtim\nusing Comrade\nusing Distributions\nusing BenchmarkTools\nusing ForwardDiff\nusing VLBIImagePriors\nusing Zygote\n\n# To download the data visit https://doi.org/10.25739/g85n-f134\nobs = ehtim.obsdata.load_uvfits(joinpath(@__DIR__, \"assets/SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))\nobs = scan_average(obs)\namp = extract_table(obs, VisibilityAmplitudes())\n\nfunction model(θ)\n (;rad, wid, a, b, f, sig, asy, pa, x, y) = θ\n ring = f*smoothed(modify(MRing((a,), (b,)), Stretch(μas2rad(rad))), μas2rad(wid))\n g = modify(Gaussian(), Stretch(μas2rad(sig)*asy, μas2rad(sig)), Rotate(pa), Shift(μas2rad(x), μas2rad(y)), Renormalize(1-f))\n return ring + g\nend\n\nlklhd = RadioLikelihood(model, amp)\nprior = NamedDist(\n rad = Uniform(10.0, 30.0),\n wid = Uniform(1.0, 10.0),\n a = Uniform(-0.5, 0.5), b = Uniform(-0.5, 0.5),\n f = Uniform(0.0, 1.0),\n sig = Uniform((1.0), (60.0)),\n asy = Uniform(0.0, 0.9),\n pa = Uniform(0.0, 1π),\n x = Uniform(-(80.0), (80.0)),\n y = Uniform(-(80.0), (80.0))\n )\n\nθ = (rad= 22.0, wid= 3.0, a = 0.0, b = 0.15, f=0.8, sig = 20.0, asy=0.2, pa=π/2, x=20.0, y=20.0)\nm = model(θ)\n\npost = Posterior(lklhd, prior)\ntpost = asflat(post)\n\n# Transform to the unconstrained space\nx0 = inverse(tpost, θ)\n\n# Lets benchmark the posterior evaluation\nℓ = logdensityof(tpost)\n@benchmark ℓ($x0)\n\nusing LogDensityProblemsAD\n# Now we benchmark the gradient\ngℓ = ADgradient(Val(:Zygote), 
tpost)\n@benchmark LogDensityProblemsAD.logdensity_and_gradient($gℓ, $x0)","category":"page"},{"location":"benchmarks/#eht-imaging-Code","page":"Benchmarks","title":"eht-imaging Code","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"# To download the data visit https://doi.org/10.25739/g85n-f134\nobs = ehtim.obsdata.load_uvfits(joinpath(@__DIR__, \"assets/SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))\nobs = scan_average(obs)\n\n\n\nmeh = ehtim.model.Model()\nmeh = meh.add_thick_mring(F0=θ.f,\n d=2*μas2rad(θ.rad),\n alpha=2*sqrt(2*log(2))*μas2rad(θ.wid),\n x0 = 0.0,\n y0 = 0.0,\n beta_list=[0.0+θ.b]\n )\nmeh = meh.add_gauss(F0=1-θ.f,\n FWHM_maj=2*sqrt(2*log(2))*μas2rad(θ.sig),\n FWHM_min=2*sqrt(2*log(2))*μas2rad(θ.sig)*θ.asy,\n PA = θ.pa,\n x0 = μas2rad(20.0),\n y0 = μas2rad(20.0)\n )\n\npreh = meh.default_prior()\npreh[1][\"F0\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>0.0, \"max\"=>1.0)\npreh[1][\"d\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>μas2rad(20.0), \"max\"=>μas2rad(60.0))\npreh[1][\"alpha\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>μas2rad(2.0), \"max\"=>μas2rad(25.0))\npreh[1][\"x0\"] = Dict(\"prior_type\"=>\"fixed\")\npreh[1][\"y0\"] = Dict(\"prior_type\"=>\"fixed\")\n\npreh[2][\"F0\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>0.0, \"max\"=>1.0)\npreh[2][\"FWHM_maj\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>μas2rad(2.0), \"max\"=>μas2rad(120.0))\npreh[2][\"FWHM_min\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>μas2rad(2.0), \"max\"=>μas2rad(120.0))\npreh[2][\"x0\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>-μas2rad(40.0), \"max\"=>μas2rad(40.0))\npreh[2][\"y0\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>-μas2rad(40.0), \"max\"=>μas2rad(40.0))\npreh[2][\"PA\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>-1π, \"max\"=>1π)\n\nusing PyCall\npy\"\"\"\nimport ehtim\nimport numpy as np\ntransform_param = ehtim.modeling.modeling_utils.transform_param\ndef make_paraminit(param_map, meh, trial_model, model_prior):\n model_init = meh.copy()\n param_init = []\n for j in range(len(param_map)):\n pm = param_map[j]\n if param_map[j][1] in trial_model.params[param_map[j][0]].keys():\n param_init.append(transform_param(model_init.params[pm[0]][pm[1]]/pm[2], model_prior[pm[0]][pm[1]],inverse=False))\n else: # In this case, the parameter is a list of complex numbers, so the real/imaginary or abs/arg components need to be assigned\n if param_map[j][1].find('cpol') != -1:\n param_type = 'beta_list_cpol'\n idx = int(param_map[j][1].split('_')[0][8:])\n elif param_map[j][1].find('pol') != -1:\n param_type = 'beta_list_pol'\n idx = int(param_map[j][1].split('_')[0][7:]) + (len(trial_model.params[param_map[j][0]][param_type])-1)//2\n elif param_map[j][1].find('beta') != -1:\n param_type = 'beta_list'\n idx = int(param_map[j][1].split('_')[0][4:]) - 1\n else:\n raise Exception('Unsure how to interpret ' + param_map[j][1])\n\n curval = model_init.params[param_map[j][0]][param_type][idx]\n if '_' not in param_map[j][1]:\n param_init.append(transform_param(np.real( model_init.params[pm[0]][param_type][idx]/pm[2]), model_prior[pm[0]][pm[1]],inverse=False))\n elif param_map[j][1][-2:] == 're':\n param_init.append(transform_param(np.real( model_init.params[pm[0]][param_type][idx]/pm[2]), model_prior[pm[0]][pm[1]],inverse=False))\n elif param_map[j][1][-2:] == 'im':\n param_init.append(transform_param(np.imag( model_init.params[pm[0]][param_type][idx]/pm[2]), model_prior[pm[0]][pm[1]],inverse=False))\n elif param_map[j][1][-3:] == 'abs':\n 
param_init.append(transform_param(np.abs( model_init.params[pm[0]][param_type][idx]/pm[2]), model_prior[pm[0]][pm[1]],inverse=False))\n elif param_map[j][1][-3:] == 'arg':\n param_init.append(transform_param(np.angle(model_init.params[pm[0]][param_type][idx])/pm[2], model_prior[pm[0]][pm[1]],inverse=False))\n else:\n if not quiet: print('Parameter ' + param_map[j][1] + ' not understood!')\n n_params = len(param_init)\n return n_params, param_init\n\"\"\"\n\n# make the python param map and use optimize so we flatten the parameter space.\npmap, pmask = ehtim.modeling.modeling_utils.make_param_map(meh, preh, \"scipy.optimize.dual_annealing\", fit_model=true)\ntrial_model = meh.copy()\n\n# get initial parameters\nn_params, pinit = py\"make_paraminit\"(pmap, meh, trial_model, preh)\n\n# make data products for the globdict\ndata1, sigma1, uv1, _ = ehtim.modeling.modeling_utils.chisqdata(obs, \"amp\")\ndata2, sigma2, uv2, _ = ehtim.modeling.modeling_utils.chisqdata(obs, false)\ndata3, sigma3, uv3, _ = ehtim.modeling.modeling_utils.chisqdata(obs, false)\n\n# now set the ehtim modeling globdict\n\nehtim.modeling.modeling_utils.globdict = Dict(\"trial_model\"=>trial_model,\n \"d1\"=>\"amp\", \"d2\"=>false, \"d3\"=>false,\n \"pol1\"=>\"I\", \"pol2\"=>\"I\", \"pol3\"=>\"I\",\n \"data1\"=>data1, \"sigma1\"=>sigma1, \"uv1\"=>uv1, \"jonesdict1\"=>nothing,\n \"data2\"=>data2, \"sigma2\"=>sigma2, \"uv2\"=>uv2, \"jonesdict2\"=>nothing,\n \"data3\"=>data3, \"sigma3\"=>sigma3, \"uv3\"=>uv3, \"jonesdict3\"=>nothing,\n \"alpha_d1\"=>0, \"alpha_d2\"=>0, \"alpha_d3\"=>0,\n \"n_params\"=> n_params, \"n_gains\"=>0, \"n_leakage\"=>0,\n \"model_prior\"=>preh, \"param_map\"=>pmap, \"param_mask\"=>pmask,\n \"gain_prior\"=>nothing, \"gain_list\"=>[], \"gain_init\"=>nothing,\n \"fit_leakage\"=>false, \"leakage_init\"=>[], \"leakage_fit\"=>[],\n \"station_leakages\"=>nothing, \"leakage_prior\"=>nothing,\n \"show_updates\"=>false, \"update_interval\"=>1,\n \"gains_t1\"=>nothing, \"gains_t2\"=>nothing,\n \"minimizer_func\"=>\"scipy.optimize.dual_annealing\",\n \"Obsdata\"=>obs,\n \"fit_pol\"=>false, \"fit_cpol\"=>false,\n \"flux\"=>1.0, \"alpha_flux\"=>0, \"fit_gains\"=>false,\n \"marginalize_gains\"=>false, \"ln_norm\"=>1314.33,\n \"param_init\"=>pinit, \"test_gradient\"=>false\n )\n\n# This is the negative log-posterior\nfobj = ehtim.modeling.modeling_utils.objfunc\n\n# This is the gradient of the negative log-posterior\ngfobj = ehtim.modeling.modeling_utils.objgrad\n\n# Lets benchmark the posterior evaluation\n@benchmark fobj($pinit)\n\n# Now we benchmark the gradient\n@benchmark gfobj($pinit)","category":"page"},{"location":"libs/ahmc/#ComradeAHMC","page":"ComradeAHMC","title":"ComradeAHMC","text":"","category":"section"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"The first choice when sampling from the model/image posterior, is AdvancedHMC ), which uses Hamiltonian Monte Carlo to sample from the posterior. Specifically, we usually use the NUTS algorithm. ","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"The interface to AdvancedHMC is very powerful and general. To simplify the procedure for Comrade users, we have provided a thin interface. A user needs to specify a sampler and then call the sample function.","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"For AdvancedHMC, the user can create the sampler by calling the AHMC function. 
This only has one mandatory argument, the metric the sampler uses. There are currently two options:","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"- `DiagEuclideanMetric` which uses a diagonal metric for covariance adaptation\n- `DenseEuclideanMetric` which uses a dense or full rank metric for covariance adaptation","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"We recommend that a user starts with DiagEuclideanMetric since the dense metric typically requires many more samples to tune correctly. The other options for AHMC (sans autodiff) specify which version of HMC to use. Our default options match the choices made by the Stan programming language. The final option to consider is the autodiff optional argument. This specifies which automatic differentiation package to use. Currently Val(:Zygote) is the recommended default for all models. If your model doesn't work with Zygote please file an issue. Eventually we will move entirely to Enzyme.","category":"page"},{"location":"libs/ahmc/#Example","page":"ComradeAHMC","title":"Example","text":"","category":"section"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"using Comrade\nusing ComradeAHMC\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\nmetric = DiagEuclideanMetric(dimension(post))\nsmplr = AHMC(metric=metric, autodiff=Val(:Zygote))\n\nsamples, stats = sample(post, smplr, 2_000; nadapts=1_000)","category":"page"},{"location":"libs/ahmc/#API","page":"ComradeAHMC","title":"API","text":"","category":"section"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"CurrentModule = ComradeAHMC","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"Modules = [ComradeAHMC]","category":"page"},{"location":"libs/ahmc/#ComradeAHMC.AHMC","page":"ComradeAHMC","title":"ComradeAHMC.AHMC","text":"AHMC\n\nCreates a sampler that uses the AdvancedHMC framework to construct a Hamiltonian Monte Carlo NUTS sampler.\n\nThe user must specify the metric they want to use. Typically we recommend DiagEuclideanMetric as a reasonable starting place. The other options are chosen to match the Stan language's defaults and should provide a good starting point. Please see the AdvancedHMC docs for more information.\n\nNotes\n\nFor autodiff the user must provide a Val(::Symbol) that specifies the AD backend. Currently, we use LogDensityProblemsAD.\n\nFields\n\nmetric: AdvancedHMC metric to use\n\nintegrator: AdvancedHMC integrator Defaults to AdvancedHMC.Leapfrog\n\ntrajectory: HMC trajectory sampler Defaults to AdvancedHMC.MultinomialTS\n\ntermination: HMC termination condition Defaults to AdvancedHMC.StrictGeneralisedNoUTurn\n\nadaptor: Adaptation strategy for mass matrix and stepsize Defaults to AdvancedHMC.StanHMCAdaptor\n\ntargetacc: Target acceptance rate for all trajectories on the tree Defaults to 0.85\n\ninit_buffer: The number of steps for the initial tuning phase. Defaults to 75 which is the Stan default\n\nterm_buffer: The number of steps for the final fast step size adaptation. Defaults to 50 which is the Stan default\n\nwindow_size: The number of steps to tune the covariance before the first doubling. Defaults to 25 which is the Stan default\n\nautodiff: autodiff backend; see LogDensityProblemsAD.jl for possible backends. 
The default is Zygote which is appropriate for high dimensional problems.\n\n\n\n\n\n","category":"type"},{"location":"libs/ahmc/#ComradeAHMC.DiskStore","page":"ComradeAHMC","title":"ComradeAHMC.DiskStore","text":"Disk\n\nType that specifies to save the HMC results to disk.\n\nFields\n\nname: Path of the directory where the results will be saved. If the path does not exist it will be automatically created.\n\nstride: The output stride, i.e. every stride steps the MCMC output will be dumped to disk.\n\n\n\n\n\n","category":"type"},{"location":"libs/ahmc/#ComradeAHMC.MemoryStore","page":"ComradeAHMC","title":"ComradeAHMC.MemoryStore","text":"Memory\n\nStores the HMC samples in memory or RAM.\n\n\n\n\n\n","category":"type"},{"location":"libs/ahmc/#ComradeAHMC.load_table","page":"ComradeAHMC","title":"ComradeAHMC.load_table","text":"load_table(out::DiskOutput, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table=\"samples\")\nload_table(out::String, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table=\"samples\")\n\nLoad the results from an HMC run saved to disk. To read in the output the user can either pass the resulting out object, or the path to the directory where the results were saved, i.e. the path specified in DiskStore.\n\nArguments\n\nout::Union{String, DiskOutput}: If out is a string it must point to the directory that the DiskStore pointed to. Otherwise it is what is directly returned from sample.\nindices: The indices of the samples that you want to load into memory. The default is to load the entire table.\n\nKeyword Arguments\n\ntable: A string specifying the table you wish to read in. There are two options: \"samples\" which corresponds to the actual MCMC chain, and \"stats\" which corresponds to additional information about the sampler, e.g., the log density of each sample and tree statistics.\n\n\n\n\n\n","category":"function"},{"location":"libs/ahmc/#StatsBase.sample-Tuple{Random.AbstractRNG, Comrade.TransformedPosterior, AHMC, Any, Vararg{Any}}","page":"ComradeAHMC","title":"StatsBase.sample","text":"AbstractMCMC.sample(post::Comrade.Posterior,\n sampler::AHMC,\n nsamples;\n init_params=nothing,\n saveto::Union{Memory, Disk}=Memory(),\n kwargs...)\n\nSamples the posterior post using the AdvancedHMC sampler specified by AHMC. This will run the sampler for nsamples.\n\nTo initialize the chain the user can set init_params to a NamedTuple giving the starting location of the sampler. If no starting location is specified a random sample from the prior will be chosen.\n\nWith saveto the user can optionally specify whether to store the samples in memory MemoryStore or save directly to disk with DiskStore(filename, stride). 
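For instance, a minimal hedged sketch of a disk-backed run and read-back (the directory name \"hmc_run\", the sample counts, and the pre-existing post::Comrade.Posterior are illustrative only):\n\nusing Comrade, ComradeAHMC\nmetric = DiagEuclideanMetric(dimension(post))\nout = sample(post, AHMC(;metric, autodiff=Val(:Zygote)), 10_000; nadapts=5_000, saveto=DiskStore(\"hmc_run\", 100))\n# read back every other sample, plus the sampler statistics\nchain = load_table(out, 1:2:10_000; table=\"samples\")\nstats = load_table(\"hmc_run\"; table=\"stats\")\n\n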
The stride controls how often the samples are dumped to disk.\n\nFor possible kwargs please see the AdvancedHMC.jl docs\n\nThis returns a tuple where the first element is a TypedTable of the MCMC samples in parameter space and the second argument is a set of ancillary information about the sampler.\n\nNotes\n\nThis will automatically transform the posterior to the flattened unconstrained space.\n\n\n\n\n\n","category":"method"},{"location":"libs/ahmc/#StatsBase.sample-Union{Tuple{A}, Tuple{Random.AbstractRNG, Posterior, A, AbstractMCMC.AbstractMCMCEnsemble, Any, Any}} where A<:AHMC","page":"ComradeAHMC","title":"StatsBase.sample","text":"AbstractMCMC.sample(post::Comrade.Posterior,\n sampler::AHMC,\n parallel::AbstractMCMC.AbstractMCMCEnsemble,\n nsamples,\n nchains;\n init_params=nothing,\n kwargs...)\n\nSamples the posterior post using the AdvancedHMC sampler specified by AHMC. This will sample nchains copies of the posterior using the parallel scheme. Each chain will be sampled for nsamples.\n\nTo initialize the chains the user can set init_params to a Vector{NamedTuple} whose elements are the starting locations for each of the nchains. If no starting location is specified, nchains random samples from the prior will be chosen for the starting locations.\n\nFor possible kwargs please see the AdvancedHMC.jl docs\n\nThis returns a tuple where the first element is nchains TypedTables, each of which contains the MCMC samples of one of the parallel chains, and the second argument is a set of ancillary information about each set of samples.\n\nNotes\n\nThis will automatically transform the posterior to the flattened unconstrained space.\n\n\n\n\n\n","category":"method"},{"location":"interface/#Model-Interface","page":"Model Interface","title":"Model Interface","text":"","category":"section"},{"location":"interface/","page":"Model Interface","title":"Model Interface","text":"For the interface for sky models please see VLBISkyModels.","category":"page"},{"location":"libs/optimization/#ComradeOptimization","page":"ComradeOptimization","title":"ComradeOptimization","text":"","category":"section"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"To optimize our posterior, we use the Optimization.jl package. Optimization provides a global interface to several Julia optimizers. The Comrade wrapper for Optimization.jl is very thin. The only addition is that Comrade has provided a method:","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"OptimizationFunction(::TransformedPosterior, args...; kwargs...)","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"meaning we can pass it a posterior object and it will set up the OptimizationFunction for us. ","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"note: Note\n","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"We only specify this for a transformed version of the posterior. This is because Optimization.jl requires a flattened version of the posterior.","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"Additionally, different optimizers may prefer different parameter transformations. 
For example, if we use OptimizationBBO, using ascube is a good choice since it needs a compact region to search over, and ascube converts our parameter space to the unit hypercube. On the other hand, gradient-based optimizers work best without bounds, so a better choice would be the asflat transformation.","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"To see what optimizers and options are available, please see the Optimization.jl docs.","category":"page"},{"location":"libs/optimization/#Example","page":"ComradeOptimization","title":"Example","text":"","category":"section"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"using Comrade\nusing ComradeOptimization\nusing OptimizationOptimJL\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\n# Create an optimization function using Zygote as the autodiff backend\nfflat = OptimizationFunction(asflat(post), Optimization.AutoZygote())\n\n# Create the problem from a random point in the prior; nothing is passed because there are no additional arguments to our function.\nprob = OptimizationProblem(fflat, prior_sample(asflat(post)), nothing)\n\n# Now solve! Here we use LBFGS\nsol = solve(prob, LBFGS(); g_tol=1e-2)","category":"page"},{"location":"libs/optimization/#API","page":"ComradeOptimization","title":"API","text":"","category":"section"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"CurrentModule = ComradeOptimization","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"Modules = [ComradeOptimization]\nOrder = [:function, :type]","category":"page"},{"location":"libs/optimization/#ComradeOptimization.laplace-Tuple{OptimizationProblem, Any, Vararg{Any}}","page":"ComradeOptimization","title":"ComradeOptimization.laplace","text":"laplace(prob, opt, args...; kwargs...)\n\nCompute the Laplace or quadratic approximation to the prob or posterior. The args and kwargs are passed to the SciMLBase.solve function. This will return a Distributions.MvNormal object that approximates the posterior in the transformed space.\n\nNote the quadratic approximation is in the space of the transformed posterior not the usual parameter space. This is better for constrained problems where we may run up against a boundary.\n\n\n\n\n\n","category":"method"},{"location":"libs/optimization/#SciMLBase.OptimizationFunction-Tuple{Comrade.TransformedPosterior, Vararg{Any}}","page":"ComradeOptimization","title":"SciMLBase.OptimizationFunction","text":"SciMLBase.OptimizationFunction(post::Posterior, args...; kwargs...)\n\nConstructs an OptimizationFunction from a Comrade.TransformedPosterior object. Note that a user must transform the posterior first. This is so we know which space is most amenable to optimization.\n\n\n\n\n\n","category":"method"},{"location":"libs/dynesty/#ComradeDynesty","page":"ComradeDynesty","title":"ComradeDynesty","text":"","category":"section"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"ComradeDynesty interfaces Comrade to the excellent dynesty package, more specifically the Dynesty.jl Julia wrapper.","category":"page"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"We follow the Dynesty.jl interface closely. 
However, instead of having to pass a log-likelihood function and prior transform, we instead just pass a Comrade.Posterior object and Comrade takes care of defining the prior transformation and log-likelihood for us. For more information about Dynesty.jl, please see its docs and docstrings.","category":"page"},{"location":"libs/dynesty/#Example","page":"ComradeDynesty","title":"Example","text":"","category":"section"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"using Comrade\nusing ComradeDynesty\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\n# Create sampler using 1000 live points\nsmplr = NestedSampler(dimension(post), 1000)\n\nsamples, dyres = sample(post, smplr; dlogz=1.0)\n\n# Optionally resample the chain to create an equal weighted output\nusing StatsBase\nequal_weight_chain = ComradeDynesty.equalresample(samples, 10_000)","category":"page"},{"location":"libs/dynesty/#API","page":"ComradeDynesty","title":"API","text":"","category":"section"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"CurrentModule = ComradeDynesty","category":"page"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"Modules = [ComradeDynesty]\nOrder = [:function, :type]","category":"page"},{"location":"libs/dynesty/#StatsBase.sample-Tuple{Random.AbstractRNG, Comrade.TransformedPosterior, Union{DynamicNestedSampler, NestedSampler}}","page":"ComradeDynesty","title":"StatsBase.sample","text":"AbstractMCMC.sample(post::Comrade.Posterior, smplr::Dynesty.NestedSampler, args...; kwargs...)\nAbstractMCMC.sample(post::Comrade.Posterior, smplr::Dynesty.DynamicNestedSampler, args...; kwargs...)\n\nSample the posterior post using Dynesty.jl NestedSampler/DynamicNestedSampler sampler. The args/kwargs are forwarded to Dynesty for more information see its docs\n\nThis returns a tuple where the first element are the weighted samples from dynesty in a TypedTable. The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights. The final element of the tuple is the original dynesty output file.\n\nTo create equally weighted samples the user can use\n\nusing StatsBase\nchain, stats = sample(post, NestedSampler(dimension(post), 1000))\nequal_weighted_chain = sample(chain, Weights(stats.weights), 10_000)\n\n\n\n\n\n","category":"method"},{"location":"libs/nested/#ComradeNested","page":"ComradeNested","title":"ComradeNested","text":"","category":"section"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"ComradeNested interfaces Comrade to the excellent NestedSamplers.jl package.","category":"page"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"We follow NestedSamplers interface closely. The difference is that instead of creating a NestedModel, we pass a Comrade.Posterior object as our model. 
Internally, Comrade defines the prior transform and extracts the log-likelihood function.","category":"page"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"For more information about NestedSamplers.jl please see its docs.","category":"page"},{"location":"libs/nested/#Example","page":"ComradeNested","title":"Example","text":"","category":"section"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"using Comrade\nusing ComradeNested\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\n# Create sampler using 1000 live points\nsmplr = Nested(dimension(post), 1000)\n\nsamples = sample(post, smplr; d_logz=1.0)\n\n# Optionally resample the chain to create an equal weighted output\nusing StatsBase\nequal_weight_chain = ComradeNested.equalresample(samples, 10_000)","category":"page"},{"location":"libs/nested/#API","page":"ComradeNested","title":"API","text":"","category":"section"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"CurrentModule = ComradeNested","category":"page"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"Modules = [ComradeNested]\nOrder = [:function, :type]","category":"page"},{"location":"libs/nested/#StatsBase.sample-Tuple{Random.AbstractRNG, Comrade.TransformedPosterior, Nested, Vararg{Any}}","page":"ComradeNested","title":"StatsBase.sample","text":"AbstractMCMC.sample(post::Comrade.Posterior, smplr::Nested, args...; kwargs...)\n\nSample the posterior post using NestedSamplers.jl Nested sampler. The args/kwargs are forwarded to NestedSampler for more information see its docs\n\nThis returns a tuple where the first element are the weighted samples from NestedSamplers in a TypedTable. 
The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights.\n\nTo create equally weighted samples the user can use ```julia using StatsBase chain, stats = sample(post, NestedSampler(dimension(post), 1000)) equalweightedchain = sample(chain, Weights(stats.weights), 10_000)\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade-API","page":"Comrade API","title":"Comrade API","text":"","category":"section"},{"location":"api/#Contents","page":"Comrade API","title":"Contents","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Pages = [\"api.md\"]","category":"page"},{"location":"api/#Index","page":"Comrade API","title":"Index","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Pages = [\"api.md\"]","category":"page"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.Comrade","category":"page"},{"location":"api/#Comrade.Comrade","page":"Comrade API","title":"Comrade.Comrade","text":"Comrade\n\nComposable Modeling of Radio Emission\n\n\n\n\n\n","category":"module"},{"location":"api/#Model-Definitions","page":"Comrade API","title":"Model Definitions","text":"","category":"section"},{"location":"api/#Calibration-Models","page":"Comrade API","title":"Calibration Models","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.corrupt\nComrade.CalTable\nComrade.caltable(::Comrade.JonesCache, ::AbstractVector)\nComrade.caltable(::Comrade.EHTObservation, ::AbstractVector)\nComrade.DesignMatrix\nComrade.JonesCache\nComrade.ResponseCache\nComrade.JonesModel\nComrade.VLBIModel\nComrade.CalPrior\nComrade.CalPrior(::NamedTuple, ::JonesCache)\nComrade.CalPrior(::NamedTuple, ::NamedTuple, ::JonesCache)\nComrade.RIMEModel\nComrade.ObsSegmentation\nComrade.IntegSeg\nComrade.ScanSeg\nComrade.TrackSeg\nComrade.FixedSeg\nComrade.jonescache(::Comrade.EHTObservation, ::Comrade.ObsSegmentation)\nComrade.SingleReference\nComrade.RandomReference\nComrade.SEFDReference\nComrade.jonesStokes\nComrade.jonesG\nComrade.jonesD\nComrade.jonesT\nBase.map(::Any, ::Vararg{Comrade.JonesPairs})\nComrade.caltable\nComrade.JonesPairs\nComrade.GainSchema\nComrade.SegmentedJonesCache","category":"page"},{"location":"api/#Comrade.corrupt","page":"Comrade API","title":"Comrade.corrupt","text":"corrupt(vis, j1, j2)\n\nCorrupts the model coherency matrices with the Jones matrices j1 for station 1 and j2 for station 2.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.CalTable","page":"Comrade API","title":"Comrade.CalTable","text":"struct CalTable{T, G<:(AbstractVecOrMat)}\n\nA Tabes of calibration quantities. The columns of the table are the telescope station codes. The rows are the calibration quantities at a specific time stamp. This user should not use this struct directly. 
Instead that should call caltable.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.caltable-Tuple{JonesCache, AbstractVector}","page":"Comrade API","title":"Comrade.caltable","text":"caltable(g::JonesCache, jterms::AbstractVector)\n\nConvert the JonesCache g and recovered Jones/corruption elements jterms into a CalTable which satisfies the Tables.jl interface.\n\nExample\n\nct = caltable(gcache, gains)\n\n# Access a particular station (here ALMA)\nct[:AA]\nct.AA\n\n# Access a the first row\nct[1, :]\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.caltable-Tuple{Comrade.EHTObservation, AbstractVector}","page":"Comrade API","title":"Comrade.caltable","text":"caltable(obs::EHTObservation, gains::AbstractVector)\n\nCreate a calibration table for the observations obs with gains. This returns a CalTable object that satisfies the Tables.jl interface. This table is very similar to the DataFrames interface.\n\nExample\n\nct = caltable(obs, gains)\n\n# Access a particular station (here ALMA)\nct[:AA]\nct.AA\n\n# Access a the first row\nct[1, :]\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.DesignMatrix","page":"Comrade API","title":"Comrade.DesignMatrix","text":"struct DesignMatrix{X, M<:AbstractArray{X, 2}, T, S} <: AbstractArray{X, 2}\n\nInternal type that holds the gain design matrices for visibility corruption.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.JonesCache","page":"Comrade API","title":"Comrade.JonesCache","text":"struct JonesCache{D1, D2, S, Sc, R} <: Comrade.AbstractJonesCache\n\nHolds the ancillary information for a the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages that are measured from the telescope.\n\nFields\n\nm1: Design matrix for the first station\n\nm2: Design matrix for the second station\n\nseg: Segmentation schemes for this cache\n\nschema: Gain Schema\n\nreferences: List of Reference stations\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ResponseCache","page":"Comrade API","title":"Comrade.ResponseCache","text":"struct ResponseCache{M, B<:PolBasis} <: Comrade.AbstractJonesCache\n\nHolds various transformations that move from the measured telescope basis to the chosen on sky reference basis.\n\nFields\n\nT1: Transform matrices for the first stations\n\nT2: Transform matrices for the second stations\n\nrefbasis: Reference polarization basis\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.JonesModel","page":"Comrade API","title":"Comrade.JonesModel","text":"JonesModel(jones::JonesPairs, refbasis = CirBasis())\nJonesModel(jones::JonesPairs, tcache::ResponseCache)\n\nConstructs the intrument corruption model using pairs of jones matrices jones and a reference basis\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.VLBIModel","page":"Comrade API","title":"Comrade.VLBIModel","text":"VLBIModel(skymodel, instrumentmodel)\n\nConstructs a VLBIModel from a jones pairs that describe the intrument model and the model which describes the on-sky polarized visibilities. The third argument can either be the tcache that converts from the model coherency basis to the instrumental basis, or just the refbasis that will be used when constructing the model coherency matrices.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.CalPrior","page":"Comrade API","title":"Comrade.CalPrior","text":"CalPrior(dists, cache::JonesCache, reference=:none)\n\nCreates a distribution for the gain priors for gain cache cache. 
The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.\n\nExample\n\nFor the 2017 observations of M87 a common CalPrior call is:\n\njulia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),\n AP = LogNormal(0.0, 0.1),\n JC = LogNormal(0.0, 0.1),\n SM = LogNormal(0.0, 0.1),\n AZ = LogNormal(0.0, 0.1),\n LM = LogNormal(0.0, 1.0),\n PV = LogNormal(0.0, 0.1)\n ), cache)\n\njulia> x = rand(gdist)\njulia> logdensityof(gdist, x)\n\n\n\n\n\nCalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)\n\nConstructs a calibration prior in two steps. The first two arguments have to be a named tuple of distributions, where each name corresponds to a site. The first argument is gain prior for the first time stamp. The second argument is the segmented gain prior for each subsequent time stamp. For instance, if we have\n\ndist0 = (AA = Normal(0.0, 1.0), )\ndistt = (AA = Normal(0.0, 0.1), )\n\nthen the gain prior for first time stamp that AA obserserves will be Normal(0.0, 1.0). The next time stamp gain is the construted from\n\ng2 = g1 + ϵ1\n\nwhere ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.CalPrior-Tuple{NamedTuple, JonesCache}","page":"Comrade API","title":"Comrade.CalPrior","text":"CalPrior(dists, cache::JonesCache, reference=:none)\n\nCreates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.\n\nExample\n\nFor the 2017 observations of M87 a common CalPrior call is:\n\njulia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),\n AP = LogNormal(0.0, 0.1),\n JC = LogNormal(0.0, 0.1),\n SM = LogNormal(0.0, 0.1),\n AZ = LogNormal(0.0, 0.1),\n LM = LogNormal(0.0, 1.0),\n PV = LogNormal(0.0, 0.1)\n ), cache)\n\njulia> x = rand(gdist)\njulia> logdensityof(gdist, x)\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.CalPrior-Tuple{NamedTuple, NamedTuple, JonesCache}","page":"Comrade API","title":"Comrade.CalPrior","text":"CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)\n\nConstructs a calibration prior in two steps. The first two arguments have to be a named tuple of distributions, where each name corresponds to a site. The first argument is gain prior for the first time stamp. The second argument is the segmented gain prior for each subsequent time stamp. For instance, if we have\n\ndist0 = (AA = Normal(0.0, 1.0), )\ndistt = (AA = Normal(0.0, 0.1), )\n\nthen the gain prior for first time stamp that AA obserserves will be Normal(0.0, 1.0). The next time stamp gain is the construted from\n\ng2 = g1 + ϵ1\n\nwhere ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. 
For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.RIMEModel","page":"Comrade API","title":"Comrade.RIMEModel","text":"abstract type RIMEModel <: ComradeBase.AbstractModel\n\nAbstract type that encompasses all RIME style corruptions.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ObsSegmentation","page":"Comrade API","title":"Comrade.ObsSegmentation","text":"abstract type ObsSegmentation\n\nThe data segmentation scheme to use. This is important for constructing a JonesCache\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.IntegSeg","page":"Comrade API","title":"Comrade.IntegSeg","text":"struct IntegSeg{S} <: Comrade.ObsSegmentation\n\nData segmentation such that the quantity is constant over a correlation integration.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ScanSeg","page":"Comrade API","title":"Comrade.ScanSeg","text":"struct ScanSeg{S} <: Comrade.ObsSegmentation\n\nData segmentation such that the quantity is constant over a scan.\n\nWarning\n\nCurrently we do not explicity track the telescope scans. This will be fixed in a future version. Right now ScanSeg and TrackSeg are the same\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.TrackSeg","page":"Comrade API","title":"Comrade.TrackSeg","text":"struct TrackSeg <: Comrade.ObsSegmentation\n\nData segmentation such that the quantity is constant over a track, i.e., the observation \"night\".\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.FixedSeg","page":"Comrade API","title":"Comrade.FixedSeg","text":"struct FixedSeg{T} <: Comrade.ObsSegmentation\n\nEnforces that the station calibraton value will have a fixed value. This is most commonly used when enforcing a reference station for gain phases.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.jonescache-Tuple{Comrade.EHTObservation, Comrade.ObsSegmentation}","page":"Comrade API","title":"Comrade.jonescache","text":"jonescache(obs::EHTObservation, segmentation::ObsSegmentation)\njonescache(obs::EHTObservatoin, segmentation::NamedTuple)\n\nConstructs a JonesCache from a given observation obs using the segmentation scheme segmentation. If segmentation is a named tuple it is assumed that each symbol in the named tuple corresponds to a segmentation for thes sites in obs.\n\nExample\n\n# coh is a EHTObservation\njulia> jonescache(coh, ScanSeg())\njulia> segs = (AA = ScanSeg(), AP = TrachSeg(), AZ=FixedSegSeg())\njulia> jonescache(coh, segs)\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.SingleReference","page":"Comrade API","title":"Comrade.SingleReference","text":"SingleReference(site::Symbol, val::Number)\n\nUse a single site as a reference. The station gain will be set equal to val.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.RandomReference","page":"Comrade API","title":"Comrade.RandomReference","text":"RandomReference(val::Number)\n\nFor each timestamp select a random reference station whose station gain will be set to val.\n\nNotes\n\nThis is useful when there isn't a single site available for all scans and you want to split up the choice of reference site. 
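To make the two-argument (segmented) `CalPrior` described above concrete, here is a minimal sketch. It assumes `scache` is a `SegmentedJonesCache` that has already been built for the observation; the distributions mirror the `dist0`/`distt` example in the docstring.

```julia
using Comrade, Distributions

# Prior on the gains at the first timestamp for each site.
dist0 = (AA = Normal(0.0, 1.0), LM = Normal(0.0, 1.0))
# Tighter prior on the gain offsets between consecutive timestamps.
distt = (AA = Normal(0.0, 0.1), LM = Normal(0.0, 0.1))

# `scache` is assumed to be a SegmentedJonesCache constructed elsewhere.
gprior = CalPrior(dist0, distt, scache)

x = rand(gprior)            # draw first-timestamp gains plus segmented offsets
logdensityof(gprior, x)     # evaluate the prior log density
```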
We recommend only using this option for Stokes I fitting.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.SEFDReference","page":"Comrade API","title":"Comrade.SEFDReference","text":"SiteOrderReference(val::Number, sefd_index = 1)\n\nSelects the reference site based on the SEFD of each telescope, where the smallest SEFD is preferentially selected. The reference gain is set to val and the user can select to use the n lowest SEFD site by passing sefd_index = n.\n\nNotes\n\nThis is done on a per-scan basis so if a site is missing from a scan the next highest SEFD site will be used.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.jonesStokes","page":"Comrade API","title":"Comrade.jonesStokes","text":"jonesStokes(g1::AbstractArray, gcache::AbstractJonesCache)\njonesStokes(f, g1::AbstractArray, gcache::AbstractJonesCache)\n\nConstruct the Jones Pairs for the stokes I image only. That is, we only need to pass a single vector corresponding to the gain for the stokes I visibility. This is for when you only want to image Stokes I. The first argument is optional and denotes a function that is applied to every element of jones cache. For instance if g1 and g2 are the log-gains then f=exp will convert them into the gains.\n\nWarning\n\nIn the future this functionality may be removed when stokes I fitting is replaced with the more correct trace(coherency), i.e. RR+LL for a circular basis.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.jonesG","page":"Comrade API","title":"Comrade.jonesG","text":"jonesG(g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)\njonesG(f, g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)\n\nConstructs the pairs Jones G matrices for each pair of stations. The g1 are the gains for the first polarization basis and g2 are the gains for the other polarization. The first argument is optional and denotes a function that is applied to every element of jones cache. For instance if g1 and g2 are the log-gains then f=exp will convert them into the gains.\n\nThe layout for each matrix is as follows:\n\n g1 0\n 0 g2\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.jonesD","page":"Comrade API","title":"Comrade.jonesD","text":"jonesD(d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)\njonesD(f, d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)\n\nConstructs the pairs Jones D matrices for each pair of stations. The d1 are the d-termsfor the first polarization basis and d2 are the d-terms for the other polarization. The first argument is optional and denotes a function that is applied to every element of jones cache. For instance if d1 and d2 are the log-dterms then f=exp will convert them into the dterms.\n\nThe layout for each matrix is as follows:\n\n 1 d1\n d2 1\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.jonesT","page":"Comrade API","title":"Comrade.jonesT","text":"jonesT(tcache::ResponseCache)\n\nReturns a JonesPair of matrices that transform from the model coherency matrices basis to the on-sky coherency basis, this includes the feed rotation and choice of polarization feeds.\n\n\n\n\n\n","category":"function"},{"location":"api/#Base.map-Tuple{Any, Vararg{Comrade.JonesPairs}}","page":"Comrade API","title":"Base.map","text":"map(f, args::JonesPairs...) -> JonesPairs\n\nMaps over a set of JonesPairs applying the function f to each element. This returns a collected JonesPair. 
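A short sketch of how the `jonescache`, `jonesStokes`, and `jonesG` pieces above fit together. It assumes `dvis` is an `EHTObservation` loaded earlier, and that `lg`, `gp`, `g1`, and `g2` are hypothetical per-station/segment parameter vectors (e.g. draws from a `CalPrior`):

```julia
using Comrade

# Scan-segmented Jones cache for the observation.
gcache = jonescache(dvis, ScanSeg())

# Stokes-I-only corruption: one complex gain per station and segment,
# built here from log-amplitudes `lg` and phases `gp`.
jstokes = jonesStokes(exp.(lg .+ 1im .* gp), gcache)

# Polarized corruption: diagonal G matrices from the per-polarization gains g1 and g2.
jG = jonesG(exp.(g1), exp.(g2), gcache)
```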
This us useful for more advanced operations on Jones matrices.\n\nExamples\n\nmap(G, D, F) do g, d, f\n return f'*exp.(g)*d*f\nend\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.caltable","page":"Comrade API","title":"Comrade.caltable","text":"caltable(args...)\n\nCreates a calibration table from a set of arguments. The specific arguments depend on what calibration you are applying.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.JonesPairs","page":"Comrade API","title":"Comrade.JonesPairs","text":"struct JonesPairs{T, M1<:AbstractArray{T, 1}, M2<:AbstractArray{T, 1}}\n\nHolds the pairs of Jones matrices for the first and second station of a baseline.\n\nFields\n\nm1: Vector of jones matrices for station 1\n\nm2: Vector of jones matrices for station 2\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.GainSchema","page":"Comrade API","title":"Comrade.GainSchema","text":"GainSchema(sites, times)\n\nConstructs a schema for the gains of an observation. The sites and times correspond to the specific site and time for each gain that will be modeled.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.SegmentedJonesCache","page":"Comrade API","title":"Comrade.SegmentedJonesCache","text":"struct SegmentedJonesCache{D, S<:Comrade.ObsSegmentation, ST, Ti} <: Comrade.AbstractJonesCache\n\nHolds the ancillary information for a the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages that are measured from the telescope. This uses a segmented decomposition so that the gain at a single timestamp is the sum of the previous gains. In this formulation the gains parameters are the segmented gain offsets from timestamp to timestamp\n\nFields\n\nm1: Design matrix for the first station\n\nm2: Design matrix for the second station\n\nseg: Segmentation scheme for this cache\n\nstations: station codes\n\ntimes: times\n\n\n\n\n\n","category":"type"},{"location":"api/#Models","page":"Comrade API","title":"Models","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"For the description of the model API see VLBISkyModels.","category":"page"},{"location":"api/#Data-Types","page":"Comrade API","title":"Data Types","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.extract_table\nComrade.ComplexVisibilities\nComrade.VisibilityAmplitudes\nComrade.ClosurePhases\nComrade.LogClosureAmplitudes\nComrade.Coherencies\nComrade.baselines\nComrade.arrayconfig\nComrade.closure_phase(::Comrade.EHTVisibilityDatum, ::Comrade.EHTVisibilityDatum, ::Comrade.EHTVisibilityDatum)\nComrade.getdata\nComrade.getuv\nComrade.getuvtimefreq\nComrade.scantable\nComrade.stations\nComrade.uvpositions\nComrade.ArrayConfiguration\nComrade.ClosureConfig\nComrade.AbstractInterferometryDatum\nComrade.ArrayBaselineDatum\nComrade.EHTObservation\nComrade.EHTArrayConfiguration\nComrade.EHTCoherencyDatum\nComrade.EHTVisibilityDatum\nComrade.EHTVisibilityAmplitudeDatum\nComrade.EHTLogClosureAmplitudeDatum\nComrade.EHTClosurePhaseDatum\nComrade.Scan\nComrade.ScanTable","category":"page"},{"location":"api/#Comrade.extract_table","page":"Comrade API","title":"Comrade.extract_table","text":"extract_table(obs, dataproducts::VLBIDataProducts)\n\nExtract an Comrade.EHTObservation table of data products dataproducts. To pass additional keyword for the data products you can pass them as keyword arguments to the data product type. 
For a list of potential data products see subtypes(Comrade.VLBIDataProducts).\n\nExample\n\njulia> dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3.0), ClosurePhases(;snrcut=3.0, cut_trivial=true))\njulia> dcoh = extract_table(obs, Coherencies())\njulia> dvis = extract_table(obs, VisibilityAmplitudes())\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.ComplexVisibilities","page":"Comrade API","title":"Comrade.ComplexVisibilities","text":"ComplexVisibilities(;kwargs...)\n\nType to specify to extract the complex visibilities table in the extract_table function. Optional keywords are passed through extract_table to specify additional option.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nAny keyword arguments are ignored for now. Use eht-imaging directly to modify the data.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.VisibilityAmplitudes","page":"Comrade API","title":"Comrade.VisibilityAmplitudes","text":"ComplexVisibilities(;kwargs...)\n\nType to specify to extract the log closure amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional option.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nFor a list of potential keyword arguments see eht-imaging and add_amp command for obsdata.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ClosurePhases","page":"Comrade API","title":"Comrade.ClosurePhases","text":"ClosuresPhases(;kwargs...)\n\nType to specify to extract the closure phase table in the extract_table function. Optional keywords are passed through extract_table to specify additional option.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nFor a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:\n\ncount: How the closures are formed, the available options are \"min-correct\", \"min\", \"max\"\n\nWarning\n\nThe count keyword argument is treated specially in Comrade. The default option is \"min-correct\" and should almost always be used. This option construct a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons we ehtim other count options are also included. However, the current ehtim count=\"min\" option is broken and does construct proper minimal sets of closure quantities if the array isn't fully connected.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.LogClosureAmplitudes","page":"Comrade API","title":"Comrade.LogClosureAmplitudes","text":"LogClosureAmplitudes(;kwargs...)\n\nType to specify to extract the log closure amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional option.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nFor a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:\n\ncount: How the closures are formed, the available options are \"min-correct\", \"min\", \"max\"\n\nReturns an EHTObservation with log-closure amp. datums\n\nWarning\n\nThe count keyword argument is treated specially in Comrade. The default option is \"min-correct\" and should almost always be used. This option construct a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons we ehtim other count options are also included. 
However, the current ehtim count=\"min\" option is broken and does construct proper minimal sets of closure quantities if the array isn't fully connected.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.Coherencies","page":"Comrade API","title":"Comrade.Coherencies","text":"Coherencies(;kwargs...)\n\nType to specify to extract the coherency matrices table in the extract_table function. Optional keywords are passed through extract_table to specify additional option.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nAny keyword arguments are ignored for now. Use eht-imaging directly to modify the data.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.baselines","page":"Comrade API","title":"Comrade.baselines","text":"baselines(CP::EHTClosurePhaseDatum)\n\nReturns the baselines used for a single closure phase datum\n\n\n\n\n\nbaselines(CP::EHTLogClosureAmplitudeDatum)\n\nReturns the baselines used for a single closure phase datum\n\n\n\n\n\nbaselines(scan::Scan)\n\nReturn the baselines for each datum in a scan\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.arrayconfig","page":"Comrade API","title":"Comrade.arrayconfig","text":"arrayconfig(vis)\n\n\nExtract the array configuration from a EHT observation.\n\n\n\n\n\n","category":"function"},{"location":"api/#ComradeBase.closure_phase-Tuple{Comrade.EHTVisibilityDatum, Comrade.EHTVisibilityDatum, Comrade.EHTVisibilityDatum}","page":"Comrade API","title":"ComradeBase.closure_phase","text":"closure_phase(D1::EHTVisibilityDatum,\n D2::EHTVisibilityDatum,\n D3::EHTVisibilityDatum\n )\n\nComputes the closure phase of the three visibility datums.\n\nNotes\n\nWe currently use the high SNR Gaussian error approximation for the closure phase. In the future we may use the moment matching from Monte Carlo sampling.\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.getdata","page":"Comrade API","title":"Comrade.getdata","text":"getdata(obs::EHTObservation, s::Symbol)\n\nPass-through function that gets the array of s from the EHTObservation. For example say you want the times of all measurement then\n\ngetdata(obs, :time)\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.getuv","page":"Comrade API","title":"Comrade.getuv","text":"getuv\n\nGet the u, v positions of the array.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.getuvtimefreq","page":"Comrade API","title":"Comrade.getuvtimefreq","text":"getuvtimefreq(ac)\n\n\nGet the u, v, time, freq of the array as a tuple.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.scantable","page":"Comrade API","title":"Comrade.scantable","text":"scantable(obs::EHTObservation)\n\nReorganizes the observation into a table of scans, where scan are defined by unique timestamps. To access the data you can use scalar indexing\n\nExample\n\nst = scantable(obs)\n# Grab the first scan\nscan1 = st[1]\n\n# Acess the detections in the scan\nscan1[1]\n\n# grab e.g. the baselines\nscan1[:baseline]\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.stations","page":"Comrade API","title":"Comrade.stations","text":"stations(d::EHTObservation)\n\nGet all the stations in a observation. 
The result is a vector of symbols.\n\n\n\n\n\nstations(g::CalTable)\n\nReturn the stations in the calibration table\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.uvpositions","page":"Comrade API","title":"Comrade.uvpositions","text":"uvpositions(datum::AbstractVisibilityDatum)\n\nGet the uv positions of an interferometric datum.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.ArrayConfiguration","page":"Comrade API","title":"Comrade.ArrayConfiguration","text":"abstract type ArrayConfiguration\n\nThis defines the abstract type for an array configuration. Namely, baseline times, SEFDs, bandwidth, observation frequencies, etc.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ClosureConfig","page":"Comrade API","title":"Comrade.ClosureConfig","text":"struct ClosureConfig{A, D} <: Comrade.ArrayConfiguration\n\nArray config file for closure quantities. This stores the design matrix designmat that transforms from visibilities to closure products.\n\nFields\n\nac: Array configuration for visibilities\ndesignmat: Closure design matrix\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.AbstractInterferometryDatum","page":"Comrade API","title":"Comrade.AbstractInterferometryDatum","text":"abstract type AbstractInterferometryDatum{T}\n\nAn abstract type for all VLBI interferometry data types. See Comrade.EHTVisibilityDatum for an example.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ArrayBaselineDatum","page":"Comrade API","title":"Comrade.ArrayBaselineDatum","text":"struct ArrayBaselineDatum{T, E, V}\n\nA single datum of an ArrayConfiguration\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTObservation","page":"Comrade API","title":"Comrade.EHTObservation","text":"struct EHTObservation{F, T<:Comrade.AbstractInterferometryDatum{F}, S<:(StructArrays.StructArray{T<:Comrade.AbstractInterferometryDatum{F}}), A, N} <: Comrade.Observation{F}\n\nThe main data product type in Comrade. This stores the data, which can be a StructArray of any AbstractInterferometryDatum type.\n\nFields\n\ndata: StructArray of data products\n\nconfig: Array config that holds ancillary information about the array\n\nmjd: modified Julian date of the observation\n\nra: RA of the observation in J2000 (deg)\n\ndec: DEC of the observation in J2000 (deg)\n\nbandwidth: bandwidth of the observation (Hz)\n\nsource: Common source name\n\ntimetype: Time zone used.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTArrayConfiguration","page":"Comrade API","title":"Comrade.EHTArrayConfiguration","text":"struct EHTArrayConfiguration{F, T, S, D<:AbstractArray} <: Comrade.ArrayConfiguration\n\nStores all the non-visibility data products for an EHT array. 
This is useful when evaluating model visibilities.\n\nFields\n\nbandwidth: Observing bandwith (Hz)\n\ntarr: Telescope array file\n\nscans: Scan times\n\ndata: A struct array of ArrayBaselineDatum holding time, freq, u, v, baselines.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTCoherencyDatum","page":"Comrade API","title":"Comrade.EHTCoherencyDatum","text":"struct EHTCoherencyDatum{S, B1, B2, M<:(StaticArraysCore.SArray{Tuple{2, 2}, Complex{S}, 2}), E<:(StaticArraysCore.SArray{Tuple{2, 2}, S, 2})} <: Comrade.AbstractInterferometryDatum{S}\n\nA Datum for a single coherency matrix\n\nFields\n\nmeasurement: coherency matrix, with entries in Jy\n\nerror: visibility uncertainty matrix, with entries in Jy\n\nU: x-direction baseline length, in λ\n\nV: y-direction baseline length, in λ\n\nT: Timestamp, in hours\n\nF: Frequency, in Hz\n\nbaseline: station baseline codes\n\npolbasis: polarization basis for each station\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTVisibilityDatum","page":"Comrade API","title":"Comrade.EHTVisibilityDatum","text":"struct EHTVisibilityDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}\n\nA struct holding the information for a single measured complex visibility.\n\nFIELDS\n\nmeasurement: Complex Vis. measurement (Jy)\n\nerror: error of the complex vis (Jy)\n\nU: u position of the data point in λ\n\nV: v position of the data point in λ\n\nT: time of the data point in (Hr)\n\nF: frequency of the data point (Hz)\n\nbaseline: station baseline codes\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTVisibilityAmplitudeDatum","page":"Comrade API","title":"Comrade.EHTVisibilityAmplitudeDatum","text":"struct EHTVisibilityAmplitudeDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}\n\nA struct holding the information for a single measured visibility amplitude.\n\nFIELDS\n\nmeasurement: amplitude (Jy)\n\nerror: error of the visibility amplitude (Jy)\n\nU: u position of the data point in λ\n\nV: v position of the data point in λ\n\nT: time of the data point in (Hr)\n\nF: frequency of the data point (Hz)\n\nbaseline: station baseline codes\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTLogClosureAmplitudeDatum","page":"Comrade API","title":"Comrade.EHTLogClosureAmplitudeDatum","text":"struct EHTLogClosureAmplitudeDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}\n\nA Datum for a single log closure amplitude.\n\n\n\nmeasurement: log-closure amplitude\n\nerror: log-closure amplitude error in the high-snr limit\n\nU1: u (λ) of first station\n\nV1: v (λ) of first station\n\nU2: u (λ) of second station\n\nV2: v (λ) of second station\n\nU3: u (λ) of third station\n\nV3: v (λ) of third station\n\nU4: u (λ) of fourth station\n\nV4: v (λ) of fourth station\n\nT: Measured time of closure phase in hours\n\nF: Measured frequency of closure phase in Hz\n\nquadrangle: station codes for the quadrangle\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTClosurePhaseDatum","page":"Comrade API","title":"Comrade.EHTClosurePhaseDatum","text":"struct EHTClosurePhaseDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}\n\nA Datum for a single closure phase.\n\nFields\n\nmeasurement: closure phase (rad)\n\nerror: error of the closure phase assuming the high-snr limit\n\nU1: u (λ) of first station\n\nV1: v (λ) of first station\n\nU2: u (λ) of second station\n\nV2: v (λ) of second station\n\nU3: u (λ) of third station\n\nV3: v (λ) of third station\n\nT: Measured time of closure phase in hours\n\nF: Measured 
frequency of closure phase in Hz\n\ntriangle: station baselines used\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.Scan","page":"Comrade API","title":"Comrade.Scan","text":"struct Scan{T, I, S}\n\nComposite type that holds information for a single scan of the telescope.\n\nFields\n\ntime: Scan time\n\nindex: Scan indices which are (scan index, data start index, data end index)\n\nscan: Scan data usually a StructArray of a <:AbstractVisibilityDatum\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ScanTable","page":"Comrade API","title":"Comrade.ScanTable","text":"struct ScanTable{O<:Union{Comrade.ArrayConfiguration, Comrade.Observation}, T, S}\n\nWraps EHTObservation in a table that separates the observation into scans. This implements the table interface. You can access scans by directly indexing into the table. This will create a view into the table not copying the data.\n\nExample\n\njulia> st = scantable(obs)\njulia> st[begin] # grab first scan\njulia> st[end] # grab last scan\n\n\n\n\n\n","category":"type"},{"location":"api/#Model-Cache","page":"Comrade API","title":"Model Cache","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"VLBISkyModels.NFFTAlg(::Comrade.EHTObservation)\nVLBISkyModels.NFFTAlg(::Comrade.ArrayConfiguration)\nVLBISkyModels.DFTAlg(::Comrade.EHTObservation)\nVLBISkyModels.DFTAlg(::Comrade.ArrayConfiguration)","category":"page"},{"location":"api/#VLBISkyModels.NFFTAlg-Tuple{Comrade.EHTObservation}","page":"Comrade API","title":"VLBISkyModels.NFFTAlg","text":"NFFTAlg(obs::EHTObservation; kwargs...)\n\nCreate an algorithm object using the non-unform Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.\n\nThe possible optional arguments are given in the NFFTAlg struct.\n\n\n\n\n\n","category":"method"},{"location":"api/#VLBISkyModels.NFFTAlg-Tuple{Comrade.ArrayConfiguration}","page":"Comrade API","title":"VLBISkyModels.NFFTAlg","text":"NFFTAlg(ac::ArrayConfiguration; kwargs...)\n\nCreate an algorithm object using the non-unform Fourier transform object from the array configuration ac. This will extract the uv positions from the observation to allow for a more efficient FT cache.\n\nThe optional arguments are: padfac specifies how much to pad the image by, and m is an internal variable for NFFT.jl.\n\n\n\n\n\n","category":"method"},{"location":"api/#VLBISkyModels.DFTAlg-Tuple{Comrade.EHTObservation}","page":"Comrade API","title":"VLBISkyModels.DFTAlg","text":"DFTAlg(obs::EHTObservation)\n\nCreate an algorithm object using the direct Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.\n\n\n\n\n\n","category":"method"},{"location":"api/#VLBISkyModels.DFTAlg-Tuple{Comrade.ArrayConfiguration}","page":"Comrade API","title":"VLBISkyModels.DFTAlg","text":"DFTAlg(ac::ArrayConfiguration)\n\nCreate an algorithm object using the direct Fourier transform object from the array configuration ac. 
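Putting the model-cache constructors above together, a minimal sketch assuming `dvis` is an `EHTObservation`; the grid sizes are illustrative, and the `create_cache`/`BSplinePulse` usage follows the hybrid-imaging tutorial later in this document.

```julia
using Comrade

# Image grid and an empty map used to seed the Fourier-transform cache.
grid   = imagepixels(μas2rad(150.0), μas2rad(150.0), 64, 64)
buffer = IntensityMap(zeros(64, 64), grid)

# Non-uniform FFT cache built from the (u, v) points of the observation,
# with a cubic B-spline pulse as the image interpolation kernel.
cache = create_cache(NFFTAlg(dvis), buffer, BSplinePulse{3}())

# The direct Fourier transform is an exact, slower alternative:
# cache = create_cache(DFTAlg(dvis), buffer, BSplinePulse{3}())
```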
This will extract the uv positions from the observation to allow for a more efficient FT cache.\n\n\n\n\n\n","category":"method"},{"location":"api/#Bayesian-Tools","page":"Comrade API","title":"Bayesian Tools","text":"","category":"section"},{"location":"api/#Posterior-Constructions","page":"Comrade API","title":"Posterior Constructions","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.ascube\nComrade.asflat\nComrade.flatten\nComrade.inverse\nComrade.prior_sample\nComrade.likelihood\nComrade.simulate_observation\nComrade.dataproducts\nComrade.skymodel\nComrade.instrumentmodel\nComrade.vlbimodel\nComrade.sample(::Posterior)\nComrade.transform\nComrade.MultiRadioLikelihood\nComrade.Posterior\nComrade.TransformedPosterior\nComrade.RadioLikelihood\nComrade.IsFlat\nComrade.IsCube","category":"page"},{"location":"api/#HypercubeTransform.ascube","page":"Comrade API","title":"HypercubeTransform.ascube","text":"ascube(post::Posterior)\n\nConstruct a flattened version of the posterior where the parameters are transformed to live in (0, 1), i.e. the unit hypercube.\n\nThis returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the terms log-jacobian terms of the transformation.\n\nExample\n\njulia> tpost = ascube(post)\njulia> x0 = prior_sample(tpost)\njulia> logdensityof(tpost, x0)\n\nNotes\n\nThis is the transform that should be used if using typical NestedSampling methods, i.e. ComradeNested. For the transformation to unconstrained space see asflat\n\n\n\n\n\n","category":"function"},{"location":"api/#HypercubeTransform.asflat","page":"Comrade API","title":"HypercubeTransform.asflat","text":"asflat(post::Posterior)\n\nConstruct a flattened version of the posterior where the parameters are transformed to live in (-∞, ∞).\n\nThis returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the terms log-jacobian terms of the transformation.\n\nExample\n\njulia> tpost = ascube(post)\njulia> x0 = prior_sample(tpost)\njulia> logdensityof(tpost, x0)\n\nNotes\n\nThis is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube\n\n\n\n\n\n","category":"function"},{"location":"api/#ParameterHandling.flatten","page":"Comrade API","title":"ParameterHandling.flatten","text":"flatten(post::Posterior)\n\nConstruct a flattened version of the posterior but do not transform to any space, i.e. use the support specified by the prior.\n\nThis returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the terms log-jacobian terms of the transformation.\n\nExample\n\njulia> tpost = flatten(post)\njulia> x0 = prior_sample(tpost)\njulia> logdensityof(tpost, x0)\n\nNotes\n\nThis is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube\n\n\n\n\n\n","category":"function"},{"location":"api/#TransformVariables.inverse","page":"Comrade API","title":"TransformVariables.inverse","text":"inverse(posterior::TransformedPosterior, x)\n\nTransforms the value y from parameter space to the transformed space (e.g. 
unit hypercube if using ascube).\n\nFor the inverse transform see transform\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.prior_sample","page":"Comrade API","title":"Comrade.prior_sample","text":"prior_sample([rng::Random.AbstractRNG], post::Posterior, args...)\n\nSamples the prior distribution from the posterior. The args... are forwarded to the Base.rand method.\n\n\n\n\n\nprior_sample([rng::Random.AbstractRNG], post::Posterior)\n\nReturns a single sample from the prior distribution.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.likelihood","page":"Comrade API","title":"Comrade.likelihood","text":"likelihood(d::ConditionedLikelihood, μ)\n\nReturns the likelihood of the model, with parameters μ. That is, we return the distribution of the data given the model parameters μ. This is an actual probability distribution.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.simulate_observation","page":"Comrade API","title":"Comrade.simulate_observation","text":"simulate_observation([rng::Random.AbstractRNG], post::Posterior, θ)\n\nCreate a simulated observation using the posterior and its data post using the parameter values θ. In Bayesian terminology this is a draw from the posterior predictive distribution.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.dataproducts","page":"Comrade API","title":"Comrade.dataproducts","text":"dataproducts(d::RadioLikelihood)\n\nReturns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.\n\n\n\n\n\ndataproducts(d::Posterior)\n\nReturns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.skymodel","page":"Comrade API","title":"Comrade.skymodel","text":"skymodel(post::RadioLikelihood, θ)\n\nReturns the sky model or image of a posterior using the parameter values θ\n\n\n\n\n\nskymodel(post::Posterior, θ)\n\nReturns the sky model or image of a posterior using the parameter values θ\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.instrumentmodel","page":"Comrade API","title":"Comrade.instrumentmodel","text":"instrumentmodel(lklhd::RadioLikelihood, θ)\n\nReturns the instrument model of the likelihood lklhd using the parameter values θ\n\n\n\n\n\ninstrumentmodel(post::Posterior, θ)\n\nReturns the instrument model of a posterior using the parameter values θ\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.vlbimodel","page":"Comrade API","title":"Comrade.vlbimodel","text":"vlbimodel(post::Posterior, θ)\n\nReturns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ\n\n\n\n\n\nvlbimodel(post::Posterior, θ)\n\nReturns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ\n\n\n\n\n\n","category":"function"},{"location":"api/#StatsBase.sample-Tuple{Posterior}","page":"Comrade API","title":"StatsBase.sample","text":"sample(post::Posterior, sampler::S, args...; init_params=nothing, kwargs...)\n\nSample a posterior post using the sampler. 
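A short sketch tying together the accessors and the posterior-predictive draw documented above. It assumes `post` is a `Posterior` and `θ` is a named tuple of model parameters (e.g. a posterior draw):

```julia
using Comrade

m_sky  = skymodel(post, θ)          # sky model evaluated at θ
m_inst = instrumentmodel(post, θ)   # instrument (Jones) model at θ
m_full = vlbimodel(post, θ)         # combined VLBIModel at θ

# Posterior predictive check: simulate a new observation from the model at θ.
obs_sim = simulate_observation(post, θ)
```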
You can optionally pass the starting location of the sampler using init_params, otherwise a random draw from the prior will be used.\n\n\n\n\n\n","category":"method"},{"location":"api/#TransformVariables.transform","page":"Comrade API","title":"TransformVariables.transform","text":"transform(posterior::TransformedPosterior, x)\n\nTransforms the value x from the transformed space (e.g. unit hypercube if using ascube) to parameter space which is usually encoded as a NamedTuple.\n\nFor the inverse transform see inverse\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.MultiRadioLikelihood","page":"Comrade API","title":"Comrade.MultiRadioLikelihood","text":"MultiRadioLikelihood(lklhd1, lklhd2, ...)\n\nCombines multiple likelihoods into one object that is useful for fitting multiple days/frequencies.\n\njulia> lklhd1 = RadioLikelihood(dcphase1, dlcamp1)\njulia> lklhd2 = RadioLikelihood(dcphase2, dlcamp2)\njulia> MultiRadioLikelihood(lklhd1, lklhd2)\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.Posterior","page":"Comrade API","title":"Comrade.Posterior","text":"Posterior(lklhd, prior)\n\nCreates a Posterior density that follows obeys DensityInterface. The lklhd object is expected to be a VLB object. For instance, these can be created using RadioLikelihood. prior\n\nNotes\n\nSince this function obeys DensityInterface you can evaluate it with\n\njulia> ℓ = logdensityof(post)\njulia> ℓ(x)\n\nor using the 2-argument version directly\n\njulia> logdensityof(post, x)\n\nwhere post::Posterior.\n\nTo generate random draws from the prior see the prior_sample function.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.TransformedPosterior","page":"Comrade API","title":"Comrade.TransformedPosterior","text":"struct TransformedPosterior{P<:Posterior, T} <: Comrade.AbstractPosterior\n\nA transformed version of a Posterior object. This is an internal type that an end user shouldn't have to directly construct. To construct a transformed posterior see the asflat, ascube, and flatten docstrings.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.RadioLikelihood","page":"Comrade API","title":"Comrade.RadioLikelihood","text":"RadioLikelihood(skymodel, instumentmodel, dataproducts::EHTObservation...;\n skymeta=nothing,\n instrumentmeta=nothing)\n\nCreates a RadioLikelihood using the skymodel its related metadata skymeta and the instrumentmodel and its metadata instumentmeta. . The model is a function that converts from parameters θ to a Comrade AbstractModel which can be used to compute visibilities and a set of metadata that is used by model to compute the model.\n\nWarning\n\nThe model itself must be a two argument function where the first argument is the set of model parameters and the second is a container that holds all the additional information needed to construct the model. 
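For orientation, a minimal end-to-end sketch of the likelihood/posterior machinery described in these docstrings. It assumes `sky` is a user-defined sky-model function and `dlcamp`, `dcphase` are closure tables extracted earlier; depending on the Comrade version the prior may need to be wrapped in the package's named-distribution container.

```julia
using Comrade, Distributions

# Closure-only likelihood (no instrument model required).
lklhd = RadioLikelihood(sky, dlcamp, dcphase)

prior = (r = Uniform(μas2rad(10.0), μas2rad(40.0)), a = Uniform(0.1, 5.0))
post  = Posterior(lklhd, prior)

# Transform to unconstrained space for MCMC, or to the unit hypercube for nested sampling.
fpost = asflat(post)
x0    = prior_sample(fpost)
logdensityof(fpost, x0)
```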
An example of this is when the model needs some precomputed cache to define the model.\n\nExample\n\ndlcamp, dcphase = extract_table(obs, LogClosureAmplitude(), ClosurePhases())\ncache = create_cache(NFFTAlg(dlcamp), IntensityMap(zeros(128,128), μas2rad(100.0), μas2rad(100.0)))\n\nfunction skymodel(θ, metadata)\n (; r, a) = θ\n (; cache) = metadata\n m = stretched(ExtendedRing(a), r, r)\n return modelimage(m, metadata.cache)\nend\n\nfunction instrumentmodel(g, metadata)\n (;lg, gp) = g\n (;gcache) = metadata\n jonesStokes(lg.*exp.(1im.*gp), gcache)\nend\n\nprior = (\n r = Uniform(μas2rad(10.0), μas2rad(40.0)),\n a = Uniform(0.1, 5.0)\n )\n\nRadioLikelihood(skymodel, instrumentmodel, dataproducts::EHTObservation...;\n skymeta=(;cache,),\n instrumentmeta=(;gcache))\n\n\n\n\n\nRadioLikelihood(skymodel, dataproducts::EHTObservation...; skymeta=nothing)\n\nForms a radio likelihood from a set of data products using only a sky model. This intrinsically assumes that the instrument model is not required since it is perfect. This is useful when fitting closure quantities which are independent of the instrument.\n\nIf you want to form a likelihood from multiple arrays such as when fitting different wavelengths or days, you can combine them using MultiRadioLikelihood\n\nExample\n\njulia> RadioLikelihood(skymodel, dcphase, dlcamp)\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.IsFlat","page":"Comrade API","title":"Comrade.IsFlat","text":"struct IsFlat\n\nSpecifies that the sampling algorithm usually expects a uncontrained transform\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.IsCube","page":"Comrade API","title":"Comrade.IsCube","text":"struct IsCube\n\nSpecifies that the sampling algorithm usually expects a hypercube transform\n\n\n\n\n\n","category":"type"},{"location":"api/#Sampler-Tools","page":"Comrade API","title":"Sampler Tools","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.samplertype","category":"page"},{"location":"api/#Comrade.samplertype","page":"Comrade API","title":"Comrade.samplertype","text":"samplertype(::Type)\n\nSampler type specifies whether to use a unit hypercube or unconstrained transformation.\n\n\n\n\n\n","category":"function"},{"location":"api/#Misc","page":"Comrade API","title":"Misc","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.station_tuple\nComrade.dirty_image\nComrade.dirty_beam\nComrade.beamsize","category":"page"},{"location":"api/#Comrade.station_tuple","page":"Comrade API","title":"Comrade.station_tuple","text":"station_tuple(stations, default; reference=nothing kwargs...)\nstation_tuple(obs::EHTObservation, default; reference=nothing, kwargs...)\n\nConvienence function that will construct a NamedTuple of objects whose names are the stations in the observation obs or explicitly in the argument stations. The NamedTuple will be filled with default if no kwargs are defined otherwise each kwarg (key, value) pair denotes a station and value pair.\n\nOptionally the user can specify a reference station that will be dropped from the tuple. 
This is useful for selecting a reference station for gain phases\n\nExamples\n\njulia> stations = (:AA, :AP, :LM, :PV)\njulia> station_tuple(stations, ScanSeg())\n(AA = ScanSeg(), AP = ScanSeg(), LM = ScanSeg(), PV = ScanSeg())\njulia> station_tuple(stations, ScanSeg(); AA = FixedSeg(1.0))\n(AA = FixedSeg(1.0), AP = ScanSeg(), LM = ScanSeg(), PV = ScanSeg())\njulia> station_tuple(stations, ScanSeg(); AA = FixedSeg(1.0), PV = TrackSeg())\n(AA = FixedSeg(1.0), AP = ScanSeg(), LM = ScanSeg(), PV = TrackSeg())\njulia> station_tuple(stations, Normal(0.0, 0.1); reference=:AA, LM = Normal(0.0, 1.0))\n(AP = Normal(0.0, 0.1), LM = Normal(0.0, 1.0), PV = Normal(0.0, 0.1))\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.dirty_image","page":"Comrade API","title":"Comrade.dirty_image","text":"dirty_image(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T\n\nComputes the dirty image of the complex visibilities assuming a field of view of fov and number of pixels npix using the complex visibilities found in the observation obs.\n\nThe dirty image is the inverse Fourier transform of the measured visibilties assuming every other visibility is zero.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.dirty_beam","page":"Comrade API","title":"Comrade.dirty_beam","text":"dirty_beam(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T\n\nComputes the dirty beam of the complex visibilities assuming a field of view of fov and number of pixels npix using baseline coverage found in obs.\n\nThe dirty beam is the inverse Fourier transform of the (u,v) coverage assuming every visibility is unity and everywhere else is zero.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.beamsize","page":"Comrade API","title":"Comrade.beamsize","text":"beamsize(ac::ArrayConfiguration)\n\nCalculate the approximate beam size of the array ac as the inverse of the longest baseline distance.\n\n\n\n\n\nbeamsize(obs::EHTObservation)\n\nCalculate the approximate beam size of the observation obs as the inverse of the longest baseline distance.\n\n\n\n\n\n","category":"function"},{"location":"api/#Internal-(Not-Public-API)","page":"Comrade API","title":"Internal (Not Public API)","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.extract_FRs\nComradeBase._visibilities!\nComradeBase._visibilities","category":"page"},{"location":"api/#Comrade.extract_FRs","page":"Comrade API","title":"Comrade.extract_FRs","text":"extract_FRs\n\nExtracts the feed rotation Jones matrices (returned as a JonesPair) from an EHT observation obs.\n\nWarning\n\neht-imaging can sometimes pre-rotate the coherency matrices. As a result the field rotation can sometimes be applied twice. 
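The dirty-image utilities documented above can be sketched as follows, assuming `dvis` is an `EHTObservation` of complex visibilities; the field of view and pixel count are illustrative.

```julia
using Comrade

fov  = μas2rad(150.0)
npix = 256

dimg  = dirty_image(fov, npix, dvis)   # inverse FT of the measured visibilities
dbeam = dirty_beam(fov, npix, dvis)    # inverse FT of the (u, v) coverage (the PSF)

# Approximate array resolution: inverse of the longest baseline.
bs = beamsize(dvis)
```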
To compensate for this we have added a ehtim_fr_convention which will fix this.\n\n\n\n\n\n","category":"function"},{"location":"api/#ComradeBase._visibilities!","page":"Comrade API","title":"ComradeBase._visibilities!","text":"_visibilities!(model::AbstractModel, args...)\n\nInternal method used for trait dispatch and unpacking of args arguments in visibilities!\n\nwarn: Warn\nNot part of the public API so it may change at any moment.\n\n\n\n\n\n","category":"function"},{"location":"api/#ComradeBase._visibilities","page":"Comrade API","title":"ComradeBase._visibilities","text":"_visibilities(model::AbstractModel, args...)\n\nInternal method used for trait dispatch and unpacking of args arguments in visibilities\n\nwarn: Warn\nNot part of the public API so it may change at any moment.\n\n\n\n\n\n","category":"function"},{"location":"api/#eht-imaging-interface-(Internal)","page":"Comrade API","title":"eht-imaging interface (Internal)","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.extract_amp\nComrade.extract_cphase\nComrade.extract_lcamp\nComrade.extract_vis\nComrade.extract_coherency","category":"page"},{"location":"api/#Comrade.extract_amp","page":"Comrade API","title":"Comrade.extract_amp","text":"extract_amp(obs; kwargs...)\n\nExtracts the visibility amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.extract_cphase","page":"Comrade API","title":"Comrade.extract_cphase","text":"extract_cphase(obs; kwargs...)\n\nExtracts the closure phases from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.extract_lcamp","page":"Comrade API","title":"Comrade.extract_lcamp","text":"extract_lcamp(obs; kwargs...)\n\nExtracts the log-closure amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.extract_vis","page":"Comrade API","title":"Comrade.extract_vis","text":"extract_vis(obs; kwargs...)\n\nExtracts the stokes I complex visibilities from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.extract_coherency","page":"Comrade API","title":"Comrade.extract_coherency","text":"extract_coherency(obs; kwargs...)\n\nExtracts the full coherency matrix from an observation. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"conventions/#Conventions","page":"Conventions","title":"Conventions","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"VLBI and radio astronomy has many non-standard conventions when coming from physics. Additionally, these conventions change from telescope to telescope, often making it difficult to know what assumptions different data sets and codes are making. 
We will detail the specific conventions that Comrade adheres to.","category":"page"},{"location":"conventions/#Rotation-Convention","page":"Conventions","title":"Rotation Convention","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"We follow the standard EHT and rotate starting from the upper y-axis and moving in a counter-clockwise direction. ","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"note: Note\nWe still use the standard astronomy definition where the positive x-axis is to the left.","category":"page"},{"location":"conventions/#Fourier-Transform-Convention","page":"Conventions","title":"Fourier Transform Convention","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"We use the positive exponent definition of the Fourier transform to define our visibilities. That is, we assume that the visibilities measured by a perfect interferometer are given by","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":" V(u v) = int I(x y)e^2pi i(ux + vy)dx dy","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"This convention is consistent with the AIPS convention and what is used in other EHT codes, such as eht-imaging. ","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"warning: Warning\nThis is the opposite convention of what is written in the EHT papers, but it is the correct version for the released data.","category":"page"},{"location":"conventions/#Coherency-matrix-Convention","page":"Conventions","title":"Coherency matrix Convention","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"We use the factor of 2 definition when defining the coherency matrices. 
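For readability, the flattened Fourier-transform convention quoted above corresponds to

```math
V(u, v) = \int I(x, y)\, e^{2\pi i (u x + v y)}\, \mathrm{d}x\, \mathrm{d}y .
```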
That is, the relation coherency matrix C is given by","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":" C_pq = \n 2beginpmatrix\n leftv_pa v_qa^*right left v_pav_qb^*right \n leftv_pb v_qa^*right left v_pbv_qb^*right \n endpmatrix","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"where v_pa is the voltage measured from station p and feed a.","category":"page"},{"location":"conventions/#Circular-Polarization-Conversions","page":"Conventions","title":"Circular Polarization Conversions","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"To convert from measured RL circular cross-correlation products to the Fourier transform of the Stokes parameters, we use:","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":" beginpmatrix\n tildeI tildeQ tildeU tildeV\n endpmatrix\n =frac12\n beginpmatrix\n leftRR^*right + leftLL^*right \n leftRL^*right + leftLR^*right \n i(leftLR^*right - leftRL^*right)\n leftRR^*right - leftLL^*right\n endpmatrix","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"where e.g., leftRL^*right = 2leftv_pRv^*_pLright.","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"The inverse transformation is then:","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":" C = \n beginpmatrix\n tildeI + tildeV tildeQ + itildeU\n tildeQ - itildeU tildeI - tildeV\n endpmatrix","category":"page"},{"location":"conventions/#Linear-Polarization-Conversions","page":"Conventions","title":"Linear Polarization Conversions","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"To convert from measured XY linear cross-correlation products to the Fourier transform of the Stokes parameters, we use:","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":" beginpmatrix\n tildeI tildeQ tildeU tildeV\n endpmatrix\n =frac12\n beginpmatrix\n leftXX^*right + leftYY^*right \n leftXY^*right + leftYX^*right \n i(leftYX^*right - leftXY^*right)\n leftXX^*right - leftYY^*right\n endpmatrix","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"The inverse transformation is then:","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":" C = \n beginpmatrix\n tildeI + tildeQ tildeU + itildeV\n tildeU - itildeV tildeI - tildeQ\n endpmatrix","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"where e.g., leftXY^*right = 2leftv_pXv^*_pYright.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"EditURL = \"../../../examples/hybrid_imaging.jl\"","category":"page"},{"location":"examples/hybrid_imaging/#Hybrid-Imaging-of-a-Black-Hole","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"In this tutorial, we will use hybrid imaging to analyze the 2017 EHT data. 
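Likewise, the flattened coherency-matrix definition and circular-polarization conversion from the Conventions section just above correspond to

```math
C_{pq} = 2\begin{pmatrix}
\langle v_{pa} v_{qa}^{*}\rangle & \langle v_{pa} v_{qb}^{*}\rangle\\
\langle v_{pb} v_{qa}^{*}\rangle & \langle v_{pb} v_{qb}^{*}\rangle
\end{pmatrix},
\qquad
\begin{pmatrix} \tilde{I}\\ \tilde{Q}\\ \tilde{U}\\ \tilde{V}\end{pmatrix}
= \frac{1}{2}
\begin{pmatrix}
\langle RR^{*}\rangle + \langle LL^{*}\rangle\\
\langle RL^{*}\rangle + \langle LR^{*}\rangle\\
i\,(\langle LR^{*}\rangle - \langle RL^{*}\rangle)\\
\langle RR^{*}\rangle - \langle LL^{*}\rangle
\end{pmatrix},
\qquad
\text{where e.g. } \langle RL^{*}\rangle = 2\langle v_{pR} v_{pL}^{*}\rangle .
```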
By hybrid imaging, we mean decomposing the model into simple geometric models, e.g., rings and such, plus a rasterized image model to soak up the additional structure. This approach was first developed in BB20 and applied to EHT 2017 data. We will use a similar model in this tutorial.","category":"page"},{"location":"examples/hybrid_imaging/#Introduction-to-Hybrid-modeling-and-imaging","page":"Hybrid Imaging of a Black Hole","title":"Introduction to Hybrid modeling and imaging","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"The benefit of using a hybrid-based modeling approach is the effective compression of information/parameters when fitting the data. Hybrid modeling requires the user to incorporate specific knowledge of how you expect the source to look like. For instance for M87, we expect the image to be dominated by a ring-like structure. Therefore, instead of using a high-dimensional raster to recover the ring, we can use a ring model plus a raster to soak up the additional degrees of freedom. This is the approach we will take in this tutorial to analyze the April 6 2017 EHT data of M87.","category":"page"},{"location":"examples/hybrid_imaging/#Loading-the-Data","page":"Hybrid Imaging of a Black Hole","title":"Loading the Data","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"To get started we will load Comrade","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using Comrade","category":"page"},{"location":"examples/hybrid_imaging/#Load-the-Data","page":"Hybrid Imaging of a Black Hole","title":"Load the Data","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\n\nusing Pyehtim","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For reproducibility we use a stable random number genreator","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using StableRNGs\nrng = StableRNG(42)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"To download the data visit https://doi.org/10.25739/g85n-f134 To load the eht-imaging obsdata object we do:","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"obs = ehtim.obsdata.load_uvfits(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we do some minor preprocessing:","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Scan average the data since the data have been preprocessed so that the gain phases 
coherent.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"obs = scan_average(obs).add_fractional_noise(0.01)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For this tutorial we will once again fit complex visibilities since they provide the most information once the telescope/instrument model are taken into account.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"dvis = extract_table(obs, ComplexVisibilities())","category":"page"},{"location":"examples/hybrid_imaging/#Building-the-Model/Posterior","page":"Hybrid Imaging of a Black Hole","title":"Building the Model/Posterior","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we build our intensity/visibility model. That is, the model that takes in a named tuple of parameters and perhaps some metadata required to construct the model. For our model, we will use a raster or ContinuousImage model, an m-ring model, and a large asymmetric Gaussian component to model the unresolved short-baseline flux.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"function sky(θ, metadata)\n (;c, σimg, f, r, σ, τ, ξτ, ma1, mp1, ma2, mp2, fg) = θ\n (;ftot, grid, cache) = metadata\n # Form the image model\n # First transform to simplex space first applying the non-centered transform\n rast = to_simplex(CenteredLR(), σimg.*c)\n img = IntensityMap((ftot*f*(1-fg))*rast, grid)\n mimg = ContinuousImage(img, cache)\n # Form the ring model\n s1,c1 = sincos(mp1)\n s2,c2 = sincos(mp2)\n α = (ma1*c1, ma2*c2)\n β = (ma1*s1, ma2*s2)\n ring = smoothed(modify(MRing(α, β), Stretch(r, r*(1+τ)), Rotate(ξτ), Renormalize((ftot*(1-f)*(1-fg)))), σ)\n gauss = modify(Gaussian(), Stretch(μas2rad(250.0)), Renormalize(ftot*f*fg))\n # We group the geometric models together for improved efficiency. This will be\n # automated in future versions.\n return mimg + (ring + gauss)\nend","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Unlike other imaging examples (e.g., Imaging a Black Hole using only Closure Quantities) we also need to include a model for the instrument, i.e., gains as well. 
The gains will be broken into two components","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Gain amplitudes which are typically known to 10-20%, except for LMT, which has amplitudes closer to 50-100%.\nGain phases which are more difficult to constrain and can shift rapidly.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"function instrument(θ, metadata)\n (; lgamp, gphase) = θ\n (; gcache, gcachep) = metadata\n # Now form our instrument model\n gvis = exp.(lgamp)\n gphase = exp.(1im.*gphase)\n jgamp = jonesStokes(gvis, gcache)\n jgphase = jonesStokes(gphase, gcachep)\n return JonesModel(jgamp*jgphase)\nend","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Before we move on, let's go into the model function a bit. This function takes two arguments θ and metadata. The θ argument is a named tuple of parameters that are fit to the data. The metadata argument is all the ancillary information we need to construct the model. For our hybrid model, we will need two variables for the metadata, a grid that specifies the locations of the image pixels and a cache that defines the algorithm used to calculate the visibilities given the image model. This is required since ContinuousImage is most easily computed using number Fourier transforms like the NFFT or FFT. To combine the models, we use Comrade's overloaded + operators, which will combine the images such that their intensities and visibilities are added pointwise.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now let's define our metadata. First we will define the cache for the image. This is required to compute the numerical Fourier transform.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"fovxy = μas2rad(150.0)\nnpix = 32\ngrid = imagepixels(fovxy, fovxy, npix, npix)\nbuffer = IntensityMap(zeros(npix,npix), grid)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For our image, we will use the non-uniform Fourier transform (NFFTAlg) to compute the numerical FT. The last argument to the create_cache call is the image kernel or pulse defines the continuous function we convolve our image with to produce a continuous on-sky image.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"cache = create_cache(NFFTAlg(dvis), buffer, BSplinePulse{3}())","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"The next step is defining our image priors. For our raster c, we will use a Gaussian markov random field prior, with the softmax or centered log-ratio transform so that it lives on the simplex. That is, the sum of all the numbers from a Dirichlet distribution always equals unity. 
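As a quick, illustrative aside (not part of the original tutorial; the 32×32 size and random input are arbitrary), the centered log-ratio transform from the VLBIImagePriors package introduced next maps any real-valued array onto the simplex:

using VLBIImagePriors
x = randn(32, 32)                 # arbitrary real-valued raster parameters
r = to_simplex(CenteredLR(), x)   # non-negative pixel fluxes
sum(r) ≈ 1.0                      # the transformed raster always sums to unity

This is the same transform used inside the sky function above, where the raster parameters are first scaled by σimg.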
First we load VLBIImagePriors which contains a large number of priors and transformations that are useful for imaging.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using VLBIImagePriors","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we form the metadata","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"skymetadata = (;ftot=1.1, grid, cache)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Second, we now construct our instrument model cache. This tells us how to map from the gains to the model visibilities. However, to construct this map, we also need to specify the observation segmentation over which we expect the gains to change. This is specified in the second argument to jonescache, and currently, there are three options","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"FixedSeg(val): Fixes the corruption to the value val for all time. This is useful for reference stations\nScanSeg(): which forces the corruptions to only change from scan-to-scan\nTrackSeg(): which forces the corruptions to be constant over a night's observation","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For this work, we use the scan segmentation for the gain amplitudes since that is roughly the timescale we expect them to vary. For the phases we need to set a reference station for each scan to prevent a global phase offset degeneracy. To do this we select a reference station for each scan based on the SEFD of each telescope. The telescope with the lowest SEFD that is in each scan is selected. For M87 2017 this is almost always ALMA.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"gcache = jonescache(dvis, ScanSeg())\ngcachep = jonescache(dvis, ScanSeg(), autoref=SEFDReference(1.0 + 0.0im))\n\nintmetadata = (;gcache, gcachep)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"This is everything we need to form our likelihood. Note the first two arguments must be the model and then the metadata for the likelihood. The rest of the arguments are required to be Comrade.EHTObservation","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"lklhd = RadioLikelihood(sky, instrument, dvis;\n skymeta=skymetadata, instrumentmeta=intmetadata)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Part of hybrid imaging is to force a scale separation between the different model components to make them identifiable. 
To enforce this we will set the length scale of the raster component equal to the beam size of the telescope in units of pixel length, which is given by","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"beam = beamsize(dvis)\nrat = (beam/(step(grid.X)))\ncprior = GaussMarkovRandomField(10*rat, 1.0, size(grid))","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"additionlly we will fix the standard deviation of the field to unity and instead use a pseudo non-centered parameterization for the field. GaussMarkovRandomField(meanpr, 0.1*rat, 1.0, crcache)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we can construct the instrument model prior Each station requires its own prior on both the amplitudes and phases. For the amplitudes we assume that the gains are apriori well calibrated around unit gains (or 0 log gain amplitudes) which corresponds to no instrument corruption. The gain dispersion is then set to 10% for all stations except LMT, representing that we expect 10% deviations from scan-to-scan. For LMT we let the prior expand to 100% due to the known pointing issues LMT had in 2017.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using Distributions\nusing DistributionsAD\ndistamp = station_tuple(dvis, Normal(0.0, 0.1); LM = Normal(0.0, 1.0))","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For the phases, as mentioned above, we will use a segmented gain prior. This means that rather than the parameters being directly the gains, we fit the first gain for each site, and then the other parameters are the segmented gains compared to the previous time. To model this","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"#, we break the gain phase prior into two parts. The first is the prior","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"for the first observing timestamp of each site, distphase0, and the second is the prior for segmented gain ϵₜ from time i to i+1, given by distphase. For the EHT, we are dealing with pre-calibrated data, so often, the gain phase jumps from scan to scan are minor. 
As such, we can put a more informative prior on distphase.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"warning: Warning\nWe use AA (ALMA) as a reference station so we do not have to specify a gain prior for it.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"distphase = station_tuple(dvis, DiagonalVonMises(0.0, inv(π^2)))\n\nusing VLBIImagePriors","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Finally we can put form the total model prior","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"prior = NamedDist(\n c = cprior,\n # We use a strong smoothing prior since we want to limit the amount of high-frequency structure in the raster.\n σimg = truncated(Normal(0.0, 0.1); lower=0.01),\n f = Uniform(0.0, 1.0),\n r = Uniform(μas2rad(10.0), μas2rad(30.0)),\n σ = Uniform(μas2rad(0.1), μas2rad(10.0)),\n τ = truncated(Normal(0.0, 0.1); lower=0.0, upper=1.0),\n ξτ = Uniform(-π/2, π/2),\n ma1 = Uniform(0.0, 0.5),\n mp1 = Uniform(0.0, 2π),\n ma2 = Uniform(0.0, 0.5),\n mp2 = Uniform(0.0, 2π),\n fg = Uniform(0.0, 1.0),\n lgamp = CalPrior(distamp, gcache),\n gphase = CalPrior(distphase, gcachep),\n )","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"This is everything we need to specify our posterior distribution, which our is the main object of interest in image reconstructions when using Bayesian inference.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"post = Posterior(lklhd, prior)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"To sample from our prior we can do","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"xrand = prior_sample(rng, post)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"and then plot the results","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"import CairoMakie as CM\ng = imagepixels(μas2rad(150.0), μas2rad(150.0), 128, 128)\nimageviz(intensitymap(skymodel(post, xrand), g))","category":"page"},{"location":"examples/hybrid_imaging/#Reconstructing-the-Image","page":"Hybrid Imaging of a Black Hole","title":"Reconstructing the Image","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"To sample from this posterior, it is convenient to first move from our constrained parameter space to an unconstrained one (i.e., the support of the transformed posterior is (-∞, ∞)). 
This is done using the asflat function.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"tpost = asflat(post)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"We can now also find the dimension of our posterior or the number of parameters we will sample.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"warning: Warning\nThis can often be different from what you would expect. This is especially true when using angular variables, where we often artificially increase the dimension of the parameter space to make sampling easier.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"ndim = dimension(tpost)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we optimize using LBFGS","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using ComradeOptimization\nusing OptimizationOptimJL\nusing Zygote\nf = OptimizationFunction(tpost, Optimization.AutoZygote())\nprob = Optimization.OptimizationProblem(f, prior_sample(rng, tpost), nothing)\nsol = solve(prob, LBFGS(); maxiters=5_000);\nnothing #hide","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Before we analyze our solution we first need to transform back to parameter space.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"xopt = transform(tpost, sol)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"First we will evaluate our fit by plotting the residuals","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using Plots\nresidual(vlbimodel(post, xopt), dvis, ylabel=\"Correlated Flux Residual\")","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"and now closure phases","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now these residuals look a bit high. However, it turns out this is because the MAP is typically not a great estimator and will not provide very predictive measurements of the data. 
We will show this below after sampling from the posterior.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"CM.image(g, skymodel(post, xopt), axis=(aspect=1, xreversed=true, title=\"MAP\"), colormap=:afmhot)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"We will now move directly to sampling at this point.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using ComradeAHMC\nusing Zygote\nmetric = DiagEuclideanMetric(ndim)\nchain, stats = sample(rng, post, AHMC(;metric, autodiff=Val(:Zygote)), 700; nadapts=500, init_params=xopt)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"We then remove the adaptation/warmup phase from our chain","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"chain = chain[501:end]\nstats = stats[501:end]","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"warning: Warning\nThis should be run for 2-3x more steps to properly estimate expectations of the posterior","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now lets plot the mean image and standard deviation images. To do this we first clip the first 250 MCMC steps since that is during tuning and so the posterior is not sampling from the correct stationary distribution.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using StatsBase\nmsamples = skymodel.(Ref(post), chain[begin:2:end]);\nnothing #hide","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"The mean image is then given by","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"imgs = intensitymap.(msamples, fovxy, fovxy, 128, 128)\nimageviz(mean(imgs), colormap=:afmhot)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"imageviz(std(imgs), colormap=:batlow)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"We can also split up the model into its components and analyze each separately","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"comp = Comrade.components.(msamples)\nring_samples = getindex.(comp, 2)\nrast_samples = first.(comp)\nring_imgs = intensitymap.(ring_samples, fovxy, fovxy, 128, 128)\nrast_imgs = intensitymap.(rast_samples, fovxy, fovxy, 128, 128)\n\nring_mean, ring_std = mean_and_std(ring_imgs)\nrast_mean, rast_std = mean_and_std(rast_imgs)\n\nfig = CM.Figure(; resolution=(800, 800))\naxes = [CM.Axis(fig[i, j], xreversed=true, aspect=CM.DataAspect()) for i in 1:2, j in 
1:2]\nCM.image!(axes[1,1], ring_mean, colormap=:afmhot); axes[1,1].title = \"Ring Mean\"\nCM.image!(axes[1,2], ring_std, colormap=:afmhot); axes[1,2].title = \"Ring Std. Dev.\"\nCM.image!(axes[2,1], rast_mean, colormap=:afmhot); axes[2,1].title = \"Rast Mean\"\nCM.image!(axes[2,2], rast_std, colormap=:afmhot); axes[2,2].title = \"Rast Std. Dev.\"\nfig","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Finally, let's take a look at some of the ring parameters","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"figd = CM.Figure(;resolution=(900, 600))\np1 = CM.density(figd[1,1], rad2μas(chain.r)*2, axis=(xlabel=\"Ring Diameter (μas)\",))\np2 = CM.density(figd[1,2], rad2μas(chain.σ)*2*sqrt(2*log(2)), axis=(xlabel=\"Ring FWHM (μas)\",))\np3 = CM.density(figd[1,3], -rad2deg.(chain.mp1) .+ 360.0, axis=(xlabel = \"Ring PA (deg) E of N\",))\np4 = CM.density(figd[2,1], 2*chain.ma1, axis=(xlabel=\"Brightness asymmetry\",))\np5 = CM.density(figd[2,2], 1 .- chain.f, axis=(xlabel=\"Ring flux fraction\",))\nfigd","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now let's check the residuals using draws from the posterior","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"p = plot();\nfor s in sample(chain, 10)\n residual!(p, vlbimodel(post, s), dvis)\nend\np","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"And everything looks pretty good! Now comes the hard part: interpreting the results...","category":"page"},{"location":"examples/hybrid_imaging/#Computing-information","page":"Hybrid Imaging of a Black Hole","title":"Computing information","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Julia Version 1.8.5\nCommit 17cfb8e65ea (2023-01-08 06:45 UTC)\nPlatform Info:\n OS: Linux (x86_64-linux-gnu)\n CPU: 32 × AMD Ryzen 9 7950X 16-Core Processor\n WORD_SIZE: 64\n LIBM: libopenlibm\n LLVM: libLLVM-13.0.1 (ORCJIT, znver3)\n Threads: 1 on 32 virtual cores\nEnvironment:\n JULIA_EDITOR = code\n JULIA_NUM_THREADS = 1","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"EditURL = \"../../../examples/data.jl\"","category":"page"},{"location":"examples/data/#Loading-Data-into-Comrade","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"","category":"section"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"The VLBI field does not have a standardized data format, and the EHT uses a particular uvfits format similar to the optical interferometry oifits format. 
As a result, we reuse the excellent eht-imaging package to load data into Comrade.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"Once the data is loaded, we then convert the data into the tabular format Comrade expects. Note that this may change to a Julia package as the Julia radio astronomy group grows.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"To get started, we will load Comrade and Plots to enable visualizations of the data","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"using Comrade\n\nusing Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\n\n\nusing Plots","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"We also load Pyehtim since it loads eht-imaging into Julia using PythonCall and exports the variable ehtim","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"using Pyehtim","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"To load the data we will use eht-imaging. We will use the 2017 public M87 data which can be downloaded from cyverse","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"obseht = ehtim.obsdata.load_uvfits(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"Now we will average the data over telescope scans. 
Note that the EHT data has been pre-calibrated so this averaging doesn't induce large coherence losses.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"obs = Pyehtim.scan_average(obseht)","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"warning: Warning\nWe use a custom scan-averaging function to ensure that the scan-times are homogenized.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"We can now extract data products that Comrade can use","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"vis = extract_table(obs, ComplexVisibilities()) #complex visibilites\namp = extract_table(obs, VisibilityAmplitudes()) # visibility amplitudes\ncphase = extract_table(obs, ClosurePhases(; snrcut=3.0)) # extract minimal set of closure phases\nlcamp = extract_table(obs, LogClosureAmplitudes(; snrcut=3.0)) # extract minimal set of log-closure amplitudes","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"For polarization we first load the data in the cirular polarization basis Additionally, we load the array table at the same time to load the telescope mounts.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"obseht = Pyehtim.load_uvfits_and_array(\n joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/polarized_gaussian_all_corruptions.uvfits\"),\n joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/array.txt\"),\n polrep=\"circ\"\n )\nobs = Pyehtim.scan_average(obseht)\ncoh = extract_table(obs, Coherencies())","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"warning: Warning\nAlways use our extract_cphase and extract_lcamp functions to find the closures eht-imaging will sometimes incorrectly calculate a non-redundant set of closures.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"We can also recover the array used in the observation using","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"ac = arrayconfig(vis)\nplot(ac) # Plot the baseline coverage","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"To plot the data we just call","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"l = @layout [a b; c d]\npv = plot(vis)\npa = plot(amp)\npcp = plot(cphase)\nplc = plot(lcamp)\n\nplot(pv, pa, pcp, plc; layout=l)","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"And also the coherency matrices","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"plot(coh)","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"","category":"page"},{"location":"examples/data/","page":"Loading Data into 
Comrade","title":"Loading Data into Comrade","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"EditURL = \"../../../examples/imaging_vis.jl\"","category":"page"},{"location":"examples/imaging_vis/#Stokes-I-Simultaneous-Image-and-Instrument-Modeling","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"","category":"section"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"In this tutorial, we will create a preliminary reconstruction of the 2017 M87 data on April 6 by simultaneously creating an image and model for the instrument. By instrument model, we mean something akin to self-calibration in traditional VLBI imaging terminology. However, unlike traditional self-cal, we will at each point in our parameter space effectively explore the possible self-cal solutions. This will allow us to constrain and marginalize over the instrument effects, such as time variable gains.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To get started we load Comrade.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Comrade\n\n\nusing Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Pyehtim\nusing LinearAlgebra","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"For reproducibility we use a stable random number genreator","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using StableRNGs\nrng = StableRNG(42)","category":"page"},{"location":"examples/imaging_vis/#Load-the-Data","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Load the Data","text":"","category":"section"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To download the data visit https://doi.org/10.25739/g85n-f134 First we will load our data:","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"obs = ehtim.obsdata.load_uvfits(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"SR1_M87_2017_096_hi_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we do some minor preprocessing:","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I 
Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Scan average the data since the data have been preprocessed so that the gain phases coherent.\nAdd 1% systematic noise to deal with calibration issues that cause 1% non-closing errors.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"obs = scan_average(obs.add_fractional_noise(0.01))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we extract our complex visibilities.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"dvis = extract_table(obs, ComplexVisibilities())","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"##Building the Model/Posterior","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now, we must build our intensity/visibility model. That is, the model that takes in a named tuple of parameters and perhaps some metadata required to construct the model. For our model, we will use a raster or ContinuousImage for our image model. The model is given below:","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"function sky(θ, metadata)\n (;fg, c, σimg) = θ\n (;ftot, K, meanpr, grid, cache) = metadata\n # Transform to the log-ratio pixel fluxes\n cp = meanpr .+ σimg.*c.params\n # Transform to image space\n rast = (ftot*(1-fg))*K(to_simplex(CenteredLR(), cp))\n img = IntensityMap(rast, grid)\n m = ContinuousImage(img, cache)\n # Add a large-scale gaussian to deal with the over-resolved mas flux\n g = modify(Gaussian(), Stretch(μas2rad(250.0), μas2rad(250.0)), Renormalize(ftot*fg))\n return m + g\nend","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Unlike other imaging examples (e.g., Imaging a Black Hole using only Closure Quantities) we also need to include a model for the instrument, i.e., gains as well. 
The gains will be broken into two components","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Gain amplitudes which are typically known to 10-20%, except for LMT, which has amplitudes closer to 50-100%.\nGain phases which are more difficult to constrain and can shift rapidly.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"function instrument(θ, metadata)\n (; lgamp, gphase) = θ\n (; gcache, gcachep) = metadata\n # Now form our instrument model\n gvis = exp.(lgamp)\n gphase = exp.(1im.*gphase)\n jgamp = jonesStokes(gvis, gcache)\n jgphase = jonesStokes(gphase, gcachep)\n return JonesModel(jgamp*jgphase)\nend","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"The model construction is very similar to Imaging a Black Hole using only Closure Quantities, except we include a large scale gaussian since we want to model the zero baselines. For more information about the image model please read the closure-only example. Let's discuss the instrument model Comrade.JonesModel. Thanks to the EHT pre-calibration, the gains are stable over scans. Therefore, we can model the gains on a scan-by-scan basis. To form the instrument model, we need our","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Our (log) gain amplitudes and phases are given below by lgamp and gphase\nOur function or cache that maps the gains from a list to the stations they impact gcache.\nThe set of Comrade.JonesPairs produced by jonesStokes","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"These three ingredients then specify our instrument model. The instrument model can then be combined with our image model cimg to form the total JonesModel.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now, let's set up our image model. The EHT's nominal resolution is 20-25 μas. Additionally, the EHT is not very sensitive to a larger field of view. Typically 60-80 μas is enough to describe the compact flux of M87. Given this, we only need to use a small number of pixels to describe our image.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"npix = 32\nfovx = μas2rad(150.0)\nfovy = μas2rad(150.0)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now let's form our cache's. 
First, we have our usual image cache which is needed to numerically compute the visibilities.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"grid = imagepixels(fovx, fovy, npix, npix)\nbuffer = IntensityMap(zeros(npix, npix), grid)\ncache = create_cache(NFFTAlg(dvis), buffer, BSplinePulse{3}())","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Second, we now construct our instrument model cache. This tells us how to map from the gains to the model visibilities. However, to construct this map, we also need to specify the observation segmentation over which we expect the gains to change. This is specified in the second argument to jonescache, and currently, there are two options","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"FixedSeg(val): Fixes the corruption to the value val for all time. This is usefule for reference stations\nScanSeg(): which forces the corruptions to only change from scan-to-scan\nTrackSeg(): which forces the corruptions to be constant over a night's observation","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"For this work, we use the scan segmentation for the gain amplitudes since that is roughly the timescale we expect them to vary. For the phases we use a station specific scheme where we set AA to be fixed to unit gain because it will function as a reference station.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gcache = jonescache(dvis, ScanSeg())\ngcachep = jonescache(dvis, ScanSeg(); autoref=SEFDReference((complex(1.0))))\n\nusing VLBIImagePriors","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we need to specify our image prior. For this work we will use a Gaussian Markov Random field prior Since we are using a Gaussian Markov random field prior we need to first specify our mean image. This behaves somewhat similary to a entropy regularizer in that it will start with an initial guess for the image structure. 
For this tutorial we will use a a symmetric Gaussian with a FWHM of 60 μas","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"fwhmfac = 2*sqrt(2*log(2))\nmpr = modify(Gaussian(), Stretch(μas2rad(50.0)./fwhmfac))\nimgpr = intensitymap(mpr, grid)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now since we are actually modeling our image on the simplex we need to ensure that our mean image has unit flux","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"imgpr ./= flux(imgpr)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"and since our prior is not on the simplex we need to convert it to unconstrained or real space.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"meanpr = to_real(CenteredLR(), Comrade.baseimage(imgpr))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we can form our metadata we need to fully define our model.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"metadata = (;ftot=1.1, K=CenterImage(imgpr), meanpr, grid, cache, gcache, gcachep)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"We will also fix the total flux to be the observed value 1.1. This is because total flux is degenerate with a global shift in the gain amplitudes making the problem degenerate. To fix this we use the observed total flux as our value.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Moving onto our prior, we first focus on the instrument model priors. Each station requires its own prior on both the amplitudes and phases. For the amplitudes we assume that the gains are apriori well calibrated around unit gains (or 0 log gain amplitudes) which corresponds to no instrument corruption. The gain dispersion is then set to 10% for all stations except LMT, representing that we expect 10% deviations from scan-to-scan. 
For LMT we let the prior expand to 100% due to the known pointing issues LMT had in 2017.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Distributions\nusing DistributionsAD\ndistamp = station_tuple(dvis, Normal(0.0, 0.1); LM = Normal(0.0, 1.0))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"For the phases, as mentioned above, we will use a segmented gain prior. This means that rather than the parameters being directly the gains, we fit the first gain for each site, and then the other parameters are the segmented gains compared to the previous time. To model this we break the gain phase prior into two parts. The first is the prior for the first observing timestamp of each site, distphase0, and the second is the prior for segmented gain ϵₜ from time i to i+1, given by distphase. For the EHT, we are dealing with pre-calibrated data, so the gain phase jumps from scan to scan are often minor. As such, we can put a more informative prior on distphase.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"warning: Warning\nWe use AA (ALMA) as a reference station so we do not have to specify a gain prior for it.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"distphase = station_tuple(dvis, DiagonalVonMises(0.0, inv(π^2)))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"In addition, we want a reasonable guess for what the resolution of our image should be. For radio astronomy this is given roughly by the inverse of the longest baseline in the observation. To put this into pixel space we then divide by the pixel size.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"beam = beamsize(dvis)\nrat = (beam/(step(grid.X)))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To make the Gaussian Markov random field efficient we first precompute a bunch of quantities that allow us to scale things linearly with the number of image pixels. This drastically improves on the usual N^3 scaling you get from generic Gaussian Processes.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"crcache = MarkovRandomFieldCache(meanpr)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"One of the benefits of the Bayesian approach is that we can fit for the hyperparameters of our prior/regularizers unlike traditional RML approaches. 
To construct this heirarchical prior we will first make a map that takes in our regularizer hyperparameters and returns the image prior given those hyperparameters.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"fmap = let crcache=crcache\n x->GaussMarkovRandomField(x, 1.0, crcache)\nend","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we can finally form our image prior. For this we use a heirarchical prior where the inverse correlation length is given by a Half-Normal distribution whose peak is at zero and standard deviation is 0.1/rat where recall rat is the beam size per pixel. For the variance of the random field we use another half normal prior with standard deviation 0.1. The reason we use the half-normal priors is to prefer \"simple\" structures. Gaussian Markov random fields are extremly flexible models, and to prevent overfitting it is common to use priors that penalize complexity. Therefore, we want to use priors that enforce similarity to our mean image. If the data wants more complexity then it will drive us away from the prior.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"cprior = HierarchicalPrior(fmap, InverseGamma(1.0, -log(0.01*rat)))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"We can now form our model parameter priors. Like our other imaging examples, we use a Dirichlet prior for our image pixels. 
For the log gain amplitudes, we use the CalPrior which automatically constructs the prior for the given jones cache gcache.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"prior = NamedDist(\n fg = Uniform(0.0, 1.0),\n σimg = truncated(Normal(0.0, 1.0); lower=0.01),\n c = cprior,\n lgamp = CalPrior(distamp, gcache),\n gphase = CalPrior(distphase, gcachep),\n )","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Putting it all together we form our likelihood and posterior objects for optimization and sampling.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"lklhd = RadioLikelihood(sky, instrument, dvis; skymeta=metadata, instrumentmeta=metadata)\npost = Posterior(lklhd, prior)","category":"page"},{"location":"examples/imaging_vis/#Reconstructing-the-Image-and-Instrument-Effects","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Reconstructing the Image and Instrument Effects","text":"","category":"section"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To sample from this posterior, it is convenient to move from our constrained parameter space to an unconstrained one (i.e., the support of the transformed posterior is (-∞, ∞)). This is done using the asflat function.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"tpost = asflat(post)\nndim = dimension(tpost)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Our Posterior and TransformedPosterior objects satisfy the LogDensityProblems interface. This allows us to easily switch between different AD backends and many of Julia's statistical inference packages use this interface as well.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using LogDensityProblemsAD\nusing Zygote\ngtpost = ADgradient(Val(:Zygote), tpost)\nx0 = randn(rng, ndim)\nLogDensityProblemsAD.logdensity_and_gradient(gtpost, x0)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"We can now also find the dimension of our posterior or the number of parameters we are going to sample.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"warning: Warning\nThis can often be different from what you would expect. 
This is especially true when using angular variables where we often artificially increase the dimension of the parameter space to make sampling easier.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To initialize our sampler we will use optimize using LBFGS","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using ComradeOptimization\nusing OptimizationOptimJL\nf = OptimizationFunction(tpost, Optimization.AutoZygote())\nprob = Optimization.OptimizationProblem(f, prior_sample(rng, tpost), nothing)\nℓ = logdensityof(tpost)\nsol = solve(prob, LBFGS(), maxiters=1_000, g_tol=1e-1);\nnothing #hide","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now transform back to parameter space","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"xopt = transform(tpost, sol.u)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"warning: Warning\nFitting gains tends to be very difficult, meaning that optimization can take a lot longer. The upside is that we usually get nicer images.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"First we will evaluate our fit by plotting the residuals","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Plots\nresidual(vlbimodel(post, xopt), dvis)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"These look reasonable, although there may be some minor overfitting. This could be improved in a few ways, but that is beyond the goal of this quick tutorial. Plotting the image, we see that we have a much cleaner version of the closure-only image from Imaging a Black Hole using only Closure Quantities.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"import CairoMakie as CM\nimg = intensitymap(skymodel(post, xopt), fovx, fovy, 128, 128)\nCM.image(img, axis=(xreversed=true, aspect=1, title=\"MAP Image\"), colormap=:afmhot)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Because we also fit the instrument model, we can inspect their parameters. 
To do this, Comrade provides a caltable function that converts the flattened gain parameters to a tabular format based on the time and its segmentation.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gt = Comrade.caltable(gcachep, xopt.gphase)\nplot(gt, layout=(3,3), size=(600,500))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"The gain phases are pretty random, although much of this is due to us picking a random reference station for each scan.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Moving onto the gain amplitudes, we see that most of the gain variation is within 10% as expected except LMT, which has massive variations.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gt = Comrade.caltable(gcache, exp.(xopt.lgamp))\nplot(gt, layout=(3,3), size=(600,500))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To sample from the posterior, we will use HMC, specifically the NUTS algorithm. For information about NUTS, see Michael Betancourt's notes.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"note: Note\nFor our metric, we use a diagonal matrix due to easier tuning","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"However, due to the need to sample a large number of gain parameters, constructing the posterior is rather time-consuming. Therefore, for this tutorial, we will only do a quick preliminary run, and any posterior inferences should be appropriately skeptical.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using ComradeAHMC\nmetric = DiagEuclideanMetric(ndim)\nchain, stats = sample(rng, post, AHMC(;metric, autodiff=Val(:Zygote)), 700; nadapts=500, init_params=xopt)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"note: Note\nThe above sampler will store the samples in memory, i.e. RAM. For large models this can lead to out-of-memory issues. To fix that you can include the keyword argument saveto = DiskStore() which periodically saves the samples to disk limiting memory useage. You can load the chain using load_table(diskout) where diskout is the object returned from sample. 
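As a rough, hypothetical sketch assembled only from the keywords described in this note (not a verbatim excerpt from ComradeAHMC), the disk-backed workflow might look like\ndiskout = sample(rng, post, AHMC(;metric, autodiff=Val(:Zygote)), 700; nadapts=500, init_params=xopt, saveto=DiskStore())\nchain = load_table(diskout)\nwhere load_table reads the saved samples back from disk. 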
For more information please see ComradeAHMC.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we prune the adaptation phase","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"chain = chain[501:end]","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"warning: Warning\nThis should be run for likely an order of magnitude more steps to properly estimate expectations of the posterior","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now that we have our posterior, we can put error bars on all of our plots above. Let's start by finding the mean and standard deviation of the gain phases","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gphase = hcat(chain.gphase...)\nmgphase = mean(gphase, dims=2)\nsgphase = std(gphase, dims=2)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"and now the gain amplitudes","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gamp = exp.(hcat(chain.lgamp...))\nmgamp = mean(gamp, dims=2)\nsgamp = std(gamp, dims=2)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we can use the measurements package to automatically plot everything with error bars. 
First we create a caltable the same way but making sure all of our variables have errors attached to them.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Measurements\ngmeas_am = measurement.(mgamp, sgamp)\nctable_am = caltable(gcache, vec(gmeas_am)) # caltable expects gmeas_am to be a Vector\ngmeas_ph = measurement.(mgphase, sgphase)\nctable_ph = caltable(gcachep, vec(gmeas_ph))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now let's plot the phase curves","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"plot(ctable_ph, layout=(3,3), size=(600,500))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"and now the amplitude curves","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"plot(ctable_am, layout=(3,3), size=(600,500))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Finally let's construct some representative image reconstructions.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"samples = skymodel.(Ref(post), chain[begin:2:end])\nimgs = intensitymap.(samples, fovx, fovy, 128, 128)\n\nmimg = mean(imgs)\nsimg = std(imgs)\nfig = CM.Figure(;resolution=(800, 800))\nCM.image(fig[1,1], mimg,\n axis=(xreversed=true, aspect=1, title=\"Mean Image\"),\n colormap=:afmhot)\nCM.image(fig[1,2], simg./(max.(mimg, 1e-5)),\n axis=(xreversed=true, aspect=1, title=\"1/SNR\",),\n colormap=:afmhot)\nCM.image(fig[2,1], imgs[1],\n axis=(xreversed=true, aspect=1,title=\"Draw 1\"),\n colormap=:afmhot)\nCM.image(fig[2,2], imgs[end],\n axis=(xreversed=true, aspect=1,title=\"Draw 2\"),\n colormap=:afmhot)\nfig","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now let's check the residuals","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"p = plot();\nfor s in sample(chain, 10)\n residual!(p, vlbimodel(post, s), dvis)\nend\np","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"And viola, you have just finished making a preliminary image and instrument model reconstruction. 
In reality, you should run the sample step for many more MCMC steps to get a reliable estimate for the reconstructed image and instrument model parameters.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"EditURL = \"../../../examples/imaging_pol.jl\"","category":"page"},{"location":"examples/imaging_pol/#Polarized-Image-and-Instrumental-Modeling","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"In this tutorial, we will analyze a simulated simple polarized dataset to demonstrate Comrade's polarized imaging capabilities.","category":"page"},{"location":"examples/imaging_pol/#Introduction-to-Polarized-Imaging","page":"Polarized Image and Instrumental Modeling","title":"Introduction to Polarized Imaging","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"The EHT is a polarized interferometer. However, like all VLBI interferometers, it does not directly measure the Stokes parameters (I, Q, U, V). Instead, it measures components related to the electric field at the telescope along two directions using feeds. There are two types of feeds at telescopes: circular, which measure RL components of the electric field, and linear feeds, which measure XY components of the electric field. Most sites in the EHT use circular feeds, meaning they measure the right (R) and left electric field (L) at each telescope. These circular electric field measurements are then correlated, producing coherency matrices,","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" C_ij = beginpmatrix\n RR^* RL^*\n LR^* LL^*\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"These coherency matrices are the fundamental object in interferometry and what the telescope observes. 
For a perfect interferometer, these coherency matrices are related to the usual Fourier transform of the stokes parameters by","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" beginpmatrix\n tildeI tildeQ tildeU tildeV\n endpmatrix\n =frac12\n beginpmatrix\n RR^* + LL^* \n RL^* + LR^* \n i(LR^* - RL^*)\n RR^* - LL^*\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"for circularly polarized measurements.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"note: Note\nIn this tutorial, we stick to circular feeds but Comrade has the capabilities to model linear (XX,XY, ...) and mixed basis coherencies (e.g., RX, RY, ...).","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"In reality, the measure coherencies are corrupted by both the atmosphere and the telescope itself. In Comrade we use the RIME formalism [1] to represent these corruptions, namely our measured coherency matrices V_ij are given by","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" V_ij = J_iC_ijJ_j^dagger","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"where J is known as a Jones matrix and ij denotes the baseline ij with sites i and j.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Comrade is highly flexible with how the Jones matrices are formed and provides several convenience functions that parameterize standard Jones matrices. These matrices include:","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"jonesG which builds the set of complex gain Jones matrices","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" G = beginpmatrix\n g_a 0\n 0 g_b\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"jonesD which builds the set of complex d-terms Jones matrices","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" D = beginpmatrix\n 1 d_a\n d_b 1\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"jonesT is the basis transform matrix T. This transformation is special and combines two things using the decomposition T=FB. The first, B, is the transformation from some reference basis to the observed coherency basis (this allows for mixed basis measurements). 
The second is the feed rotation, F, that transforms from some reference axis to the axis of the telescope as the source moves in the sky. The feed rotation matrix F in terms of the per-station feed rotation angle varphi is","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" F = beginpmatrix\n e^-ivarphi 0\n 0 e^ivarphi\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"In the rest of the tutorial, we are going to solve for all of these instrument model terms in addition to our image structure to reconstruct a polarized image of a synthetic dataset.","category":"page"},{"location":"examples/imaging_pol/#Load-the-Data","page":"Polarized Image and Instrumental Modeling","title":"Load the Data","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"To get started, we will load Comrade","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using Comrade","category":"page"},{"location":"examples/imaging_pol/#Load-the-Data-2","page":"Polarized Image and Instrumental Modeling","title":"Load the Data","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\nusing Pyehtim","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"For reproducibility, we use a stable random number generator","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using StableRNGs\nrng = StableRNG(123)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now we will load some synthetic polarized data.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"obs = Pyehtim.load_uvfits_and_array(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/polarized_gaussian_all_corruptions.uvfits\"),\n joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/array.txt\"), polrep=\"circ\")","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Notice that, unlike other non-polarized tutorials, we need to include a second argument. 
This is the array file of the observation and is required to determine the feed rotation of the array.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now we scan-average the data to boost the SNR and reduce the total data volume.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"obs = scan_average(obs)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now we extract our observed/corrupted coherency matrices.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"dvis = extract_table(obs, Coherencies())","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Building the Model/Posterior","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"To build the model, we first break it down into two parts:","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"The image or sky model. In Comrade, all polarized image models are written in terms of the Stokes parameters. The reason for using Stokes parameters is that they are usually what physical models consider and are often the easiest to reason about since they are additive. In this tutorial, we will use a polarized image model based on Pesce (2021)[2]. This model parameterizes the polarized image in terms of the Poincare sphere, and allows us to easily incorporate physical restrictions such as I^2 ≥ Q^2 + U^2 + V^2.\nThe instrument model. The instrument model describes the impact of instrumental and atmospheric effects. We will be using the J = GDT decomposition we described above. However, to parameterize the R/L complex gains, we will be using a gain product and ratio decomposition. The reason for this decomposition is that in realistic measurements, the gain ratios and products have different temporal characteristics. Namely, many of the EHT observations tend to demonstrate constant R/L gain ratios across a night's observations, compared to the gain products, which vary every scan. Additionally, the gain ratios tend to be smaller (i.e., closer to unity) than the gain products. 
Using this apriori knowledge, we can build this into our model and reduce the total number of parameters we need to model.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"function sky(θ, metadata)\n (;c, f, p, angparams) = θ\n (;K, grid, cache) = metadata\n # Construct the image model\n # produce Stokes images from parameters\n imgI = f*K(c)\n # Converts from poincare sphere parameterization of polzarization to Stokes Parameters\n pimg = PoincareSphere2Map(imgI, p, angparams, grid)\n m = ContinuousImage(pimg, cache)\n return m\nend","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"note: Note\nIf you want to add a geometric polarized model please see the PolarizedModel docstring. For instance to create a stokes I only Gaussian component to the above model we can do pg = PolarizedModel(modify(Gaussian(), Stretch(1e-10)), ZeroModel(), ZeroModel(), ZeroModel()).","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"function instrument(θ, metadata)\n (; lgp, gpp, lgr, gpr, dRx, dRy, dLx, dLy) = θ\n (; tcache, scancache, phasecache, trackcache) = metadata\n # Now construct the basis transformation cache\n jT = jonesT(tcache)\n\n # Gain product parameters\n gPa = exp.(lgp)\n gRa = exp.(lgp .+ lgr)\n Gp = jonesG(gPa, gRa, scancache)\n # Gain ratio\n gPp = exp.(1im.*(gpp))\n gRp = exp.(1im.*(gpp.+gpr))\n Gr = jonesG(gPp, gRp, phasecache)\n ##D-terms\n D = jonesD(complex.(dRx, dRy), complex.(dLx, dLy), trackcache)\n # sandwich all the jones matrices together\n J = Gp*Gr*D*jT\n # form the complete Jones or RIME model. We use tcache here\n # to set the reference basis of the model.\n return JonesModel(J, tcache)\nend","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now, we define the model metadata required to build the model. 
We specify our image grid and the cache model needed to define the polarimetric image model.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"fovx = μas2rad(50.0)\nfovy = μas2rad(50.0)\nnx = 6\nny = floor(Int, fovy/fovx*nx)\ngrid = imagepixels(fovx, fovy, nx, ny) # image grid\nbuffer = IntensityMap(zeros(nx, ny), grid) # buffer to store temporary image\npulse = BSplinePulse{3}() # pulse we will be using\ncache = create_cache(NFFTAlg(dvis), buffer, pulse) # cache to define the NFFT transform","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Finally, we compute a center projector that forces the centroid to live at the image origin","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using VLBIImagePriors\nK = CenterImage(grid)\nskymeta = (;K, cache, grid)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"To define the instrument models, T, G, D, we need to build some Jones caches (see JonesCache) that map from a flat vector of gain/dterms to the specific sites for each baseline.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"First, we will define our deterministic transform cache. Note that this dataset has not been pre-corrected for feed rotation, so we need to add those into the tcache.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"tcache = ResponseCache(dvis; add_fr=true, ehtim_fr_convention=false)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Next we define our cache that maps quantities, e.g., gain products, that change from scan to scan.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"scancache = jonescache(dvis, ScanSeg())","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"In addition, we will assign a reference station. This is necessary for gain phases due to a trivial degeneracy being present. 
To do this we will select ALMA AA as the reference station as is standard in EHT analyses.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"phase_segs = station_tuple(dvis, ScanSeg(); AA=FixedSeg(1.0 + 0.0im))\nphasecache = jonescache(dvis, phase_segs)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Finally, we define our cache that maps quantities, e.g., gain ratios and d-terms, that are constant across a observation night, and we collect everything together.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"trackcache = jonescache(dvis, TrackSeg())\ninstrumentmeta = (;tcache, scancache, trackcache, phasecache)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Moving onto our prior, we first focus on the instrument model priors. Each station gain requires its own prior on both the amplitudes and phases. For the amplitudes, we assume that the gains are apriori well calibrated around unit gains (or 0 log gain amplitudes) which corresponds to no instrument corruption. The gain dispersion is then set to 10% for all stations except LMT, representing that we expect 10% deviations from scan-to-scan. For LMT, we let the prior expand to 100% due to the known pointing issues LMT had in 2017.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using Distributions\nusing DistributionsAD\ndistamp = station_tuple(dvis, Normal(0.0, 0.1))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"For the phases, we assume that the atmosphere effectively scrambles the gains. Since the gain phases are periodic, we also use broad von Mises priors for all stations. Notice that we don't assign a prior for AA since we have already fixed it.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"distphase = station_tuple(dvis, DiagonalVonMises(0.0, inv(π^2)); reference=:AA)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"However, we can now also use a little additional information about the phase offsets where in most cases, they are much better behaved than the products","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"distphase_ratio = station_tuple(dvis, DiagonalVonMises(0.0, inv(0.1)); reference=:AA)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Moving onto the d-terms, here we directly parameterize the real and complex components of the d-terms since they are expected to be complex numbers near the origin. 
To help enforce this smallness, a weakly informative Normal prior is used.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"distD = station_tuple(dvis, Normal(0.0, 0.1))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Our image priors are:","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"We use a Dirichlet prior, ImageDirichlet, with unit concentration for our stokes I image pixels, c.\nFor the total polarization fraction, p, we assume an uncorrelated uniform prior ImageUniform for each pixel.\nTo specify the orientation of the polarization, angparams, on the Poincare sphere, we use a uniform spherical distribution, ImageSphericalUniform.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"For all the calibration parameters, we use a helper function CalPrior which builds the prior given the named tuple of station priors and a JonesCache that specifies the segmentation scheme. For the gain products, we use the scancache, while for every other quantity, we use the trackcache.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"prior = NamedDist(\n c = ImageDirichlet(2.0, nx, ny),\n f = Uniform(0.7, 1.2),\n p = ImageUniform(nx, ny),\n angparams = ImageSphericalUniform(nx, ny),\n dRx = CalPrior(distD, trackcache),\n dRy = CalPrior(distD, trackcache),\n dLx = CalPrior(distD, trackcache),\n dLy = CalPrior(distD, trackcache),\n lgp = CalPrior(distamp, scancache),\n gpp = CalPrior(distphase, phasecache),\n lgr = CalPrior(distamp, scancache),\n gpr = CalPrior(distphase,phasecache),\n )","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Putting it all together, we form our likelihood and posterior objects for optimization and sampling.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"lklhd = RadioLikelihood(sky, instrument, dvis; skymeta, instrumentmeta)\npost = Posterior(lklhd, prior)","category":"page"},{"location":"examples/imaging_pol/#Reconstructing-the-Image-and-Instrument-Effects","page":"Polarized Image and Instrumental Modeling","title":"Reconstructing the Image and Instrument Effects","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"To sample from this posterior, it is convenient to move from our constrained parameter space to an unconstrained one (i.e., the support of the transformed posterior is (-∞, ∞)). 
This transformation is done using the asflat function.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"tpost = asflat(post)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"We can now also find the dimension of our posterior or the number of parameters we will sample.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"warning: Warning\nThis can often be different from what you would expect. This difference is especially true when using angular variables, where we often artificially increase the dimension of the parameter space to make sampling easier.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"ndim = dimension(tpost)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now we optimize. Unlike other imaging examples, we move straight to gradient optimizers due to the higher dimension of the space.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using ComradeOptimization\nusing OptimizationOptimJL\nusing Zygote\nf = OptimizationFunction(tpost, Optimization.AutoZygote())\nℓ = logdensityof(tpost)\nprob = Optimization.OptimizationProblem(f, prior_sample(tpost), nothing)\nsol = solve(prob, LBFGS(), maxiters=15_000, g_tol=1e-1);\nnothing #hide","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"warning: Warning\nFitting polarized images is generally much harder than Stokes I imaging. This difficulty means that optimization can take a long time, and starting from a good starting location is often required.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Before we analyze our solution, we need to transform it back to parameter space.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"xopt = transform(tpost, sol)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now let's evaluate our fits by plotting the residuals","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using Plots\nresidual(vlbimodel(post, xopt), dvis)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"These look reasonable, although there may be some minor overfitting. Let's compare our results to the ground truth values we know in this example. 
First, we will load the polarized truth","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using AxisKeys\nimgtrue = Comrade.load(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/polarized_gaussian.fits\"), StokesIntensityMap)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Select a reasonable zoom in of the image.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"imgtruesub = imgtrue(Interval(-fovx/2, fovx/2), Interval(-fovy/2, fovy/2))\nimg = intensitymap!(copy(imgtruesub), skymodel(post, xopt))\nimport CairoMakie as CM\nfig = CM.Figure(;resolution=(900, 400))\npolimage(fig[1,1], imgtruesub,\n axis=(xreversed=true, aspect=1, title=\"Truth\", limits=((-20.0,20.0), (-20.0, 20.0))),\n length_norm=1, plot_total=true,\n pcolorrange=(-0.25, 0.25), pcolormap=CM.Reverse(:jet))\npolimage(fig[1,2], img,\n axis=(xreversed=true, aspect=1, title=\"Recon.\", limits=((-20.0,20.0), (-20.0, 20.0))),\n length_norm=1, plot_total=true,\n pcolorrange=(-0.25, 0.25), pcolormap=CM.Reverse(:jet))\nCM.Colorbar(fig[1,3], colormap=CM.Reverse(:jet), colorrange=(-0.25, 0.25), label=\"Signed Polarization Fraction sign(V)*|p|\")\nCM.colgap!(fig.layout, 1)\nfig","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Let's compare some image statics, like the total linear polarization fraction","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"ftrue = flux(imgtruesub);\n@info \"Linear polarization true image: $(abs(linearpol(ftrue))/ftrue.I)\"\nfrecon = flux(img);\n@info \"Linear polarization recon image: $(abs(linearpol(frecon))/frecon.I)\"","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"And the Circular polarization fraction","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"@info \"Circular polarization true image: $(ftrue.V/ftrue.I)\"\n@info \"Circular polarization recon image: $(frecon.V/frecon.I)\"","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Because we also fit the instrument model, we can inspect their parameters. 
To do this, Comrade provides a caltable function that converts the flattened gain parameters to a tabular format based on the time and its segmentation.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"dR = caltable(trackcache, complex.(xopt.dRx, xopt.dRy))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"We can compare this to the ground truth d-terms","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"time AA AP AZ JC LM PV SM\n0.0 0.01-0.02im -0.08+0.07im 0.09-0.10im -0.04+0.05im 0.03-0.02im -0.01+0.02im 0.08-0.07im","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"And same for the left-handed dterms","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"dL = caltable(trackcache, complex.(xopt.dLx, xopt.dLy))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"time AA AP AZ JC LM PV SM\n0.0 0.03-0.04im -0.06+0.05im 0.09-0.08im -0.06+0.07im 0.01-0.00im -0.03+0.04im 0.06-0.05im","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Looking at the gain phase ratio","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"gphase_ratio = caltable(phasecache, xopt.gpr)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"we see that they are all very small. Which should be the case since this data doesn't have gain corruptions! Similarly our gain ratio amplitudes are also very close to unity as expected.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"gamp_ratio = caltable(scancache, exp.(xopt.lgr))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Plotting the gain phases, we see some offsets from zero. This is because the prior on the gain product phases is very broad, so we can't phase center the image. 
For realistic data this is always the case since the atmosphere effectively scrambles the phases.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"gphase_prod = caltable(phasecache, xopt.gpp)\nplot(gphase_prod, layout=(3,3), size=(650,500))\nplot!(gphase_ratio, layout=(3,3), size=(650,500))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Finally, the product gain amplitudes are all very close to unity as well, as expected since gain corruptions have not been added to the data.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"gamp_prod = caltable(scancache, exp.(xopt.lgp))\nplot(gamp_prod, layout=(3,3), size=(650,500))\nplot!(gamp_ratio, layout=(3,3), size=(650,500))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"At this point, you should run the sampler to recover an uncertainty estimate, which is identical to every other imaging example (see, e.g., Stokes I Simultaneous Image and Instrument Modeling. However, due to the time it takes to sample, we will skip that for this tutorial. Note that on the computer environment listed below, 20_000 MCMC steps take 4 hours.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"[1]: Hamaker J.P, Bregman J.D., Sault R.J. (1996) [https://articles.adsabs.harvard.edu/pdf/1996A%26AS..117..137H]","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"[2]: Pesce D. 
(2021) [https://ui.adsabs.harvard.edu/abs/2021AJ....161..178P/abstract]","category":"page"},{"location":"examples/imaging_pol/#Computing-information","page":"Polarized Image and Instrumental Modeling","title":"Computing information","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Julia Version 1.8.5\nCommit 17cfb8e65ea (2023-01-08 06:45 UTC)\nPlatform Info:\n OS: Linux (x86_64-linux-gnu)\n CPU: 32 × AMD Ryzen 9 7950X 16-Core Processor\n WORD_SIZE: 64\n LIBM: libopenlibm\n LLVM: libLLVM-13.0.1 (ORCJIT, znver3)\n Threads: 1 on 32 virtual cores\nEnvironment:\n JULIA_EDITOR = code\n JULIA_NUM_THREADS = 1","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"EditURL = \"../../../examples/geometric_modeling.jl\"","category":"page"},{"location":"examples/geometric_modeling/#Geometric-Modeling-of-EHT-Data","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Comrade has been designed to work with the EHT and ngEHT. In this tutorial, we will show how to reproduce some of the results from EHTC VI 2019.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"In EHTC VI, they considered fitting simple geometric models to the data to estimate the black hole's image size, shape, brightness profile, etc. In this tutorial, we will construct a similar model and fit it to the data in under 50 lines of code (sans comments). To start, we load Comrade and some other packages we need.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Comrade","category":"page"},{"location":"examples/geometric_modeling/#Load-the-Data","page":"Geometric Modeling of EHT Data","title":"Load the Data","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\n\nusing Pyehtim","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"For reproducibility, we use a stable random number generator","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using StableRNGs\nrng = StableRNG(42)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"The next step is to load the data. We will use the publicly available M 87 data, which can be downloaded from cyverse. 
For an introduction to data loading, see Loading Data into Comrade.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"obs = load_uvfits_and_array(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Now we will kill 0-baselines, since we don't care about large-scale flux, and since we know that the gains in this dataset are coherent across a scan, we make scan-averaged data","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"obs = Pyehtim.scan_average(obs.flag_uvdist(uv_min=0.1e9))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Now we extract the data products we want to fit","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3.0), ClosurePhases(;snrcut=3.0))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"!!!warn We remove the low-SNR closures since they are very non-Gaussian. This can create rather large biases in the model fitting since the likelihood has much heavier tails than the usual Gaussian approximation.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"For the image model, we will use a modified MRing, an infinitely thin delta ring with an azimuthal structure given by a Fourier expansion. To give the MRing some width, we will convolve the ring with a Gaussian and add an additional Gaussian to the image to model any non-ring flux. Comrade expects that any model function must accept a named tuple and must always return an object that implements the VLBISkyModels Interface","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"function model(θ)\n (;radius, width, ma, mp, τ, ξτ, f, σG, τG, ξG, xG, yG) = θ\n α = ma.*cos.(mp .- ξτ)\n β = ma.*sin.(mp .- ξτ)\n ring = f*smoothed(modify(MRing(α, β), Stretch(radius, radius*(1+τ)), Rotate(ξτ)), width)\n g = (1-f)*shifted(rotated(stretched(Gaussian(), σG, σG*(1+τG)), ξG), xG, yG)\n return ring + g\nend","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"To construct our likelihood p(V|M), where V is our data and M is our model, we use the RadioLikelihood function. 
The first argument of RadioLikelihood is always a function that constructs our Comrade model from the set of parameters θ.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"lklhd = RadioLikelihood(model, dlcamp, dcphase)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"We now need to specify the priors for our model. The easiest way to do this is to specify a NamedTuple of distributions:","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Distributions, VLBIImagePriors\nprior = NamedDist(\n radius = Uniform(μas2rad(10.0), μas2rad(30.0)),\n width = Uniform(μas2rad(1.0), μas2rad(10.0)),\n ma = (Uniform(0.0, 0.5), Uniform(0.0, 0.5)),\n mp = (Uniform(0, 2π), Uniform(0, 2π)),\n τ = Uniform(0.0, 1.0),\n ξτ= Uniform(0.0, π),\n f = Uniform(0.0, 1.0),\n σG = Uniform(μas2rad(1.0), μas2rad(100.0)),\n τG = Uniform(0.0, 1.0),\n ξG = Uniform(0.0, 1π),\n xG = Uniform(-μas2rad(80.0), μas2rad(80.0)),\n yG = Uniform(-μas2rad(80.0), μas2rad(80.0))\n )","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Note that for α and β we use a product distribution to signify that we want to use a multivariate uniform for the MRing components α and β. In general, the structure of the variables is specified by the prior. Note that this structure must be compatible with the model definition model(θ).","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"To form the posterior we now call","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"post = Posterior(lklhd, prior)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"!!!warn As of Comrade 0.9 we have switched to the proper covariant closure likelihood. This is slower than the naive diagonal likelihood, but takes into account the correlations between closures that share the same baselines.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"This constructs a posterior density that can be evaluated by calling logdensityof. 
For example,","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"logdensityof(post, (radius = μas2rad(20.0),\n width = μas2rad(10.0),\n ma = (0.3, 0.3),\n mp = (π/2, π),\n τ = 0.1,\n ξτ= π/2,\n f = 0.6,\n σG = μas2rad(50.0),\n τG = 0.1,\n ξG = 0.5,\n xG = 0.0,\n yG = 0.0))","category":"page"},{"location":"examples/geometric_modeling/#Reconstruction","page":"Geometric Modeling of EHT Data","title":"Reconstruction","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Now that we have fully specified our model, we will try to find the optimal reconstruction of our model given our observed data.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Currently, post is in parameter space. Often optimization and sampling algorithms want it in some modified space. For example, nested sampling algorithms want the parameters in the unit hypercube. To transform the posterior to the unit hypercube, we can use the ascube function","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"cpost = ascube(post)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"If we want to flatten the parameter space and move from constrained parameters to (-∞, ∞) support, we can use the asflat function","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"fpost = asflat(post)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"These transformed posteriors expect a vector of parameters. That is, we can evaluate the transformed log density by calling","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"logdensityof(cpost, rand(rng, dimension(cpost)))\nlogdensityof(fpost, randn(rng, dimension(fpost)))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Note that the cpost log density expects a vector whose elements each live in [0,1].","category":"page"},{"location":"examples/geometric_modeling/#Finding-the-Optimal-Image","page":"Geometric Modeling of EHT Data","title":"Finding the Optimal Image","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Typically, most VLBI modeling codes only care about finding the optimal or best-guess image of our posterior post. To do this, we will use Optimization.jl and specifically the BlackBoxOptim.jl package. For Comrade, this workflow is very similar to the usual Optimization.jl workflow. The only thing to keep in mind is that Optimization.jl expects the parameters of the function we are evaluating to be represented as a flat Vector of floats. Therefore, we must use one of our transformed posteriors, cpost or fpost. 
For this example","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"#, we will use `cpost` since it restricts the domain to live within the compact unit hypercube\n#, which is easier to explore for non-gradient-based optimizers like `BBO`.\n\nusing ComradeOptimization\nusing OptimizationBBO\n\nndim = dimension(fpost)\nf = OptimizationFunction(fpost)\nprob = Optimization.OptimizationProblem(f, randn(rng, ndim), nothing, lb=fill(-5.0, ndim), ub=fill(5.0, ndim))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Now we solve for our optimal image.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(); maxiters=50_000);\nnothing #hide","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"The sol vector is in the transformed space, so first we need to transform back to parameter space so that we can interpret the solution.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"xopt = transform(fpost, sol)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Given this, we can now plot the optimal image or the maximum a posteriori (MAP) image.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"import CairoMakie as CM\ng = imagepixels(μas2rad(200.0), μas2rad(200.0), 256, 256)\nfig, ax, plt = CM.image(g, model(xopt); axis=(xreversed=true, aspect=1, xlabel=\"RA (μas)\", ylabel=\"Dec (μas)\"), figure=(;resolution=(650,500),) ,colormap=:afmhot)","category":"page"},{"location":"examples/geometric_modeling/#Quantifying-the-Uncertainty-of-the-Reconstruction","page":"Geometric Modeling of EHT Data","title":"Quantifying the Uncertainty of the Reconstruction","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"While finding the optimal image is often helpful, in science, the most important thing is to quantify the certainty of our inferences. This is the goal of Comrade. In the language of Bayesian statistics, we want to find a representation of the posterior of possible image reconstructions given our choice of model and the data.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Comrade provides several sampling and other posterior approximation tools. To see the list, please see the Libraries section of the docs. For this example, we will be using AdvancedHMC.jl, which uses an adaptive Hamiltonian Monte Carlo sampler called NUTS to approximate the posterior. Most of Comrade's external libraries follow a similar interface. 
To use AdvancedHMC do the following:","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using ComradeAHMC, Zygote\nchain, stats = sample(rng, post, AHMC(metric=DiagEuclideanMetric(ndim), autodiff=Val(:Zygote)), 2000; nadapts=1000, init_params=xopt)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"That's it! To finish it up we can then plot some simple visual fit diagnostics.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"First to plot the image we call","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"imgs = intensitymap.(skymodel.(Ref(post), sample(chain[1000:end], 100)), μas2rad(200.0), μas2rad(200.0), 128, 128)\nimageviz(imgs[end], colormap=:afmhot)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"What about the mean image? Well let's grab 100 images from the chain, where we first remove the adaptation steps since they don't sample from the correct posterior distribution","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"meanimg = mean(imgs)\nimageviz(meanimg, colormap=:afmhot)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"That looks similar to the EHTC VI, and it took us no time at all!. To see how well the model is fitting the data we can plot the model and data products","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Plots\nplot(model(xopt), dlcamp, label=\"MAP\")","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"We can also plot random draws from the posterior predictive distribution. The posterior predictive distribution create a number of synthetic observations that are marginalized over the posterior.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"p = plot(dlcamp);\nuva = [sqrt.(uvarea(dlcamp[i])) for i in 1:length(dlcamp)]\nfor i in 1:10\n m = simulate_observation(post, chain[rand(rng, 1000:2000)])[1]\n scatter!(uva, m, color=:grey, label=:none, alpha=0.1)\nend\np","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Finally, we can also put everything onto a common scale and plot the normalized residuals. 
The normalized residuals are the difference between the data and the model, divided by the data's error:","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"residual(model(xopt), dlcamp)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"All diagnostic plots suggest that the model is missing some emission sources. In fact, this model is too simple to explain the data. Check out EHTC VI 2019 for some ideas about what features need to be added to the model to get a better fit!","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"For a real run, we should also check that the MCMC chain has converged. For this, we can use MCMCDiagnosticTools","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using MCMCDiagnosticTools, Tables","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"First, let's look at the effective sample size (ESS) and R̂. This is important since the Monte Carlo standard error for MCMC estimates is proportional to 1/√ESS (for some problems) and R̂ is a measure of chain convergence. To find both, we can use:","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"compute_ess(x::NamedTuple) = map(compute_ess, x)\ncompute_ess(x::AbstractVector{<:Number}) = ess_rhat(x)\ncompute_ess(x::AbstractVector{<:Tuple}) = map(ess_rhat, Tables.columns(x))\ncompute_ess(x::Tuple) = map(compute_ess, x)\nessrhat = compute_ess(Tables.columns(chain))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Here, the first value is the ESS, and the second is the R̂. Note that we typically want R̂ < 1.01 for all parameters, but you should also be running the problem at least four times from four different starting locations. In the future we will write an extension that works with Arviz.jl.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"In our example here, we see that we have an ESS > 100 for all parameters and R̂ < 1.01, meaning that our MCMC chain is a reasonable approximation of the posterior. 
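If you do run multiple chains, a rough sketch of launching four independent runs (reusing the AHMC call from above with fresh prior draws as starting points; this is illustrative only and not part of the tutorial run) is:

# four independent chains started from independent prior draws
chains = map(1:4) do _
    init = prior_sample(rng, post)  # an independent starting location drawn from the prior
    first(sample(rng, post, AHMC(metric=DiagEuclideanMetric(ndim), autodiff=Val(:Zygote)), 2000; nadapts=1000, init_params=init))
end

The per-parameter R̂ can then be computed across these chains rather than within a single run. 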
For more diagnostics, see MCMCDiagnosticTools.jl.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"This page was generated using Literate.jl.","category":"page"},{"location":"vlbi_imaging_problem/#Introduction-to-the-VLBI-Imaging-Problem","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"","category":"section"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Very-long baseline interferometry (VLBI) is capable of taking the highest resolution images in the world, achieving angular resolutions of ~20 μas. In 2019, the first-ever image of a black hole was produced by the Event Horizon Telescope (EHT). However, while the EHT has unprecedented resolution, it is also a sparse interferometer. As a result, the sampling in the uv or Fourier space of the image is incomplete. This incompleteness makes the imaging problem uncertain. Namely, infinitely many images are possible, given the data. Comrade is a imaging/modeling package that aims to quantify this uncertainty using Bayesian inference.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"If we denote visibilities by V and the image structure/model by I, Comrade will then compute the posterior or the probability of an image given the visibility data or in an equation","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"p(IV) = fracp(VI)p(I)p(V)","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Here p(VI) is known as the likelihood and describes the probability distribution of the data given some image I. The prior p(I) encodes prior knowledge of the image structure. This prior includes distributions of model parameters and even the model itself. Finally, the denominator p(V) is a normalization term and is known as the marginal likelihood or evidence and can be used to assess how well particular models fit the data.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Therefore, we must specify the likelihood and prior to construct our posterior. Below we provide a brief description of the likelihoods and models/priors that Comrade uses. 
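In Comrade this assembly is explicit. As a minimal sketch, assuming a user-defined sky function, data products (e.g., dlcamp and dcphase), a prior, and metadata constructed as in the tutorials (all names here are placeholders taken from those tutorials):

lklhd = RadioLikelihood(sky, dlcamp, dcphase; skymeta = metadata)  # the likelihood p(V|I)
post  = Posterior(lklhd, prior)                                    # the posterior p(I|V), up to the evidence p(V)

The resulting post can then be passed to asflat or ascube before optimization or sampling. 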
However, if the user wants to see how everything works first, they should check out the Geometric Modeling of EHT Data tutorial.","category":"page"},{"location":"vlbi_imaging_problem/#Likelihood","page":"Introduction to the VLBI Imaging Problem","title":"Likelihood","text":"","category":"section"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Following TMS[TMS], we note that the likelihood for a single complex visibility at baseline u_ij v_ij is","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"p(V_ij I) = (2pi sigma^2_ij)^-12expleft(-frac V_ij - g_ig_j^*tildeI_ij(I)^22sigma^2_ijright)","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"In this equation, tildeI is the Fourier transform of the image I, and g_ij are complex numbers known as gains. The gains arise due to atmospheric and telescope effects and corrupt the incoming signal. Therefore, if a user attempts to model the complex visibilities, they must also model the complex gains. An example showing how to model gains in Comrade can be found in Stokes I Simultaneous Image and Instrument Modeling.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Modeling the gains can be computationally expensive, especially if our image model is simple. For instance, in Comrade, we have a wide variety of geometric models. These models tend to have a small number of parameters and are simple to evaluate. Solving for gains then drastically increases the amount of time it takes to sample the posterior. As a result, part of the typical EHT analysis[M87P6][SgrAP4] instead uses closure products as its data. The two forms of closure products are:","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Closure Phases,\nLog-Closure Amplitudes.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Closure Phases psi are constructed by selecting three baselines (ijk) and finding the argument of the bispectrum:","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":" psi_ijk = arg V_ijV_jkV_ki","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Similar log-closure amplitudes are found by selecting four baselines (ijkl) and forming the closure amplitudes:","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":" A_ijkl = frac V_ijV_klV_jkV_li","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Instead of directly fitting closure amplitudes, it turns out that the statistically better-behaved data product is the log-closure amplitude. 
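In practice, both closure data products can be built directly from an eht-imaging observation. A minimal sketch, mirroring the closure-only imaging tutorial later in these docs (the uvfile path and the SNR cut of 3 are illustrative choices, not requirements):

using Comrade, Pyehtim
obs = ehtim.obsdata.load_uvfits(uvfile)              # uvfile is a placeholder path to a uvfits data set
obs = scan_average(obs).add_fractional_noise(0.015)  # scan average and add a small systematic noise floor
dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3), ClosurePhases(;snrcut=3))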
","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"The benefit of fitting closure products is that they are independent of complex gains, so we can leave them out when modeling the data. However, the downside is that they effectively put uniform improper priors on the gains[Blackburn], meaning that we often throw away information about the telescope's performance. On the other hand, we can then view closure fitting as a very conservative estimate about what image structures are consistent with the data. Another downside of using closure products is that their likelihoods are complex. In the high-signal-to-noise limit, however, they do reduce to Gaussian likelihoods, and this is the limit we are usually in for the EHT. For the explicit likelihood Comrade uses, we refer the reader to appendix F in paper IV of the first Sgr A* EHT publications[SgrAP4]. The computational implementation of these likelihoods can be found in VLBILikelihoods.jl.","category":"page"},{"location":"vlbi_imaging_problem/#Prior-Model","page":"Introduction to the VLBI Imaging Problem","title":"Prior Model","text":"","category":"section"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Comrade has included a large number of possible models (see Comrade API for a list). These can be broken down into two categories:","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Parametric or geometric models\nNon-parametric or image models","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Comrade's geometric model interface is built using VLBISkyModels and is different from other EHT modeling packages because we don't directly provide fully formed models. Instead, we offer simple geometric models, which we call primitives. These primitive models can then be modified and combined to form complicated image structures. For more information, we refer the reader to the VLBISkyModels docs.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Additionally, we include an interface to Bayesian imaging methods, where we directly fit a rasterized image to the data. These models are highly flexible and assume very little about the image structure. In that sense, these methods are an excellent way to explore the data first and see what kinds of image structures are consistent with observations. For an example of how to fit an image model to closure products, we refer the reader to the other tutorial included in the docs.","category":"page"},{"location":"vlbi_imaging_problem/#References","page":"Introduction to the VLBI Imaging Problem","title":"References","text":"","category":"section"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"[TMS]: Thompson, A., Moran, J., Swenson, G. (2017). Interferometry and Synthesis in Radio Astronomy (Third). 
Springer Cham","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"[M87P6]: Event Horizon Telescope Collaboration, (2019). First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole. ApJL 875 L6 doi","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"[SgrAP4]: Event Horizon Telescope Collaboration, (2022). First Sagittarius A* Event Horizon Telescope Results. IV. Variability, Morphology, and Black Hole Mass. ApJL 930 L15 arXiv","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"[Blackburn]: Blackburn, L., et al. (2020). Closure statistics in interferometric data. ApJ, 894(1), 31.","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = Comrade","category":"page"},{"location":"#Comrade","page":"Home","title":"Comrade","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Comrade is a Bayesian differentiable modular modeling framework for use with very long baseline interferometry. The goal is to allow the user to easily combine and modify a set of primitive models to construct complicated source structures. The benefit of this approach is that it is straightforward to construct different source models out of these primitives. Namely, an end-user does not have to create a separate source \"model\" every time they change the model specification. Additionally, most models currently implemented are differentiable with Zygote and sometimes ForwardDiff[2]. This allows gradient-accelerated optimization and sampling (e.g., HMC) to be used with little effort by the end user. To sample from the posterior, we provide a somewhat barebones interface since, most of the time, we don't require the additional features offered by most PPLs. Additionally, the overhead introduced by PPLs tends to be rather large. In the future, we may revisit this as Julia's PPL ecosystem matures.","category":"page"},{"location":"","page":"Home","title":"Home","text":"note: Note\nThe primitives that Comrade defines, however, would allow it to be easily included in PPLs like Turing.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Our tutorial section currently has a large number of examples. The simplest example is fitting simple geometric models to the 2017 M87 data and is detailed in the Geometric Modeling of EHT Data tutorial. We also include \"non-parametric\" modeling or imaging examples in Imaging a Black Hole using only Closure Quantities, and Stokes I Simultaneous Image and Instrument Modeling. There is also an introduction to hybrid geometric and image modeling in Hybrid Imaging of a Black Hole, which combines physically motivated geometric modeling with the flexibility of image-based models.","category":"page"},{"location":"","page":"Home","title":"Home","text":"As of 0.7, Comrade can also simultaneously reconstruct polarized image models and instrument corruptions through the RIME[1] formalism. 
A short example explaining these features can be found in Polarized Image and Instrumental Modeling.","category":"page"},{"location":"#Contributing","page":"Home","title":"Contributing","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This repository has recently moved to ColPrac. If you would like to contribute please feel free to open a issue or pull-request.","category":"page"},{"location":"","page":"Home","title":"Home","text":"[2]: As of 0.9 Comrade switched to using full covariance closures. As a result this requires a sparse cholesky solve in the likelihood evaluation which requires ","category":"page"},{"location":"","page":"Home","title":"Home","text":"a Dual number overload. As a result we recommend using Zygote which does work and often is similarly performant (reverse 3-6x slower compared to the forward pass).","category":"page"},{"location":"#Requirements","page":"Home","title":"Requirements","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"The minimum Julia version we require is 1.7. In the future we may increase this as Julia advances.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Pages = [\n \"index.md\",\n \"vlbi_imaging_problem.md\",\n \"conventions.md\",\n \"Tutorials\",\n \"Libraries\",\n \"interface.md\",\n \"base_api.md\",\n \"api.md\"\n]","category":"page"},{"location":"#References","page":"Home","title":"References","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"[1]: Hamaker J.P and Bregman J.D. and Sault R.J. Understanding radio polarimetry. I. Mathematical foundations ADS. ","category":"page"}] +[{"location":"base_api/#ComradeBase-API","page":"ComradeBase API","title":"ComradeBase API","text":"","category":"section"},{"location":"base_api/#Contents","page":"ComradeBase API","title":"Contents","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"Pages = [\"base_api.md\"]","category":"page"},{"location":"base_api/#Index","page":"ComradeBase API","title":"Index","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"Pages = [\"base_api.md\"]","category":"page"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"CurrentModule = ComradeBase","category":"page"},{"location":"base_api/#Model-API","page":"ComradeBase API","title":"Model API","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"ComradeBase.flux\nComradeBase.visibility\nComradeBase.visibilities\nComradeBase.visibilities!\nComradeBase.intensitymap\nComradeBase.intensitymap!\nComradeBase.IntensityMap\nComradeBase.amplitude(::Any, ::Any)\nComradeBase.amplitudes\nComradeBase.bispectrum\nComradeBase.bispectra\nComradeBase.closure_phase\nComradeBase.closure_phases\nComradeBase.logclosure_amplitude\nComradeBase.logclosure_amplitudes","category":"page"},{"location":"base_api/#ComradeBase.flux","page":"ComradeBase API","title":"ComradeBase.flux","text":"flux(im::IntensityMap)\nflux(img::StokesIntensityMap)\n\nComputes the flux of a intensity map\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibility","page":"ComradeBase API","title":"ComradeBase.visibility","text":"visibility(mimg, p)\n\nComputes the complex visibility of model m at coordinates p. p corresponds to the coordinates of the model. 
These need to have the properties U, V and sometimes Ti for time and Fr for frequency.\n\nNotes\n\nIf you want to compute the visibilities at a large number of positions consider using the visibilities.\n\n\n\n\n\nvisibility(d::EHTVisibilityDatum)\n\nReturn the complex visibility of the visibility datum\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities","page":"ComradeBase API","title":"ComradeBase.visibilities","text":"visibilities(model::AbstractModel, args...)\n\nComputes the complex visibilities at the locations given by args...\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities!","page":"ComradeBase API","title":"ComradeBase.visibilities!","text":"visibilities!(vis::AbstractArray, model::AbstractModel, args...)\n\nComputes the complex visibilities vis in place at the locations given by args...\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap","page":"ComradeBase API","title":"ComradeBase.intensitymap","text":"intensitymap(model::AbstractModel, p::AbstractDims)\n\nComputes the intensity map of model. For the inplace version see intensitymap!\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap!","page":"ComradeBase API","title":"ComradeBase.intensitymap!","text":"intensitymap!(buffer::AbstractDimArray, model::AbstractModel)\n\nComputes the intensity map of model by modifying the buffer\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.IntensityMap","page":"ComradeBase API","title":"ComradeBase.IntensityMap","text":"IntensityMap(data::AbstractArray, dims::NamedTuple)\nIntensityMap(data::AbstractArray, grid::AbstractDims)\n\nConstructs an intensitymap using the image dimensions given by dims. This returns a KeyedArray with keys given by an ImageDimensions object.\n\ndims = (X=range(-10.0, 10.0, length=100), Y = range(-10.0, 10.0, length=100),\n T = [0.1, 0.2, 0.5, 0.9, 1.0], F = [230e9, 345e9]\n )\nimgk = IntensityMap(rand(100,100,5,1), dims)\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.amplitude-Tuple{Any, Any}","page":"ComradeBase API","title":"ComradeBase.amplitude","text":"amplitude(model, p)\n\nComputes the visibility amplitude of model m at the coordinate p. The coordinate p is expected to have the properties U, V, and sometimes Ti and Fr.\n\nIf you want to compute the amplitudes at a large number of positions consider using the amplitudes function.\n\n\n\n\n\n","category":"method"},{"location":"base_api/#ComradeBase.amplitudes","page":"ComradeBase API","title":"ComradeBase.amplitudes","text":"amplitudes(m::AbstractModel, u::AbstractArray, v::AbstractArray)\n\nComputes the visibility amplitudes of the model m at the coordinates p. The coordinates p are expected to have the properties U, V, and sometimes Ti and Fr.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.bispectrum","page":"ComradeBase API","title":"ComradeBase.bispectrum","text":"bispectrum(model, p1, p2, p3)\n\nComputes the complex bispectrum of model m at the uv-triangle p1 -> p2 -> p3\n\nIf you want to compute the bispectrum over a number of triangles consider using the bispectra function.\n\n\n\n\n\nbispectrum(d1::T, d2::T, d3::T) where {T<:EHTVisibilityDatum}\n\nFinds the bispectrum of three visibilities. We will assume these form closed triangles, i.e. 
the phase of the bispectrum is a closure phase.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.bispectra","page":"ComradeBase API","title":"ComradeBase.bispectra","text":"bispectra(m, p1, p2, p3)\n\nComputes the closure phases of the model m at the triangles p1, p2, p3, where pi are coordinates.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.closure_phase","page":"ComradeBase API","title":"ComradeBase.closure_phase","text":"closure_phase(model, p1, p2, p3, p4)\n\nComputes the closure phase of model m at the uv-triangle u1,v1 -> u2,v2 -> u3,v3\n\nIf you want to compute closure phases over a number of triangles consider using the closure_phases function.\n\n\n\n\n\nclosure_phase(D1::EHTVisibilityDatum,\n D2::EHTVisibilityDatum,\n D3::EHTVisibilityDatum\n )\n\nComputes the closure phase of the three visibility datums.\n\nNotes\n\nWe currently use the high SNR Gaussian error approximation for the closure phase. In the future we may use the moment matching from Monte Carlo sampling.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.closure_phases","page":"ComradeBase API","title":"ComradeBase.closure_phases","text":"closure_phases(m,\n p1::AbstractArray\n p2::AbstractArray\n p3::AbstractArray\n )\n\nComputes the closure phases of the model m at the triangles p1, p2, p3, where pi are coordinates.\n\n\n\n\n\nclosure_phases(m::AbstractModel, ac::ClosureConfig)\n\nComputes the closure phases of the model m using the array configuration ac.\n\nNotes\n\nThis is faster than the closure_phases(m, u1, v1, ...) method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]\n\n[1]: Blackburn L., et al \"Closure Statistics in Interferometric Data\" ApJ 2020\n\n\n\n\n\nclosure_phases(vis::AbstractArray, ac::ArrayConfiguration)\n\nCompute the closure phases for a set of visibilities and an array configuration\n\nNotes\n\nThis uses a closure design matrix for the computation.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.logclosure_amplitude","page":"ComradeBase API","title":"ComradeBase.logclosure_amplitude","text":"logclosure_amplitude(model, p1, p2, p3, p4)\n\nComputes the log-closure amplitude of model m at the uv-quadrangle u1,v1 -> u2,v2 -> u3,v3 -> u4,v4 using the formula\n\nC = logleftfracV(u1v1)V(u2v2)V(u3v3)V(u4v4)right\n\nIf you want to compute log closure amplitudes over a number of triangles consider using the logclosure_amplitudes function.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.logclosure_amplitudes","page":"ComradeBase API","title":"ComradeBase.logclosure_amplitudes","text":"logclosure_amplitudes(m::AbstractModel,\n p1,\n p2,\n p3,\n p4\n )\n\nComputes the log closure amplitudes of the model m at the quadrangles p1, p2, p3, p4.\n\n\n\n\n\nlogclosure_amplitudes(m::AbstractModel, ac::ClosureConfig)\n\nComputes the log closure amplitudes of the model m using the array configuration ac.\n\nNotes\n\nThis is faster than the logclosure_amplitudes(m, u1, v1, ...) 
method since it only computes as many visibilities as required thanks to the closure design matrix formalism from Blackburn et al.[1]\n\n[1]: Blackburn L., et al \"Closure Statistics in Interferometric Data\" ApJ 2020\n\n\n\n\n\nlogclosure_amplitudes(vis::AbstractArray, ac::ArrayConfiguration)\n\nCompute the log-closure amplitudes for a set of visibilities and an array configuration\n\nNotes\n\nThis uses a closure design matrix for the computation.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#Model-Interface","page":"ComradeBase API","title":"Model Interface","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"ComradeBase.AbstractModel\nComradeBase.isprimitive\nComradeBase.visanalytic\nComradeBase.imanalytic\nComradeBase.ispolarized\nComradeBase.radialextent\nComradeBase.PrimitiveTrait\nComradeBase.IsPrimitive\nComradeBase.NotPrimitive\nComradeBase.DensityAnalytic\nComradeBase.IsAnalytic\nComradeBase.NotAnalytic\nComradeBase.visibility_point\nComradeBase.visibilities_analytic\nComradeBase.visibilities_analytic!\nComradeBase.visibilities_numeric\nComradeBase.visibilities_numeric!\nComradeBase.intensity_point\nComradeBase.intensitymap_analytic\nComradeBase.intensitymap_analytic!\nComradeBase.intensitymap_numeric\nComradeBase.intensitymap_numeric!","category":"page"},{"location":"base_api/#ComradeBase.AbstractModel","page":"ComradeBase API","title":"ComradeBase.AbstractModel","text":"AbstractModel\n\nThe Comrade abstract model type. To instantiate your own model type you should subtybe from this model. Additionally you need to implement the following methods to satify the interface:\n\nMandatory Methods\n\nisprimitive: defines whether a model is standalone or is defined in terms of other models. is the model is primitive then this should return IsPrimitive() otherwise it returns NotPrimitive()\nvisanalytic: defines whether the model visibilities can be computed analytically. If yes then this should return IsAnalytic() and the user must to define visibility_point. If not analytic then visanalytic should return NotAnalytic().\nimanalytic: defines whether the model intensities can be computed pointwise. If yes then this should return IsAnalytic() and the user must to define intensity_point. If not analytic then imanalytic should return NotAnalytic().\nradialextent: Provides a estimate of the radial extent of the model in the image domain. This is used for estimating the size of the image, and for plotting.\nflux: Returns the total flux of the model.\nintensity_point: Defines how to compute model intensities pointwise. Note this is must be defined if imanalytic(::Type{YourModel})==IsAnalytic().\nvisibility_point: Defines how to compute model visibilties pointwise. 
Note this is must be defined if visanalytic(::Type{YourModel})==IsAnalytic().\n\nOptional Methods:\n\nispolarized: Specified whether a model is intrinsically polarized (returns IsPolarized()) or is not (returns NotPolarized()), by default a model is NotPolarized()\nvisibilities_analytic: Vectorized version of visibility_point for models where visanalytic returns IsAnalytic()\nvisibilities_numeric: Vectorized version of visibility_point for models where visanalytic returns NotAnalytic() typically these are numerical FT's\nintensitymap_analytic: Computes the entire image for models where imanalytic returns IsAnalytic()\nintensitymap_numeric: Computes the entire image for models where imanalytic returns NotAnalytic()\nintensitymap_analytic!: Inplace version of intensitymap\nintensitymap_numeric!: Inplace version of intensitymap\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.isprimitive","page":"ComradeBase API","title":"ComradeBase.isprimitive","text":"isprimitive(::Type)\n\nDispatch function that specifies whether a type is a primitive Comrade model. This function is used for dispatch purposes when composing models.\n\nNotes\n\nIf a user is specifying their own model primitive model outside of Comrade they need to specify if it is primitive\n\nstruct MyPrimitiveModel end\nComradeBase.isprimitive(::Type{MyModel}) = ComradeBase.IsPrimitive()\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visanalytic","page":"ComradeBase API","title":"ComradeBase.visanalytic","text":"visanalytic(::Type{<:AbstractModel})\n\nDetermines whether the model is pointwise analytic in Fourier domain, i.e. we can evaluate its fourier transform at an arbritrary point.\n\nIf IsAnalytic() then it will try to call visibility_point to calculate the complex visibilities. Otherwise it fallback to using the FFT that works for all models that can compute an image.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.imanalytic","page":"ComradeBase API","title":"ComradeBase.imanalytic","text":"imanalytic(::Type{<:AbstractModel})\n\nDetermines whether the model is pointwise analytic in the image domain, i.e. we can evaluate its intensity at an arbritrary point.\n\nIf IsAnalytic() then it will try to call intensity_point to calculate the intensity.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.ispolarized","page":"ComradeBase API","title":"ComradeBase.ispolarized","text":"ispolarized(::Type)\n\nTrait function that defines whether a model is polarized or not.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.radialextent","page":"ComradeBase API","title":"ComradeBase.radialextent","text":"radialextent(model::AbstractModel)\n\nProvides an estimate of the radial size/extent of the model. 
This is used internally to estimate image size when plotting and using modelimage\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.PrimitiveTrait","page":"ComradeBase API","title":"ComradeBase.PrimitiveTrait","text":"abstract type PrimitiveTrait\n\nThis trait specifies whether the model is a primitive\n\nNotes\n\nThis will likely turn into a trait in the future so people can inject their models into Comrade more easily.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.IsPrimitive","page":"ComradeBase API","title":"ComradeBase.IsPrimitive","text":"struct IsPrimitive\n\nTrait for primitive model\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.NotPrimitive","page":"ComradeBase API","title":"ComradeBase.NotPrimitive","text":"struct NotPrimitive\n\nTrait for not-primitive model\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.DensityAnalytic","page":"ComradeBase API","title":"ComradeBase.DensityAnalytic","text":"DensityAnalytic\n\nInternal type for specifying the nature of the model functions. Whether they can be easily evaluated pointwise analytic. This is an internal type that may change.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.IsAnalytic","page":"ComradeBase API","title":"ComradeBase.IsAnalytic","text":"struct IsAnalytic <: ComradeBase.DensityAnalytic\n\nDefines a trait that a states that a model is analytic. This is usually used with an abstract model where we use it to specify whether a model has a analytic fourier transform and/or image.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.NotAnalytic","page":"ComradeBase API","title":"ComradeBase.NotAnalytic","text":"struct NotAnalytic <: ComradeBase.DensityAnalytic\n\nDefines a trait that a states that a model is analytic. This is usually used with an abstract model where we use it to specify whether a model has does not have a easy analytic fourier transform and/or intensity function.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.visibility_point","page":"ComradeBase API","title":"ComradeBase.visibility_point","text":"visibility_point(model::AbstractModel, p)\n\nFunction that computes the pointwise visibility. This must be implemented in the model interface if visanalytic(::Type{MyModel}) == IsAnalytic()\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities_analytic","page":"ComradeBase API","title":"ComradeBase.visibilities_analytic","text":"visibilties_analytic(model, u, v, time, freq)\n\nComputes the visibilties of a model using using the analytic visibility expression given by visibility_point.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities_analytic!","page":"ComradeBase API","title":"ComradeBase.visibilities_analytic!","text":"visibilties_analytic!(vis, model, u, v, time, freq)\n\nComputes the visibilties of a model in-place, using using the analytic visibility expression given by visibility_point.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities_numeric","page":"ComradeBase API","title":"ComradeBase.visibilities_numeric","text":"visibilties_numeric(model, u, v, time, freq)\n\nComputes the visibilties of a model using a numerical fourier transform. Note that none of these are implemented in ComradeBase. 
For implementations please see Comrade.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.visibilities_numeric!","page":"ComradeBase API","title":"ComradeBase.visibilities_numeric!","text":"visibilties_numeric!(vis, model, u, v, time, freq)\n\nComputes the visibilties of a model in-place using a numerical fourier transform. Note that none of these are implemented in ComradeBase. For implementations please see Comrade.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensity_point","page":"ComradeBase API","title":"ComradeBase.intensity_point","text":"intensity_point(model::AbstractModel, p)\n\nFunction that computes the pointwise intensity if the model has the trait in the image domain IsAnalytic(). Otherwise it will use construct the image in visibility space and invert it.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap_analytic","page":"ComradeBase API","title":"ComradeBase.intensitymap_analytic","text":"intensitymap_analytic(m::AbstractModel, p::AbstractDims)\n\nComputes the IntensityMap of a model m using the image dimensions p by broadcasting over the analytic intensity_point method.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap_analytic!","page":"ComradeBase API","title":"ComradeBase.intensitymap_analytic!","text":"intensitymap_analytic!(img::IntensityMap, m::AbstractModel)\nintensitymap_analytic!(img::StokesIntensityMap, m::AbstractModel)\n\nUpdates the img using the model m by broadcasting over the analytic intensity_point method.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap_numeric","page":"ComradeBase API","title":"ComradeBase.intensitymap_numeric","text":"intensitymap_numeric(m::AbstractModel, p::AbstractDims)\n\nComputes the IntensityMap of a model m at the image positions p using a numerical method. This has to be specified uniquely for every model m if imanalytic(typeof(m)) === NotAnalytic(). See Comrade.jl for example implementations.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.intensitymap_numeric!","page":"ComradeBase API","title":"ComradeBase.intensitymap_numeric!","text":"intensitymap_numeric!(img::IntensityMap, m::AbstractModel)\nintensitymap_numeric!(img::StokesIntensityMap, m::AbstractModel)\n\nUpdates the img using the model m using a numerical method. This has to be specified uniquely for every model m if imanalytic(typeof(m)) === NotAnalytic(). 
See Comrade.jl for example implementations.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#Image-Types","page":"ComradeBase API","title":"Image Types","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"ComradeBase.IntensityMap(::AbstractArray, ::AbstractDims)\nComradeBase.StokesIntensityMap\nComradeBase.imagepixels\nComradeBase.GriddedKeys\nComradeBase.dims\nComradeBase.named_dims\nComradeBase.axisdims\nComradeBase.stokes\nComradeBase.imagegrid\nComradeBase.fieldofview\nComradeBase.pixelsizes\nComradeBase.phasecenter\nComradeBase.centroid\nComradeBase.second_moment\nComradeBase.header\nComradeBase.NoHeader\nComradeBase.MinimalHeader\nComradeBase.load\nComradeBase.save","category":"page"},{"location":"base_api/#ComradeBase.IntensityMap-Tuple{AbstractArray, ComradeBase.AbstractDims}","page":"ComradeBase API","title":"ComradeBase.IntensityMap","text":"IntensityMap(data::AbstractArray, dims::NamedTuple)\nIntensityMap(data::AbstractArray, grid::AbstractDims)\n\nConstructs an intensitymap using the image dimensions given by dims. This returns a KeyedArray with keys given by an ImageDimensions object.\n\ndims = (X=range(-10.0, 10.0, length=100), Y = range(-10.0, 10.0, length=100),\n T = [0.1, 0.2, 0.5, 0.9, 1.0], F = [230e9, 345e9]\n )\nimgk = IntensityMap(rand(100,100,5,1), dims)\n\n\n\n\n\n","category":"method"},{"location":"base_api/#ComradeBase.StokesIntensityMap","page":"ComradeBase API","title":"ComradeBase.StokesIntensityMap","text":"struct StokesIntensityMap{T, N, SI, SQ, SU, SV}\n\nGeneral struct that holds intensity maps for each stokes parameter. Each image I, Q, U, V must share the same axis dimensions. This type also obeys much of the usual array interface in Julia. The following methods have been implemented:\n\nsize\neltype (returns StokesParams)\nndims\ngetindex\nsetindex!\npixelsizes\nfieldofview\nimagepixels\nimagegrid\nstokes\n\nwarning: Warning\nThis may eventually be phased out for IntensityMaps whose base types are StokesParams, but currently we use this for speed reasons with Zygote.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.imagepixels","page":"ComradeBase API","title":"ComradeBase.imagepixels","text":"imagepixels(img::IntensityMap)\nimagepixels(img::IntensityMapTypes)\n\nReturns a abstract spatial dimension with the image pixels locations X and Y.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.GriddedKeys","page":"ComradeBase API","title":"ComradeBase.GriddedKeys","text":"struct GriddedKeys{N, G, Hd<:ComradeBase.AbstractHeader, T} <: ComradeBase.AbstractDims{N, T}\n\nThis struct holds the dimensions that the EHT expect. The first type parameter N defines the names of each dimension. These names are usually one of - (:X, :Y, :T, :F) - (:X, :Y, :F, :T) - (:X, :Y) # spatial only where :X,:Y are the RA and DEC spatial dimensions respectively, :T is the the time direction and :F is the frequency direction.\n\nFieldnames\n\ndims\nheader\n\nNotes\n\nWarning it is rare you need to access this constructor directly. Instead use the direct IntensityMap function.\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.dims","page":"ComradeBase API","title":"ComradeBase.dims","text":"dims(g::AbstractDims)\n\nReturns a tuple containing the dimensions of g. 
For a named version see ComradeBase.named_dims\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.named_dims","page":"ComradeBase API","title":"ComradeBase.named_dims","text":"named_dims(g::AbstractDims)\n\nReturns a named tuple containing the dimensions of g. For a unnamed version see dims\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.axisdims","page":"ComradeBase API","title":"ComradeBase.axisdims","text":"axisdims(img::IntensityMap)\n\nReturns the keys of the IntensityMap as the actual internal AbstractDims object.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.stokes","page":"ComradeBase API","title":"ComradeBase.stokes","text":"stokes(m::AbstractPolarizedModel, p::Symbol)\n\nExtract the specific stokes component p from the polarized model m\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.imagegrid","page":"ComradeBase API","title":"ComradeBase.imagegrid","text":"imagegrid(k::IntensityMap)\n\nReturns the grid the IntensityMap is defined as. Note that this is unallocating since it lazily computes the grid. The grid is an example of a KeyedArray and works similarly. This is useful for broadcasting a model across an abritrary grid.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.fieldofview","page":"ComradeBase API","title":"ComradeBase.fieldofview","text":"fieldofview(img::IntensityMap)\nfieldofview(img::IntensityMapTypes)\n\nReturns a named tuple with the field of view of the image.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.pixelsizes","page":"ComradeBase API","title":"ComradeBase.pixelsizes","text":"pixelsizes(img::IntensityMap)\npixelsizes(img::IntensityMapTypes)\n\nReturns a named tuple with the spatial pixel sizes of the image.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.phasecenter","page":"ComradeBase API","title":"ComradeBase.phasecenter","text":"phasecenter(img::IntensityMap)\nphasecenter(img::StokesIntensitymap)\n\nComputes the phase center of an intensity map. Note this is the pixels that is in the middle of the image.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.centroid","page":"ComradeBase API","title":"ComradeBase.centroid","text":"centroid(im::AbstractIntensityMap)\n\nComputes the image centroid aka the center of light of the image.\n\nFor polarized maps we return the centroid for Stokes I only.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.second_moment","page":"ComradeBase API","title":"ComradeBase.second_moment","text":"second_moment(im::AbstractIntensityMap; center=true)\n\nComputes the image second moment tensor of the image. By default we really return the second cumulant or centered second moment, which is specified by the center argument.\n\nFor polarized maps we return the second moment for Stokes I only.\n\n\n\n\n\nsecond_moment(im::AbstractIntensityMap; center=true)\n\nComputes the image second moment tensor of the image. 
By default we really return the second cumulant or centered second moment, which is specified by the center argument.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.header","page":"ComradeBase API","title":"ComradeBase.header","text":"header(g::AbstractDims)\n\nReturns the headerinformation of the dimensions g\n\n\n\n\n\nheader(img::IntensityMap)\n\nRetrieves the header of an IntensityMap\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.NoHeader","page":"ComradeBase API","title":"ComradeBase.NoHeader","text":"NoHeader\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.MinimalHeader","page":"ComradeBase API","title":"ComradeBase.MinimalHeader","text":"MinimalHeader{T}\n\nA minimal header type for ancillary image information.\n\nFields\n\nsource: Common source name\n\nra: Right ascension of the image in degrees (J2000)\n\ndec: Declination of the image in degrees (J2000)\n\nmjd: Modified Julian Date in days\n\nfrequency: Frequency of the image in Hz\n\n\n\n\n\n","category":"type"},{"location":"base_api/#ComradeBase.load","page":"ComradeBase API","title":"ComradeBase.load","text":"ComradeBase.load(fitsfile::String, IntensityMap)\n\nThis loads in a fits file that is more robust to the various imaging algorithms in the EHT, i.e. is works with clean, smili, eht-imaging. The function returns an tuple with an intensitymap and a second named tuple with ancillary information about the image, like the source name, location, mjd, and radio frequency.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#ComradeBase.save","page":"ComradeBase API","title":"ComradeBase.save","text":"ComradeBase.save(file::String, img::IntensityMap, obs)\n\nSaves an image to a fits file. You can optionally pass an EHTObservation so that ancillary information will be added.\n\n\n\n\n\n","category":"function"},{"location":"base_api/#Polarization","page":"ComradeBase API","title":"Polarization","text":"","category":"section"},{"location":"base_api/","page":"ComradeBase API","title":"ComradeBase API","text":"ComradeBase.AbstractPolarizedModel","category":"page"},{"location":"base_api/#ComradeBase.AbstractPolarizedModel","page":"ComradeBase API","title":"ComradeBase.AbstractPolarizedModel","text":"abstract type AbstractPolarizedModel <: ComradeBase.AbstractModel\n\nType the classifies a model as being intrinsically polarized. This means that any call to visibility must return a StokesParams to denote the full stokes polarization of the model.\n\n\n\n\n\n","category":"type"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"EditURL = \"../../../examples/imaging_closures.jl\"","category":"page"},{"location":"examples/imaging_closures/#Imaging-a-Black-Hole-using-only-Closure-Quantities","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"In this tutorial, we will create a preliminary reconstruction of the 2017 M87 data on April 6 using closure-only imaging. This tutorial is a general introduction to closure-only imaging in Comrade. 
For an introduction to simultaneous image and instrument modeling, see Stokes I Simultaneous Image and Instrument Modeling","category":"page"},{"location":"examples/imaging_closures/#Introduction-to-Closure-Imaging","page":"Imaging a Black Hole using only Closure Quantities","title":"Introduction to Closure Imaging","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"The EHT is the highest-resolution telescope ever created. Its resolution is equivalent to roughly tracking a hockey puck on the moon when viewing it from the earth. However, the EHT is also a unique interferometer. For one, the data it produces is incredibly sparse. The array is formed from only eight geographic locations around the planet, each with its unique telescope. Additionally, the EHT observes at a much higher frequency than typical interferometers. As a result, it is often difficult to directly provide calibrated data since the source model can be complicated. This implies there can be large instrumental effects often called gains that can corrupt our signal. One way to deal with this is to fit quantities that are independent of gains. These are often called closure quantities. The types of closure quantities are briefly described in Introduction to the VLBI Imaging Problem.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"In this tutorial, we will do closure-only modeling of M87 to produce preliminary images of M87.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To get started, we will load Comrade","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using Comrade\n\n\nusing Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\n\nusing Pyehtim","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"For reproducibility we use a stable random number genreator","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using StableRNGs\nrng = StableRNG(123)","category":"page"},{"location":"examples/imaging_closures/#Load-the-Data","page":"Imaging a Black Hole using only Closure Quantities","title":"Load the Data","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To download the data visit https://doi.org/10.25739/g85n-f134 To load the eht-imaging obsdata object we do:","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"obs = ehtim.obsdata.load_uvfits(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", 
\"SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now we do some minor preprocessing:","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Scan average the data since the data have been preprocessed so that the gain phases are coherent.\nAdd 1% systematic noise to deal with calibration issues that cause 1% non-closing errors.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"obs = scan_average(obs).add_fractional_noise(0.015)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now, we extract our closure quantities from the EHT data set.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3), ClosurePhases(;snrcut=3))","category":"page"},{"location":"examples/imaging_closures/#Build-the-Model/Posterior","page":"Imaging a Black Hole using only Closure Quantities","title":"Build the Model/Posterior","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"For our model, we will be using an image model that consists of a raster of point sources, convolved with some pulse or kernel to make a ContinuousImage object with it Comrade's. generic image model.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"function sky(θ, metadata)\n (;fg, c, σimg) = θ\n (;K, meanpr, grid, cache) = metadata\n # Construct the image model we fix the flux to 0.6 Jy in this case\n cp = meanpr .+ σimg.*c.params\n rast = ((1-fg))*K(to_simplex(CenteredLR(), cp))\n img = IntensityMap(rast, grid)\n m = ContinuousImage(img, cache)\n # Add a large-scale gaussian to deal with the over-resolved mas flux\n g = modify(Gaussian(), Stretch(μas2rad(250.0), μas2rad(250.0)), Renormalize(fg))\n return m + g\nend","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now, let's set up our image model. The EHT's nominal resolution is 20-25 μas. Additionally, the EHT is not very sensitive to a larger field of views; typically, 60-80 μas is enough to describe the compact flux of M87. 
Given this, we only need to use a small number of pixels to describe our image.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"npix = 32\nfovxy = μas2rad(150.0)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now, we can feed in the array information to form the cache","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"grid = imagepixels(fovxy, fovxy, npix, npix)\nbuffer = IntensityMap(zeros(npix,npix), grid)\ncache = create_cache(NFFTAlg(dlcamp), buffer, BSplinePulse{3}())","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now we need to specify our image prior. For this work we will use a Gaussian Markov Random field prior","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using VLBIImagePriors, Distributions, DistributionsAD","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Since we are using a Gaussian Markov random field prior we need to first specify our mean image. For this work we will use a symmetric Gaussian with a FWHM of 50 μas","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"fwhmfac = 2*sqrt(2*log(2))\nmpr = modify(Gaussian(), Stretch(μas2rad(50.0)./fwhmfac))\nimgpr = intensitymap(mpr, grid)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now since we are actually modeling our image on the simplex we need to ensure that our mean image has unit flux","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"imgpr ./= flux(imgpr)\n\nmeanpr = to_real(CenteredLR(), Comrade.baseimage(imgpr))\nmetadata = (;meanpr,K=CenterImage(imgpr), grid, cache)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"In addition we want a reasonable guess for what the resolution of our image should be. For radio astronomy this is given by roughly the longest baseline in the image. 
To put this into pixel space we then divide by the pixel size.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"beam = beamsize(dlcamp)\nrat = (beam/(step(grid.X)))","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To make the Gaussian Markov random field efficient we first precompute a bunch of quantities that allow us to scale things linearly with the number of image pixels. This drastically improves on the usual N^3 scaling you get from generic Gaussian processes.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"crcache = MarkovRandomFieldCache(meanpr)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"One of the benefits of the Bayesian approach is that we can fit for the hyperparameters of our prior/regularizers, unlike traditional RML approaches. To construct this hierarchical prior we will first make a map that takes in our regularizer hyperparameters and returns the image prior given those hyperparameters.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"fmap = let crcache=crcache\n x->GaussMarkovRandomField(x, 1.0, crcache)\nend","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now we can finally form our image prior. For this we use a hierarchical prior where the correlation length is given by an inverse gamma prior to prevent overfitting. Gaussian Markov random fields are extremely flexible models. To prevent overfitting it is common to use priors that penalize complexity. Therefore, we want to use priors that enforce similarity to our mean image, and prefer smoothness.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"cprior = HierarchicalPrior(fmap, InverseGamma(1.0, -log(0.01*rat)))\n\nprior = NamedDist(c = cprior, σimg = truncated(Normal(0.0, 1.0); lower=0.01), fg=Uniform(0.0, 1.0))\n\nlklhd = RadioLikelihood(sky, dlcamp, dcphase;\n skymeta = metadata)\npost = Posterior(lklhd, prior)","category":"page"},{"location":"examples/imaging_closures/#Reconstructing-the-Image","page":"Imaging a Black Hole using only Closure Quantities","title":"Reconstructing the Image","text":"","category":"section"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To sample from this posterior, it is convenient to first move from our constrained parameter space to an unconstrained one (i.e., the support of the transformed posterior is (-∞, ∞)). 
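A hedged sketch of what this move looks like in practice, using the asflat function introduced in the next sentence together with prior_sample, transform, and inverse, all of which appear elsewhere on this page (post is the posterior constructed above):

```julia
# Sketch: round trip between the unconstrained space and the model parameter space.
tpost = asflat(post)
p  = prior_sample(tpost)     # a point in the flattened, unconstrained space
θ  = transform(tpost, p)     # the corresponding NamedTuple of model parameters
p2 = inverse(tpost, θ)       # map back; p2 should match p (up to floating point)
```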
This is done using the asflat function.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"tpost = asflat(post)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"We can now also find the dimension of our posterior or the number of parameters we will sample.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"warning: Warning\nThis can often be different from what you would expect. This is especially true when using angular variables, where we often artificially increase the dimension of the parameter space to make sampling easier.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"ndim = dimension(tpost)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now we optimize using LBFGS","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using ComradeOptimization\nusing OptimizationOptimJL\nusing Zygote\nf = OptimizationFunction(tpost, Optimization.AutoZygote())\nprob = Optimization.OptimizationProblem(f, prior_sample(rng, tpost), nothing)\nsol = solve(prob, LBFGS(); maxiters=5_00);\nnothing #hide","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Before we analyze our solution we first need to transform back to parameter space.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"xopt = transform(tpost, sol)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"First we will evaluate our fit by plotting the residuals","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using Plots\nresidual(skymodel(post, xopt), dlcamp, ylabel=\"Log Closure Amplitude Res.\")","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"and now closure phases","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"residual(skymodel(post, xopt), dcphase, ylabel=\"|Closure Phase Res.|\")","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now let's plot the MAP 
estimate.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"import CairoMakie as CM\nimg = intensitymap(skymodel(post, xopt), μas2rad(150.0), μas2rad(150.0), 100, 100)\nCM.image(img, axis=(xreversed=true, aspect=1, title=\"MAP Image\"), colormap=:afmhot)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To sample from the posterior we will use HMC and more specifically the NUTS algorithm. For information about NUTS see Michael Betancourt's notes.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"note: Note\nFor our metric we use a diagonal matrix due to easier tuning.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using ComradeAHMC\nusing Zygote\nmetric = DiagEuclideanMetric(ndim)\nchain, stats = sample(post, AHMC(;metric, autodiff=Val(:Zygote)), 700; nadapts=500, init_params=xopt)","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"warning: Warning\nThis should be run for longer!","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now that we have our posterior, we can assess which parts of the image are strongly inferred by the data. 
This is rather unique to Comrade where more traditional imaging algorithms like CLEAN and RML are inherently unable to assess uncertainty in their reconstructions.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"To explore our posterior let's first create images from a bunch of draws from the posterior","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"msamples = skymodel.(Ref(post), chain[501:2:end]);\nnothing #hide","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"The mean image is then given by","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"using StatsBase\nimgs = intensitymap.(msamples, μas2rad(150.0), μas2rad(150.0), 128, 128)\nmimg = mean(imgs)\nsimg = std(imgs)\nfig = CM.Figure(;resolution=(800, 800))\nCM.image(fig[1,1], mimg,\n axis=(xreversed=true, aspect=1, title=\"Mean Image\"),\n colormap=:afmhot)\nCM.image(fig[1,2], simg./(max.(mimg, 1e-5)),\n axis=(xreversed=true, aspect=1, title=\"1/SNR\",), colorrange=(0.0, 2.0),\n colormap=:afmhot)\nCM.image(fig[2,1], imgs[1],\n axis=(xreversed=true, aspect=1,title=\"Draw 1\"),\n colormap=:afmhot)\nCM.image(fig[2,2], imgs[end],\n axis=(xreversed=true, aspect=1,title=\"Draw 2\"),\n colormap=:afmhot)\nfig","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Now let's see whether our residuals look better.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"p = plot();\nfor s in sample(chain[501:end], 10)\n residual!(p, vlbimodel(post, s), dlcamp)\nend\nylabel!(\"Log-Closure Amplitude Res.\");\np","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"p = plot();\nfor s in sample(chain[501:end], 10)\n residual!(p, vlbimodel(post, s), dcphase)\nend\nylabel!(\"|Closure Phase Res.|\");\np","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"And viola, you have a quick and preliminary image of M87 fitting only closure products. For a publication-level version we would recommend","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"Running the chain longer and multiple times to properly assess things like ESS and R̂ (see Geometric Modeling of EHT Data)\nFitting gains. Typically gain amplitudes are good to 10-20% for the EHT not the infinite uncertainty closures implicitly assume\nMaking sure the posterior is unimodal (hint for this example it isn't!). 
The EHT image posteriors can be pretty complicated, so typically you want to use a sampler that can deal with multi-modal posteriors. Check out the package Pigeons.jl for an in-development package that should easily enable this type of sampling.","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"","category":"page"},{"location":"examples/imaging_closures/","page":"Imaging a Black Hole using only Closure Quantities","title":"Imaging a Black Hole using only Closure Quantities","text":"This page was generated using Literate.jl.","category":"page"},{"location":"libs/adaptmcmc/#ComradeAdaptMCMC","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"","category":"section"},{"location":"libs/adaptmcmc/","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"Interface to the `AdaptiveMCMC.jl MCMC package. This uses parallel tempering to sample from the posterior. We typically recommend using one of the nested sampling packages. This interface follows Comrade's usual sampling interface for uniformity.","category":"page"},{"location":"libs/adaptmcmc/#Example","page":"ComradeAdaptMCMC","title":"Example","text":"","category":"section"},{"location":"libs/adaptmcmc/","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"using Comrade\nusing ComradeAdaptMCMC\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\n\nsmplr = AdaptMCMC(ntemp=5) # use 5 tempering levels\n\nsamples, endstate = sample(post, smplr, 500_000, 300_000)","category":"page"},{"location":"libs/adaptmcmc/#API","page":"ComradeAdaptMCMC","title":"API","text":"","category":"section"},{"location":"libs/adaptmcmc/","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"CurrentModule = ComradeAdaptMCMC","category":"page"},{"location":"libs/adaptmcmc/","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC","text":"Modules = [ComradeAdaptMCMC]","category":"page"},{"location":"libs/adaptmcmc/#ComradeAdaptMCMC.AdaptMCMC","page":"ComradeAdaptMCMC","title":"ComradeAdaptMCMC.AdaptMCMC","text":"AdaptMCMC(;ntemp,\n swap=:nonrev,\n algorithm = :ram,\n fulladapt = true,\n acc_sw = 0.234,\n all_levels = false\n )\n\nCreate an AdaptMCMC.jl sampler. This sampler uses the AdaptiveMCMC.jl package to sample from the posterior. Namely, this is a parallel tempering algorithm with an adaptive exploration and tempering sampler. For more information please see [https://github.com/mvihola/AdaptiveMCMC.jl].\n\nThe arguments of the function are:\n\nntemp: Number of temperature to run in parallel tempering\nswap: Which temperature swapping strategy to use, options are:\n:norev (default) uses a non-reversible tempering scheme (still ergodic)\n:single single randomly picked swap\n:randperm swap in random order\n:sweep upward or downward sweeps picked at random\nalgorithm: exploration MCMC algorithm (default is :ram which uses robust adaptive metropolis-hastings) options are:\n:ram (default) Robust adaptive metropolis\n:am Adaptive metropolis\n:asm Adaptive scaling metropolis\n:aswam Adaptive scaling within adaptive metropolis\nfulladapt: whether we adapt both the tempering ladder and the exploration kernel (default is true, i.e. 
adapt everything)\nacc_sw: The target acceptance rate for temperature swaps\nall_levels: Store all tempering levels to memory (warning this can use a lot of memory)\n\n\n\n\n\n","category":"type"},{"location":"libs/adaptmcmc/#StatsBase.sample","page":"ComradeAdaptMCMC","title":"StatsBase.sample","text":"sample(post::Posterior, sampler::AdaptMCMC, nsamples, burnin=nsamples÷2, args...; init_params=nothing, kwargs...)\n\nSample the posterior post using the AdaptMCMC sampler. This will produce nsamples with the first burnin steps removed. The init_params indicate where to start the sampler from and it is expected to be a NamedTuple of parameters.\n\nPossible additional kwargs are:\n\nthin::Int = 1: which says to save only every thin sample to memory\nrng: Specify a random number generator (default uses GLOBAL_RNG)\n\nThis return a tuple where:\n\nFirst element are the chains from the sampler. If all_levels=false the only the unit temperature (posterior) chain is returned\nSecond element is the additional ancilliary information about the samples including the loglikelihood logl, sampler state state, average exploration kernel acceptance rate accexp for each tempering level, and average temperate swap acceptance rates accswp for each tempering level.\n\n\n\n\n\n","category":"function"},{"location":"benchmarks/#Benchmarks","page":"Benchmarks","title":"Benchmarks","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Comrade was partially designed with performance in mind. Solving imaging inverse problems is traditionally very computationally expensive, especially since Comrade uses Bayesian inference. To benchmark Comrade we will compare it to two of the most common modeling or imaging packages within the EHT:","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"eht-imaging\nThemis","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"eht-imaging[1] or ehtim is a Python package that is widely used within the EHT for its imaging and modeling interfaces. It is easy to use and is commonly used in the EHT. However, to specify the model, the user must specify how to calculate the model's complex visibilities and its gradients, allowing eht-imaging's modeling package to achieve acceptable speeds.","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Themis is a C++ package focused on providing Bayesian estimates of the image structure. In fact, Comrade took some design cues from Themis. Themis has been used in various EHT publications and is the standard Bayesian modeling tool used in the EHT. However, Themis is quite challenging to use and requires a high level of knowledge from its users, requiring them to understand makefile, C++, and the MPI standard. Additionally, Themis provides no infrastructure to compute gradients, instead relying on finite differencing, which scales poorly for large numbers of model parameters. ","category":"page"},{"location":"benchmarks/#Benchmarking-Problem","page":"Benchmarks","title":"Benchmarking Problem","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"For our benchmarking problem, we analyze a situation very similar to the one explained in Geometric Modeling of EHT Data. Namely, we will consider fitting 2017 M87 April 6 data using an m-ring and a single Gaussian component. 
Please see the end of this page to see the code we used for Comrade and eht-imaging.","category":"page"},{"location":"benchmarks/#Results","page":"Benchmarks","title":"Results","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"All tests were run using the following system","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Julia Version 1.7.3\nPython Version 3.10.5\nComrade Version 0.4.0\neht-imaging Version 1.2.4\nCommit 742b9abb4d (2022-05-06 12:58 UTC)\nPlatform Info:\n OS: Linux (x86_64-pc-linux-gnu)\n CPU: 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz\n WORD_SIZE: 64\n LIBM: libopenlibm\n LLVM: libLLVM-12.0.1 (ORCJIT, tigerlake)","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Our benchmark results are the following:","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":" Comrade (micro sec) eht-imaging (micro sec) Themis (micro sec)\nposterior eval (min) 31 445 55\nposterior eval (mean) 36 476 60\ngrad posterior eval (min) 105 (ForwardDiff) 1898 1809\ngrad posterior eval (mean) 119 (ForwardDiff) 1971 1866","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"Therefore, for this test we found that Comrade was the fastest method in all tests. For the posterior evaluation we found that Comrade is > 10x faster than eht-imaging, and 2x faster then Themis. For gradient evaluations we have Comrade is > 15x faster than both eht-imaging and Themis.","category":"page"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"[1]: Chael A, et al. Inteferometric Imaging Directly with Closure Phases 2018 ApJ 857 1 arXiv:1803/07088","category":"page"},{"location":"benchmarks/#Code","page":"Benchmarks","title":"Code","text":"","category":"section"},{"location":"benchmarks/#Julia-Code","page":"Benchmarks","title":"Julia Code","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"using Pyehtim\nusing Comrade\nusing Distributions\nusing BenchmarkTools\nusing ForwardDiff\nusing VLBIImagePriors\nusing Zygote\n\n# To download the data visit https://doi.org/10.25739/g85n-f134\nobs = ehtim.obsdata.load_uvfits(joinpath(@__DIR__, \"assets/SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))\nobs = scan_average(obs)\namp = extract_table(obs, VisibilityAmplitudes())\n\nfunction model(θ)\n (;rad, wid, a, b, f, sig, asy, pa, x, y) = θ\n ring = f*smoothed(modify(MRing((a,), (b,)), Stretch(μas2rad(rad))), μas2rad(wid))\n g = modify(Gaussian(), Stretch(μas2rad(sig)*asy, μas2rad(sig)), Rotate(pa), Shift(μas2rad(x), μas2rad(y)), Renormalize(1-f))\n return ring + g\nend\n\nlklhd = RadioLikelihood(model, amp)\nprior = NamedDist(\n rad = Uniform(10.0, 30.0),\n wid = Uniform(1.0, 10.0),\n a = Uniform(-0.5, 0.5), b = Uniform(-0.5, 0.5),\n f = Uniform(0.0, 1.0),\n sig = Uniform((1.0), (60.0)),\n asy = Uniform(0.0, 0.9),\n pa = Uniform(0.0, 1π),\n x = Uniform(-(80.0), (80.0)),\n y = Uniform(-(80.0), (80.0))\n )\n\nθ = (rad= 22.0, wid= 3.0, a = 0.0, b = 0.15, f=0.8, sig = 20.0, asy=0.2, pa=π/2, x=20.0, y=20.0)\nm = model(θ)\n\npost = Posterior(lklhd, prior)\ntpost = asflat(post)\n\n# Transform to the unconstrained space\nx0 = inverse(tpost, θ)\n\n# Lets benchmark the posterior evaluation\nℓ = logdensityof(tpost)\n@benchmark ℓ($x0)\n\nusing LogDensityProblemsAD\n# Now we benchmark the gradient\ngℓ = ADgradient(Val(:Zygote), 
tpost)\n@benchmark LogDensityProblemsAD.logdensity_and_gradient($gℓ, $x0)","category":"page"},{"location":"benchmarks/#eht-imaging-Code","page":"Benchmarks","title":"eht-imaging Code","text":"","category":"section"},{"location":"benchmarks/","page":"Benchmarks","title":"Benchmarks","text":"# To download the data visit https://doi.org/10.25739/g85n-f134\nobs = ehtim.obsdata.load_uvfits(joinpath(@__DIR__, \"assets/SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))\nobs = scan_average(obs)\n\n\n\nmeh = ehtim.model.Model()\nmeh = meh.add_thick_mring(F0=θ.f,\n d=2*μas2rad(θ.rad),\n alpha=2*sqrt(2*log(2))*μas2rad(θ.wid),\n x0 = 0.0,\n y0 = 0.0,\n beta_list=[0.0+θ.b]\n )\nmeh = meh.add_gauss(F0=1-θ.f,\n FWHM_maj=2*sqrt(2*log(2))*μas2rad(θ.sig),\n FWHM_min=2*sqrt(2*log(2))*μas2rad(θ.sig)*θ.asy,\n PA = θ.pa,\n x0 = μas2rad(20.0),\n y0 = μas2rad(20.0)\n )\n\npreh = meh.default_prior()\npreh[1][\"F0\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>0.0, \"max\"=>1.0)\npreh[1][\"d\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>μas2rad(20.0), \"max\"=>μas2rad(60.0))\npreh[1][\"alpha\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>μas2rad(2.0), \"max\"=>μas2rad(25.0))\npreh[1][\"x0\"] = Dict(\"prior_type\"=>\"fixed\")\npreh[1][\"y0\"] = Dict(\"prior_type\"=>\"fixed\")\n\npreh[2][\"F0\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>0.0, \"max\"=>1.0)\npreh[2][\"FWHM_maj\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>μas2rad(2.0), \"max\"=>μas2rad(120.0))\npreh[2][\"FWHM_min\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>μas2rad(2.0), \"max\"=>μas2rad(120.0))\npreh[2][\"x0\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>-μas2rad(40.0), \"max\"=>μas2rad(40.0))\npreh[2][\"y0\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>-μas2rad(40.0), \"max\"=>μas2rad(40.0))\npreh[2][\"PA\"] = Dict(\"prior_type\"=>\"flat\", \"min\"=>-1π, \"max\"=>1π)\n\nusing PyCall\npy\"\"\"\nimport ehtim\nimport numpy as np\ntransform_param = ehtim.modeling.modeling_utils.transform_param\ndef make_paraminit(param_map, meh, trial_model, model_prior):\n model_init = meh.copy()\n param_init = []\n for j in range(len(param_map)):\n pm = param_map[j]\n if param_map[j][1] in trial_model.params[param_map[j][0]].keys():\n param_init.append(transform_param(model_init.params[pm[0]][pm[1]]/pm[2], model_prior[pm[0]][pm[1]],inverse=False))\n else: # In this case, the parameter is a list of complex numbers, so the real/imaginary or abs/arg components need to be assigned\n if param_map[j][1].find('cpol') != -1:\n param_type = 'beta_list_cpol'\n idx = int(param_map[j][1].split('_')[0][8:])\n elif param_map[j][1].find('pol') != -1:\n param_type = 'beta_list_pol'\n idx = int(param_map[j][1].split('_')[0][7:]) + (len(trial_model.params[param_map[j][0]][param_type])-1)//2\n elif param_map[j][1].find('beta') != -1:\n param_type = 'beta_list'\n idx = int(param_map[j][1].split('_')[0][4:]) - 1\n else:\n raise Exception('Unsure how to interpret ' + param_map[j][1])\n\n curval = model_init.params[param_map[j][0]][param_type][idx]\n if '_' not in param_map[j][1]:\n param_init.append(transform_param(np.real( model_init.params[pm[0]][param_type][idx]/pm[2]), model_prior[pm[0]][pm[1]],inverse=False))\n elif param_map[j][1][-2:] == 're':\n param_init.append(transform_param(np.real( model_init.params[pm[0]][param_type][idx]/pm[2]), model_prior[pm[0]][pm[1]],inverse=False))\n elif param_map[j][1][-2:] == 'im':\n param_init.append(transform_param(np.imag( model_init.params[pm[0]][param_type][idx]/pm[2]), model_prior[pm[0]][pm[1]],inverse=False))\n elif param_map[j][1][-3:] == 'abs':\n 
param_init.append(transform_param(np.abs( model_init.params[pm[0]][param_type][idx]/pm[2]), model_prior[pm[0]][pm[1]],inverse=False))\n elif param_map[j][1][-3:] == 'arg':\n param_init.append(transform_param(np.angle(model_init.params[pm[0]][param_type][idx])/pm[2], model_prior[pm[0]][pm[1]],inverse=False))\n else:\n if not quiet: print('Parameter ' + param_map[j][1] + ' not understood!')\n n_params = len(param_init)\n return n_params, param_init\n\"\"\"\n\n# make the python param map and use optimize so we flatten the parameter space.\npmap, pmask = ehtim.modeling.modeling_utils.make_param_map(meh, preh, \"scipy.optimize.dual_annealing\", fit_model=true)\ntrial_model = meh.copy()\n\n# get initial parameters\nn_params, pinit = py\"make_paraminit\"(pmap, meh, trial_model, preh)\n\n# make data products for the globdict\ndata1, sigma1, uv1, _ = ehtim.modeling.modeling_utils.chisqdata(obs, \"amp\")\ndata2, sigma2, uv2, _ = ehtim.modeling.modeling_utils.chisqdata(obs, false)\ndata3, sigma3, uv3, _ = ehtim.modeling.modeling_utils.chisqdata(obs, false)\n\n# now set the ehtim modeling globdict\n\nehtim.modeling.modeling_utils.globdict = Dict(\"trial_model\"=>trial_model,\n \"d1\"=>\"amp\", \"d2\"=>false, \"d3\"=>false,\n \"pol1\"=>\"I\", \"pol2\"=>\"I\", \"pol3\"=>\"I\",\n \"data1\"=>data1, \"sigma1\"=>sigma1, \"uv1\"=>uv1, \"jonesdict1\"=>nothing,\n \"data2\"=>data2, \"sigma2\"=>sigma2, \"uv2\"=>uv2, \"jonesdict2\"=>nothing,\n \"data3\"=>data3, \"sigma3\"=>sigma3, \"uv3\"=>uv3, \"jonesdict3\"=>nothing,\n \"alpha_d1\"=>0, \"alpha_d2\"=>0, \"alpha_d3\"=>0,\n \"n_params\"=> n_params, \"n_gains\"=>0, \"n_leakage\"=>0,\n \"model_prior\"=>preh, \"param_map\"=>pmap, \"param_mask\"=>pmask,\n \"gain_prior\"=>nothing, \"gain_list\"=>[], \"gain_init\"=>nothing,\n \"fit_leakage\"=>false, \"leakage_init\"=>[], \"leakage_fit\"=>[],\n \"station_leakages\"=>nothing, \"leakage_prior\"=>nothing,\n \"show_updates\"=>false, \"update_interval\"=>1,\n \"gains_t1\"=>nothing, \"gains_t2\"=>nothing,\n \"minimizer_func\"=>\"scipy.optimize.dual_annealing\",\n \"Obsdata\"=>obs,\n \"fit_pol\"=>false, \"fit_cpol\"=>false,\n \"flux\"=>1.0, \"alpha_flux\"=>0, \"fit_gains\"=>false,\n \"marginalize_gains\"=>false, \"ln_norm\"=>1314.33,\n \"param_init\"=>pinit, \"test_gradient\"=>false\n )\n\n# This is the negative log-posterior\nfobj = ehtim.modeling.modeling_utils.objfunc\n\n# This is the gradient of the negative log-posterior\ngfobj = ehtim.modeling.modeling_utils.objgrad\n\n# Lets benchmark the posterior evaluation\n@benchmark fobj($pinit)\n\n# Now we benchmark the gradient\n@benchmark gfobj($pinit)","category":"page"},{"location":"libs/ahmc/#ComradeAHMC","page":"ComradeAHMC","title":"ComradeAHMC","text":"","category":"section"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"The first choice when sampling from the model/image posterior, is AdvancedHMC ), which uses Hamiltonian Monte Carlo to sample from the posterior. Specifically, we usually use the NUTS algorithm. ","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"The interface to AdvancedHMC is very powerful and general. To simplify the procedure for Comrade users, we have provided a thin interface. A user needs to specify a sampler and then call the sample function.","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"For AdvancedHMC, the user can create the sampler by calling the AHMC function. 
This only has one mandatory argument, the metric the sampler uses. There are currently two options:","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"- `DiagEuclideanMetric` which uses a diagonal metric for covariance adaptation\n- `DenseEuclideanMetric` which uses a dense or full rank metric for covariance adaptation","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"We recommend that a user starts with DiagEuclideanMetric since the dense metric typically requires many more samples to tune correctly. The other options for AHMC (sans autodiff) specify which version of HMC to use. Our default options match the choices made by the Stan programming language. The final option to consider is the autodiff optional argument. This specifies which auto differentiation package to use. Currently Val(:Zygote) is the recommended default for all models. If you model doesn't work with Zygote please file an issue. Eventually we will move entirely to Enzyme.","category":"page"},{"location":"libs/ahmc/#Example","page":"ComradeAHMC","title":"Example","text":"","category":"section"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"using Comrade\nusing ComradeAHMC\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\nmetric = DiagEuclideanMetric(dimension(post))\nsmplr = AHMC(metric=metric, autodiff=Val(:Zygote))\n\nsamples, stats = sample(post, smplr, 2_000; nadapts=1_000)","category":"page"},{"location":"libs/ahmc/#API","page":"ComradeAHMC","title":"API","text":"","category":"section"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"CurrentModule = ComradeAHMC","category":"page"},{"location":"libs/ahmc/","page":"ComradeAHMC","title":"ComradeAHMC","text":"Modules = [ComradeAHMC]","category":"page"},{"location":"libs/ahmc/#ComradeAHMC.AHMC","page":"ComradeAHMC","title":"ComradeAHMC.AHMC","text":"AHMC\n\nCreates a sampler that uses the AdvancedHMC framework to construct an Hamiltonian Monte Carlo NUTS sampler.\n\nThe user must specify the metric they want to use. Typically we recommend DiagEuclideanMetric as a reasonable starting place. The other options are chosen to match the Stan languages defaults and should provide a good starting point. Please see the AdvancedHMC docs for more information.\n\nNotes\n\nFor autodiff the must provide a Val(::Symbol) that specifies the AD backend. Currently, we use LogDensityProblemsAD.\n\nFields\n\nmetric: AdvancedHMC metric to use\n\nintegrator: AdvancedHMC integrator Defaults to AdvancedHMC.Leapfrog\n\ntrajectory: HMC trajectory sampler Defaults to AdvancedHMC.MultinomialTS\n\ntermination: HMC termination condition Defaults to AdvancedHMC.StrictGeneralisedNoUTurn\n\nadaptor: Adaptation strategy for mass matrix and stepsize Defaults to AdvancedHMC.StanHMCAdaptor\n\ntargetacc: Target acceptance rate for all trajectories on the tree Defaults to 0.85\n\ninit_buffer: The number of steps for the initial tuning phase. Defaults to 75 which is the Stan default\n\nterm_buffer: The number of steps for the final fast step size adaptation Default if 50 which is the Stan default\n\nwindow_size: The number of steps to tune the covariance before the first doubling Default is 25 which is the Stan default\n\nautodiff: autodiff backend see LogDensitProblemsAD.jl for possible backends. 
The default is Zygote which is appropriate for high dimensional problems.\n\n\n\n\n\n","category":"type"},{"location":"libs/ahmc/#ComradeAHMC.DiskStore","page":"ComradeAHMC","title":"ComradeAHMC.DiskStore","text":"Disk\n\nType that specifies to save the HMC results to disk.\n\nFields\n\nname: Path of the directory where the results will be saved. If the path does not exist it will be automatically created.\n\nstride: The output stride, i.e. every stride steps the MCMC output will be dumped to disk.\n\n\n\n\n\n","category":"type"},{"location":"libs/ahmc/#ComradeAHMC.MemoryStore","page":"ComradeAHMC","title":"ComradeAHMC.MemoryStore","text":"Memory\n\nStores the HMC samples in memory (RAM).\n\n\n\n\n\n","category":"type"},{"location":"libs/ahmc/#ComradeAHMC.load_table","page":"ComradeAHMC","title":"ComradeAHMC.load_table","text":"load_table(out::DiskOutput, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table=\"samples\")\nload_table(out::String, indices::Union{Base.Colon, UnitRange, StepRange}=Base.Colon(); table=\"samples\")\n\nLoad the results from an HMC run saved to disk. To read in the output the user can either pass the resulting out object, or the path to the directory where the results were saved, i.e. the path specified in DiskStore.\n\nArguments\n\nout::Union{String, DiskOutput}: If out is a string it must point to the directory that the DiskStore pointed to. Otherwise it is what is directly returned from sample.\nindices: The indices of the samples that you want to load into memory. The default is to load the entire table.\n\nKeyword Arguments\n\ntable: A string specifying the table you wish to read in. There are two options: \"samples\" which corresponds to the actual MCMC chain, and \"stats\" which corresponds to additional information about the sampler, e.g., the log density of each sample and tree statistics.\n\n\n\n\n\n","category":"function"},{"location":"libs/ahmc/#StatsBase.sample-Tuple{Random.AbstractRNG, Comrade.TransformedPosterior, AHMC, Any, Vararg{Any}}","page":"ComradeAHMC","title":"StatsBase.sample","text":"AbstractMCMC.sample(post::Comrade.Posterior,\n sampler::AHMC,\n nsamples;\n init_params=nothing,\n saveto::Union{Memory, Disk}=Memory(),\n kwargs...)\n\nSamples the posterior post using the AdvancedHMC sampler specified by AHMC. This will run the sampler for nsamples.\n\nTo initialize the chain the user can set init_params to a NamedTuple giving the starting location of the chain. If no starting location is specified a random sample from the prior will be chosen.\n\nWith saveto the user can optionally specify whether to store the samples in memory MemoryStore or save directly to disk with DiskStore(filename, stride). 
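For example, a disk-backed run and a later reload might look like the following sketch (post is assumed to be a Comrade.Posterior as in the examples above, and the output directory name is hypothetical):

```julia
using Comrade, ComradeAHMC
# `post` is assumed to be a Comrade.Posterior, as in the examples above.
metric = DiagEuclideanMetric(dimension(post))
out = sample(post, AHMC(; metric, autodiff=Val(:Zygote)), 2_000;
             nadapts=1_000, saveto=DiskStore("hmc_results", 100))

# Later, possibly in a fresh session, read the chain and sampler statistics back.
chain = load_table(out)                  # or load_table("hmc_results")
stats = load_table(out; table="stats")
```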
The stride controls how often the samples are dumped to disk.\n\nFor possible kwargs please see the AdvancedHMC.jl docs\n\nThis returns a tuple where the first element is a TypedTable of the MCMC samples in parameter space and the second element is a set of ancillary information about the sampler.\n\nNotes\n\nThis will automatically transform the posterior to the flattened unconstrained space.\n\n\n\n\n\n","category":"method"},{"location":"libs/ahmc/#StatsBase.sample-Union{Tuple{A}, Tuple{Random.AbstractRNG, Posterior, A, AbstractMCMC.AbstractMCMCEnsemble, Any, Any}} where A<:AHMC","page":"ComradeAHMC","title":"StatsBase.sample","text":"AbstractMCMC.sample(post::Comrade.Posterior,\n sampler::AHMC,\n parallel::AbstractMCMC.AbstractMCMCEnsemble,\n nsamples,\n nchains;\n init_params=nothing,\n kwargs...)\n\nSamples the posterior post using the AdvancedHMC sampler specified by AHMC. This will sample nchains copies of the posterior using the parallel scheme. Each chain will be sampled for nsamples.\n\nTo initialize the chains the user can set init_params to a Vector{NamedTuple} whose elements are the starting locations for each of the nchains. If no starting location is specified nchains random samples from the prior will be chosen for the starting locations.\n\nFor possible kwargs please see the AdvancedHMC.jl docs\n\nThis returns a tuple where the first element is nchains TypedTables, each of which contains the MCMC samples of one of the parallel chains, and the second element is a set of ancillary information about each set of samples.\n\nNotes\n\nThis will automatically transform the posterior to the flattened unconstrained space.\n\n\n\n\n\n","category":"method"},{"location":"interface/#Model-Interface","page":"Model Interface","title":"Model Interface","text":"","category":"section"},{"location":"interface/","page":"Model Interface","title":"Model Interface","text":"For the sky model interface, please see VLBISkyModels.","category":"page"},{"location":"libs/optimization/#ComradeOptimization","page":"ComradeOptimization","title":"ComradeOptimization","text":"","category":"section"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"To optimize our posterior, we use the Optimization.jl package. Optimization provides a global interface to several Julia optimizers. The Comrade wrapper for Optimization.jl is very thin. The only addition is that Comrade has provided a method:","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"OptimizationFunction(::TransformedPosterior, args...; kwargs...)","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"meaning we can pass it a posterior object and it will set up the OptimizationFunction for us. ","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"note: Note\n","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"We only specify this for a transformed version of the posterior. This is because Optimization.jl requires a flattened version of the posterior.","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"Additionally, different optimizers may prefer different parameter transformations. 
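For instance, a derivative-free global optimizer pairs naturally with the unit-hypercube transformation discussed next. A minimal sketch, assuming a post object as in the example below and the OptimizationBBO backend (the bound insets and iteration count are illustrative):

```julia
using Comrade, ComradeOptimization
using OptimizationBBO

# `post` is assumed to be a Comrade.Posterior.
cpost = ascube(post)                 # parameters mapped to the unit hypercube
fcube = OptimizationFunction(cpost)  # derivative-free, so no AD backend needed
ndim  = dimension(cpost)
prob  = OptimizationProblem(fcube, prior_sample(cpost), nothing;
                            lb=fill(1e-3, ndim), ub=fill(1 - 1e-3, ndim))
sol   = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(); maxiters=50_000)
```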
If we use OptimizationBBO, for example, ascube is a good choice since it needs a compact region to search over, and ascube converts our parameter space to the unit hypercube. On the other hand, gradient-based optimizers work best without bounds, so a better choice would be the asflat transformation.","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"To see what optimizers and options are available, please see the Optimization.jl docs.","category":"page"},{"location":"libs/optimization/#Example","page":"ComradeOptimization","title":"Example","text":"","category":"section"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"using Comrade\nusing ComradeOptimization\nusing OptimizationOptimJL\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\n# Create an optimization function using Zygote as the AD backend\nfflat = OptimizationFunction(asflat(post), Optimization.AutoZygote())\n\n# Create the problem from a random point in the prior; nothing is passed because there are no additional arguments to our function.\nprob = OptimizationProblem(fflat, prior_sample(asflat(post)), nothing)\n\n# Now solve! Here we use LBFGS\nsol = solve(prob, LBFGS(); g_tol=1e-2)","category":"page"},{"location":"libs/optimization/#API","page":"ComradeOptimization","title":"API","text":"","category":"section"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"CurrentModule = ComradeOptimization","category":"page"},{"location":"libs/optimization/","page":"ComradeOptimization","title":"ComradeOptimization","text":"Modules = [ComradeOptimization]\nOrder = [:function, :type]","category":"page"},{"location":"libs/optimization/#ComradeOptimization.laplace-Tuple{OptimizationProblem, Any, Vararg{Any}}","page":"ComradeOptimization","title":"ComradeOptimization.laplace","text":"laplace(prob, opt, args...; kwargs...)\n\nCompute the Laplace or quadratic approximation to the prob or posterior. The args and kwargs are passed to the SciMLBase.solve function. This will return a Distributions.MvNormal object that approximates the posterior in the transformed space.\n\nNote the quadratic approximation is in the space of the transformed posterior not the usual parameter space. This is better for constrained problems where we may run up against a boundary.\n\n\n\n\n\n","category":"method"},{"location":"libs/optimization/#SciMLBase.OptimizationFunction-Tuple{Comrade.TransformedPosterior, Vararg{Any}}","page":"ComradeOptimization","title":"SciMLBase.OptimizationFunction","text":"SciMLBase.OptimizationFunction(post::Posterior, args...; kwargs...)\n\nConstructs an OptimizationFunction from a Comrade.TransformedPosterior object. Note that a user must transform the posterior first. This is so we know which space is most amenable to optimization.\n\n\n\n\n\n","category":"method"},{"location":"libs/dynesty/#ComradeDynesty","page":"ComradeDynesty","title":"ComradeDynesty","text":"","category":"section"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"ComradeDynesty interfaces Comrade to the excellent dynesty package, more specifically the Dynesty.jl Julia wrapper.","category":"page"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"We follow the Dynesty.jl interface closely. 
However, instead of having to pass a log-likelihood function and prior transform, we instead just pass a Comrade.Posterior object and Comrade takes care of defining the prior transformation and log-likelihood for us. For more information about Dynesty.jl, please see its docs and docstrings.","category":"page"},{"location":"libs/dynesty/#Example","page":"ComradeDynesty","title":"Example","text":"","category":"section"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"using Comrade\nusing ComradeDynesty\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\n# Create sampler using 1000 live points\nsmplr = NestedSampler(dimension(post), 1000)\n\nsamples, dyres = sample(post, smplr; dlogz=1.0)\n\n# Optionally resample the chain to create an equal weighted output\nusing StatsBase\nequal_weight_chain = ComradeDynesty.equalresample(samples, 10_000)","category":"page"},{"location":"libs/dynesty/#API","page":"ComradeDynesty","title":"API","text":"","category":"section"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"CurrentModule = ComradeDynesty","category":"page"},{"location":"libs/dynesty/","page":"ComradeDynesty","title":"ComradeDynesty","text":"Modules = [ComradeDynesty]\nOrder = [:function, :type]","category":"page"},{"location":"libs/dynesty/#StatsBase.sample-Tuple{Random.AbstractRNG, Comrade.TransformedPosterior, Union{DynamicNestedSampler, NestedSampler}}","page":"ComradeDynesty","title":"StatsBase.sample","text":"AbstractMCMC.sample(post::Comrade.Posterior, smplr::Dynesty.NestedSampler, args...; kwargs...)\nAbstractMCMC.sample(post::Comrade.Posterior, smplr::Dynesty.DynamicNestedSampler, args...; kwargs...)\n\nSample the posterior post using Dynesty.jl NestedSampler/DynamicNestedSampler sampler. The args/kwargs are forwarded to Dynesty for more information see its docs\n\nThis returns a tuple where the first element are the weighted samples from dynesty in a TypedTable. The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights. The final element of the tuple is the original dynesty output file.\n\nTo create equally weighted samples the user can use\n\nusing StatsBase\nchain, stats = sample(post, NestedSampler(dimension(post), 1000))\nequal_weighted_chain = sample(chain, Weights(stats.weights), 10_000)\n\n\n\n\n\n","category":"method"},{"location":"libs/nested/#ComradeNested","page":"ComradeNested","title":"ComradeNested","text":"","category":"section"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"ComradeNested interfaces Comrade to the excellent NestedSamplers.jl package.","category":"page"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"We follow NestedSamplers interface closely. The difference is that instead of creating a NestedModel, we pass a Comrade.Posterior object as our model. 
Internally, Comrade defines the prior transform and extracts the log-likelihood function.","category":"page"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"For more information about NestedSamplers.jl please see its docs.","category":"page"},{"location":"libs/nested/#Example","page":"ComradeNested","title":"Example","text":"","category":"section"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"using Comrade\nusing ComradeNested\n\n# Some stuff to create a posterior object\npost # of type Comrade.Posterior\n\n# Create sampler using 1000 live points\nsmplr = Nested(dimension(post), 1000)\n\nsamples = sample(post, smplr; d_logz=1.0)\n\n# Optionally resample the chain to create an equal weighted output\nusing StatsBase\nequal_weight_chain = ComradeNested.equalresample(samples, 10_000)","category":"page"},{"location":"libs/nested/#API","page":"ComradeNested","title":"API","text":"","category":"section"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"CurrentModule = ComradeNested","category":"page"},{"location":"libs/nested/","page":"ComradeNested","title":"ComradeNested","text":"Modules = [ComradeNested]\nOrder = [:function, :type]","category":"page"},{"location":"libs/nested/#StatsBase.sample-Tuple{Random.AbstractRNG, Comrade.TransformedPosterior, Nested, Vararg{Any}}","page":"ComradeNested","title":"StatsBase.sample","text":"AbstractMCMC.sample(post::Comrade.Posterior, smplr::Nested, args...; kwargs...)\n\nSample the posterior post using NestedSamplers.jl Nested sampler. The args/kwargs are forwarded to NestedSampler for more information see its docs\n\nThis returns a tuple where the first element are the weighted samples from NestedSamplers in a TypedTable. 
The second element includes additional information about the samples, like the log-likelihood, evidence, evidence error, and the sample weights.\n\nTo create equally weighted samples the user can use ```julia using StatsBase chain, stats = sample(post, NestedSampler(dimension(post), 1000)) equalweightedchain = sample(chain, Weights(stats.weights), 10_000)\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade-API","page":"Comrade API","title":"Comrade API","text":"","category":"section"},{"location":"api/#Contents","page":"Comrade API","title":"Contents","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Pages = [\"api.md\"]","category":"page"},{"location":"api/#Index","page":"Comrade API","title":"Index","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Pages = [\"api.md\"]","category":"page"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.Comrade","category":"page"},{"location":"api/#Comrade.Comrade","page":"Comrade API","title":"Comrade.Comrade","text":"Comrade\n\nComposable Modeling of Radio Emission\n\n\n\n\n\n","category":"module"},{"location":"api/#Model-Definitions","page":"Comrade API","title":"Model Definitions","text":"","category":"section"},{"location":"api/#Calibration-Models","page":"Comrade API","title":"Calibration Models","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.corrupt\nComrade.CalTable\nComrade.caltable(::Comrade.JonesCache, ::AbstractVector)\nComrade.caltable(::Comrade.EHTObservation, ::AbstractVector)\nComrade.DesignMatrix\nComrade.JonesCache\nComrade.ResponseCache\nComrade.JonesModel\nComrade.VLBIModel\nComrade.CalPrior\nComrade.CalPrior(::NamedTuple, ::JonesCache)\nComrade.CalPrior(::NamedTuple, ::NamedTuple, ::JonesCache)\nComrade.RIMEModel\nComrade.ObsSegmentation\nComrade.IntegSeg\nComrade.ScanSeg\nComrade.TrackSeg\nComrade.FixedSeg\nComrade.jonescache(::Comrade.EHTObservation, ::Comrade.ObsSegmentation)\nComrade.SingleReference\nComrade.RandomReference\nComrade.SEFDReference\nComrade.jonesStokes\nComrade.jonesG\nComrade.jonesD\nComrade.jonesT\nBase.map(::Any, ::Vararg{Comrade.JonesPairs})\nComrade.caltable\nComrade.JonesPairs\nComrade.GainSchema\nComrade.SegmentedJonesCache","category":"page"},{"location":"api/#Comrade.corrupt","page":"Comrade API","title":"Comrade.corrupt","text":"corrupt(vis, j1, j2)\n\nCorrupts the model coherency matrices with the Jones matrices j1 for station 1 and j2 for station 2.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.CalTable","page":"Comrade API","title":"Comrade.CalTable","text":"struct CalTable{T, G<:(AbstractVecOrMat)}\n\nA Tabes of calibration quantities. The columns of the table are the telescope station codes. The rows are the calibration quantities at a specific time stamp. This user should not use this struct directly. 
Instead that should call caltable.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.caltable-Tuple{JonesCache, AbstractVector}","page":"Comrade API","title":"Comrade.caltable","text":"caltable(g::JonesCache, jterms::AbstractVector)\n\nConvert the JonesCache g and recovered Jones/corruption elements jterms into a CalTable which satisfies the Tables.jl interface.\n\nExample\n\nct = caltable(gcache, gains)\n\n# Access a particular station (here ALMA)\nct[:AA]\nct.AA\n\n# Access a the first row\nct[1, :]\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.caltable-Tuple{Comrade.EHTObservation, AbstractVector}","page":"Comrade API","title":"Comrade.caltable","text":"caltable(obs::EHTObservation, gains::AbstractVector)\n\nCreate a calibration table for the observations obs with gains. This returns a CalTable object that satisfies the Tables.jl interface. This table is very similar to the DataFrames interface.\n\nExample\n\nct = caltable(obs, gains)\n\n# Access a particular station (here ALMA)\nct[:AA]\nct.AA\n\n# Access a the first row\nct[1, :]\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.DesignMatrix","page":"Comrade API","title":"Comrade.DesignMatrix","text":"struct DesignMatrix{X, M<:AbstractArray{X, 2}, T, S} <: AbstractArray{X, 2}\n\nInternal type that holds the gain design matrices for visibility corruption.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.JonesCache","page":"Comrade API","title":"Comrade.JonesCache","text":"struct JonesCache{D1, D2, S, Sc, R} <: Comrade.AbstractJonesCache\n\nHolds the ancillary information for a the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages that are measured from the telescope.\n\nFields\n\nm1: Design matrix for the first station\n\nm2: Design matrix for the second station\n\nseg: Segmentation schemes for this cache\n\nschema: Gain Schema\n\nreferences: List of Reference stations\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ResponseCache","page":"Comrade API","title":"Comrade.ResponseCache","text":"struct ResponseCache{M, B<:PolBasis} <: Comrade.AbstractJonesCache\n\nHolds various transformations that move from the measured telescope basis to the chosen on sky reference basis.\n\nFields\n\nT1: Transform matrices for the first stations\n\nT2: Transform matrices for the second stations\n\nrefbasis: Reference polarization basis\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.JonesModel","page":"Comrade API","title":"Comrade.JonesModel","text":"JonesModel(jones::JonesPairs, refbasis = CirBasis())\nJonesModel(jones::JonesPairs, tcache::ResponseCache)\n\nConstructs the intrument corruption model using pairs of jones matrices jones and a reference basis\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.VLBIModel","page":"Comrade API","title":"Comrade.VLBIModel","text":"VLBIModel(skymodel, instrumentmodel)\n\nConstructs a VLBIModel from a jones pairs that describe the intrument model and the model which describes the on-sky polarized visibilities. The third argument can either be the tcache that converts from the model coherency basis to the instrumental basis, or just the refbasis that will be used when constructing the model coherency matrices.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.CalPrior","page":"Comrade API","title":"Comrade.CalPrior","text":"CalPrior(dists, cache::JonesCache, reference=:none)\n\nCreates a distribution for the gain priors for gain cache cache. 
The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.\n\nExample\n\nFor the 2017 observations of M87 a common CalPrior call is:\n\njulia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),\n AP = LogNormal(0.0, 0.1),\n JC = LogNormal(0.0, 0.1),\n SM = LogNormal(0.0, 0.1),\n AZ = LogNormal(0.0, 0.1),\n LM = LogNormal(0.0, 1.0),\n PV = LogNormal(0.0, 0.1)\n ), cache)\n\njulia> x = rand(gdist)\njulia> logdensityof(gdist, x)\n\n\n\n\n\nCalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)\n\nConstructs a calibration prior in two steps. The first two arguments have to be a named tuple of distributions, where each name corresponds to a site. The first argument is gain prior for the first time stamp. The second argument is the segmented gain prior for each subsequent time stamp. For instance, if we have\n\ndist0 = (AA = Normal(0.0, 1.0), )\ndistt = (AA = Normal(0.0, 0.1), )\n\nthen the gain prior for first time stamp that AA obserserves will be Normal(0.0, 1.0). The next time stamp gain is the construted from\n\ng2 = g1 + ϵ1\n\nwhere ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.CalPrior-Tuple{NamedTuple, JonesCache}","page":"Comrade API","title":"Comrade.CalPrior","text":"CalPrior(dists, cache::JonesCache, reference=:none)\n\nCreates a distribution for the gain priors for gain cache cache. The dists should be a NamedTuple of Distributions, where each name corresponds to a telescope or station in the observation. The resulting type is a subtype of the Distributions.AbstractDistribution so the usual Distributions interface should work.\n\nExample\n\nFor the 2017 observations of M87 a common CalPrior call is:\n\njulia> gdist = CalPrior((AA = LogNormal(0.0, 0.1),\n AP = LogNormal(0.0, 0.1),\n JC = LogNormal(0.0, 0.1),\n SM = LogNormal(0.0, 0.1),\n AZ = LogNormal(0.0, 0.1),\n LM = LogNormal(0.0, 1.0),\n PV = LogNormal(0.0, 0.1)\n ), cache)\n\njulia> x = rand(gdist)\njulia> logdensityof(gdist, x)\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.CalPrior-Tuple{NamedTuple, NamedTuple, JonesCache}","page":"Comrade API","title":"Comrade.CalPrior","text":"CalPrior(dist0::NamedTuple, dist_transition::NamedTuple, jcache::SegmentedJonesCache)\n\nConstructs a calibration prior in two steps. The first two arguments have to be a named tuple of distributions, where each name corresponds to a site. The first argument is gain prior for the first time stamp. The second argument is the segmented gain prior for each subsequent time stamp. For instance, if we have\n\ndist0 = (AA = Normal(0.0, 1.0), )\ndistt = (AA = Normal(0.0, 0.1), )\n\nthen the gain prior for first time stamp that AA obserserves will be Normal(0.0, 1.0). The next time stamp gain is the construted from\n\ng2 = g1 + ϵ1\n\nwhere ϵ1 ~ Normal(0.0, 0.1) = distt.AA, and g1 is the gain from the first time stamp. In other words distt is the uncorrelated transition probability when moving from timestamp i to timestamp i+1. 
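A hedged sketch of how this could be assembled (the site names are illustrative, and jcache stands in for a SegmentedJonesCache built for the observation; neither is defined here):

```julia
using Comrade, Distributions

# Wide prior on the first time stamp, tight scan-to-scan transitions afterwards.
dist0 = (AA = Normal(0.0, 1.0), AP = Normal(0.0, 1.0), LM = Normal(0.0, 1.0))
distt = (AA = Normal(0.0, 0.1), AP = Normal(0.0, 0.1), LM = Normal(0.0, 0.1))

# `jcache::SegmentedJonesCache` is assumed to exist for the observation.
gain_phase_prior = CalPrior(dist0, distt, jcache)
```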
For the typical pre-calibrated dataset the gain prior on distt can be tighter than the prior on dist0.\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.RIMEModel","page":"Comrade API","title":"Comrade.RIMEModel","text":"abstract type RIMEModel <: ComradeBase.AbstractModel\n\nAbstract type that encompasses all RIME style corruptions.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ObsSegmentation","page":"Comrade API","title":"Comrade.ObsSegmentation","text":"abstract type ObsSegmentation\n\nThe data segmentation scheme to use. This is important for constructing a JonesCache.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.IntegSeg","page":"Comrade API","title":"Comrade.IntegSeg","text":"struct IntegSeg{S} <: Comrade.ObsSegmentation\n\nData segmentation such that the quantity is constant over a correlation integration.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ScanSeg","page":"Comrade API","title":"Comrade.ScanSeg","text":"struct ScanSeg{S} <: Comrade.ObsSegmentation\n\nData segmentation such that the quantity is constant over a scan.\n\nWarning\n\nCurrently we do not explicitly track the telescope scans. This will be fixed in a future version. Right now ScanSeg and TrackSeg are the same.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.TrackSeg","page":"Comrade API","title":"Comrade.TrackSeg","text":"struct TrackSeg <: Comrade.ObsSegmentation\n\nData segmentation such that the quantity is constant over a track, i.e., the observation \"night\".\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.FixedSeg","page":"Comrade API","title":"Comrade.FixedSeg","text":"struct FixedSeg{T} <: Comrade.ObsSegmentation\n\nEnforces that the station calibration value will have a fixed value. This is most commonly used when enforcing a reference station for gain phases.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.jonescache-Tuple{Comrade.EHTObservation, Comrade.ObsSegmentation}","page":"Comrade API","title":"Comrade.jonescache","text":"jonescache(obs::EHTObservation, segmentation::ObsSegmentation)\njonescache(obs::EHTObservation, segmentation::NamedTuple)\n\nConstructs a JonesCache from a given observation obs using the segmentation scheme segmentation. If segmentation is a named tuple, it is assumed that each symbol in the named tuple corresponds to a segmentation for the sites in obs.\n\nExample\n\n# coh is an EHTObservation\njulia> jonescache(coh, ScanSeg())\njulia> segs = (AA = ScanSeg(), AP = TrackSeg(), AZ = FixedSeg(1.0))\njulia> jonescache(coh, segs)\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.SingleReference","page":"Comrade API","title":"Comrade.SingleReference","text":"SingleReference(site::Symbol, val::Number)\n\nUse a single site as a reference. The station gain will be set equal to val.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.RandomReference","page":"Comrade API","title":"Comrade.RandomReference","text":"RandomReference(val::Number)\n\nFor each timestamp select a random reference station whose station gain will be set to val.\n\nNotes\n\nThis is useful when there isn't a single site available for all scans and you want to split up the choice of reference site. 
We recommend only using this option for Stokes I fitting.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.SEFDReference","page":"Comrade API","title":"Comrade.SEFDReference","text":"SiteOrderReference(val::Number, sefd_index = 1)\n\nSelects the reference site based on the SEFD of each telescope, where the smallest SEFD is preferentially selected. The reference gain is set to val and the user can select to use the n lowest SEFD site by passing sefd_index = n.\n\nNotes\n\nThis is done on a per-scan basis so if a site is missing from a scan the next highest SEFD site will be used.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.jonesStokes","page":"Comrade API","title":"Comrade.jonesStokes","text":"jonesStokes(g1::AbstractArray, gcache::AbstractJonesCache)\njonesStokes(f, g1::AbstractArray, gcache::AbstractJonesCache)\n\nConstruct the Jones Pairs for the stokes I image only. That is, we only need to pass a single vector corresponding to the gain for the stokes I visibility. This is for when you only want to image Stokes I. The first argument is optional and denotes a function that is applied to every element of jones cache. For instance if g1 and g2 are the log-gains then f=exp will convert them into the gains.\n\nWarning\n\nIn the future this functionality may be removed when stokes I fitting is replaced with the more correct trace(coherency), i.e. RR+LL for a circular basis.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.jonesG","page":"Comrade API","title":"Comrade.jonesG","text":"jonesG(g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)\njonesG(f, g1::AbstractVector, g2::AbstractVector, jcache::AbstractJonesCache)\n\nConstructs the pairs Jones G matrices for each pair of stations. The g1 are the gains for the first polarization basis and g2 are the gains for the other polarization. The first argument is optional and denotes a function that is applied to every element of jones cache. For instance if g1 and g2 are the log-gains then f=exp will convert them into the gains.\n\nThe layout for each matrix is as follows:\n\n g1 0\n 0 g2\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.jonesD","page":"Comrade API","title":"Comrade.jonesD","text":"jonesD(d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)\njonesD(f, d1::AbstractVector, d2::AbstractVector, jcache::AbstractJonesCache)\n\nConstructs the pairs Jones D matrices for each pair of stations. The d1 are the d-termsfor the first polarization basis and d2 are the d-terms for the other polarization. The first argument is optional and denotes a function that is applied to every element of jones cache. For instance if d1 and d2 are the log-dterms then f=exp will convert them into the dterms.\n\nThe layout for each matrix is as follows:\n\n 1 d1\n d2 1\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.jonesT","page":"Comrade API","title":"Comrade.jonesT","text":"jonesT(tcache::ResponseCache)\n\nReturns a JonesPair of matrices that transform from the model coherency matrices basis to the on-sky coherency basis, this includes the feed rotation and choice of polarization feeds.\n\n\n\n\n\n","category":"function"},{"location":"api/#Base.map-Tuple{Any, Vararg{Comrade.JonesPairs}}","page":"Comrade API","title":"Base.map","text":"map(f, args::JonesPairs...) -> JonesPairs\n\nMaps over a set of JonesPairs applying the function f to each element. This returns a collected JonesPair. 
This us useful for more advanced operations on Jones matrices.\n\nExamples\n\nmap(G, D, F) do g, d, f\n return f'*exp.(g)*d*f\nend\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.caltable","page":"Comrade API","title":"Comrade.caltable","text":"caltable(args...)\n\nCreates a calibration table from a set of arguments. The specific arguments depend on what calibration you are applying.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.JonesPairs","page":"Comrade API","title":"Comrade.JonesPairs","text":"struct JonesPairs{T, M1<:AbstractArray{T, 1}, M2<:AbstractArray{T, 1}}\n\nHolds the pairs of Jones matrices for the first and second station of a baseline.\n\nFields\n\nm1: Vector of jones matrices for station 1\n\nm2: Vector of jones matrices for station 2\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.GainSchema","page":"Comrade API","title":"Comrade.GainSchema","text":"GainSchema(sites, times)\n\nConstructs a schema for the gains of an observation. The sites and times correspond to the specific site and time for each gain that will be modeled.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.SegmentedJonesCache","page":"Comrade API","title":"Comrade.SegmentedJonesCache","text":"struct SegmentedJonesCache{D, S<:Comrade.ObsSegmentation, ST, Ti} <: Comrade.AbstractJonesCache\n\nHolds the ancillary information for a the design matrix cache for Jones matrices. That is, it defines the cached map that moves from model visibilities to the corrupted voltages that are measured from the telescope. This uses a segmented decomposition so that the gain at a single timestamp is the sum of the previous gains. In this formulation the gains parameters are the segmented gain offsets from timestamp to timestamp\n\nFields\n\nm1: Design matrix for the first station\n\nm2: Design matrix for the second station\n\nseg: Segmentation scheme for this cache\n\nstations: station codes\n\ntimes: times\n\n\n\n\n\n","category":"type"},{"location":"api/#Models","page":"Comrade API","title":"Models","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"For the description of the model API see VLBISkyModels.","category":"page"},{"location":"api/#Data-Types","page":"Comrade API","title":"Data Types","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.extract_table\nComrade.ComplexVisibilities\nComrade.VisibilityAmplitudes\nComrade.ClosurePhases\nComrade.LogClosureAmplitudes\nComrade.Coherencies\nComrade.baselines\nComrade.arrayconfig\nComrade.closure_phase(::Comrade.EHTVisibilityDatum, ::Comrade.EHTVisibilityDatum, ::Comrade.EHTVisibilityDatum)\nComrade.getdata\nComrade.getuv\nComrade.getuvtimefreq\nComrade.scantable\nComrade.stations\nComrade.uvpositions\nComrade.ArrayConfiguration\nComrade.ClosureConfig\nComrade.AbstractInterferometryDatum\nComrade.ArrayBaselineDatum\nComrade.EHTObservation\nComrade.EHTArrayConfiguration\nComrade.EHTCoherencyDatum\nComrade.EHTVisibilityDatum\nComrade.EHTVisibilityAmplitudeDatum\nComrade.EHTLogClosureAmplitudeDatum\nComrade.EHTClosurePhaseDatum\nComrade.Scan\nComrade.ScanTable","category":"page"},{"location":"api/#Comrade.extract_table","page":"Comrade API","title":"Comrade.extract_table","text":"extract_table(obs, dataproducts::VLBIDataProducts)\n\nExtract an Comrade.EHTObservation table of data products dataproducts. To pass additional keyword for the data products you can pass them as keyword arguments to the data product type. 
For a list of potential data products see subtypes(Comrade.VLBIDataProducts).\n\nExample\n\njulia> dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3.0), ClosurePhases(;snrcut=3.0, cut_trivial=true))\njulia> dcoh = extract_table(obs, Coherencies())\njulia> dvis = extract_table(obs, VisibilityAmplitudes())\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.ComplexVisibilities","page":"Comrade API","title":"Comrade.ComplexVisibilities","text":"ComplexVisibilities(;kwargs...)\n\nType to specify to extract the complex visibilities table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nAny keyword arguments are ignored for now. Use eht-imaging directly to modify the data.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.VisibilityAmplitudes","page":"Comrade API","title":"Comrade.VisibilityAmplitudes","text":"VisibilityAmplitudes(;kwargs...)\n\nType to specify to extract the visibility amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nFor a list of potential keyword arguments see eht-imaging and add_amp command for obsdata.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ClosurePhases","page":"Comrade API","title":"Comrade.ClosurePhases","text":"ClosurePhases(;kwargs...)\n\nType to specify to extract the closure phase table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nFor a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:\n\ncount: How the closures are formed, the available options are \"min-correct\", \"min\", \"max\"\n\nWarning\n\nThe count keyword argument is treated specially in Comrade. The default option is \"min-correct\" and should almost always be used. This option constructs a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. However, the current ehtim count=\"min\" option is broken and does not construct proper minimal sets of closure quantities if the array isn't fully connected.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.LogClosureAmplitudes","page":"Comrade API","title":"Comrade.LogClosureAmplitudes","text":"LogClosureAmplitudes(;kwargs...)\n\nType to specify to extract the log closure amplitudes table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nFor a list of potential keyword arguments see eht-imaging and add_cphase command for obsdata. In addition note we have changed the following:\n\ncount: How the closures are formed, the available options are \"min-correct\", \"min\", \"max\"\n\nReturns an EHTObservation with log-closure amp. datums\n\nWarning\n\nThe count keyword argument is treated specially in Comrade. The default option is \"min-correct\" and should almost always be used. This option constructs a minimal set of closure phases that is valid even when the array isn't fully connected. For testing and legacy reasons the other ehtim count options are also included. 
However, the current ehtim count=\"min\" option is broken and does not construct proper minimal sets of closure quantities if the array isn't fully connected.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.Coherencies","page":"Comrade API","title":"Comrade.Coherencies","text":"Coherencies(;kwargs...)\n\nType to specify to extract the coherency matrices table in the extract_table function. Optional keywords are passed through extract_table to specify additional options.\n\nSpecial keywords for eht-imaging with Pyehtim.jl\n\nAny keyword arguments are ignored for now. Use eht-imaging directly to modify the data.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.baselines","page":"Comrade API","title":"Comrade.baselines","text":"baselines(CP::EHTClosurePhaseDatum)\n\nReturns the baselines used for a single closure phase datum\n\n\n\n\n\nbaselines(CP::EHTLogClosureAmplitudeDatum)\n\nReturns the baselines used for a single log closure amplitude datum\n\n\n\n\n\nbaselines(scan::Scan)\n\nReturn the baselines for each datum in a scan\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.arrayconfig","page":"Comrade API","title":"Comrade.arrayconfig","text":"arrayconfig(vis)\n\n\nExtract the array configuration from an EHT observation.\n\n\n\n\n\n","category":"function"},{"location":"api/#ComradeBase.closure_phase-Tuple{Comrade.EHTVisibilityDatum, Comrade.EHTVisibilityDatum, Comrade.EHTVisibilityDatum}","page":"Comrade API","title":"ComradeBase.closure_phase","text":"closure_phase(D1::EHTVisibilityDatum,\n D2::EHTVisibilityDatum,\n D3::EHTVisibilityDatum\n )\n\nComputes the closure phase of the three visibility datums.\n\nNotes\n\nWe currently use the high SNR Gaussian error approximation for the closure phase. In the future we may use the moment matching from Monte Carlo sampling.\n\n\n\n\n\n","category":"method"},{"location":"api/#Comrade.getdata","page":"Comrade API","title":"Comrade.getdata","text":"getdata(obs::EHTObservation, s::Symbol)\n\nPass-through function that gets the array of s from the EHTObservation. For example, say you want the times of all measurements; then\n\ngetdata(obs, :time)\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.getuv","page":"Comrade API","title":"Comrade.getuv","text":"getuv\n\nGet the u, v positions of the array.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.getuvtimefreq","page":"Comrade API","title":"Comrade.getuvtimefreq","text":"getuvtimefreq(ac)\n\n\nGet the u, v, time, freq of the array as a tuple.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.scantable","page":"Comrade API","title":"Comrade.scantable","text":"scantable(obs::EHTObservation)\n\nReorganizes the observation into a table of scans, where scans are defined by unique timestamps. To access the data you can use scalar indexing.\n\nExample\n\nst = scantable(obs)\n# Grab the first scan\nscan1 = st[1]\n\n# Access the detections in the scan\nscan1[1]\n\n# grab e.g. the baselines\nscan1[:baseline]\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.stations","page":"Comrade API","title":"Comrade.stations","text":"stations(d::EHTObservation)\n\nGet all the stations in an observation. 
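As a quick, hedged usage sketch of the accessors documented above (obs is assumed to be an EHTObservation loaded as in the tutorials):

```julia
sts = stations(obs)     # station symbols, e.g. [:AA, :AP, :AZ, ...]
st  = scantable(obs)    # reorganize the observation into scans
bls = baselines(st[1])  # baselines present in the first scan
```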
The result is a vector of symbols.\n\n\n\n\n\nstations(g::CalTable)\n\nReturn the stations in the calibration table\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.uvpositions","page":"Comrade API","title":"Comrade.uvpositions","text":"uvpositions(datum::AbstractVisibilityDatum)\n\nGet the uvp positions of an inferometric datum.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.ArrayConfiguration","page":"Comrade API","title":"Comrade.ArrayConfiguration","text":"abstract type ArrayConfiguration\n\nThis defined the abstract type for an array configuration. Namely, baseline times, SEFD's, bandwidth, observation frequencies, etc.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ClosureConfig","page":"Comrade API","title":"Comrade.ClosureConfig","text":"struct ClosureConfig{A, D} <: Comrade.ArrayConfiguration\n\nArray config file for closure quantities. This stores the design matrix designmat that transforms from visibilties to closure products.\n\nFields\n\nac: Array configuration for visibilities\ndesignmat: Closure design matrix\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.AbstractInterferometryDatum","page":"Comrade API","title":"Comrade.AbstractInterferometryDatum","text":"abstract type AbstractInterferometryDatum{T}\n\nAn abstract type for all VLBI interfermetry data types. See Comrade.EHTVisibilityDatum for an example.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ArrayBaselineDatum","page":"Comrade API","title":"Comrade.ArrayBaselineDatum","text":"struct ArrayBaselineDatum{T, E, V}\n\nA single datum of an ArrayConfiguration\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTObservation","page":"Comrade API","title":"Comrade.EHTObservation","text":"struct EHTObservation{F, T<:Comrade.AbstractInterferometryDatum{F}, S<:(StructArrays.StructArray{T<:Comrade.AbstractInterferometryDatum{F}}), A, N} <: Comrade.Observation{F}\n\nThe main data product type in Comrade this stores the data which can be a StructArray of any AbstractInterferometryDatum type.\n\nFields\n\ndata: StructArray of data productts\n\nconfig: Array config holds ancillary information about array\n\nmjd: modified julia date of the observation\n\nra: RA of the observation in J2000 (deg)\n\ndec: DEC of the observation in J2000 (deg)\n\nbandwidth: bandwidth of the observation (Hz)\n\nsource: Common source name\n\ntimetype: Time zone used.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTArrayConfiguration","page":"Comrade API","title":"Comrade.EHTArrayConfiguration","text":"struct EHTArrayConfiguration{F, T, S, D<:AbstractArray} <: Comrade.ArrayConfiguration\n\nStores all the non-visibility data products for an EHT array. 
This is useful when evaluating model visibilities.\n\nFields\n\nbandwidth: Observing bandwith (Hz)\n\ntarr: Telescope array file\n\nscans: Scan times\n\ndata: A struct array of ArrayBaselineDatum holding time, freq, u, v, baselines.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTCoherencyDatum","page":"Comrade API","title":"Comrade.EHTCoherencyDatum","text":"struct EHTCoherencyDatum{S, B1, B2, M<:(StaticArraysCore.SArray{Tuple{2, 2}, Complex{S}, 2}), E<:(StaticArraysCore.SArray{Tuple{2, 2}, S, 2})} <: Comrade.AbstractInterferometryDatum{S}\n\nA Datum for a single coherency matrix\n\nFields\n\nmeasurement: coherency matrix, with entries in Jy\n\nerror: visibility uncertainty matrix, with entries in Jy\n\nU: x-direction baseline length, in λ\n\nV: y-direction baseline length, in λ\n\nT: Timestamp, in hours\n\nF: Frequency, in Hz\n\nbaseline: station baseline codes\n\npolbasis: polarization basis for each station\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTVisibilityDatum","page":"Comrade API","title":"Comrade.EHTVisibilityDatum","text":"struct EHTVisibilityDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}\n\nA struct holding the information for a single measured complex visibility.\n\nFIELDS\n\nmeasurement: Complex Vis. measurement (Jy)\n\nerror: error of the complex vis (Jy)\n\nU: u position of the data point in λ\n\nV: v position of the data point in λ\n\nT: time of the data point in (Hr)\n\nF: frequency of the data point (Hz)\n\nbaseline: station baseline codes\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTVisibilityAmplitudeDatum","page":"Comrade API","title":"Comrade.EHTVisibilityAmplitudeDatum","text":"struct EHTVisibilityAmplitudeDatum{S<:Number} <: Comrade.AbstractVisibilityDatum{S<:Number}\n\nA struct holding the information for a single measured visibility amplitude.\n\nFIELDS\n\nmeasurement: amplitude (Jy)\n\nerror: error of the visibility amplitude (Jy)\n\nU: u position of the data point in λ\n\nV: v position of the data point in λ\n\nT: time of the data point in (Hr)\n\nF: frequency of the data point (Hz)\n\nbaseline: station baseline codes\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTLogClosureAmplitudeDatum","page":"Comrade API","title":"Comrade.EHTLogClosureAmplitudeDatum","text":"struct EHTLogClosureAmplitudeDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}\n\nA Datum for a single log closure amplitude.\n\n\n\nmeasurement: log-closure amplitude\n\nerror: log-closure amplitude error in the high-snr limit\n\nU1: u (λ) of first station\n\nV1: v (λ) of first station\n\nU2: u (λ) of second station\n\nV2: v (λ) of second station\n\nU3: u (λ) of third station\n\nV3: v (λ) of third station\n\nU4: u (λ) of fourth station\n\nV4: v (λ) of fourth station\n\nT: Measured time of closure phase in hours\n\nF: Measured frequency of closure phase in Hz\n\nquadrangle: station codes for the quadrangle\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.EHTClosurePhaseDatum","page":"Comrade API","title":"Comrade.EHTClosurePhaseDatum","text":"struct EHTClosurePhaseDatum{S<:Number} <: Comrade.ClosureProducts{S<:Number}\n\nA Datum for a single closure phase.\n\nFields\n\nmeasurement: closure phase (rad)\n\nerror: error of the closure phase assuming the high-snr limit\n\nU1: u (λ) of first station\n\nV1: v (λ) of first station\n\nU2: u (λ) of second station\n\nV2: v (λ) of second station\n\nU3: u (λ) of third station\n\nV3: v (λ) of third station\n\nT: Measured time of closure phase in hours\n\nF: Measured 
frequency of closure phase in Hz\n\ntriangle: station baselines used\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.Scan","page":"Comrade API","title":"Comrade.Scan","text":"struct Scan{T, I, S}\n\nComposite type that holds information for a single scan of the telescope.\n\nFields\n\ntime: Scan time\n\nindex: Scan indices which are (scan index, data start index, data end index)\n\nscan: Scan data usually a StructArray of a <:AbstractVisibilityDatum\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.ScanTable","page":"Comrade API","title":"Comrade.ScanTable","text":"struct ScanTable{O<:Union{Comrade.ArrayConfiguration, Comrade.Observation}, T, S}\n\nWraps EHTObservation in a table that separates the observation into scans. This implements the table interface. You can access scans by directly indexing into the table. This will create a view into the table not copying the data.\n\nExample\n\njulia> st = scantable(obs)\njulia> st[begin] # grab first scan\njulia> st[end] # grab last scan\n\n\n\n\n\n","category":"type"},{"location":"api/#Model-Cache","page":"Comrade API","title":"Model Cache","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"VLBISkyModels.NFFTAlg(::Comrade.EHTObservation)\nVLBISkyModels.NFFTAlg(::Comrade.ArrayConfiguration)\nVLBISkyModels.DFTAlg(::Comrade.EHTObservation)\nVLBISkyModels.DFTAlg(::Comrade.ArrayConfiguration)","category":"page"},{"location":"api/#VLBISkyModels.NFFTAlg-Tuple{Comrade.EHTObservation}","page":"Comrade API","title":"VLBISkyModels.NFFTAlg","text":"NFFTAlg(obs::EHTObservation; kwargs...)\n\nCreate an algorithm object using the non-unform Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.\n\nThe possible optional arguments are given in the NFFTAlg struct.\n\n\n\n\n\n","category":"method"},{"location":"api/#VLBISkyModels.NFFTAlg-Tuple{Comrade.ArrayConfiguration}","page":"Comrade API","title":"VLBISkyModels.NFFTAlg","text":"NFFTAlg(ac::ArrayConfiguration; kwargs...)\n\nCreate an algorithm object using the non-unform Fourier transform object from the array configuration ac. This will extract the uv positions from the observation to allow for a more efficient FT cache.\n\nThe optional arguments are: padfac specifies how much to pad the image by, and m is an internal variable for NFFT.jl.\n\n\n\n\n\n","category":"method"},{"location":"api/#VLBISkyModels.DFTAlg-Tuple{Comrade.EHTObservation}","page":"Comrade API","title":"VLBISkyModels.DFTAlg","text":"DFTAlg(obs::EHTObservation)\n\nCreate an algorithm object using the direct Fourier transform object from the observation obs. This will extract the uv positions from the observation to allow for a more efficient FT cache.\n\n\n\n\n\n","category":"method"},{"location":"api/#VLBISkyModels.DFTAlg-Tuple{Comrade.ArrayConfiguration}","page":"Comrade API","title":"VLBISkyModels.DFTAlg","text":"DFTAlg(ac::ArrayConfiguration)\n\nCreate an algorithm object using the direct Fourier transform object from the array configuration ac. 
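For context, here is a hedged sketch of how such an algorithm object is typically combined with an image buffer to build a Fourier-transform cache; it mirrors the hybrid-imaging tutorial later in these docs, and dvis is an assumed EHTObservation.

```julia
# Sketch: NFFT-based cache for a 64x64 image with a 150 μas field of view.
grid   = imagepixels(μas2rad(150.0), μas2rad(150.0), 64, 64)
buffer = IntensityMap(zeros(64, 64), grid)
cache  = create_cache(NFFTAlg(dvis), buffer, BSplinePulse{3}())
```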
This will extract the uv positions from the observation to allow for a more efficient FT cache.\n\n\n\n\n\n","category":"method"},{"location":"api/#Bayesian-Tools","page":"Comrade API","title":"Bayesian Tools","text":"","category":"section"},{"location":"api/#Posterior-Constructions","page":"Comrade API","title":"Posterior Constructions","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.ascube\nComrade.asflat\nComrade.flatten\nComrade.inverse\nComrade.prior_sample\nComrade.likelihood\nComrade.simulate_observation\nComrade.dataproducts\nComrade.skymodel\nComrade.instrumentmodel\nComrade.vlbimodel\nComrade.sample(::Posterior)\nComrade.transform\nComrade.MultiRadioLikelihood\nComrade.Posterior\nComrade.TransformedPosterior\nComrade.RadioLikelihood\nComrade.IsFlat\nComrade.IsCube","category":"page"},{"location":"api/#HypercubeTransform.ascube","page":"Comrade API","title":"HypercubeTransform.ascube","text":"ascube(post::Posterior)\n\nConstruct a flattened version of the posterior where the parameters are transformed to live in (0, 1), i.e. the unit hypercube.\n\nThis returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the terms log-jacobian terms of the transformation.\n\nExample\n\njulia> tpost = ascube(post)\njulia> x0 = prior_sample(tpost)\njulia> logdensityof(tpost, x0)\n\nNotes\n\nThis is the transform that should be used if using typical NestedSampling methods, i.e. ComradeNested. For the transformation to unconstrained space see asflat\n\n\n\n\n\n","category":"function"},{"location":"api/#HypercubeTransform.asflat","page":"Comrade API","title":"HypercubeTransform.asflat","text":"asflat(post::Posterior)\n\nConstruct a flattened version of the posterior where the parameters are transformed to live in (-∞, ∞).\n\nThis returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the terms log-jacobian terms of the transformation.\n\nExample\n\njulia> tpost = ascube(post)\njulia> x0 = prior_sample(tpost)\njulia> logdensityof(tpost, x0)\n\nNotes\n\nThis is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube\n\n\n\n\n\n","category":"function"},{"location":"api/#ParameterHandling.flatten","page":"Comrade API","title":"ParameterHandling.flatten","text":"flatten(post::Posterior)\n\nConstruct a flattened version of the posterior but do not transform to any space, i.e. use the support specified by the prior.\n\nThis returns a TransformedPosterior that obeys the DensityInterface and can be evaluated in the usual manner, i.e. logdensityof. Note that the transformed posterior automatically includes the terms log-jacobian terms of the transformation.\n\nExample\n\njulia> tpost = flatten(post)\njulia> x0 = prior_sample(tpost)\njulia> logdensityof(tpost, x0)\n\nNotes\n\nThis is the transform that should be used if using typical MCMC methods, i.e. ComradeAHMC. For the transformation to the unit hypercube see ascube\n\n\n\n\n\n","category":"function"},{"location":"api/#TransformVariables.inverse","page":"Comrade API","title":"TransformVariables.inverse","text":"inverse(posterior::TransformedPosterior, x)\n\nTransforms the value y from parameter space to the transformed space (e.g. 
unit hypercube if using ascube).\n\nFor the inverse transform see transform\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.prior_sample","page":"Comrade API","title":"Comrade.prior_sample","text":"prior_sample([rng::AbstractRandom], post::Posterior, args...)\n\nSamples the prior distribution from the posterior. The args... are forwarded to the Base.rand method.\n\n\n\n\n\nprior_sample([rng::AbstractRandom], post::Posterior)\n\nReturns a single sample from the prior distribution.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.likelihood","page":"Comrade API","title":"Comrade.likelihood","text":"likelihood(d::ConditionedLikelihood, μ)\n\nReturns the likelihood of the model, with parameters μ. That is, we return the distribution of the data given the model parameters μ. This is an actual probability distribution.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.simulate_observation","page":"Comrade API","title":"Comrade.simulate_observation","text":"simulate_observation([rng::Random.AbstractRNG], post::Posterior, θ)\n\nCreate a simulated observation using the posterior and its data post using the parameter values θ. In Bayesian terminology this is a draw from the posterior predictive distribution.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.dataproducts","page":"Comrade API","title":"Comrade.dataproducts","text":"dataproducts(d::RadioLikelihood)\n\nReturns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.\n\n\n\n\n\ndataproducts(d::Posterior)\n\nReturns the data products you are fitting as a tuple. The order of the tuple corresponds to the order of the dataproducts argument in RadioLikelihood.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.skymodel","page":"Comrade API","title":"Comrade.skymodel","text":"skymodel(post::RadioLikelihood, θ)\n\nReturns the sky model or image of a posterior using the parameter values θ\n\n\n\n\n\nskymodel(post::Posterior, θ)\n\nReturns the sky model or image of a posterior using the parameter values θ\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.instrumentmodel","page":"Comrade API","title":"Comrade.instrumentmodel","text":"instrumentmodel(lklhd::RadioLikelihood, θ)\n\nReturns the instrument model of a likelihood using the parameter values θ\n\n\n\n\n\ninstrumentmodel(post::Posterior, θ)\n\nReturns the instrument model of a posterior using the parameter values θ\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.vlbimodel","page":"Comrade API","title":"Comrade.vlbimodel","text":"vlbimodel(post::Posterior, θ)\n\nReturns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ\n\n\n\n\n\nvlbimodel(post::Posterior, θ)\n\nReturns the instrument model and sky model as a VLBIModel of a posterior using the parameter values θ\n\n\n\n\n\n","category":"function"},{"location":"api/#StatsBase.sample-Tuple{Posterior}","page":"Comrade API","title":"StatsBase.sample","text":"sample(post::Posterior, sampler::S, args...; init_params=nothing, kwargs...)\n\nSample a posterior post using the sampler. 
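For example, prior_sample and simulate_observation (documented above) can be combined to draw a random parameter set and generate a posterior-predictive data set; the same draw can also serve as the init_params starting location described next. Here post is an assumed Posterior.

```julia
x0      = prior_sample(post)              # random draw from the prior
obs_sim = simulate_observation(post, x0)  # simulated observation at those parameters
```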
You can optionally pass the starting location of the sampler using init_params, otherwise a random draw from the prior will be used.\n\n\n\n\n\n","category":"method"},{"location":"api/#TransformVariables.transform","page":"Comrade API","title":"TransformVariables.transform","text":"transform(posterior::TransformedPosterior, x)\n\nTransforms the value x from the transformed space (e.g. unit hypercube if using ascube) to parameter space which is usually encoded as a NamedTuple.\n\nFor the inverse transform see inverse\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.MultiRadioLikelihood","page":"Comrade API","title":"Comrade.MultiRadioLikelihood","text":"MultiRadioLikelihood(lklhd1, lklhd2, ...)\n\nCombines multiple likelihoods into one object that is useful for fitting multiple days/frequencies.\n\njulia> lklhd1 = RadioLikelihood(dcphase1, dlcamp1)\njulia> lklhd2 = RadioLikelihood(dcphase2, dlcamp2)\njulia> MultiRadioLikelihood(lklhd1, lklhd2)\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.Posterior","page":"Comrade API","title":"Comrade.Posterior","text":"Posterior(lklhd, prior)\n\nCreates a Posterior density that obeys DensityInterface. The lklhd object is expected to be a VLBI likelihood, for instance one created using RadioLikelihood, and prior is the prior distribution over the model parameters.\n\nNotes\n\nSince this function obeys DensityInterface you can evaluate it with\n\njulia> ℓ = logdensityof(post)\njulia> ℓ(x)\n\nor using the 2-argument version directly\n\njulia> logdensityof(post, x)\n\nwhere post::Posterior.\n\nTo generate random draws from the prior see the prior_sample function.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.TransformedPosterior","page":"Comrade API","title":"Comrade.TransformedPosterior","text":"struct TransformedPosterior{P<:Posterior, T} <: Comrade.AbstractPosterior\n\nA transformed version of a Posterior object. This is an internal type that an end user shouldn't have to directly construct. To construct a transformed posterior see the asflat, ascube, and flatten docstrings.\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.RadioLikelihood","page":"Comrade API","title":"Comrade.RadioLikelihood","text":"RadioLikelihood(skymodel, instrumentmodel, dataproducts::EHTObservation...;\n skymeta=nothing,\n instrumentmeta=nothing)\n\nCreates a RadioLikelihood using the skymodel, its related metadata skymeta, the instrumentmodel, and its metadata instrumentmeta. Each model is a function that converts the parameters θ into a Comrade AbstractModel, which can be used to compute visibilities; the corresponding metadata holds whatever additional information the model function needs.\n\nWarning\n\nThe model itself must be a two-argument function where the first argument is the set of model parameters and the second is a container that holds all the additional information needed to construct the model. 
An example of this is when the model needs some precomputed cache to define the model.\n\nExample\n\ndlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(), ClosurePhases())\ncache = create_cache(NFFTAlg(dlcamp), IntensityMap(zeros(128,128), μas2rad(100.0), μas2rad(100.0)))\n\nfunction skymodel(θ, metadata)\n (; r, a) = θ\n (; cache) = metadata\n m = stretched(ExtendedRing(a), r, r)\n return modelimage(m, metadata.cache)\nend\n\nfunction instrumentmodel(g, metadata)\n (;lg, gp) = g\n (;gcache) = metadata\n jonesStokes(lg.*exp.(1im.*gp), gcache)\nend\n\nprior = (\n r = Uniform(μas2rad(10.0), μas2rad(40.0)),\n a = Uniform(0.1, 5.0)\n )\n\nRadioLikelihood(skymodel, instrumentmodel, dataproducts::EHTObservation...;\n skymeta=(;cache,),\n instrumentmeta=(;gcache))\n\n\n\n\n\nRadioLikelihood(skymodel, dataproducts::EHTObservation...; skymeta=nothing)\n\nForms a radio likelihood from a set of data products using only a sky model. This intrinsically assumes that the instrument model is not required since it is perfect. This is useful when fitting closure quantities which are independent of the instrument.\n\nIf you want to form a likelihood from multiple arrays, such as when fitting different wavelengths or days, you can combine them using MultiRadioLikelihood.\n\nExample\n\njulia> RadioLikelihood(skymodel, dcphase, dlcamp)\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.IsFlat","page":"Comrade API","title":"Comrade.IsFlat","text":"struct IsFlat\n\nSpecifies that the sampling algorithm usually expects an unconstrained transform\n\n\n\n\n\n","category":"type"},{"location":"api/#Comrade.IsCube","page":"Comrade API","title":"Comrade.IsCube","text":"struct IsCube\n\nSpecifies that the sampling algorithm usually expects a hypercube transform\n\n\n\n\n\n","category":"type"},{"location":"api/#Sampler-Tools","page":"Comrade API","title":"Sampler Tools","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.samplertype","category":"page"},{"location":"api/#Comrade.samplertype","page":"Comrade API","title":"Comrade.samplertype","text":"samplertype(::Type)\n\nSampler type specifies whether to use a unit hypercube or unconstrained transformation.\n\n\n\n\n\n","category":"function"},{"location":"api/#Misc","page":"Comrade API","title":"Misc","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.station_tuple\nComrade.dirty_image\nComrade.dirty_beam\nComrade.beamsize","category":"page"},{"location":"api/#Comrade.station_tuple","page":"Comrade API","title":"Comrade.station_tuple","text":"station_tuple(stations, default; reference=nothing, kwargs...)\nstation_tuple(obs::EHTObservation, default; reference=nothing, kwargs...)\n\nConvenience function that will construct a NamedTuple of objects whose names are the stations in the observation obs or explicitly in the argument stations. The NamedTuple will be filled with default if no kwargs are defined; otherwise each kwarg (key, value) pair denotes a station and value pair.\n\nOptionally the user can specify a reference station that will be dropped from the tuple. 
This is useful for selecting a reference station for gain phases\n\nExamples\n\njulia> stations = (:AA, :AP, :LM, :PV)\njulia> station_tuple(stations, ScanSeg())\n(AA = ScanSeg(), AP = ScanSeg(), LM = ScanSeg(), PV = ScanSeg())\njulia> station_tuple(stations, ScanSeg(); AA = FixedSeg(1.0))\n(AA = FixedSeg(1.0), AP = ScanSeg(), LM = ScanSeg(), PV = ScanSeg())\njulia> station_tuple(stations, ScanSeg(); AA = FixedSeg(1.0), PV = TrackSeg())\n(AA = FixedSeg(1.0), AP = ScanSeg(), LM = ScanSeg(), PV = TrackSeg())\njulia> station_tuple(stations, Normal(0.0, 0.1); reference=:AA, LM = Normal(0.0, 1.0))\n(AP = Normal(0.0, 0.1), LM = Normal(0.0, 1.0), PV = Normal(0.0, 0.1))\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.dirty_image","page":"Comrade API","title":"Comrade.dirty_image","text":"dirty_image(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T\n\nComputes the dirty image of the complex visibilities assuming a field of view of fov and number of pixels npix using the complex visibilities found in the observation obs.\n\nThe dirty image is the inverse Fourier transform of the measured visibilties assuming every other visibility is zero.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.dirty_beam","page":"Comrade API","title":"Comrade.dirty_beam","text":"dirty_beam(fov::Real, npix::Int, obs::EHTObservation{T,<:EHTVisibilityDatum}) where T\n\nComputes the dirty beam of the complex visibilities assuming a field of view of fov and number of pixels npix using baseline coverage found in obs.\n\nThe dirty beam is the inverse Fourier transform of the (u,v) coverage assuming every visibility is unity and everywhere else is zero.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.beamsize","page":"Comrade API","title":"Comrade.beamsize","text":"beamsize(ac::ArrayConfiguration)\n\nCalculate the approximate beam size of the array ac as the inverse of the longest baseline distance.\n\n\n\n\n\nbeamsize(obs::EHTObservation)\n\nCalculate the approximate beam size of the observation obs as the inverse of the longest baseline distance.\n\n\n\n\n\n","category":"function"},{"location":"api/#Internal-(Not-Public-API)","page":"Comrade API","title":"Internal (Not Public API)","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.extract_FRs\nComradeBase._visibilities!\nComradeBase._visibilities","category":"page"},{"location":"api/#Comrade.extract_FRs","page":"Comrade API","title":"Comrade.extract_FRs","text":"extract_FRs\n\nExtracts the feed rotation Jones matrices (returned as a JonesPair) from an EHT observation obs.\n\nWarning\n\neht-imaging can sometimes pre-rotate the coherency matrices. As a result the field rotation can sometimes be applied twice. 
To compensate for this we have added a ehtim_fr_convention which will fix this.\n\n\n\n\n\n","category":"function"},{"location":"api/#ComradeBase._visibilities!","page":"Comrade API","title":"ComradeBase._visibilities!","text":"_visibilities!(model::AbstractModel, args...)\n\nInternal method used for trait dispatch and unpacking of args arguments in visibilities!\n\nwarn: Warn\nNot part of the public API so it may change at any moment.\n\n\n\n\n\n","category":"function"},{"location":"api/#ComradeBase._visibilities","page":"Comrade API","title":"ComradeBase._visibilities","text":"_visibilities(model::AbstractModel, args...)\n\nInternal method used for trait dispatch and unpacking of args arguments in visibilities\n\nwarn: Warn\nNot part of the public API so it may change at any moment.\n\n\n\n\n\n","category":"function"},{"location":"api/#eht-imaging-interface-(Internal)","page":"Comrade API","title":"eht-imaging interface (Internal)","text":"","category":"section"},{"location":"api/","page":"Comrade API","title":"Comrade API","text":"Comrade.extract_amp\nComrade.extract_cphase\nComrade.extract_lcamp\nComrade.extract_vis\nComrade.extract_coherency","category":"page"},{"location":"api/#Comrade.extract_amp","page":"Comrade API","title":"Comrade.extract_amp","text":"extract_amp(obs; kwargs...)\n\nExtracts the visibility amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.extract_cphase","page":"Comrade API","title":"Comrade.extract_cphase","text":"extract_cphase(obs; kwargs...)\n\nExtracts the closure phases from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.extract_lcamp","page":"Comrade API","title":"Comrade.extract_lcamp","text":"extract_lcamp(obs; kwargs...)\n\nExtracts the log-closure amplitudes from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.extract_vis","page":"Comrade API","title":"Comrade.extract_vis","text":"extract_vis(obs; kwargs...)\n\nExtracts the stokes I complex visibilities from an obs. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"api/#Comrade.extract_coherency","page":"Comrade API","title":"Comrade.extract_coherency","text":"extract_coherency(obs; kwargs...)\n\nExtracts the full coherency matrix from an observation. This is an internal method for dispatch. Only use this if interfacing Comrade with a new data type.\n\n\n\n\n\n","category":"function"},{"location":"conventions/#Conventions","page":"Conventions","title":"Conventions","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"VLBI and radio astronomy has many non-standard conventions when coming from physics. Additionally, these conventions change from telescope to telescope, often making it difficult to know what assumptions different data sets and codes are making. 
We will detail the specific conventions that Comrade adheres to.","category":"page"},{"location":"conventions/#Rotation-Convention","page":"Conventions","title":"Rotation Convention","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"We follow the standard EHT convention and rotate starting from the upper y-axis, moving in a counter-clockwise direction. ","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"note: Note\nWe still use the standard astronomy definition where the positive x-axis is to the left.","category":"page"},{"location":"conventions/#Fourier-Transform-Convention","page":"Conventions","title":"Fourier Transform Convention","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"We use the positive exponent definition of the Fourier transform to define our visibilities. That is, we assume that the visibilities measured by a perfect interferometer are given by","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"V(u, v) = \int I(x, y)\, e^{2\pi i (ux + vy)}\, dx\, dy.","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"This convention is consistent with the AIPS convention and what is used in other EHT codes, such as eht-imaging. ","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"warning: Warning\nThis is the opposite convention of what is written in the EHT papers, but it is the correct version for the released data.","category":"page"},{"location":"conventions/#Coherency-matrix-Convention","page":"Conventions","title":"Coherency matrix Convention","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"We use the factor of 2 definition when defining the coherency matrices. 
That is, the coherency matrix C is defined by","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"C_{pq} = 2\begin{pmatrix} \left\langle v_{pa} v_{qa}^*\right\rangle & \left\langle v_{pa} v_{qb}^*\right\rangle \\ \left\langle v_{pb} v_{qa}^*\right\rangle & \left\langle v_{pb} v_{qb}^*\right\rangle \end{pmatrix},","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"where v_{pa} is the voltage measured from station p and feed a.","category":"page"},{"location":"conventions/#Circular-Polarization-Conversions","page":"Conventions","title":"Circular Polarization Conversions","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"To convert from measured RL circular cross-correlation products to the Fourier transform of the Stokes parameters, we use:","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"\begin{pmatrix} \tilde{I} \\ \tilde{Q} \\ \tilde{U} \\ \tilde{V} \end{pmatrix} = \frac{1}{2} \begin{pmatrix} \left\langle RR^*\right\rangle + \left\langle LL^*\right\rangle \\ \left\langle RL^*\right\rangle + \left\langle LR^*\right\rangle \\ i\left(\left\langle LR^*\right\rangle - \left\langle RL^*\right\rangle\right) \\ \left\langle RR^*\right\rangle - \left\langle LL^*\right\rangle \end{pmatrix},","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"where e.g., \left\langle RL^*\right\rangle = 2\left\langle v_{pR} v^*_{pL}\right\rangle.","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"The inverse transformation is then:","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"C = \begin{pmatrix} \tilde{I} + \tilde{V} & \tilde{Q} + i\tilde{U} \\ \tilde{Q} - i\tilde{U} & \tilde{I} - \tilde{V} \end{pmatrix}.","category":"page"},{"location":"conventions/#Linear-Polarization-Conversions","page":"Conventions","title":"Linear Polarization Conversions","text":"","category":"section"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"To convert from measured XY linear cross-correlation products to the Fourier transform of the Stokes parameters, we use:","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"\begin{pmatrix} \tilde{I} \\ \tilde{Q} \\ \tilde{U} \\ \tilde{V} \end{pmatrix} = \frac{1}{2} \begin{pmatrix} \left\langle XX^*\right\rangle + \left\langle YY^*\right\rangle \\ \left\langle XY^*\right\rangle + \left\langle YX^*\right\rangle \\ i\left(\left\langle YX^*\right\rangle - \left\langle XY^*\right\rangle\right) \\ \left\langle XX^*\right\rangle - \left\langle YY^*\right\rangle \end{pmatrix},","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"The inverse transformation is then:","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"C = \begin{pmatrix} \tilde{I} + \tilde{Q} & \tilde{U} + i\tilde{V} \\ \tilde{U} - i\tilde{V} & \tilde{I} - \tilde{Q} \end{pmatrix},","category":"page"},{"location":"conventions/","page":"Conventions","title":"Conventions","text":"where e.g., \left\langle XY^*\right\rangle = 2\left\langle v_{pX} v^*_{pY}\right\rangle.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"EditURL = \"../../../examples/hybrid_imaging.jl\"","category":"page"},{"location":"examples/hybrid_imaging/#Hybrid-Imaging-of-a-Black-Hole","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"In this tutorial, we will use hybrid imaging to analyze the 2017 EHT data. 
By hybrid imaging, we mean decomposing the model into simple geometric models, e.g., rings and such, plus a rasterized image model to soak up the additional structure. This approach was first developed in BB20 and applied to EHT 2017 data. We will use a similar model in this tutorial.","category":"page"},{"location":"examples/hybrid_imaging/#Introduction-to-Hybrid-modeling-and-imaging","page":"Hybrid Imaging of a Black Hole","title":"Introduction to Hybrid modeling and imaging","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"The benefit of using a hybrid-based modeling approach is the effective compression of information/parameters when fitting the data. Hybrid modeling requires the user to incorporate specific knowledge of how you expect the source to look like. For instance for M87, we expect the image to be dominated by a ring-like structure. Therefore, instead of using a high-dimensional raster to recover the ring, we can use a ring model plus a raster to soak up the additional degrees of freedom. This is the approach we will take in this tutorial to analyze the April 6 2017 EHT data of M87.","category":"page"},{"location":"examples/hybrid_imaging/#Loading-the-Data","page":"Hybrid Imaging of a Black Hole","title":"Loading the Data","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"To get started we will load Comrade","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using Comrade","category":"page"},{"location":"examples/hybrid_imaging/#Load-the-Data","page":"Hybrid Imaging of a Black Hole","title":"Load the Data","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\n\nusing Pyehtim","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For reproducibility we use a stable random number genreator","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using StableRNGs\nrng = StableRNG(42)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"To download the data visit https://doi.org/10.25739/g85n-f134 To load the eht-imaging obsdata object we do:","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"obs = ehtim.obsdata.load_uvfits(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we do some minor preprocessing:","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Scan average the data since the data have been preprocessed so that the gain phases 
coherent.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"obs = scan_average(obs).add_fractional_noise(0.01)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For this tutorial we will once again fit complex visibilities since they provide the most information once the telescope/instrument model are taken into account.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"dvis = extract_table(obs, ComplexVisibilities())","category":"page"},{"location":"examples/hybrid_imaging/#Building-the-Model/Posterior","page":"Hybrid Imaging of a Black Hole","title":"Building the Model/Posterior","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we build our intensity/visibility model. That is, the model that takes in a named tuple of parameters and perhaps some metadata required to construct the model. For our model, we will use a raster or ContinuousImage model, an m-ring model, and a large asymmetric Gaussian component to model the unresolved short-baseline flux.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"function sky(θ, metadata)\n (;c, σimg, f, r, σ, τ, ξτ, ma1, mp1, ma2, mp2, fg) = θ\n (;ftot, grid, cache) = metadata\n # Form the image model\n # First transform to simplex space first applying the non-centered transform\n rast = to_simplex(CenteredLR(), σimg.*c)\n img = IntensityMap((ftot*f*(1-fg))*rast, grid)\n mimg = ContinuousImage(img, cache)\n # Form the ring model\n s1,c1 = sincos(mp1)\n s2,c2 = sincos(mp2)\n α = (ma1*c1, ma2*c2)\n β = (ma1*s1, ma2*s2)\n ring = smoothed(modify(MRing(α, β), Stretch(r, r*(1+τ)), Rotate(ξτ), Renormalize((ftot*(1-f)*(1-fg)))), σ)\n gauss = modify(Gaussian(), Stretch(μas2rad(250.0)), Renormalize(ftot*f*fg))\n # We group the geometric models together for improved efficiency. This will be\n # automated in future versions.\n return mimg + (ring + gauss)\nend","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Unlike other imaging examples (e.g., Imaging a Black Hole using only Closure Quantities) we also need to include a model for the instrument, i.e., gains as well. 
The gains will be broken into two components","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Gain amplitudes which are typically known to 10-20%, except for LMT, which has amplitudes closer to 50-100%.\nGain phases which are more difficult to constrain and can shift rapidly.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"function instrument(θ, metadata)\n (; lgamp, gphase) = θ\n (; gcache, gcachep) = metadata\n # Now form our instrument model\n gvis = exp.(lgamp)\n gphase = exp.(1im.*gphase)\n jgamp = jonesStokes(gvis, gcache)\n jgphase = jonesStokes(gphase, gcachep)\n return JonesModel(jgamp*jgphase)\nend","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Before we move on, let's go into the model function a bit. This function takes two arguments θ and metadata. The θ argument is a named tuple of parameters that are fit to the data. The metadata argument is all the ancillary information we need to construct the model. For our hybrid model, we will need two variables for the metadata, a grid that specifies the locations of the image pixels and a cache that defines the algorithm used to calculate the visibilities given the image model. This is required since ContinuousImage is most easily computed using number Fourier transforms like the NFFT or FFT. To combine the models, we use Comrade's overloaded + operators, which will combine the images such that their intensities and visibilities are added pointwise.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now let's define our metadata. First we will define the cache for the image. This is required to compute the numerical Fourier transform.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"fovxy = μas2rad(150.0)\nnpix = 32\ngrid = imagepixels(fovxy, fovxy, npix, npix)\nbuffer = IntensityMap(zeros(npix,npix), grid)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For our image, we will use the non-uniform Fourier transform (NFFTAlg) to compute the numerical FT. The last argument to the create_cache call is the image kernel or pulse defines the continuous function we convolve our image with to produce a continuous on-sky image.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"cache = create_cache(NFFTAlg(dvis), buffer, BSplinePulse{3}())","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"The next step is defining our image priors. For our raster c, we will use a Gaussian markov random field prior, with the softmax or centered log-ratio transform so that it lives on the simplex. That is, the sum of all the numbers from a Dirichlet distribution always equals unity. 
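For instance, once VLBIImagePriors (loaded just below) is available, the simplex property can be checked directly; this is only an illustrative sketch and not part of the tutorial script.

```julia
x = randn(32, 32)                # arbitrary real raster parameters
r = to_simplex(CenteredLR(), x)  # centered log-ratio transform onto the simplex
sum(r) ≈ 1.0                     # the transformed raster sums to unity
```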
First we load VLBIImagePriors which containts a large number of priors and transformations that are useful for imaging.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using VLBIImagePriors","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we form the metadata","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"skymetadata = (;ftot=1.1, grid, cache)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Second, we now construct our instrument model cache. This tells us how to map from the gains to the model visibilities. However, to construct this map, we also need to specify the observation segmentation over which we expect the gains to change. This is specified in the second argument to jonescache, and currently, there are two options","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"FixedSeg(val): Fixes the corruption to the value val for all time. This is usefule for reference stations\nScanSeg(): which forces the corruptions to only change from scan-to-scan\nTrackSeg(): which forces the corruptions to be constant over a night's observation","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For this work, we use the scan segmentation for the gain amplitudes since that is roughly the timescale we expect them to vary. For the phases we need to set a reference station for each scan to prevent a global phase offset degeneracy. To do this we select a reference station for each scan based on the SEFD of each telescope. The telescope with the lowest SEFD that is in each scan is selected. For M87 2017 this is almost always ALMA.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"gcache = jonescache(dvis, ScanSeg())\ngcachep = jonescache(dvis, ScanSeg(), autoref=SEFDReference(1.0 + 0.0im))\n\nintmetadata = (;gcache, gcachep)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"This is everything we need to form our likelihood. Note the first two arguments must be the model and then the metadata for the likelihood. The rest of the arguments are required to be Comrade.EHTObservation","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"lklhd = RadioLikelihood(sky, instrument, dvis;\n skymeta=skymetadata, instrumentmeta=intmetadata)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Part of hybrid imaging is to force a scale separation between the different model components to make them identifiable. 
To enforce this we will set the length scale of the raster component equal to the beam size of the telescope in units of pixel length, which is given by","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"beam = beamsize(dvis)\nrat = (beam/(step(grid.X)))\ncprior = GaussMarkovRandomField(10*rat, 1.0, size(grid))","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Additionally, we will fix the standard deviation of the field to unity and instead use a pseudo non-centered parameterization for the field. GaussMarkovRandomField(meanpr, 0.1*rat, 1.0, crcache)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we can construct the instrument model prior. Each station requires its own prior on both the amplitudes and phases. For the amplitudes we assume that the gains are a priori well calibrated around unit gains (or 0 log gain amplitudes) which corresponds to no instrument corruption. The gain dispersion is then set to 10% for all stations except LMT, representing that we expect 10% deviations from scan-to-scan. For LMT we let the prior expand to 100% due to the known pointing issues LMT had in 2017.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using Distributions\nusing DistributionsAD\ndistamp = station_tuple(dvis, Normal(0.0, 0.1); LM = Normal(0.0, 1.0))","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"For the phases, as mentioned above, we will use a segmented gain prior. This means that rather than the parameters being directly the gains, we fit the first gain for each site, and then the other parameters are the segmented gains compared to the previous time. To model this","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"we break the gain phase prior into two parts. The first is the prior","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"for the first observing timestamp of each site, distphase0, and the second is the prior for segmented gain ϵₜ from time i to i+1, given by distphase. For the EHT, we are dealing with pre-calibrated data, so often, the gain phase jumps from scan to scan are minor. 
As such, we can put a more informative prior on distphase.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"warning: Warning\nWe use AA (ALMA) as a reference station so we do not have to specify a gain prior for it.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"distphase = station_tuple(dvis, DiagonalVonMises(0.0, inv(π^2)))\n\nusing VLBIImagePriors","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Finally we can put form the total model prior","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"prior = NamedDist(\n c = cprior,\n # We use a strong smoothing prior since we want to limit the amount of high-frequency structure in the raster.\n σimg = truncated(Normal(0.0, 0.1); lower=0.01),\n f = Uniform(0.0, 1.0),\n r = Uniform(μas2rad(10.0), μas2rad(30.0)),\n σ = Uniform(μas2rad(0.1), μas2rad(10.0)),\n τ = truncated(Normal(0.0, 0.1); lower=0.0, upper=1.0),\n ξτ = Uniform(-π/2, π/2),\n ma1 = Uniform(0.0, 0.5),\n mp1 = Uniform(0.0, 2π),\n ma2 = Uniform(0.0, 0.5),\n mp2 = Uniform(0.0, 2π),\n fg = Uniform(0.0, 1.0),\n lgamp = CalPrior(distamp, gcache),\n gphase = CalPrior(distphase, gcachep),\n )","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"This is everything we need to specify our posterior distribution, which our is the main object of interest in image reconstructions when using Bayesian inference.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"post = Posterior(lklhd, prior)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"To sample from our prior we can do","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"xrand = prior_sample(rng, post)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"and then plot the results","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"import CairoMakie as CM\ng = imagepixels(μas2rad(150.0), μas2rad(150.0), 128, 128)\nimageviz(intensitymap(skymodel(post, xrand), g))","category":"page"},{"location":"examples/hybrid_imaging/#Reconstructing-the-Image","page":"Hybrid Imaging of a Black Hole","title":"Reconstructing the Image","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"To sample from this posterior, it is convenient to first move from our constrained parameter space to an unconstrained one (i.e., the support of the transformed posterior is (-∞, ∞)). 
This is done using the asflat function.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"tpost = asflat(post)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"We can now also find the dimension of our posterior or the number of parameters we will sample.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"warning: Warning\nThis can often be different from what you would expect. This is especially true when using angular variables, where we often artificially increase the dimension of the parameter space to make sampling easier.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"ndim = dimension(tpost)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now we optimize using LBFGS","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using ComradeOptimization\nusing OptimizationOptimJL\nusing Zygote\nf = OptimizationFunction(tpost, Optimization.AutoZygote())\nprob = Optimization.OptimizationProblem(f, prior_sample(rng, tpost), nothing)\nsol = solve(prob, LBFGS(); maxiters=5_000);\nnothing #hide","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Before we analyze our solution we first need to transform back to parameter space.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"xopt = transform(tpost, sol)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"First we will evaluate our fit by plotting the residuals","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using Plots\nresidual(vlbimodel(post, xopt), dvis, ylabel=\"Correlated Flux Residual\")","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"and now closure phases","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now these residuals look a bit high. However, it turns out this is because the MAP is typically not a great estimator and will not provide very predictive measurements of the data. 
We will show this below after sampling from the posterior.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"CM.image(g, skymodel(post, xopt), axis=(aspect=1, xreversed=true, title=\"MAP\"), colormap=:afmhot)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"We will now move directly to sampling at this point.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using ComradeAHMC\nusing Zygote\nmetric = DiagEuclideanMetric(ndim)\nchain, stats = sample(rng, post, AHMC(;metric, autodiff=Val(:Zygote)), 700; nadapts=500, init_params=xopt)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"We then remove the adaptation/warmup phase from our chain","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"chain = chain[501:end]\nstats = stats[501:end]","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"warning: Warning\nThis should be run for 2-3x more steps to properly estimate expectations of the posterior","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now lets plot the mean image and standard deviation images. To do this we first clip the first 250 MCMC steps since that is during tuning and so the posterior is not sampling from the correct stationary distribution.","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"using StatsBase\nmsamples = skymodel.(Ref(post), chain[begin:2:end]);\nnothing #hide","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"The mean image is then given by","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"imgs = intensitymap.(msamples, fovxy, fovxy, 128, 128)\nimageviz(mean(imgs), colormap=:afmhot)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"imageviz(std(imgs), colormap=:batlow)","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"We can also split up the model into its components and analyze each separately","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"comp = Comrade.components.(msamples)\nring_samples = getindex.(comp, 2)\nrast_samples = first.(comp)\nring_imgs = intensitymap.(ring_samples, fovxy, fovxy, 128, 128)\nrast_imgs = intensitymap.(rast_samples, fovxy, fovxy, 128, 128)\n\nring_mean, ring_std = mean_and_std(ring_imgs)\nrast_mean, rast_std = mean_and_std(rast_imgs)\n\nfig = CM.Figure(; resolution=(800, 800))\naxes = [CM.Axis(fig[i, j], xreversed=true, aspect=CM.DataAspect()) for i in 1:2, j in 
1:2]\nCM.image!(axes[1,1], ring_mean, colormap=:afmhot); axes[1,1].title = \"Ring Mean\"\nCM.image!(axes[1,2], ring_std, colormap=:afmhot); axes[1,2].title = \"Ring Std. Dev.\"\nCM.image!(axes[2,1], rast_mean, colormap=:afmhot); axes[2,1].title = \"Rast Mean\"\nCM.image!(axes[2,2], rast_std, colormap=:afmhot); axes[2,2].title = \"Rast Std. Dev.\"\nfig","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Finally, let's take a look at some of the ring parameters","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"figd = CM.Figure(;resolution=(900, 600))\np1 = CM.density(figd[1,1], rad2μas(chain.r)*2, axis=(xlabel=\"Ring Diameter (μas)\",))\np2 = CM.density(figd[1,2], rad2μas(chain.σ)*2*sqrt(2*log(2)), axis=(xlabel=\"Ring FWHM (μas)\",))\np3 = CM.density(figd[1,3], -rad2deg.(chain.mp1) .+ 360.0, axis=(xlabel = \"Ring PA (deg) E of N\",))\np4 = CM.density(figd[2,1], 2*chain.ma1, axis=(xlabel=\"Brightness asymmetry\",))\np5 = CM.density(figd[2,2], 1 .- chain.f, axis=(xlabel=\"Ring flux fraction\",))\nfigd","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Now let's check the residuals using draws from the posterior","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"p = plot();\nfor s in sample(chain, 10)\n residual!(p, vlbimodel(post, s), dvis)\nend\np","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"And everything looks pretty good! Now comes the hard part: interpreting the results...","category":"page"},{"location":"examples/hybrid_imaging/#Computing-information","page":"Hybrid Imaging of a Black Hole","title":"Computing information","text":"","category":"section"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"Julia Version 1.8.5\nCommit 17cfb8e65ea (2023-01-08 06:45 UTC)\nPlatform Info:\n OS: Linux (x86_64-linux-gnu)\n CPU: 32 × AMD Ryzen 9 7950X 16-Core Processor\n WORD_SIZE: 64\n LIBM: libopenlibm\n LLVM: libLLVM-13.0.1 (ORCJIT, znver3)\n Threads: 1 on 32 virtual cores\nEnvironment:\n JULIA_EDITOR = code\n JULIA_NUM_THREADS = 1","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"","category":"page"},{"location":"examples/hybrid_imaging/","page":"Hybrid Imaging of a Black Hole","title":"Hybrid Imaging of a Black Hole","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"EditURL = \"../../../examples/data.jl\"","category":"page"},{"location":"examples/data/#Loading-Data-into-Comrade","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"","category":"section"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"The VLBI field does not have a standardized data format, and the EHT uses a particular uvfits format similar to the optical interferometry oifits format. 
As a result, we reuse the excellent eht-imaging package to load data into Comrade.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"Once the data is loaded, we then convert the data into the tabular format Comrade expects. Note that this may change to a Julia package as the Julia radio astronomy group grows.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"To get started, we will load Comrade and Plots to enable visualizations of the data","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"using Comrade\n\nusing Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\n\n\nusing Plots","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"We also load Pyehtim since it loads eht-imaging into Julia using PythonCall and exports the variable ehtim","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"using Pyehtim","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"To load the data we will use eht-imaging. We will use the 2017 public M87 data which can be downloaded from cyverse","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"obseht = ehtim.obsdata.load_uvfits(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"Now we will average the data over telescope scans. 
Note that the EHT data has been pre-calibrated so this averaging doesn't induce large coherence losses.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"obs = Pyehtim.scan_average(obseht)","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"warning: Warning\nWe use a custom scan-averaging function to ensure that the scan-times are homogenized.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"We can now extract data products that Comrade can use","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"vis = extract_table(obs, ComplexVisibilities()) #complex visibilites\namp = extract_table(obs, VisibilityAmplitudes()) # visibility amplitudes\ncphase = extract_table(obs, ClosurePhases(; snrcut=3.0)) # extract minimal set of closure phases\nlcamp = extract_table(obs, LogClosureAmplitudes(; snrcut=3.0)) # extract minimal set of log-closure amplitudes","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"For polarization we first load the data in the cirular polarization basis Additionally, we load the array table at the same time to load the telescope mounts.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"obseht = Pyehtim.load_uvfits_and_array(\n joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/polarized_gaussian_all_corruptions.uvfits\"),\n joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/array.txt\"),\n polrep=\"circ\"\n )\nobs = Pyehtim.scan_average(obseht)\ncoh = extract_table(obs, Coherencies())","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"warning: Warning\nAlways use our extract_cphase and extract_lcamp functions to find the closures eht-imaging will sometimes incorrectly calculate a non-redundant set of closures.","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"We can also recover the array used in the observation using","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"ac = arrayconfig(vis)\nplot(ac) # Plot the baseline coverage","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"To plot the data we just call","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"l = @layout [a b; c d]\npv = plot(vis)\npa = plot(amp)\npcp = plot(cphase)\nplc = plot(lcamp)\n\nplot(pv, pa, pcp, plc; layout=l)","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"And also the coherency matrices","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"plot(coh)","category":"page"},{"location":"examples/data/","page":"Loading Data into Comrade","title":"Loading Data into Comrade","text":"","category":"page"},{"location":"examples/data/","page":"Loading Data into 
Comrade","title":"Loading Data into Comrade","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"EditURL = \"../../../examples/imaging_vis.jl\"","category":"page"},{"location":"examples/imaging_vis/#Stokes-I-Simultaneous-Image-and-Instrument-Modeling","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"","category":"section"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"In this tutorial, we will create a preliminary reconstruction of the 2017 M87 data on April 6 by simultaneously creating an image and model for the instrument. By instrument model, we mean something akin to self-calibration in traditional VLBI imaging terminology. However, unlike traditional self-cal, we will at each point in our parameter space effectively explore the possible self-cal solutions. This will allow us to constrain and marginalize over the instrument effects, such as time variable gains.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To get started we load Comrade.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Comrade\n\n\nusing Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Pyehtim\nusing LinearAlgebra","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"For reproducibility we use a stable random number genreator","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using StableRNGs\nrng = StableRNG(42)","category":"page"},{"location":"examples/imaging_vis/#Load-the-Data","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Load the Data","text":"","category":"section"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To download the data visit https://doi.org/10.25739/g85n-f134 First we will load our data:","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"obs = ehtim.obsdata.load_uvfits(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"SR1_M87_2017_096_hi_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we do some minor preprocessing:","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I 
Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Scan average the data since the data have been preprocessed so that the gain phases are coherent.\nAdd 1% systematic noise to deal with calibration issues that cause 1% non-closing errors.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"obs = scan_average(obs.add_fractional_noise(0.01))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we extract our complex visibilities.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"dvis = extract_table(obs, ComplexVisibilities())","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Building the Model/Posterior","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now, we must build our intensity/visibility model. That is, the model that takes in a named tuple of parameters and perhaps some metadata required to construct the model. For our model, we will use a raster or ContinuousImage for our image model. The model is given below:","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"function sky(θ, metadata)\n (;fg, c, σimg) = θ\n (;ftot, K, meanpr, grid, cache) = metadata\n # Transform to the log-ratio pixel fluxes\n cp = meanpr .+ σimg.*c.params\n # Transform to image space\n rast = (ftot*(1-fg))*K(to_simplex(CenteredLR(), cp))\n img = IntensityMap(rast, grid)\n m = ContinuousImage(img, cache)\n # Add a large-scale gaussian to deal with the over-resolved mas flux\n g = modify(Gaussian(), Stretch(μas2rad(250.0), μas2rad(250.0)), Renormalize(ftot*fg))\n return m + g\nend","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Unlike other imaging examples (e.g., Imaging a Black Hole using only Closure Quantities) we also need to include a model for the instrument, i.e., gains as well. 
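For orientation (this relation is added here and is not part of the original tutorial text): for scalar Stokes I gains, the corruption applied by the instrument model below acts on each model visibility in the familiar self-calibration form

V^obs_ij = g_i conj(g_j) V^model_ij,

where g_i = exp(lgamp_i + im*gphase_i) is the complex gain of station i at that timestamp; in the full-polarization case the scalar gains become 2×2 Jones matrices.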
The gains will be broken into two components","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Gain amplitudes which are typically known to 10-20%, except for LMT, which has amplitudes closer to 50-100%.\nGain phases which are more difficult to constrain and can shift rapidly.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"function instrument(θ, metadata)\n (; lgamp, gphase) = θ\n (; gcache, gcachep) = metadata\n # Now form our instrument model\n gvis = exp.(lgamp)\n gphase = exp.(1im.*gphase)\n jgamp = jonesStokes(gvis, gcache)\n jgphase = jonesStokes(gphase, gcachep)\n return JonesModel(jgamp*jgphase)\nend","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"The model construction is very similar to Imaging a Black Hole using only Closure Quantities, except we include a large scale gaussian since we want to model the zero baselines. For more information about the image model please read the closure-only example. Let's discuss the instrument model Comrade.JonesModel. Thanks to the EHT pre-calibration, the gains are stable over scans. Therefore, we can model the gains on a scan-by-scan basis. To form the instrument model, we need our","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Our (log) gain amplitudes and phases are given below by lgamp and gphase\nOur function or cache that maps the gains from a list to the stations they impact gcache.\nThe set of Comrade.JonesPairs produced by jonesStokes","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"These three ingredients then specify our instrument model. The instrument model can then be combined with our image model cimg to form the total JonesModel.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now, let's set up our image model. The EHT's nominal resolution is 20-25 μas. Additionally, the EHT is not very sensitive to a larger field of view. Typically 60-80 μas is enough to describe the compact flux of M87. Given this, we only need to use a small number of pixels to describe our image.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"npix = 32\nfovx = μas2rad(150.0)\nfovy = μas2rad(150.0)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now let's form our cache's. 
First, we have our usual image cache which is needed to numerically compute the visibilities.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"grid = imagepixels(fovx, fovy, npix, npix)\nbuffer = IntensityMap(zeros(npix, npix), grid)\ncache = create_cache(NFFTAlg(dvis), buffer, BSplinePulse{3}())","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Second, we now construct our instrument model cache. This tells us how to map from the gains to the model visibilities. However, to construct this map, we also need to specify the observation segmentation over which we expect the gains to change. This is specified in the second argument to jonescache, and currently, there are two options","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"FixedSeg(val): Fixes the corruption to the value val for all time. This is usefule for reference stations\nScanSeg(): which forces the corruptions to only change from scan-to-scan\nTrackSeg(): which forces the corruptions to be constant over a night's observation","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"For this work, we use the scan segmentation for the gain amplitudes since that is roughly the timescale we expect them to vary. For the phases we use a station specific scheme where we set AA to be fixed to unit gain because it will function as a reference station.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gcache = jonescache(dvis, ScanSeg())\ngcachep = jonescache(dvis, ScanSeg(); autoref=SEFDReference((complex(1.0))))\n\nusing VLBIImagePriors","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we need to specify our image prior. For this work we will use a Gaussian Markov Random field prior Since we are using a Gaussian Markov random field prior we need to first specify our mean image. This behaves somewhat similary to a entropy regularizer in that it will start with an initial guess for the image structure. 
For this tutorial we will use a a symmetric Gaussian with a FWHM of 60 μas","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"fwhmfac = 2*sqrt(2*log(2))\nmpr = modify(Gaussian(), Stretch(μas2rad(50.0)./fwhmfac))\nimgpr = intensitymap(mpr, grid)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now since we are actually modeling our image on the simplex we need to ensure that our mean image has unit flux","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"imgpr ./= flux(imgpr)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"and since our prior is not on the simplex we need to convert it to unconstrained or real space.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"meanpr = to_real(CenteredLR(), Comrade.baseimage(imgpr))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we can form our metadata we need to fully define our model.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"metadata = (;ftot=1.1, K=CenterImage(imgpr), meanpr, grid, cache, gcache, gcachep)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"We will also fix the total flux to be the observed value 1.1. This is because total flux is degenerate with a global shift in the gain amplitudes making the problem degenerate. To fix this we use the observed total flux as our value.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Moving onto our prior, we first focus on the instrument model priors. Each station requires its own prior on both the amplitudes and phases. For the amplitudes we assume that the gains are apriori well calibrated around unit gains (or 0 log gain amplitudes) which corresponds to no instrument corruption. The gain dispersion is then set to 10% for all stations except LMT, representing that we expect 10% deviations from scan-to-scan. 
For LMT we let the prior expand to 100% due to the known pointing issues LMT had in 2017.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Distributions\nusing DistributionsAD\ndistamp = station_tuple(dvis, Normal(0.0, 0.1); LM = Normal(0.0, 1.0))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"For the phases, as mentioned above, we will use a segmented gain prior. This means that rather than the parameters being directly the gains, we fit the first gain for each site, and then the other parameters are the segmented gains compared to the previous time. To model this we break the gain phase prior into two parts. The first is the prior for the first observing timestamp of each site, distphase0, and the second is the prior for segmented gain ϵₜ from time i to i+1, given by distphase. For the EHT, we are dealing with pre-calibrated data, so often, the gain phase jumps from scan to scan are minor. As such, we can put a more informative prior on distphase.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"warning: Warning\nWe use AA (ALMA) as a reference station so we do not have to specify a gain prior for it.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"distphase = station_tuple(dvis, DiagonalVonMises(0.0, inv(π^2)))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"In addition, we want a reasonable guess for what the resolution of our image should be. For radio astronomy this is given by the beam size, which is roughly the inverse of the longest baseline in the observation. To put this into pixel space we then divide by the pixel size.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"beam = beamsize(dvis)\nrat = (beam/(step(grid.X)))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To make the Gaussian Markov random field efficient we first precompute a bunch of quantities that allow us to scale things linearly with the number of image pixels. This drastically improves the usual N^3 scaling you get from generic Gaussian processes.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"crcache = MarkovRandomFieldCache(meanpr)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"One of the benefits of the Bayesian approach is that we can fit for the hyperparameters of our prior/regularizers unlike traditional RML approaches. 
To construct this hierarchical prior we will first make a map that takes in our regularizer hyperparameters and returns the image prior given those hyperparameters.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"fmap = let crcache=crcache\n x->GaussMarkovRandomField(x, 1.0, crcache)\nend","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we can finally form our image prior. For this we use a hierarchical prior where the inverse correlation length is given by a Half-Normal distribution whose peak is at zero and standard deviation is 0.1/rat where, recall, rat is the beam size per pixel. For the variance of the random field we use another half normal prior with standard deviation 0.1. The reason we use the half-normal priors is to prefer \"simple\" structures. Gaussian Markov random fields are extremely flexible models, and to prevent overfitting it is common to use priors that penalize complexity. Therefore, we want to use priors that enforce similarity to our mean image. If the data wants more complexity then it will drive us away from the prior.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"cprior = HierarchicalPrior(fmap, InverseGamma(1.0, -log(0.01*rat)))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"We can now form our model parameter priors. For the image raster c, we use the hierarchical Gaussian Markov random field prior constructed above. 
For the log gain amplitudes, we use the CalPrior which automatically constructs the prior for the given jones cache gcache.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"prior = NamedDist(\n fg = Uniform(0.0, 1.0),\n σimg = truncated(Normal(0.0, 1.0); lower=0.01),\n c = cprior,\n lgamp = CalPrior(distamp, gcache),\n gphase = CalPrior(distphase, gcachep),\n )","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Putting it all together we form our likelihood and posterior objects for optimization and sampling.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"lklhd = RadioLikelihood(sky, instrument, dvis; skymeta=metadata, instrumentmeta=metadata)\npost = Posterior(lklhd, prior)","category":"page"},{"location":"examples/imaging_vis/#Reconstructing-the-Image-and-Instrument-Effects","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Reconstructing the Image and Instrument Effects","text":"","category":"section"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To sample from this posterior, it is convenient to move from our constrained parameter space to an unconstrained one (i.e., the support of the transformed posterior is (-∞, ∞)). This is done using the asflat function.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"tpost = asflat(post)\nndim = dimension(tpost)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Our Posterior and TransformedPosterior objects satisfy the LogDensityProblems interface. This allows us to easily switch between different AD backends and many of Julia's statistical inference packages use this interface as well.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using LogDensityProblemsAD\nusing Zygote\ngtpost = ADgradient(Val(:Zygote), tpost)\nx0 = randn(rng, ndim)\nLogDensityProblemsAD.logdensity_and_gradient(gtpost, x0)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"We can now also find the dimension of our posterior or the number of parameters we are going to sample.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"warning: Warning\nThis can often be different from what you would expect. 
This is especially true when using angular variables where we often artificially increase the dimension of the parameter space to make sampling easier.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To initialize our sampler we will use optimize using LBFGS","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using ComradeOptimization\nusing OptimizationOptimJL\nf = OptimizationFunction(tpost, Optimization.AutoZygote())\nprob = Optimization.OptimizationProblem(f, prior_sample(rng, tpost), nothing)\nℓ = logdensityof(tpost)\nsol = solve(prob, LBFGS(), maxiters=1_000, g_tol=1e-1);\nnothing #hide","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now transform back to parameter space","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"xopt = transform(tpost, sol.u)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"warning: Warning\nFitting gains tends to be very difficult, meaning that optimization can take a lot longer. The upside is that we usually get nicer images.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"First we will evaluate our fit by plotting the residuals","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Plots\nresidual(vlbimodel(post, xopt), dvis)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"These look reasonable, although there may be some minor overfitting. This could be improved in a few ways, but that is beyond the goal of this quick tutorial. Plotting the image, we see that we have a much cleaner version of the closure-only image from Imaging a Black Hole using only Closure Quantities.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"import CairoMakie as CM\nimg = intensitymap(skymodel(post, xopt), fovx, fovy, 128, 128)\nCM.image(img, axis=(xreversed=true, aspect=1, title=\"MAP Image\"), colormap=:afmhot)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Because we also fit the instrument model, we can inspect their parameters. 
To do this, Comrade provides a caltable function that converts the flattened gain parameters to a tabular format based on the time and its segmentation.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gt = Comrade.caltable(gcachep, xopt.gphase)\nplot(gt, layout=(3,3), size=(600,500))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"The gain phases are pretty random, although much of this is due to us picking a random reference station for each scan.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Moving onto the gain amplitudes, we see that most of the gain variation is within 10% as expected except LMT, which has massive variations.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gt = Comrade.caltable(gcache, exp.(xopt.lgamp))\nplot(gt, layout=(3,3), size=(600,500))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"To sample from the posterior, we will use HMC, specifically the NUTS algorithm. For information about NUTS, see Michael Betancourt's notes.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"note: Note\nFor our metric, we use a diagonal matrix due to easier tuning","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"However, due to the need to sample a large number of gain parameters, constructing the posterior is rather time-consuming. Therefore, for this tutorial, we will only do a quick preliminary run, and any posterior inferences should be appropriately skeptical.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using ComradeAHMC\nmetric = DiagEuclideanMetric(ndim)\nchain, stats = sample(rng, post, AHMC(;metric, autodiff=Val(:Zygote)), 700; nadapts=500, init_params=xopt)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"note: Note\nThe above sampler will store the samples in memory, i.e. RAM. For large models this can lead to out-of-memory issues. To fix that you can include the keyword argument saveto = DiskStore() which periodically saves the samples to disk limiting memory useage. You can load the chain using load_table(diskout) where diskout is the object returned from sample. 
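A minimal sketch of that disk-backed workflow, under the assumption that the keywords behave as described in the note above (the sampler call otherwise mirrors the one used earlier in this tutorial):

using ComradeAHMC
out = sample(rng, post, AHMC(;metric, autodiff=Val(:Zygote)), 700;
             nadapts=500, init_params=xopt, saveto=DiskStore())
chain = load_table(out)   # read the on-disk samples back into memory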
For more information please see ComradeAHMC.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we prune the adaptation phase","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"chain = chain[501:end]","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"warning: Warning\nThis should be run for likely an order of magnitude more steps to properly estimate expectations of the posterior","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now that we have our posterior, we can put error bars on all of our plots above. Let's start by finding the mean and standard deviation of the gain phases","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gphase = hcat(chain.gphase...)\nmgphase = mean(gphase, dims=2)\nsgphase = std(gphase, dims=2)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"and now the gain amplitudes","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"gamp = exp.(hcat(chain.lgamp...))\nmgamp = mean(gamp, dims=2)\nsgamp = std(gamp, dims=2)","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now we can use the measurements package to automatically plot everything with error bars. 
First we create a caltable the same way but making sure all of our variables have errors attached to them.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"using Measurements\ngmeas_am = measurement.(mgamp, sgamp)\nctable_am = caltable(gcache, vec(gmeas_am)) # caltable expects gmeas_am to be a Vector\ngmeas_ph = measurement.(mgphase, sgphase)\nctable_ph = caltable(gcachep, vec(gmeas_ph))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now let's plot the phase curves","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"plot(ctable_ph, layout=(3,3), size=(600,500))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"and now the amplitude curves","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"plot(ctable_am, layout=(3,3), size=(600,500))","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Finally let's construct some representative image reconstructions.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"samples = skymodel.(Ref(post), chain[begin:2:end])\nimgs = intensitymap.(samples, fovx, fovy, 128, 128)\n\nmimg = mean(imgs)\nsimg = std(imgs)\nfig = CM.Figure(;resolution=(800, 800))\nCM.image(fig[1,1], mimg,\n axis=(xreversed=true, aspect=1, title=\"Mean Image\"),\n colormap=:afmhot)\nCM.image(fig[1,2], simg./(max.(mimg, 1e-5)),\n axis=(xreversed=true, aspect=1, title=\"1/SNR\",),\n colormap=:afmhot)\nCM.image(fig[2,1], imgs[1],\n axis=(xreversed=true, aspect=1,title=\"Draw 1\"),\n colormap=:afmhot)\nCM.image(fig[2,2], imgs[end],\n axis=(xreversed=true, aspect=1,title=\"Draw 2\"),\n colormap=:afmhot)\nfig","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"Now let's check the residuals","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"p = plot();\nfor s in sample(chain, 10)\n residual!(p, vlbimodel(post, s), dvis)\nend\np","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"And viola, you have just finished making a preliminary image and instrument model reconstruction. 
In reality, you should run the sample step for many more MCMC steps to get a reliable estimate for the reconstructed image and instrument model parameters.","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"","category":"page"},{"location":"examples/imaging_vis/","page":"Stokes I Simultaneous Image and Instrument Modeling","title":"Stokes I Simultaneous Image and Instrument Modeling","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"EditURL = \"../../../examples/imaging_pol.jl\"","category":"page"},{"location":"examples/imaging_pol/#Polarized-Image-and-Instrumental-Modeling","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"In this tutorial, we will analyze a simulated simple polarized dataset to demonstrate Comrade's polarized imaging capabilities.","category":"page"},{"location":"examples/imaging_pol/#Introduction-to-Polarized-Imaging","page":"Polarized Image and Instrumental Modeling","title":"Introduction to Polarized Imaging","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"The EHT is a polarized interferometer. However, like all VLBI interferometers, it does not directly measure the Stokes parameters (I, Q, U, V). Instead, it measures components related to the electric field at the telescope along two directions using feeds. There are two types of feeds at telescopes: circular, which measure RL components of the electric field, and linear feeds, which measure XY components of the electric field. Most sites in the EHT use circular feeds, meaning they measure the right (R) and left electric field (L) at each telescope. These circular electric field measurements are then correlated, producing coherency matrices,","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" C_ij = beginpmatrix\n RR^* RL^*\n LR^* LL^*\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"These coherency matrices are the fundamental object in interferometry and what the telescope observes. 
For a perfect interferometer, these coherency matrices are related to the usual Fourier transform of the stokes parameters by","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" beginpmatrix\n tildeI tildeQ tildeU tildeV\n endpmatrix\n =frac12\n beginpmatrix\n RR^* + LL^* \n RL^* + LR^* \n i(LR^* - RL^*)\n RR^* - LL^*\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"for circularly polarized measurements.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"note: Note\nIn this tutorial, we stick to circular feeds but Comrade has the capabilities to model linear (XX,XY, ...) and mixed basis coherencies (e.g., RX, RY, ...).","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"In reality, the measure coherencies are corrupted by both the atmosphere and the telescope itself. In Comrade we use the RIME formalism [1] to represent these corruptions, namely our measured coherency matrices V_ij are given by","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" V_ij = J_iC_ijJ_j^dagger","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"where J is known as a Jones matrix and ij denotes the baseline ij with sites i and j.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Comrade is highly flexible with how the Jones matrices are formed and provides several convenience functions that parameterize standard Jones matrices. These matrices include:","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"jonesG which builds the set of complex gain Jones matrices","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" G = beginpmatrix\n g_a 0\n 0 g_b\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"jonesD which builds the set of complex d-terms Jones matrices","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" D = beginpmatrix\n 1 d_a\n d_b 1\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"jonesT is the basis transform matrix T. This transformation is special and combines two things using the decomposition T=FB. The first, B, is the transformation from some reference basis to the observed coherency basis (this allows for mixed basis measurements). 
The second is the feed rotation, F, that transforms from some reference axis to the axis of the telescope as the source moves in the sky. The feed rotation matrix F in terms of the per station feed rotation angle varphi is","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":" F = beginpmatrix\n e^-ivarphi 0\n 0 e^ivarphi\n endpmatrix","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"In the rest of the tutorial, we are going to solve for all of these instrument model terms on in addition to our image structure to reconstruct a polarized image of a synthetic dataset.","category":"page"},{"location":"examples/imaging_pol/#Load-the-Data","page":"Polarized Image and Instrumental Modeling","title":"Load the Data","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"To get started we will load Comrade","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using Comrade","category":"page"},{"location":"examples/imaging_pol/#Load-the-Data-2","page":"Polarized Image and Instrumental Modeling","title":"Load the Data","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\nusing Pyehtim","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"For reproducibility we use a stable random number genreator","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using StableRNGs\nrng = StableRNG(123)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now we will load some synthetic polarized data.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"obs = Pyehtim.load_uvfits_and_array(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/polarized_gaussian_all_corruptions.uvfits\"),\n joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/array.txt\"), polrep=\"circ\")","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Notice that, unlike other non-polarized tutorials, we need to include a second argument. 
This is the array file of the observation and is required to determine the feed rotation of the array.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now we scan average the data to boost the SNR and reduce the total data volume.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"obs = scan_average(obs)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now we extract our observed/corrupted coherency matrices.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"dvis = extract_table(obs, Coherencies())","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Building the Model/Posterior","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"To build the model, we first break it down into two parts:","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"The image or sky model. In Comrade, all polarized image models are written in terms of the Stokes parameters. The reason for using Stokes parameters is that it is usually what physical models consider and is often the easiest to reason about since they are additive. In this tutorial, we will use a polarized image model based on Pesce (2021)[2]. This model parameterizes the polarized image in terms of the Poincare sphere, and allows us to easily incorporate physical restrictions such as I^2 ≥ Q^2 + U^2 + V^2.\nThe instrument model. The instrument model specifies the model that describes the impact of instrumental and atmospheric effects. We will be using the J = GDT decomposition we described above. However, to parameterize the R/L complex gains, we will be using a gain product and ratio decomposition. The reason for this decomposition is that in realistic measurements, the gain ratios and products have different temporal characteristics. Namely, many of the EHT observations tend to demonstrate constant R/L gain ratios across a night's observations, compared to the gain products, which vary every scan. Additionally, the gain ratios tend to be smaller (i.e., closer to unity) than the gain products. 
Using this apriori knowledge, we can build this into our model and reduce the total number of parameters we need to model.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"function sky(θ, metadata)\n (;c, f, p, angparams) = θ\n (;K, grid, cache) = metadata\n # Construct the image model\n # produce Stokes images from parameters\n imgI = f*K(c)\n # Converts from poincare sphere parameterization of polzarization to Stokes Parameters\n pimg = PoincareSphere2Map(imgI, p, angparams, grid)\n m = ContinuousImage(pimg, cache)\n return m\nend","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"note: Note\nIf you want to add a geometric polarized model please see the PolarizedModel docstring. For instance to create a stokes I only Gaussian component to the above model we can do pg = PolarizedModel(modify(Gaussian(), Stretch(1e-10)), ZeroModel(), ZeroModel(), ZeroModel()).","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"function instrument(θ, metadata)\n (; lgp, gpp, lgr, gpr, dRx, dRy, dLx, dLy) = θ\n (; tcache, scancache, phasecache, trackcache) = metadata\n # Now construct the basis transformation cache\n jT = jonesT(tcache)\n\n # Gain product parameters\n gPa = exp.(lgp)\n gRa = exp.(lgp .+ lgr)\n Gp = jonesG(gPa, gRa, scancache)\n # Gain ratio\n gPp = exp.(1im.*(gpp))\n gRp = exp.(1im.*(gpp.+gpr))\n Gr = jonesG(gPp, gRp, phasecache)\n ##D-terms\n D = jonesD(complex.(dRx, dRy), complex.(dLx, dLy), trackcache)\n # sandwich all the jones matrices together\n J = Gp*Gr*D*jT\n # form the complete Jones or RIME model. We use tcache here\n # to set the reference basis of the model.\n return JonesModel(J, tcache)\nend","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now, we define the model metadata required to build the model. 
We specify our image grid and cache model needed to define the polarimetric image model.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"fovx = μas2rad(50.0)\nfovy = μas2rad(50.0)\nnx = 6\nny = floor(Int, fovy/fovx*nx)\ngrid = imagepixels(fovx, fovy, nx, ny) # image grid\nbuffer = IntensityMap(zeros(nx, ny), grid) # buffer to store temporary image\npulse = BSplinePulse{3}() # pulse we will be using\ncache = create_cache(NFFTAlg(dvis), buffer, pulse) # cache to define the NFFT transform","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Finally, we compute a center projector that forces the centroid to live at the image origin","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using VLBIImagePriors\nK = CenterImage(grid)\nskymeta = (;K, cache, grid)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"To define the instrument models, T, G, D, we need to build some Jones caches (see JonesCache) that map from a flat vector of gains/d-terms to the specific sites for each baseline.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"First, we will define our deterministic transform cache. Note that this dataset has not been pre-corrected for feed rotation, so we need to add those corrections into the tcache.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"tcache = ResponseCache(dvis; add_fr=true, ehtim_fr_convention=false)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Next, we define our cache that maps quantities, e.g., gain products, that change from scan to scan.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"scancache = jonescache(dvis, ScanSeg())","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"In addition, we will assign a reference station. This is necessary for gain phases due to a trivial degeneracy being present. 
To do this we will select ALMA AA as the reference station as is standard in EHT analyses.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"phase_segs = station_tuple(dvis, ScanSeg(); AA=FixedSeg(1.0 + 0.0im))\nphasecache = jonescache(dvis, phase_segs)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Finally, we define our cache that maps quantities, e.g., gain ratios and d-terms, that are constant across a observation night, and we collect everything together.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"trackcache = jonescache(dvis, TrackSeg())\ninstrumentmeta = (;tcache, scancache, trackcache, phasecache)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Moving onto our prior, we first focus on the instrument model priors. Each station gain requires its own prior on both the amplitudes and phases. For the amplitudes, we assume that the gains are apriori well calibrated around unit gains (or 0 log gain amplitudes) which corresponds to no instrument corruption. The gain dispersion is then set to 10% for all stations except LMT, representing that we expect 10% deviations from scan-to-scan. For LMT, we let the prior expand to 100% due to the known pointing issues LMT had in 2017.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using Distributions\nusing DistributionsAD\ndistamp = station_tuple(dvis, Normal(0.0, 0.1))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"For the phases, we assume that the atmosphere effectively scrambles the gains. Since the gain phases are periodic, we also use broad von Mises priors for all stations. Notice that we don't assign a prior for AA since we have already fixed it.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"distphase = station_tuple(dvis, DiagonalVonMises(0.0, inv(π^2)); reference=:AA)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"However, we can now also use a little additional information about the phase offsets where in most cases, they are much better behaved than the products","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"distphase_ratio = station_tuple(dvis, DiagonalVonMises(0.0, inv(0.1)); reference=:AA)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Moving onto the d-terms, here we directly parameterize the real and complex components of the d-terms since they are expected to be complex numbers near the origin. 
To help enforce this smallness, a weakly informative Normal prior is used.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"distD = station_tuple(dvis, Normal(0.0, 0.1))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Our image priors are:","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"We use a Dirichlet prior, ImageDirichlet, with unit concentration for our stokes I image pixels, c.\nFor the total polarization fraction, p, we assume an uncorrelated uniform prior ImageUniform for each pixel.\nTo specify the orientation of the polarization, angparams, on the Poincare sphere, we use a uniform spherical distribution, ImageSphericalUniform.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"For all the calibration parameters, we use a helper function CalPrior which builds the prior given the named tuple of station priors and a JonesCache that specifies the segmentation scheme. For the gain products, we use the scancache, while for every other quantity, we use the trackcache.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"prior = NamedDist(\n c = ImageDirichlet(2.0, nx, ny),\n f = Uniform(0.7, 1.2),\n p = ImageUniform(nx, ny),\n angparams = ImageSphericalUniform(nx, ny),\n dRx = CalPrior(distD, trackcache),\n dRy = CalPrior(distD, trackcache),\n dLx = CalPrior(distD, trackcache),\n dLy = CalPrior(distD, trackcache),\n lgp = CalPrior(distamp, scancache),\n gpp = CalPrior(distphase, phasecache),\n lgr = CalPrior(distamp, scancache),\n gpr = CalPrior(distphase,phasecache),\n )","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Putting it all together, we form our likelihood and posterior objects for optimization and sampling.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"lklhd = RadioLikelihood(sky, instrument, dvis; skymeta, instrumentmeta)\npost = Posterior(lklhd, prior)","category":"page"},{"location":"examples/imaging_pol/#Reconstructing-the-Image-and-Instrument-Effects","page":"Polarized Image and Instrumental Modeling","title":"Reconstructing the Image and Instrument Effects","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"To sample from this posterior, it is convenient to move from our constrained parameter space to an unconstrained one (i.e., the support of the transformed posterior is (-∞, ∞)). 
This transformation is done using the asflat function.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"tpost = asflat(post)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"We can now also find the dimension of our posterior or the number of parameters we will sample.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"warning: Warning\nThis can often be different from what you would expect. This difference is especially true when using angular variables, where we often artificially increase the dimension of the parameter space to make sampling easier.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"ndim = dimension(tpost)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now we optimize. Unlike other imaging examples, we move straight to gradient optimizers due to the higher dimension of the space.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using ComradeOptimization\nusing OptimizationOptimJL\nusing Zygote\nf = OptimizationFunction(tpost, Optimization.AutoZygote())\nℓ = logdensityof(tpost)\nprob = Optimization.OptimizationProblem(f, prior_sample(tpost), nothing)\nsol = solve(prob, LBFGS(), maxiters=15_000, g_tol=1e-1);\nnothing #hide","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"warning: Warning\nFitting polarized images is generally much harder than Stokes I imaging. This difficulty means that optimization can take a long time, and starting from a good starting location is often required.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Before we analyze our solution, we need to transform it back to parameter space.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"xopt = transform(tpost, sol)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Now let's evaluate our fits by plotting the residuals","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using Plots\nresidual(vlbimodel(post, xopt), dvis)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"These look reasonable, although there may be some minor overfitting. Let's compare our results to the ground truth values we know in this example. 
First, we will load the polarized truth","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"using AxisKeys\nimgtrue = Comrade.load(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"PolarizedExamples/polarized_gaussian.fits\"), StokesIntensityMap)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Select a reasonable zoom in of the image.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"imgtruesub = imgtrue(Interval(-fovx/2, fovx/2), Interval(-fovy/2, fovy/2))\nimg = intensitymap!(copy(imgtruesub), skymodel(post, xopt))\nimport CairoMakie as CM\nfig = CM.Figure(;resolution=(900, 400))\npolimage(fig[1,1], imgtruesub,\n axis=(xreversed=true, aspect=1, title=\"Truth\", limits=((-20.0,20.0), (-20.0, 20.0))),\n length_norm=1, plot_total=true,\n pcolorrange=(-0.25, 0.25), pcolormap=CM.Reverse(:jet))\npolimage(fig[1,2], img,\n axis=(xreversed=true, aspect=1, title=\"Recon.\", limits=((-20.0,20.0), (-20.0, 20.0))),\n length_norm=1, plot_total=true,\n pcolorrange=(-0.25, 0.25), pcolormap=CM.Reverse(:jet))\nCM.Colorbar(fig[1,3], colormap=CM.Reverse(:jet), colorrange=(-0.25, 0.25), label=\"Signed Polarization Fraction sign(V)*|p|\")\nCM.colgap!(fig.layout, 1)\nfig","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Let's compare some image statics, like the total linear polarization fraction","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"ftrue = flux(imgtruesub);\n@info \"Linear polarization true image: $(abs(linearpol(ftrue))/ftrue.I)\"\nfrecon = flux(img);\n@info \"Linear polarization recon image: $(abs(linearpol(frecon))/frecon.I)\"","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"And the Circular polarization fraction","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"@info \"Circular polarization true image: $(ftrue.V/ftrue.I)\"\n@info \"Circular polarization recon image: $(frecon.V/frecon.I)\"","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Because we also fit the instrument model, we can inspect their parameters. 
To do this, Comrade provides a caltable function that converts the flattened gain parameters to a tabular format based on the time and its segmentation.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"dR = caltable(trackcache, complex.(xopt.dRx, xopt.dRy))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"We can compare this to the ground truth d-terms","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"time AA AP AZ JC LM PV SM\n0.0 0.01-0.02im -0.08+0.07im 0.09-0.10im -0.04+0.05im 0.03-0.02im -0.01+0.02im 0.08-0.07im","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"And same for the left-handed dterms","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"dL = caltable(trackcache, complex.(xopt.dLx, xopt.dLy))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"time AA AP AZ JC LM PV SM\n0.0 0.03-0.04im -0.06+0.05im 0.09-0.08im -0.06+0.07im 0.01-0.00im -0.03+0.04im 0.06-0.05im","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Looking at the gain phase ratio","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"gphase_ratio = caltable(phasecache, xopt.gpr)","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"we see that they are all very small. Which should be the case since this data doesn't have gain corruptions! Similarly our gain ratio amplitudes are also very close to unity as expected.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"gamp_ratio = caltable(scancache, exp.(xopt.lgr))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Plotting the gain phases, we see some offsets from zero. This is because the prior on the gain product phases is very broad, so we can't phase center the image. 
For realistic data this is always the case since the atmosphere effectively scrambles the phases.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"gphase_prod = caltable(phasecache, xopt.gpp)\nplot(gphase_prod, layout=(3,3), size=(650,500))\nplot!(gphase_ratio, layout=(3,3), size=(650,500))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Finally, the product gain amplitudes are all very close to unity as well, as expected since gain corruptions have not been added to the data.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"gamp_prod = caltable(scancache, exp.(xopt.lgp))\nplot(gamp_prod, layout=(3,3), size=(650,500))\nplot!(gamp_ratio, layout=(3,3), size=(650,500))","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"At this point, you should run the sampler to recover an uncertainty estimate, which is identical to every other imaging example (see, e.g., Stokes I Simultaneous Image and Instrument Modeling. However, due to the time it takes to sample, we will skip that for this tutorial. Note that on the computer environment listed below, 20_000 MCMC steps take 4 hours.","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"[1]: Hamaker J.P, Bregman J.D., Sault R.J. (1996) [https://articles.adsabs.harvard.edu/pdf/1996A%26AS..117..137H]","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"[2]: Pesce D. 
(2021) [https://ui.adsabs.harvard.edu/abs/2021AJ....161..178P/abstract]","category":"page"},{"location":"examples/imaging_pol/#Computing-information","page":"Polarized Image and Instrumental Modeling","title":"Computing information","text":"","category":"section"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"Julia Version 1.8.5\nCommit 17cfb8e65ea (2023-01-08 06:45 UTC)\nPlatform Info:\n OS: Linux (x86_64-linux-gnu)\n CPU: 32 × AMD Ryzen 9 7950X 16-Core Processor\n WORD_SIZE: 64\n LIBM: libopenlibm\n LLVM: libLLVM-13.0.1 (ORCJIT, znver3)\n Threads: 1 on 32 virtual cores\nEnvironment:\n JULIA_EDITOR = code\n JULIA_NUM_THREADS = 1","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"","category":"page"},{"location":"examples/imaging_pol/","page":"Polarized Image and Instrumental Modeling","title":"Polarized Image and Instrumental Modeling","text":"This page was generated using Literate.jl.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"EditURL = \"../../../examples/geometric_modeling.jl\"","category":"page"},{"location":"examples/geometric_modeling/#Geometric-Modeling-of-EHT-Data","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Comrade has been designed to work with the EHT and ngEHT. In this tutorial, we will show how to reproduce some of the results from EHTC VI 2019.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"In EHTC VI, they considered fitting simple geometric models to the data to estimate the black hole's image size, shape, brightness profile, etc. In this tutorial, we will construct a similar model and fit it to the data in under 50 lines of code (sans comments). To start, we load Comrade and some other packages we need.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Comrade","category":"page"},{"location":"examples/geometric_modeling/#Load-the-Data","page":"Geometric Modeling of EHT Data","title":"Load the Data","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Pkg #hide\nPkg.activate(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\")) #hide\n\nusing Pyehtim","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"For reproducibility we use a stable random number genreator","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using StableRNGs\nrng = StableRNG(42)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"The next step is to load the data. We will use the publically available M 87 data which can be downloaded from cyverse. 
For an introduction to data loading, see Loading Data into Comrade.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"obs = load_uvfits_and_array(joinpath(dirname(pathof(Comrade)), \"..\", \"examples\", \"SR1_M87_2017_096_lo_hops_netcal_StokesI.uvfits\"))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Now we will remove the 0-baselines since we don't care about large-scale flux, and since we know that the gains in this dataset are coherent across a scan, we form scan-averaged data","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"obs = Pyehtim.scan_average(obs.flag_uvdist(uv_min=0.1e9))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Now we extract the data products we want to fit","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"dlcamp, dcphase = extract_table(obs, LogClosureAmplitudes(;snrcut=3.0), ClosurePhases(;snrcut=3.0))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"!!!warn We remove the low-SNR closures since they are very non-Gaussian. This can create rather large biases in the model fitting since the likelihood has much heavier tails than the usual Gaussian approximation.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"For the image model, we will use a modified MRing, an infinitely thin delta ring with an azimuthal structure given by a Fourier expansion. To give the MRing some width, we will convolve the ring with a Gaussian and add an additional Gaussian to the image to model any non-ring flux. Comrade expects that any model function accepts a named tuple and always returns an object that implements the VLBISkyModels Interface","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"function model(θ)\n (;radius, width, ma, mp, τ, ξτ, f, σG, τG, ξG, xG, yG) = θ\n α = ma.*cos.(mp .- ξτ)\n β = ma.*sin.(mp .- ξτ)\n ring = f*smoothed(modify(MRing(α, β), Stretch(radius, radius*(1+τ)), Rotate(ξτ)), width)\n g = (1-f)*shifted(rotated(stretched(Gaussian(), σG, σG*(1+τG)), ξG), xG, yG)\n return ring + g\nend","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"To construct our likelihood p(V|M), where V is our data and M is our model, we use the RadioLikelihood function. 
The first argument of RadioLikelihood is always a function that constructs our Comrade model from the set of parameters θ.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"lklhd = RadioLikelihood(model, dlcamp, dcphase)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"We now need to specify the priors for our model. The easiest way to do this is to specify a NamedTuple of distributions:","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Distributions, VLBIImagePriors\nprior = NamedDist(\n radius = Uniform(μas2rad(10.0), μas2rad(30.0)),\n width = Uniform(μas2rad(1.0), μas2rad(10.0)),\n ma = (Uniform(0.0, 0.5), Uniform(0.0, 0.5)),\n mp = (Uniform(0, 2π), Uniform(0, 2π)),\n τ = Uniform(0.0, 1.0),\n ξτ= Uniform(0.0, π),\n f = Uniform(0.0, 1.0),\n σG = Uniform(μas2rad(1.0), μas2rad(100.0)),\n τG = Uniform(0.0, 1.0),\n ξG = Uniform(0.0, 1π),\n xG = Uniform(-μas2rad(80.0), μas2rad(80.0)),\n yG = Uniform(-μas2rad(80.0), μas2rad(80.0))\n )","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Note that for α and β we use a product distribution to signify that we want to use a multivariate uniform for the MRing components α and β. In general, the structure of the variables is specified by the prior. Note that this structure must be compatible with the model definition model(θ).","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"To form the posterior, we now call","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"post = Posterior(lklhd, prior)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"!!!warn As of Comrade 0.9 we have switched to the proper covariant closure likelihood. This is slower than the naive diagonal likelihood, but takes into account the correlations between closures that share the same baselines.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"This constructs a posterior density that can be evaluated by calling logdensityof. 
For example,","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"logdensityof(post, (radius = μas2rad(20.0),\n width = μas2rad(10.0),\n ma = (0.3, 0.3),\n mp = (π/2, π),\n τ = 0.1,\n ξτ= π/2,\n f = 0.6,\n σG = μas2rad(50.0),\n τG = 0.1,\n ξG = 0.5,\n xG = 0.0,\n yG = 0.0))","category":"page"},{"location":"examples/geometric_modeling/#Reconstruction","page":"Geometric Modeling of EHT Data","title":"Reconstruction","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Now that we have fully specified our model, we now will try to find the optimal reconstruction of our model given our observed data.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Currently, post is in parameter space. Often optimization and sampling algorithms want it in some modified space. For example, nested sampling algorithms want the parameters in the unit hypercube. To transform the posterior to the unit hypercube, we can use the ascube function","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"cpost = ascube(post)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"If we want to flatten the parameter space and move from constrained parameters to (-∞, ∞) support we can use the asflat function","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"fpost = asflat(post)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"These transformed posterior expect a vector of parameters. That is we can evaluate the transformed log density by calling","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"logdensityof(cpost, rand(rng, dimension(cpost)))\nlogdensityof(fpost, randn(rng, dimension(fpost)))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"note that cpost logdensity vector expects that each element lives in [0,1].","category":"page"},{"location":"examples/geometric_modeling/#Finding-the-Optimal-Image","page":"Geometric Modeling of EHT Data","title":"Finding the Optimal Image","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Typically, most VLBI modeling codes only care about finding the optimal or best guess image of our posterior post To do this, we will use Optimization.jl and specifically the BlackBoxOptim.jl package. For Comrade, this workflow is very similar to the usual Optimization.jl workflow. The only thing to keep in mind is that Optimization.jl expects that the function we are evaluating expects the parameters to be represented as a flat Vector of float. Therefore, we must use one of our transformed posteriors, cpost or fpost. 
For this example","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"# we will use `fpost` and impose box bounds of ±5 in the flattened space,\n# which keeps the search domain compact for non-gradient-based optimizers like `BBO`.\n\nusing ComradeOptimization\nusing OptimizationBBO\n\nndim = dimension(fpost)\nf = OptimizationFunction(fpost)\nprob = Optimization.OptimizationProblem(f, randn(rng, ndim), nothing, lb=fill(-5.0, ndim), ub=fill(5.0, ndim))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Now we solve for our optimal image.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited(); maxiters=50_000);\nnothing #hide","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"The sol vector is in the transformed space, so first we need to transform back to parameter space so that we can interpret the solution.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"xopt = transform(fpost, sol)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Given this, we can now plot the optimal image or the maximum a posteriori (MAP) image.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"import CairoMakie as CM\ng = imagepixels(μas2rad(200.0), μas2rad(200.0), 256, 256)\nfig, ax, plt = CM.image(g, model(xopt); axis=(xreversed=true, aspect=1, xlabel=\"RA (μas)\", ylabel=\"Dec (μas)\"), figure=(;resolution=(650,500),) ,colormap=:afmhot)","category":"page"},{"location":"examples/geometric_modeling/#Quantifying-the-Uncertainty-of-the-Reconstruction","page":"Geometric Modeling of EHT Data","title":"Quantifying the Uncertainty of the Reconstruction","text":"","category":"section"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"While finding the optimal image is often helpful, in science, the most important thing is to quantify the certainty of our inferences. This is the goal of Comrade. In the language of Bayesian statistics, we want to find a representation of the posterior of possible image reconstructions given our choice of model and the data.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Comrade provides several sampling and other posterior approximation tools. To see the list, please see the Libraries section of the docs. For this example, we will be using Pigeons.jl, which is a state-of-the-art parallel tempering sampler that enables global exploration of the posterior. 
For smaller dimension problems (< 100) we recommend using this sampler especially if you have access to > 1 thread/core.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Pigeons\npt = pigeons(target=cpost, explorer=SliceSampler(), record=[traces, round_trip, log_sum_ratio], n_chains=18, n_rounds=9)\nchain = sample_array(cpost, pt)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"That's it! To finish it up we can then plot some simple visual fit diagnostics.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"First to plot the image we call","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"imgs = intensitymap.(skymodel.(Ref(post), sample(chain, 100)), μas2rad(200.0), μas2rad(200.0), 128, 128)\nimageviz(imgs[end], colormap=:afmhot)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"What about the mean image? Well let's grab 100 images from the chain, where we first remove the adaptation steps since they don't sample from the correct posterior distribution","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"meanimg = mean(imgs)\nimageviz(meanimg, colormap=:afmhot)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"That looks similar to the EHTC VI, and it took us no time at all!. To see how well the model is fitting the data we can plot the model and data products","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using Plots\nplot(model(xopt), dlcamp, label=\"MAP\")","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"We can also plot random draws from the posterior predictive distribution. The posterior predictive distribution create a number of synthetic observations that are marginalized over the posterior.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"p = plot(dlcamp);\nuva = [sqrt.(uvarea(dlcamp[i])) for i in 1:length(dlcamp)]\nfor i in 1:10\n m = simulate_observation(post, sample(chain, 1)[1])[1]\n scatter!(uva, m, color=:grey, label=:none, alpha=0.1)\nend\np","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Finally, we can also put everything onto a common scale and plot the normalized residuals. 
The normalized residuals are the difference between the data and the model, divided by the data's error:","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"residual(model(xopt), dlcamp)","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"All diagnostic plots suggest that the model is missing some emission sources. In fact, this model is too simple to explain the data. Check out EHTC VI 2019 for some ideas about what features need to be added to the model to get a better fit!","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"For a real run, we should also check that the MCMC chain has converged. For this, we can use MCMCDiagnosticTools.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"using MCMCDiagnosticTools, Tables","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"First, let's look at the effective sample size (ESS) and R̂. This is important since the Monte Carlo standard error for MCMC estimates is proportional to 1/√ESS (for some problems) and R̂ is a measure of chain convergence. To find both, we can use:","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"compute_ess(x::NamedTuple) = map(compute_ess, x)\ncompute_ess(x::AbstractVector{<:Number}) = ess_rhat(x)\ncompute_ess(x::AbstractVector{<:Tuple}) = map(ess_rhat, Tables.columns(x))\ncompute_ess(x::Tuple) = map(compute_ess, x)\nessrhat = compute_ess(Tables.columns(chain))","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"Here, the first value is the ESS, and the second is the R̂. Note that we typically want R̂ < 1.01 for all parameters, but you should also be running the problem at least four times from four different starting locations. In the future we will write an extension that works with Arviz.jl.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"In our example here, we see that we have an ESS > 100 for all parameters and the R̂ < 1.01, meaning that our MCMC chain is a reasonable approximation of the posterior. 
For more diagnostics, see MCMCDiagnosticTools.jl.","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"","category":"page"},{"location":"examples/geometric_modeling/","page":"Geometric Modeling of EHT Data","title":"Geometric Modeling of EHT Data","text":"This page was generated using Literate.jl.","category":"page"},{"location":"vlbi_imaging_problem/#Introduction-to-the-VLBI-Imaging-Problem","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"","category":"section"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Very-long-baseline interferometry (VLBI) is capable of taking the highest resolution images in the world, achieving angular resolutions of ~20 μas. In 2019, the first-ever image of a black hole was produced by the Event Horizon Telescope (EHT). However, while the EHT has unprecedented resolution, it is also a sparse interferometer. As a result, the sampling in the uv or Fourier space of the image is incomplete. This incompleteness makes the imaging problem ill-posed; namely, infinitely many images are consistent with the data. Comrade is an imaging/modeling package that aims to quantify this uncertainty using Bayesian inference.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"If we denote visibilities by V and the image structure/model by I, Comrade will then compute the posterior, or the probability of an image given the visibility data, which in equation form reads","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"p(I|V) = p(V|I)p(I)/p(V)","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Here p(V|I) is known as the likelihood and describes the probability distribution of the data given some image I. The prior p(I) encodes prior knowledge of the image structure. This prior includes distributions of model parameters and even the model itself. Finally, the denominator p(V) is a normalization term and is known as the marginal likelihood or evidence and can be used to assess how well particular models fit the data.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Therefore, we must specify the likelihood and prior to construct our posterior. Below we provide a brief description of the likelihoods and models/priors that Comrade uses. 
However, if the user wants to see how everything works first, they should check out the Geometric Modeling of EHT Data tutorial.","category":"page"},{"location":"vlbi_imaging_problem/#Likelihood","page":"Introduction to the VLBI Imaging Problem","title":"Likelihood","text":"","category":"section"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Following TMS[TMS], we note that the likelihood for a single complex visibility at baseline (u_ij, v_ij) is","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"p(V_ij | I) = (2π σ²_ij)^(-1/2) exp(-|V_ij - g_i g_j* Ĩ_ij(I)|² / (2σ²_ij))","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"In this equation, Ĩ is the Fourier transform of the image I, and g_i, g_j are complex numbers known as gains. The gains arise due to atmospheric and telescope effects and corrupt the incoming signal. Therefore, if a user attempts to model the complex visibilities, they must also model the complex gains. An example showing how to model gains in Comrade can be found in Stokes I Simultaneous Image and Instrument Modeling.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Modeling the gains can be computationally expensive, especially if our image model is simple. For instance, in Comrade, we have a wide variety of geometric models. These models tend to have a small number of parameters and are simple to evaluate. Solving for gains then drastically increases the amount of time it takes to sample the posterior. As a result, part of the typical EHT analysis[M87P6][SgrAP4] instead uses closure products as its data. The two forms of closure products are:","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Closure Phases,\nLog-Closure Amplitudes.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Closure Phases ψ are constructed by selecting three baselines (i,j,k) and finding the argument of the bispectrum:","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":" ψ_ijk = arg(V_ij V_jk V_ki)","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Similarly, log-closure amplitudes are found by selecting four baselines (i,j,k,l) and forming the closure amplitudes:","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":" A_ijkl = (|V_ij| |V_kl|) / (|V_jk| |V_li|)","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Instead of directly fitting closure amplitudes, it turns out that the statistically better-behaved data product is the log-closure amplitude. 
","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"The benefit of fitting closure products is that they are independent of complex gains, so we can leave them out when modeling the data. However, the downside is that they effectively put uniform improper priors on the gains[Blackburn], meaning that we often throw away information about the telescope's performance. On the other hand, we can then view closure fitting as a very conservative estimate about what image structures are consistent with the data. Another downside of using closure products is that their likelihoods are complex. In the high-signal-to-noise limit, however, they do reduce to Gaussian likelihoods, and this is the limit we are usually in for the EHT. For the explicit likelihood Comrade uses, we refer the reader to appendix F in paper IV of the first Sgr A* EHT publications[SgrAP4]. The computational implementation of these likelihoods can be found in VLBILikelihoods.jl.","category":"page"},{"location":"vlbi_imaging_problem/#Prior-Model","page":"Introduction to the VLBI Imaging Problem","title":"Prior Model","text":"","category":"section"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Comrade has included a large number of possible models (see Comrade API for a list). These can be broken down into two categories:","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Parametric or geometric models\nNon-parametric or image models","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Comrade's geometric model interface is built using VLBISkyModels and is different from other EHT modeling packages because we don't directly provide fully formed models. Instead, we offer simple geometric models, which we call primitives. These primitive models can then be modified and combined to form complicated image structures. For more information, we refer the reader to the VLBISkyModels docs.","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"Additionally, we include an interface to Bayesian imaging methods, where we directly fit a rasterized image to the data. These models are highly flexible and assume very little about the image structure. In that sense, these methods are an excellent way to explore the data first and see what kinds of image structures are consistent with observations. For an example of how to fit an image model to closure products, we refer the reader to the other tutorial included in the docs.","category":"page"},{"location":"vlbi_imaging_problem/#References","page":"Introduction to the VLBI Imaging Problem","title":"References","text":"","category":"section"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"[TMS]: Thompson, A., Moran, J., Swenson, G. (2017). Interferometry and Synthesis in Radio Astronomy (Third). 
Springer Cham","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"[M87P6]: Event Horizon Telescope Collaboration, (2019). First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole. ApJL 875 L6 doi","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"[SgrAP4]: Event Horizon Telescope Collaboration, (2022). First Sagittarius A* Event Horizon Telescope Results. IV. Variability, Morphology, and Black Hole Mass. ApJL 930 L15 arXiv","category":"page"},{"location":"vlbi_imaging_problem/","page":"Introduction to the VLBI Imaging Problem","title":"Introduction to the VLBI Imaging Problem","text":"[Blackburn]: Blackburn, L., et al. (2020). Closure statistics in interferometric data. ApJ, 894(1), 31.","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = Comrade","category":"page"},{"location":"#Comrade","page":"Home","title":"Comrade","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Comrade is a Bayesian differentiable modular modeling framework for use with very long baseline interferometry. The goal is to allow the user to easily combine and modify a set of primitive models to construct complicated source structures. The benefit of this approach is that it is straightforward to construct different source models out of these primitives. Namely, an end-user does not have to create a separate source \"model\" every time they change the model specification. Additionally, most models currently implemented are differentiable with Zygote and sometimes ForwardDiff[2]. This allows for gradient accelerated optimization and sampling (e.g., HMC) to be used with little effort by the end user. To sample from the posterior, we provide a somewhat barebones interface since, most of the time, we don't require the additional features offered by most PPLs. Additionally, the overhead introduced by PPLs tends to be rather large. In the future, we may revisit this as Julia's PPL ecosystem matures.","category":"page"},{"location":"","page":"Home","title":"Home","text":"note: Note\nThe primitives that Comrade defines, however, would allow it to be easily included in PPLs like Turing.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Our tutorial section currently has a large number of examples. The simplest example is fitting simple geometric models to the 2017 M87 data and is detailed in the Geometric Modeling of EHT Data tutorial. We also include \"non-parametric\" modeling or imaging examples in Imaging a Black Hole using only Closure Quantities, and Stokes I Simultaneous Image and Instrument Modeling. There is also an introduction to hybrid geometric and image modeling in Hybrid Imaging of a Black Hole, which combines physically motivated geometric modeling with the flexibility of image-based models.","category":"page"},{"location":"","page":"Home","title":"Home","text":"As of 0.7, Comrade can also simultaneously reconstruct polarized image models and instrument corruptions through the RIME[1] formalism. 
A short example explaining these features can be found in Polarized Image and Instrumental Modeling.","category":"page"},{"location":"#Contributing","page":"Home","title":"Contributing","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"This repository has recently moved to ColPrac. If you would like to contribute, please feel free to open an issue or pull request.","category":"page"},{"location":"","page":"Home","title":"Home","text":"[2]: As of 0.9, Comrade switched to using full covariance closures. This requires a sparse Cholesky solve in the likelihood evaluation, which requires ","category":"page"},{"location":"","page":"Home","title":"Home","text":"a Dual number overload. We therefore recommend using Zygote, which does work and is often similarly performant (the reverse pass is 3-6x slower than the forward pass).","category":"page"},{"location":"#Requirements","page":"Home","title":"Requirements","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"The minimum Julia version we require is 1.7. In the future we may increase this as Julia advances.","category":"page"},{"location":"","page":"Home","title":"Home","text":"Pages = [\n \"index.md\",\n \"vlbi_imaging_problem.md\",\n \"conventions.md\",\n \"Tutorials\",\n \"Libraries\",\n \"interface.md\",\n \"base_api.md\",\n \"api.md\"\n]","category":"page"},{"location":"#References","page":"Home","title":"References","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"[1]: Hamaker J.P., Bregman J.D., and Sault R.J. Understanding radio polarimetry. I. Mathematical foundations ADS. ","category":"page"}] } diff --git a/dev/vlbi_imaging_problem/index.html b/dev/vlbi_imaging_problem/index.html index 2012643b..3d1fd182 100644 --- a/dev/vlbi_imaging_problem/index.html +++ b/dev/vlbi_imaging_problem/index.html @@ -1,2 +1,2 @@ -Introduction to the VLBI Imaging Problem · Comrade.jl

Introduction to the VLBI Imaging Problem

Very-long-baseline interferometry (VLBI) is capable of taking the highest resolution images in the world, achieving angular resolutions of ~20 μas. In 2019, the first-ever image of a black hole was produced by the Event Horizon Telescope (EHT). However, while the EHT has unprecedented resolution, it is also a sparse interferometer. As a result, the sampling in the uv or Fourier space of the image is incomplete. This incompleteness makes the imaging problem ill-posed; namely, infinitely many images are consistent with the data. Comrade is an imaging/modeling package that aims to quantify this uncertainty using Bayesian inference.

If we denote visibilities by V and the image structure/model by I, Comrade will then compute the posterior, or the probability of an image given the visibility data, which in equation form reads

\[p(I|V) = \frac{p(V|I)p(I)}{p(V)}.\]

Here $p(V|I)$ is known as the likelihood and describes the probability distribution of the data given some image I. The prior $p(I)$ encodes prior knowledge of the image structure. This prior includes distributions of model parameters and even the model itself. Finally, the denominator $p(V)$ is a normalization term and is known as the marginal likelihood or evidence and can be used to assess how well particular models fit the data.

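In practice, samplers and optimizers only need the posterior up to the constant $p(V)$, so one works with the unnormalized log-posterior $\log p(V|I) + \log p(I)$. The toy sketch below illustrates this decomposition in plain Julia; the loglikelihood and logprior functions are stand-ins invented purely for illustration and are not Comrade's API.

# Toy decomposition of the unnormalized log-posterior; the evidence p(V) drops out.
loglikelihood(V, I) = -0.5 * sum(abs2, (V .- I) ./ 0.1)   # stand-in Gaussian data term
logprior(I) = -0.5 * sum(abs2, I)                         # stand-in Gaussian prior

logposterior(V, I) = loglikelihood(V, I) + logprior(I)    # equals log p(I|V) up to a constant
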
Therefore, we must specify the likelihood and prior to construct our posterior. Below we provide a brief description of the likelihoods and models/priors that Comrade uses. However, if the user wants to see how everything works first, they should check out the Geometric Modeling of EHT Data tutorial.

Likelihood

Following TMS[TMS], we note that the likelihood for a single complex visibility at baseline $u_{ij}, v_{ij}$ is

\[p(V_{ij} | I) = (2\pi \sigma^2_{ij})^{-1/2}\exp\left(-\frac{| V_{ij} - g_ig_j^*\tilde{I}_{ij}(I)|^2}{2\sigma^2_{ij}}\right).\]

In this equation, $\tilde{I}$ is the Fourier transform of the image $I$, and $g_{i,j}$ are complex numbers known as gains. The gains arise due to atmospheric and telescope effects and corrupt the incoming signal. Therefore, if a user attempts to model the complex visibilities, they must also model the complex gains. An example showing how to model gains in Comrade can be found in Stokes I Simultaneous Image and Instrument Modeling.

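As a concrete illustration, the likelihood above can be evaluated in a few lines of plain Julia. This is only a sketch of the equation as written, not Comrade's internal implementation, and all of the input values below are made up.

# Gaussian log-likelihood of a single complex visibility V_ij, including the
# gain corruption g_i * conj(g_j) applied to the model visibility Ĩ_ij.
function loglikelihood_vis(Vij, Iij, gi, gj, σ)
    r = Vij - gi * conj(gj) * Iij                  # residual after applying the gains
    return -0.5 * log(2π * σ^2) - abs2(r) / (2σ^2)
end

loglikelihood_vis(1.0 + 0.1im, 0.9 + 0.05im, 1.02 + 0.01im, 0.98 - 0.02im, 0.05)  # example call
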
Modeling the gains can be computationally expensive, especially if our image model is simple. For instance, in Comrade, we have a wide variety of geometric models. These models tend to have a small number of parameters and are simple to evaluate. Solving for gains then drastically increases the amount of time it takes to sample the posterior. As a result, part of the typical EHT analysis[M87P6][SgrAP4] instead uses closure products as its data. The two forms of closure products are:

  • Closure Phases,
  • Log-Closure Amplitudes.

Closure Phases $\psi$ are constructed by selecting three baselines $(i,j,k)$ and finding the argument of the bispectrum:

\[ \psi_{ijk} = \arg V_{ij}V_{jk}V_{ki}.\]

Similarly, log-closure amplitudes are found by selecting four baselines $(i,j,k,l)$ and forming the closure amplitudes:

\[ A_{ijkl} = \frac{ |V_{ij}||V_{kl}|}{|V_{jk}||V_{li}|}.\]

Instead of directly fitting closure amplitudes, it turns out that the statistically better-behaved data product is the log-closure amplitude.

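To make these definitions concrete, the short plain-Julia sketch below forms a closure phase and a log-closure amplitude from a handful of arbitrary complex visibilities; it is illustrative only and is not taken from the Comrade tutorials.

# Arbitrary complex visibilities on the baselines of an (i,j,k,l) quadrangle.
Vij, Vjk, Vki = 1.0 + 0.3im, 0.8 - 0.2im, 0.5 + 0.1im
Vkl, Vli = 0.7 + 0.4im, 0.9 - 0.1im

ψ_ijk = angle(Vij * Vjk * Vki)                               # closure phase over the triangle (i,j,k)
lnA_ijkl = log(abs(Vij) * abs(Vkl) / (abs(Vjk) * abs(Vli)))  # log-closure amplitude over (i,j,k,l)
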
The benefit of fitting closure products is that they are independent of complex gains, so we can leave them out when modeling the data. However, the downside is that they effectively put uniform improper priors on the gains[Blackburn], meaning that we often throw away information about the telescope's performance. On the other hand, we can then view closure fitting as a very conservative estimate about what image structures are consistent with the data. Another downside of using closure products is that their likelihoods are complex. In the high-signal-to-noise limit, however, they do reduce to Gaussian likelihoods, and this is the limit we are usually in for the EHT. For the explicit likelihood Comrade uses, we refer the reader to appendix F in paper IV of the first Sgr A* EHT publications[SgrAP4]. The computational implementation of these likelihoods can be found in VLBILikelihoods.jl.

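In that high signal-to-noise limit, the closure-phase term is often approximated by a Gaussian in the wrapped phase residual. The sketch below shows only that approximation; it is not the exact likelihood implemented in VLBILikelihoods.jl, and the numbers are arbitrary.

# Gaussian (high-S/N) approximation to a closure-phase log-density.
# The residual is wrapped onto (-π, π] so phases near ±π are treated consistently.
wrapphase(x) = atan(sin(x), cos(x))

function logpdf_cphase_gaussian(ψ_obs, ψ_model, σψ)
    r = wrapphase(ψ_obs - ψ_model)
    return -0.5 * log(2π * σψ^2) - r^2 / (2σψ^2)
end

logpdf_cphase_gaussian(2.9, -3.0, 0.2)   # wrapping matters for phases near ±π
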
Prior Model

Comrade has included a large number of possible models (see Comrade API for a list). These can be broken down into two categories:

  1. Parametric or geometric models
  2. Non-parametric or image models

Comrade's geometric model interface is built using VLBISkyModels and is different from other EHT modeling packages because we don't directly provide fully formed models. Instead, we offer simple geometric models, which we call primitives. These primitive models can then be modified and combined to form complicated image structures. For more information, we refer the reader to the VLBISkyModels docs.

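For example, a simple two-component source can be written by modifying and summing primitives. The snippet below uses Gaussian, stretched, shifted, and μas2rad from Comrade/VLBISkyModels; treat the exact calls as illustrative of the composition style rather than a definitive recipe, and see the VLBISkyModels docs for the current API.

using Comrade

# An anisotropic Gaussian "core" plus a smaller Gaussian "hot spot" offset in RA;
# the scalar prefactors split the total flux between the two components.
core = stretched(Gaussian(), μas2rad(25.0), μas2rad(15.0))
spot = shifted(stretched(Gaussian(), μas2rad(5.0), μas2rad(5.0)), μas2rad(20.0), μas2rad(0.0))
m = 0.8 * core + 0.2 * spot
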
Additionally, we include an interface to Bayesian imaging methods, where we directly fit a rasterized image to the data. These models are highly flexible and assume very little about the image structure. In that sense, these methods are an excellent way to explore the data first and see what kinds of image structures are consistent with observations. For an example of how to fit an image model to closure products, we refer the reader to the other tutorial included in the docs.

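Conceptually, these models treat the image itself as the parameters: a grid of pixel intensities mapped to a valid (non-negative, fixed total flux) image. The plain-Julia sketch below shows one common parameterization via a softmax; it is a conceptual illustration, not Comrade's image-model API.

# Rasterized image as fit parameters: raw pixel values x are passed through a
# softmax so the image is non-negative and sums to a fixed total flux ftot.
function raster_image(x::AbstractMatrix, ftot::Real)
    w = exp.(x .- maximum(x))        # numerically stable softmax weights
    return ftot .* w ./ sum(w)
end

img = raster_image(randn(32, 32), 1.0)   # 32×32 image with unit total flux
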
References

  • TMS: Thompson, A., Moran, J., Swenson, G. (2017). Interferometry and Synthesis in Radio Astronomy (Third). Springer, Cham
  • M87P6: Event Horizon Telescope Collaboration, (2019). First M87 Event Horizon Telescope Results. VI. The Shadow and Mass of the Central Black Hole. ApJL 875 L6 doi
  • SgrAP4: Event Horizon Telescope Collaboration, (2022). First Sagittarius A* Event Horizon Telescope Results. IV. Variability, Morphology, and Black Hole Mass. ApJL 930 L15 arXiv
  • Blackburn: Blackburn, L., et al. (2020). Closure statistics in interferometric data. ApJ, 894(1), 31.
+Introduction to the VLBI Imaging Problem · Comrade.jl
