Unable to use a custom prior that contains a pm.Deterministic
#857
Comments
Hey @ivanistheone, thanks for the detailed issue. I think this is the expected behavior: deterministic variables are components in probabilistic programming that represent values completely determined by their input variables. A deterministic prior cannot be sampled directly, because deterministic variables are computed from their parent variables rather than being sampled like ordinary random variables. @tomicapretto, correct me if I am wrong here.
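The sampled-versus-deterministic distinction can be illustrated with a toy Monte Carlo sketch in plain numpy (not PyMC internals; all names here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# A "random variable": each draw introduces fresh randomness.
lam_draws = rng.exponential(scale=29, size=10_000)

# A "deterministic": fully determined by its parent's draws, so there
# is nothing left to sample and no log-density of its own to evaluate.
shifted = lam_draws + 1

assert np.all(shifted >= 1)  # support is shifted to [1, inf)
```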
Right. From reading other code examples (PyMC) I see them always used as extra computed variables, but never to specify priors. Is there another way to produce a custom prior of the form $\text{Expon}\left(\lambda=\frac{1}{29}\right) + 1$?

Here is another thing I tried:

```python
def ShiftedExpCustomDist(name, lam, **kwargs):
    def shifted_exp(lam, loc, size):
        return pm.Exponential.dist(lam=lam) + loc
    return pm.CustomDist(name, lam, 1, dist=shifted_exp, **kwargs)

priorscd = {
    ...
    "nu": bmb.Prior("ShiftedExpCustomDist", lam=1/29, dist=ShiftedExpCustomDist),
}
```

which seems to work, though it produced a lot of divergent transitions, so it's probably not the right way to do this.
Yeah, this code follows a lot more closely what the PyMC docs recommend. See the third example here for an example of a shifted exponential distribution:

```python
import pymc as pm
from pytensor.tensor import TensorVariable


def dist(
    lam: TensorVariable,
    shift: TensorVariable,
    size: TensorVariable,
) -> TensorVariable:
    return pm.Exponential.dist(lam, size=size) + shift


iqs_mean, iqs_std = iqs["iq"].mean(), iqs["iq"].std()
formula = bmb.Formula("iq ~ 0 + group", "sigma ~ 0 + group")
priors = {
    "group": bmb.Prior("Normal", mu=iqs_mean, sigma=1000 * iqs_std),
    "sigma": {"group": bmb.Prior("Uniform", lower=np.log(iqs_std / 1000), upper=np.log(iqs_std * 1000))},
    "nu": bmb.Prior(
        "ShiftedExponential",
        lam=1 / 29,
        shift=-1.0,
        dist=lambda name, lam, shift: pm.CustomDist(
            name,
            lam,
            shift,
            dist=dist,
            signature="(),()->()",
        ),
    ),
}
```
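As a sanity check on what a generator of this shape produces, one can simulate it with plain numpy (a sketch under the assumption that `pm.Exponential.dist(lam)` draws from $\text{Expon}(\lambda)$; this is not Bambi code):

```python
import numpy as np

rng = np.random.default_rng(0)
lam, shift = 1 / 29, -1.0

# What dist(lam, shift, size) returns: exponential draws plus a shift
draws = rng.exponential(scale=1 / lam, size=100_000) + shift

assert draws.min() >= shift  # support starts at the shift
assert abs(draws.mean() - (1 / lam + shift)) < 0.5  # E[X] = 1/lam + shift
```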
That makes sense since I passed [...]

I will keep the issue open and add a docs tag to remember to add an example on using [...]
Hello! I'm trying to reproduce the model from the Bayesian Estimation Supersedes the t-Test paper in Bambi.
I would like to set a "shifted exponential" prior for the degrees-of-freedom parameter, $\nu \sim \text{Expon}\left(\lambda=\frac{1}{29}\right) + 1$, which is what we would get if pm.Exponential accepted the loc=1 option like scipy.stats.expon.
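For reference, the target density is exactly what scipy.stats.expon gives with loc=1; note that scipy parameterizes by scale = 1/λ. A quick sketch:

```python
from scipy import stats

# Expon(lam=1/29) shifted to start at 1: loc=1, scale=1/lam=29
nu_prior = stats.expon(loc=1, scale=29)

edge = nu_prior.pdf(1.0)   # density at the left edge equals lam = 1/29
below = nu_prior.pdf(0.5)  # zero below the shift point
mean = nu_prior.mean()     # loc + scale = 1 + 29 = 30
```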
I tried implementing the shifted exponential using a custom prior:
The model builds and graphs fine, but when I run model.fit() I get the following error:

```
ValueError: Random variables detected in the logp graph: {MeasurableAdd.0, exponential_rv{"()->()"}.out}.
This can happen when DensityDist logp or Interval transform functions reference nonlocal variables,
or when not all rvs have a corresponding value variable.
```
The full traceback and a complete code example are below the fold.
Workaround using pm.Truncated
In this specific case, I found a workaround using pm.Truncated: since the exponential function is self-similar, truncating $\text{Expon}\left(\lambda=\frac{1}{29}\right)$ so it starts at 1 gives the same PDF as $\text{Expon}\left(\lambda=\frac{1}{29}\right) + 1$, as shown below.

I still wanted to flag that the Deterministic approach didn't work, or, what is more likely, that I'm not using it right. See this notebook for complete code:
https://github.com/minireference/noBSstats/blob/main/notebooks/explorations/bambi_BEST_Deterministic.ipynb
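The self-similarity (memorylessness) claim behind the workaround can be checked numerically with scipy; this sketch compares the lower-truncated density against the shifted one:

```python
import numpy as np
from scipy import stats

lam = 1 / 29
x = np.linspace(1, 200, 500)

# Exponential shifted to start at 1 (the prior we want for nu)
shifted_pdf = stats.expon(loc=1, scale=1 / lam).pdf(x)

# Exponential truncated from below at 1 (the pm.Truncated workaround):
# renormalize the density by the survival probability P(X >= 1)
base = stats.expon(scale=1 / lam)
truncated_pdf = base.pdf(x) / base.sf(1)

assert np.allclose(shifted_pdf, truncated_pdf)
```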