Replies: 11 comments
-
I often miss the ability to specify a shape different from the observed data for PP sampling. I think this Discourse issue is about the same: https://discourse.pymc.io/t/posterior-predictive-check-limited-to-certain-sample-sizes/7120/3 Is this what you meant with this issue? If not, could you give a minimal example?
-
Aren't we using dims and coords instead of shape now (or at least encouraging that)?
-
For a lot of models like linear regression there are two data containers, X and Y. When doing post-pred sampling we need to update X and Y, but Y can just be dummy placeholder values. I think setting Y in this case is unnecessary, as the shape of X just translates through to the output. I'm not quite sure how to handle the post-pred shape in the case where the likelihood shape is not specified externally (via an X). Maybe as a kwarg to `sample_posterior_predictive`?
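To illustrate the "shape of X translates through" point, here is a minimal NumPy sketch (not actual PyMC code; all names are made up for illustration): posterior draws of a slope combined with a new X of arbitrary length yield predictions whose trailing dimension follows X, so no Y placeholder is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are 4000 posterior draws of a regression slope and noise sd.
beta_draws = rng.normal(2.0, 0.1, size=4000)
sigma_draws = np.abs(rng.normal(1.0, 0.05, size=4000))

# New predictor values of arbitrary length -- no dummy Y required.
X_new = np.linspace(0, 1, 7)

# Broadcasting: (4000, 1) * (7,) -> (4000, 7); the output shape
# is driven entirely by X_new, not by any stored Y.
mu = beta_draws[:, None] * X_new
pp_draws = rng.normal(mu, sigma_draws[:, None])

print(pp_draws.shape)  # (4000, 7)
```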
-
Maybe in the second occurrence of `pm.Data` we can just do that?
I might be missing something, but we need some sort of placeholder for X, right, which we update during sample_pp? I'm not sure if I make sense; I'm somewhat speculating. I need to understand the whole coords/dims thing properly again.
-
@almostmeenal We still need to set X. There's an analog to scikit-learn: for us, specifying and sampling the model is basically the fit step, and posterior predictive sampling is the predict step.

I would like us to get rid of having to set Y for prediction.
-
Got it, thanks for clarifying. So is the likely solution then only removing the observed data?
-
So that would work for the case where there's an X. An example would be the coin-flip model, where I infer a p: there is no X there whose shape could translate through to the likelihood.

So the key question is whether the shape of the observed is specified by its inputs (like when an X is present) or not. One solution might be to rely on the shape of whatever is passed as observed. Anyone got ideas?
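A NumPy sketch of the coin-flip case (hypothetical names, not PyMC API): with no predictor X, the posterior predictive shape can only come from the length of the observed data, so resizing that length is the only way to change the output shape.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend these are 4000 posterior draws of the coin's probability p.
p_draws = rng.beta(8, 4, size=4000)

def sample_pp(p_draws, n_obs, rng):
    """Posterior predictive coin flips; the trailing shape comes only
    from n_obs (the observed data's length), not from any X."""
    return rng.binomial(1, p_draws[:, None], size=(len(p_draws), n_obs))

# With 200 observed flips the draws are (4000, 200) ...
print(sample_pp(p_draws, 200, rng).shape)
# ... and resizing the "observed" length changes the output shape.
print(sample_pp(p_draws, 100, rng).shape)
```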
-
If I'm understanding this correctly, all this is already covered in v4.
-
@brandonwillard Neat! How do I set the target shape in that setup?
-
Here's a small example in v4:

```python
import numpy as np
import aesara
import aesara.tensor as at
import pymc3 as pm

observed = aesara.shared(np.random.normal(10, 1, size=200), borrow=True)

with pm.Model() as m:
    mu = pm.Normal("mu", 0, 100)
    Y = pm.Normal("Y", mu, 1, observed=observed)

with m:
    trace = pm.sample()

with m:
    pp_trace = pm.sample_posterior_predictive(trace)
```

The original shape of the observations is kept:

```python
>>> pp_trace["Y"].shape
(4000, 200)
```

Change the shape of the observations to 100:

```python
observed.set_value(np.random.normal(10, 1, size=100))

with m:
    pp_trace = pm.sample_posterior_predictive(trace)
```

```python
>>> pp_trace["Y"].shape
(4000, 100)
```

Change the shape via the `size` kwarg:

```python
with m:
    pp_trace = pm.sample_posterior_predictive(trace, size=(2, 3))
```

```python
>>> pp_trace["Y"].shape
(4000, 2, 3, 100)
```

The same can be done for any other shared variables (i.e. not just `observed`).
-
This issue needs a bit more discussion to figure out what behavior we want. I don't think we want to always ignore the shape of the observed variables.
-
When making predictions we always need to make sure that the data going into the likelihood has the right shape, even though we don't actually need it, because we resample new data.
We could just remove the observed data for PP sampling so that this isn't required anymore.