Default float type to float(Real), not Real (#685) #686
Conversation
Pull Request Test Coverage Report for Build 11690510349 (Coveralls)

Codecov Report
@@            Coverage Diff             @@
##           master     #686      +/-  ##
==========================================
- Coverage   82.14%   81.88%    -0.27%
==========================================
  Files          30       30
  Lines        4200     4200
==========================================
- Hits         3450     3439       -11
- Misses        750      761       +11
Obviously broken. Next week's problem!
Force-pushed from e994196 to f5e04e0
Force-pushed from f5e04e0 to 958525b
To be honest it seems like all of the errors are ReverseDiff related 🤷‍♀️ This PR does fix the original problem though, which was that the calls to loglikelihood, logprior, and logjoint in the snippet below all errored:

```julia
using Turing

@model function f(x)
    ns ~ filldist(Normal(0, 2.0), 3)
    m ~ Uniform(0, 1)
    x ~ Normal(m, 1)
end

model = f(1)
chain = sample(model, NUTS(), MCMCThreads(), 10, 4);

loglikelihood(model, chain)
logprior(model, chain)
logjoint(model, chain)
```
Also, the ReverseDiff tests only fail on <= Julia 1.9 (I tested 1.10 and 1.11 locally and they both work), so maybe the solution here is to bump min compat to 1.10. EDIT: CI is failing, but there aren't any new failures compared to master (all the failures are accounted for by our other open issues).
Force-pushed from 759ff04 to 3ad1f1d
What exactly is the failure? If it's that we get slightly different numbers from sampling, then this might be RNG related or whatever, but we should identify the cause before brushing this off. If it's actual gradient computations being different, then we should identify that so we know.
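(One way to answer that question, sketched below under the assumption that ForwardDiff is trusted as the reference backend: compare gradients from the two AD packages on the same closure. The function `f` here is a stand-in, not the model from this PR.)

```julia
# Hedged sketch: compare ReverseDiff against ForwardDiff on the same function.
# If the gradients agree, discrepancies in sampling output are more likely
# RNG/ordering related; if they disagree, the AD backend itself is suspect.
using ForwardDiff, ReverseDiff

f(x) = sum(abs2, x) / 2        # stand-in log density, not the PR's model
x = randn(3)

g_forward = ForwardDiff.gradient(f, x)
g_reverse = ReverseDiff.gradient(f, x)
@assert isapprox(g_forward, g_reverse; atol=1e-10)
```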
Happy with the code. Unless this is in a rush, it would be good to see some of the other fixes merged so tests pass here. I may only be saying that because I'm too lazy to read through the test logs to convince myself that they are fine.
Isn't it a bit annoying to just merge this if the CI is failing? IMO we should really at least identify why ReverseDiff is failing. From the logs it just seems to produce different results, in which case we should determine why.
To be clear, I consider a review approval to be orthogonal to seeing tests pass, i.e. generally we should have both. Thus I often approve on a basis of "code looks good, just needs tests to pass", if I know getting those tests to pass won't require significant changes to the code I've read. Do you operate on a different basis, where one should approve only once CI is happy?
Ah gotcha 👍 And no, I also operate under a similar approach :) But specifically here it seems we've changed the CI version due to ReverseDiff.jl bugs in order to make the CI pass, which maybe goes a bit beyond that?
A bump on this :)
https://github.com/TuringLang/DynamicPPL.jl/actions/runs/11616496161/job/32349580296#step:6:357
To some extent it is me chasing the green tick 😅 but IMO CI also reflects our confidence in the code. If we have lots of "disable this on 1.6" or "if version < 1.7 then @test_broken", it is basically untested code. Are we able to tell if anybody is using DPPL on <1.10, given that Turing is already >=1.10?
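(As an illustration of that point, here is a minimal sketch of the kind of version-gated guard being described. It uses a placeholder expression and @test_skip so the example stays runnable; it is not a test from this repository.)

```julia
using Test

# Illustrative placeholder check, not an actual DynamicPPL test.
result = 1.0 + 1.0

if VERSION < v"1.10"
    # Recorded as Broken and never actually evaluated on old Julia versions,
    # so this branch contributes no real test coverage there.
    @test_skip result == 2.0
else
    @test result == 2.0
end
```

Whatever sits inside the version-gated branch is effectively untested on the older releases, which is the concern above.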
I don't think anyone is directly depending on DynamicPPL other than Turing.jl, see |
Definitively not pro this 👍 My point is more that we shouldn't just bump Julia versions to make something work unless we can first understand what's happening. Looking at the stacktrace it seems we're trying to
The same as #685, just ported to the newest version.
Closes #684 (again)
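For context on what the title refers to, here is a minimal sketch of the Real versus float(Real) distinction in plain Julia; the variable names are illustrative only and none of this is DynamicPPL's actual implementation. Containers initialised from the abstract type Real carry integer zeros, whereas float(Real) gives a floating-point element type, which is presumably what downstream log-density computations such as the loglikelihood/logprior/logjoint calls above expect.

```julia
# Minimal sketch of the Real vs float(Real) distinction in plain Julia;
# variable names are illustrative and unrelated to DynamicPPL's code.
@show zero(Real)            # 0   -- an integer zero
@show zero(float(Real))     # 0.0 -- a floating-point zero

logp_real  = zeros(Real, 3)          # Vector{Real} holding integer zeros
logp_float = zeros(float(Real), 3)   # element type float(Real), zeros are 0.0
@show eltype(logp_real) eltype(logp_float)
```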