Type inference failure when compiled with custom AbstractInterpreter (e.g. GPUCompiler) #643
Some relevant code:
# Assumes Enzyme, GPUCompiler, LogDensityProblems and DynamicPPL (plus the model code defining demo2) are loaded.
World = Base.get_world_counter()
FA = Const{typeof(LogDensityProblems.logdensity)}
A = Active
width = 1
Mode = Enzyme.API.DEM_ReverseModeCombined
ModifiedBetween = (false, false)
ReturnPrimal = true
ShadowInit = false
ABI = Enzyme.FFIABI
TT = Tuple{Const{LogDensityFunction{DynamicPPL.TypedVarInfo{@NamedTuple{x::DynamicPPL.Metadata{Dict{AbstractPPL.VarName{:x, Accessors.IndexLens{Tuple{AbstractPPL.ConcretizedSlice{Int64, Base.OneTo{Int64}}, Int64}}}, Int64}, Vector{IsoNormal}, Vector{AbstractPPL.VarName{:x, Accessors.IndexLens{Tuple{AbstractPPL.ConcretizedSlice{Int64, Base.OneTo{Int64}}, Int64}}}}, Vector{Float64}, Vector{Set{DynamicPPL.Selector}}}}, Float64}, DynamicPPL.Model{typeof(demo2), (Symbol("##arg#225"),), (), (), Tuple{DynamicPPL.TypeWrap{Matrix{Float64}}}, Tuple{}, DynamicPPL.DefaultContext}, DynamicPPL.DefaultContext}}, Duplicated{Vector{Float64}}}
# Build the method instance and the GPUCompiler job that Enzyme would compile.
mi = Enzyme.Compiler.fspec(eltype(FA), TT, World)
target = Enzyme.Compiler.EnzymeTarget()
params = Enzyme.Compiler.EnzymeCompilerParams(Tuple{FA, TT.parameters...}, Mode, width, Enzyme.Compiler.remove_innerty(A), true, #=abiwrap=#true, ModifiedBetween, ReturnPrimal, ShadowInit, Enzyme.Compiler.UnknownTapeType, ABI)
tmp_job = Enzyme.Compiler.CompilerJob(mi, Enzyme.Compiler.CompilerConfig(target, params; kernel=false), World)
# Query the custom AbstractInterpreter that GPUCompiler uses for this job.
interp = GPUCompiler.get_interpreter(tmp_job)
spec = Core.Compiler.specialize_method(mi.def, mi.specTypes, mi.sparam_vals)
# Return-type inference with the custom interpreter (the inference failure reported in this issue).
Core.Compiler.typeinf_type(interp, mi.def, mi.specTypes, mi.sparam_vals)
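For comparison, here is a minimal sketch (assuming the World, mi, and interp variables from the snippet above are in scope) that contrasts the return type inferred by Julia's native interpreter with the one inferred by the custom interpreter:
# Compare return-type inference under the native interpreter vs. the custom
# interpreter obtained from GPUCompiler, for the same method instance.
rt_native = Core.Compiler.typeinf_type(Core.Compiler.NativeInterpreter(World), mi.def, mi.specTypes, mi.sparam_vals)
rt_custom = Core.Compiler.typeinf_type(interp, mi.def, mi.specTypes, mi.sparam_vals)
@show rt_native rt_custom   # the report in this issue is that the custom interpreter's answer is less precise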
So is this a Turing.jl issue or a GPUCompiler.jl issue, given that the type inference works nicely without GPUCompiler?
Not sure, but likely both. My understanding of Julia's semantics is that some things are explicitly unspecified and the compiler is free to choose (for example, where some inlining and type propagation happen). While it's possible that GPUCompiler is forcing Julia to make different decisions, the fact that compilation can fail means Julia is allowed to compile the code in a way that guarantees an error, and that makes it a bug in Turing.
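To make this concrete, here is a small self-contained sketch (my own illustration, not code from any of the packages involved) of behaviour that legitimately depends on how precise inference is:
# Functions like Base.promote_op query the compiler's return-type inference, whose
# precision is not part of Julia's API guarantees, so a different AbstractInterpreter
# may legitimately produce a wider answer.
f(x) = x > 0 ? x : 0.0
T = Base.promote_op(f, Int)   # typically Union{Float64, Int64}, but only "some supertype
                              # of the real return type" is guaranteed
v = Vector{T}(undef, 0)       # downstream code that assumes T is concrete can then break
                              # when a less precise interpreter widens T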
Just to clarify a bit here: there's no "bug" per se in Turing.jl. The "bug" is just that there's no constructor for […], but that constructor is only hit because GPUCompiler somehow causes an inference issue, leading to […].
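As a rough illustration of that failure mode, here is a toy sketch (my own, not code from Turing or its dependencies): when types are known precisely the defined method is always reached, but once a type is widened, a dynamic-dispatch path can hit a constructor or method that was never defined.
# Toy sketch of "inference issue leads to a missing constructor/method being hit".
struct Box{T}
    val::T
end
build(x::Float64) = Box{Float64}(x)   # only a Float64 method exists in this toy example
build(first([1.0, 2.0]))              # fine: Box{Float64}(5.0-style concrete path
try
    build(first(Any[1]))              # MethodError: no method matching build(::Int64)
catch err
    @show err
end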
Sure, and I'm not sure which of the subpackages used by Turing the error is caused by. My guess is that somewhere there is a use of a […].
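One common construct of that kind (purely my guess at an example, not something confirmed in the thread) is mapping over a possibly-empty collection, where Base has to pick the output element type from inference alone:
# With an empty input, map has no values to inspect, so the result's element type is
# chosen from what the compiler can infer about f; a less precise interpreter changes it.
f(x) = x > 0 ? x : 0.0
map(f, Int[])     # element type comes purely from inference
map(f, [1, -1])   # element type comes from the actual values produced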
Gathering links to Enzyme issues that came up when trying to minimise this:
@willtebbutt wrote in a Slack discussion:
using Cthulhu, Enzyme, Tapir
# Specify function + args.
fargs = (Base._mapreduce_dim, Base.Fix1(view, [5.0, 4.0]), vcat, Float64[], [1:1, 2:2], :)
tt = typeof(fargs)
# Construct the relevant interpreters.
native_interp = Core.Compiler.NativeInterpreter();
cthulhu_interp = Cthulhu.CthulhuInterpreter();
enzyme_interp = Enzyme.Compiler.Interpreter.EnzymeInterpreter(
Enzyme.Compiler.GLOBAL_REV_CACHE,
nothing,
Base.get_world_counter(),
Enzyme.API.DEM_ReverseModeCombined,
);
tapir_interp = Tapir.TapirInterpreter();
# Both of these correctly infer the return type, Vector{Float64}.
Base.code_typed_by_type(tt; optimize=true, interp=native_interp)
Base.code_ircode_by_type(tt; optimize_until=nothing, interp=native_interp)
# Inference fails.
Base.code_typed_by_type(tt; optimize=true, interp=cthulhu_interp)
Base.code_ircode_by_type(tt; optimize_until=nothing, interp=cthulhu_interp)
# Inference fails.
Base.code_typed_by_type(tt; optimize=true, interp=enzyme_interp)
Base.code_ircode_by_type(tt; optimize_until=nothing, interp=enzyme_interp)
# Inference fails.
Base.code_typed_by_type(tt; optimize=true, interp=tapir_interp)
Base.code_ircode_by_type(tt; optimize_until=nothing, interp=tapir_interp)
@wsmoses pointed out that the above compiler bug might be related to this issue.
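For context, a user-level call that should correspond to the Base._mapreduce_dim signature in the snippet above (my reconstruction, not taken from the thread) is:
# Assumed user-level equivalent of the fargs tuple above:
# mapreduce(f, op, A; dims=:, init) lowers to Base._mapreduce_dim(f, op, init, A, dims).
x = [5.0, 4.0]
mapreduce(Base.Fix1(view, x), vcat, [1:1, 2:2]; init=Float64[])   # returns [5.0, 4.0]::Vector{Float64}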
Not a DynamicPPL/Turing issue; close in favour of JuliaLang/julia#55638.