I am not really sure what is going on, since all the other logged values look fine. I have a simple supervised training flow with the standard phases. Here is the construction of my Learner for reference:
lossfn = Flux.Losses.logitcrossentropy

# define schedule and optimizer
es = length(trainloader)  # iterations per epoch
schedule = Interpolator(Step(0.001, 0.5, [20, 10, 20]), es)
# this is a patched ADAMW, not Flux.ADAMW
optim = ADAMW(0.001, (0.9, 0.999), 1e-4)

# callbacks
logger = TensorBoardBackend("tblogs")
schcb = Scheduler(LearningRate => schedule)
hlogcb = LogHyperParams(logger)
mlogcb = LogMetrics(logger)
valcb = Metrics(Metric(accuracy; phase = ValidationPhase))

# setup learner object
learner = Learner(m, lossfn;
    data = (trainloader, valloader),
    optimizer = optim,
    callbacks = [schcb, ToGPU(), hlogcb, mlogcb, valcb])
This is what my learning rate log looks like:
I'm not sure if #107 is related. Running the scheduler without FluxTraining.jl looks fine:
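For reference, a standalone check along these lines (a minimal sketch using only ParameterSchedulers.jl; the epoch count of 60 is just an illustrative range) produces the expected staircase:

```julia
using ParameterSchedulers

# Same schedule as in the Learner above: start at 1e-3 and halve
# after 20, then 10, then 20 epochs.
sched = Step(0.001, 0.5, [20, 10, 20])

# Evaluate per epoch. The Interpolator wrapper in the Learner only
# rescales this to per-iteration steps, so the shape should match.
for epoch in 1:60
    println(epoch, " => ", sched(epoch))
end
```

Plotting these values gives clean steps at epochs 20, 30, and 50, which is what I expect the TensorBoard log to show as well.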