The comment says 128, but the code uses 100. Also, is this assuming that errors will be <= 1, or is that handled elsewhere somehow?
If we are making an assumption about error magnitudes, we should be explicit about it. In my experience, error values are very rarely going to be this high, so we might get better performance by looking at a tighter range of error values (say from -0.2 to 0.2).
The only open question for me would be default values. If error does in fact typically go between -0.2 and 0.2, then we'd want a larger default for pes_error_scale. But I think this is fairly problem dependent. For example, we had to scale it down for the adaptive control example, since that used larger errors: 32f8bd6. I think that 100 is a good default, since if the range of errors is smaller, then it'll still work (just with slightly worse resolution).
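The resolution tradeoff described above can be sketched numerically. This is illustrative only: the assumed integer error limit of 127 on chip and the candidate scale values are taken from the discussion, not from the actual nengo-loihi implementation.

```python
# Illustrates the pes_error_scale resolution tradeoff. Assumptions (from
# the discussion, not verified against the chip): integer error values
# must stay roughly within [-127, 127], and typical errors span [-0.2, 0.2].
def n_levels(error_range, scale, limit=127):
    """Distinct integer values representable for errors in [-error_range, error_range]."""
    hi = min(int(scale * error_range), limit)
    return 2 * hi + 1  # symmetric range, including zero

print(n_levels(0.2, 100))  # scale=100: only 41 levels for small errors
print(n_levels(0.2, 500))  # scale=500: 201 levels, but larger errors would clip
```

With a scale of 100, errors confined to [-0.2, 0.2] only use 41 of the available integer levels; a larger scale uses the range more fully but saturates sooner, which matches the point that a conservative default like 100 still works for small errors, just with coarser resolution.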
At https://github.com/nengo/nengo-loihi/blob/master/nengo_loihi/simulator.py#L498 it seems as though we are getting the error from the host, doing
x = int(100 * x)
and then sending it to the chip. There's a comment in that block of code saying >128 is an issue on chip. @drasmuss noted: "If we are making an assumption about error magnitudes, we should be explicit about it. In my experience, error values are very rarely going to be this high, so we might get better performance by looking at a tighter range of error values (say from -0.2 to 0.2)."