This divider code is working for us, and its output appears at 1165 ns in simulation. However, our requirement is below 500 ns.
Can you suggest ways to reduce this time to under 500 ns?
The best solution will probably depend on your specific needs. I have a couple of suggestions that may help you.
Firstly, I'm not sure what clock speed you are running at, but you may be able to increase it. Of course, the maximum speed at which you can clock the core will depend on the device you are targeting.
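As a rough worked example (assuming a 10 ns clock period in your testbench, which I'm guessing at since you haven't said): 1165 ns corresponds to roughly 116 clock cycles, so running the same core at 250 MHz (a 4 ns period) would bring that same calculation down to about 466 ns, provided the core meets timing at that speed on your device.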
Secondly, this particular library was written with area in mind, to keep logic utilisation low, and the time it takes to calculate a division is variable. If you are trying to meet a strict real-time performance requirement, this may not be the best solution.
I have another project that might be a better fit. The divider in my verilog-math library is fully pipelined. It will use much more area, but will give you much higher performance. The core has a fixed latency of 36 clock cycles, so if you ran it at 100 MHz (for instance) it would have a latency of 360 ns from input to output, but a throughput of one division every 10 ns. I'm not sure whether your 500 ns requirement applies to the throughput or the latency, but this should give you sufficient performance in either case.
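To make the latency/throughput arithmetic concrete, here is a minimal behavioural sketch of a fully pipelined divider with a fixed 36-cycle latency. It is not the verilog-math core (the module name, ports, and parameters are placeholders of my own), but it behaves the same way from a timing point of view: with a 10 ns clock you can start a new division every 10 ns, and each quotient emerges 36 × 10 ns = 360 ns after its operands went in.

```verilog
// Behavioural model of a fully pipelined divider -- illustration only,
// not the actual verilog-math interface or implementation.
module pipelined_div_model #(
    parameter WIDTH   = 32,
    parameter LATENCY = 36   // fixed pipeline depth in clock cycles
) (
    input  wire             clk,
    input  wire [WIDTH-1:0] a,   // numerator, accepted every clock cycle
    input  wire [WIDTH-1:0] b,   // denominator, accepted every clock cycle
    output wire [WIDTH-1:0] q    // quotient, valid LATENCY cycles later
);

  // One register per pipeline stage. The divide itself is done in a single
  // step here purely for illustration; a real pipelined core would spread
  // the work across the stages.
  reg [WIDTH-1:0] pipe [0:LATENCY-1];
  integer i;

  always @(posedge clk) begin
    pipe[0] <= (b != 0) ? (a / b) : {WIDTH{1'b1}};  // saturate on divide-by-zero
    for (i = 1; i < LATENCY; i = i + 1)
      pipe[i] <= pipe[i-1];                          // shift results down the pipe
  end

  assign q = pipe[LATENCY-1];

endmodule
```

Because a new operand pair can be applied on every clock edge, the throughput is one result per cycle regardless of the 36-cycle latency; whether that meets your 500 ns target depends on which of the two numbers your requirement refers to.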