Memory System Rework #289
Comments
Good ideas! The information in the platform description could also be used to instantiate the memory controllers, so that we do not have to re-implement this for every platform. With this, we have a list of slave ports (all available memory controllers) and a list of master ports from the PEs, and the job file is used for figuring out the connectivity (including the DMA engine). In addition, we should also address the following points:
I don't think we can automatically instantiate the memory controllers from the description. In my experience there is always platform-specific stuff (e.g. clocks and resets, different IPs for different families, ...). We can have a look at whether it makes sense to have some shared stuff (e.g. for devices in the same family), but my feeling is that this will not be that much (and thus I'm not sure if we should do it).
Regarding status core & default configuration: I absolutely agree. A smart default should be no problem; each platform can just have its own default configuration which is used when nothing else is given (e.g. just have one DDR and connect all PEs to it).
I wrote down some ideas regarding the memory system rework (#181) so we can discuss them. As the new runtime already has support for multiple memories, this mainly concerns the hardware side.
My ideas consist of three parts:
Description of available memories
This describes all available memory types on this platform with (at least) an ID, the memory size and the number of available instances of this type. I think this could be included in the platform.json (or alternatively in a separate file). Example: platform.json
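A minimal sketch of how such a description could look; the attribute names (id, size, count) are assumptions derived from the text above, not a fixed format:

```json
{
  "memories": [
    { "id": "DDR", "size": "4G",   "count": 2 },
    { "id": "HBM", "size": "256M", "count": 32 }
  ]
}
```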
Open questions are which (additional) attributes might be needed here and how to handle special cases like BRAM (no fixed capacity, count, ...).
Definition of memories
The actual definition (instantiation) is per platform and still done via TCL. Each platform defines a function create_memory(id, ...) which takes the ID of the memory type as an argument (additional arguments include the number of memories of this type it should instantiate, and more) and creates the requested memories in a new hierarchical cell. The cell has a defined interface to the outside (AXI input, clk, reset). This basically replaces the currently used create_mig_core function, the main difference being the ID parameter.
Open questions include whether it makes sense to have some stuff shared between multiple platforms (e.g. for devices in the same family, BRAM, ...).
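A rough sketch of what such a platform-specific function could look like in Vivado TCL; the argument list beyond the ID and all cell/pin names are assumptions, only the external interface (AXI, clk, reset) follows the description above:

```tcl
# Sketch of a platform-specific create_memory: builds a hierarchical cell
# with the proposed external interface; the actual controller IP inside
# (MIG, HBM, BRAM controller, ...) remains platform specific.
proc create_memory {id {count 1} args} {
  set prev [current_bd_instance]
  set cell [create_bd_cell -type hier "memory_${id}"]
  current_bd_instance $cell

  # Defined interface to the outside: AXI input, clock and reset.
  create_bd_intf_pin -mode Slave -vlnv xilinx.com:interface:aximm_rtl:1.0 S_AXI
  create_bd_pin -dir I -type clk mem_clk
  create_bd_pin -dir I -type rst mem_rst_n

  # ... instantiate $count controllers of type $id here ...

  current_bd_instance $prev
  return $cell
}
```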
Connection of memories
This is also done via TCL but is not platform specific. It parses the JSON information about which memories are used in a design (more details below). It then uses this information to create the memories (by calling the platform-specific create_memory() function) and all necessary interconnects, DMA engines, ... required for this.
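A hypothetical outline of this platform-independent step; the layout of the parsed job information and all names are assumptions:

```tcl
# Walks the list of memories requested in the job file, instantiates each
# via the platform's create_memory and attaches an AXI interconnect whose
# slave side is later wired to the PE masters of the matching address map.
proc connect_memories {memories} {
  foreach mem $memories {
    set id    [dict get $mem id]
    set count [dict get $mem count]
    set cell  [create_memory $id $count]

    set ic [create_bd_cell -type ip \
      -vlnv xilinx.com:ip:smartconnect:1.0 "ic_${id}"]
    connect_bd_intf_net [get_bd_intf_pins $ic/M00_AXI] \
                        [get_bd_intf_pins $cell/S_AXI]
    # DMA engines and the PE master connections would be created here too.
  }
}
```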
Memories actually used in the design (JSON)
Here the user describes which of the defined memories they want to use in their design and how the PEs should be connected to them. This will be part of the existing JSON job file. If this information is not given, TaPaSCO will use the default configuration for this platform. (So you can still just use the compose command and only need to do this if you want non-default behaviour.)
This is probably the part with the most discussion potential because there is a broad range of possibilities, depending on how complex this should be.
To have a starting point for our discussion, here is a suggestion of how this could look:
There are three parts to this: First, you define the memories you want to use in your design, basically choosing from the available memories for the platform (e.g. I want 2 DDR controllers and 4 HBMs in my design).
In a second step you define "address maps" which is basically what the PEs will "see". Each address map defines which memories can be accessed at which address. So for example you could define just a single address map for your design which includes all memories you use (e.g. DDR0 starts at address 0, DDR1 starts at address X, HBM0 starts at address Y, ...). Or alternatively you could have multiple address maps in your design (e.g. one address map for DDR0, one for DDR1 and one for the HBMs).
And in the final step you define for each PE (or to be more precise for each AXI master of the PE) which of these "address maps" it should use.
An example job file could look like this: job.json
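A possible shape for the memory-related part of the job file, following the three steps above; all key names, base addresses and PE names are illustrative assumptions:

```json
{
  "MemoryConfiguration": {
    "Memories": [
      { "id": "DDR", "count": 2 },
      { "id": "HBM", "count": 4 }
    ],
    "AddressMaps": [
      { "name": "ddr", "entries": [
        { "memory": "DDR0", "base": "0x000000000" },
        { "memory": "DDR1", "base": "0x100000000" }
      ]},
      { "name": "hbm", "entries": [
        { "memory": "HBM0", "base": "0x00000000" },
        { "memory": "HBM1", "base": "0x10000000" },
        { "memory": "HBM2", "base": "0x20000000" },
        { "memory": "HBM3", "base": "0x30000000" }
      ]}
    ],
    "PEConnections": [
      { "pe": "arraysum_0", "master": "M_AXI", "addressMap": "ddr" },
      { "pe": "arraysum_1", "master": "M_AXI", "addressMap": "hbm" }
    ]
  }
}
```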
What do you think of these ideas?