Documentation needed #84
Comments
Hi @devreal, I agree that more documentation would be good. :-)
For your other questions:
In general, Task Bench will aggressively check any inputs you give to any tasks. It is pretty much impossible to do this wrong, unless you just don't execute the task graph at all. By default, we'll also check that you execute at least one task from each task graph at the point where we print results. So it's pretty hard to mess up, but if you want you could add a print statement to https://github.com/StanfordLegion/task-bench/blob/master/core/core.cc#L546 and check that it prints the number of tasks you expect. As long as it does that and does not crash, you're good. You may also benefit from Task Bench's diagnostics about the task graph itself.

Hope that helps, and feel free to ask questions if you have any.
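If it helps to sanity-check the numbers, here is a minimal sketch (not actual Task Bench code) of how a driver could count the tasks it is expected to run, using the offset/width helpers discussed in this thread. The `count_tasks` helper is hypothetical, the include path is a guess, and the exact signatures should be taken from core/core.h:

```cpp
// Hypothetical sketch, not part of Task Bench itself: count how many tasks a
// driver should execute for one task graph, to compare against the number
// reported by the core's built-in checks. Include path and signatures are
// assumptions; see core/core.h for the real interface.
#include "core.h"

long count_tasks(task_graph_t graph) {
  long total = 0;
  for (long t = 0; t < graph.timesteps; ++t) {
    long offset = task_graph_offset_at_timestep(graph, t); // first live point at t
    long width = task_graph_width_at_timestep(graph, t);   // number of live points at t
    // Only points in [offset, offset + width) exist at timestep t.
    total += width;
  }
  return total;
}
```

If the total across all graphs matches what the core prints, the graph is at least being traversed completely.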
So far, the only documentation I have found is https://arxiv.org/pdf/1908.05790v2.pdf. I would like to learn more about the execution process and the scheduling principles of openmp/region. Have you found any more documentation?
@1193749292 I'm not sure what you're asking for. In general, the implementations are entirely separate from the core. What OpenMP does and how it schedules has nothing to do with the description of task graphs. If you have specific questions, feel free to ask them here. Please also read the earlier comments in this thread, because there's good information up there as well.
I am trying to implement the task-bench benchmark on top of a data flow paradigm (TTG) and found it to be a rather frustrating exercise, mostly due to the total lack of documentation. In the `task_graph_t` struct, exactly 3 out of 11 fields have some kind of documentation, and only 2 of the 12 `task_graph_*` functions are blessed with a comment, one of which is a `FIXME`... For me, it really comes down to a guessing game and to reading other implementations to figure out whether my implementation is actually correct in the sense of the benchmark.

Particular questions I still don't know the answer to are:

- What is `nb_fields`, and what is the connection to `timesteps`? It appears that `nb_fields` is set to the number of timesteps if not explicitly provided. Why? What does the number of fields have to do with the number of timesteps? Can I ignore it and just use `1`?
- What is the relation between `task_graph_[reverse]_dependencies` and `task_graph_offset_at_timestep`/`task_graph_width_at_timestep`? Do I have to apply a correction for the offset/width to the dependencies provided by `task_graph_[reverse]_dependencies`? If so, why is that not done in `task_graph_[reverse]_dependencies`? (One possible reading is sketched at the end of this post.)
- How is the data for a point `x` in consecutive timesteps supposed to be handled? What is the correctness metric? Am I free to distribute the data however I want? What is the minimum number of dependencies I need to support? (Clearly, the models mentioned above couldn't support the all-to-all pattern at scale and chose a seemingly arbitrary number of 10.)

Given the state of things, I will go ahead and choose the interpretation most favorable for my model that won't crash or error out during execution. However, that should not be the way to write a benchmark meant to be portable...
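For concreteness, one possible reading of the offset/width question is sketched below: treat the dependency intervals as structural and clamp them against the points that actually exist at the producing timestep. The inclusive-interval representation of the dependencies and the need for clamping at all are assumptions on my part, not documented behavior; names follow this thread and the real signatures are in core/core.h.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Hypothetical helper, not part of Task Bench: clamp the dependency intervals
// reported for a consumer point against the points that actually exist at the
// producing timestep, as given by the offset/width helpers.
std::vector<std::pair<long, long>> clamp_deps(
    const std::vector<std::pair<long, long>> &deps, // e.g. from task_graph_dependencies
    long offset, long width) {                      // offset/width of the producing timestep
  std::vector<std::pair<long, long>> live;
  for (const auto &interval : deps) {
    long lo = std::max(interval.first, offset);
    long hi = std::min(interval.second, offset + width - 1);
    if (lo <= hi)
      live.emplace_back(lo, hi); // keep only producers that were actually executed
  }
  return live;
}
```

Whether this clamping is actually required, or is already folded into `task_graph_[reverse]_dependencies`, is exactly the kind of thing the documentation should spell out.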