feat: Implemented a timer manager that can handle any timer type #2573
base: rolling
Conversation
Signed-off-by: Janosch Machowinski <[email protected]>
Force-pushed from 199d354 to 9c77f32.
This pull request has been mentioned on ROS Discourse. There might be relevant details there:
```cpp
    return;
  }

  timer->clear_on_reset_callback();
```
We should first check that the timer is managed by this queue (as the timer could be from another executor) before clearing the callback.
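A minimal sketch of that guard, assuming a TimerQueue whose all_timers container holds per-timer entries with a weak reference to the timer (the member names here are illustrative, not the PR's exact API):

```cpp
// Hypothetical remove_timer(): only clear the reset callback if this queue
// actually owns the timer; otherwise leave it alone, since it may belong to
// another executor's queue.
void TimerQueue::remove_timer(const rclcpp::TimerBase::SharedPtr & timer)
{
  std::scoped_lock lock(mutex_);
  auto it = std::find_if(
    all_timers.begin(), all_timers.end(),
    [&](const auto & entry) {return entry->timer.lock() == timer;});
  if (it == all_timers.end()) {
    return;  // timer belongs to another executor's queue: leave it untouched
  }
  timer->clear_on_reset_callback();
  all_timers.erase(it);
}
```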
```cpp
public:
  TimerManager()
  : timer_queues{RCL_ROS_TIME, RCL_SYSTEM_TIME, RCL_STEADY_TIME}
```
Queues should be created when someone tries to add the first timer of that type, and (maybe?) destroyed when empty. If an application only uses timers of a single type, it shouldn't create two extra threads per executor.
Hm, I don't see any harm in having two idle threads in this case. They would block on the condition variable and consume no CPU whatsoever. Starting threads at runtime, and adding extra checks inside the executor code, seem like bigger problems to me.
Even if there is no CPU consumption, threads also have a memory impact.
Moreover, even if the impact is likely very small, most systems will only use one type of timer (a lot of users use the default RCL_ROS_TIME clock); it seems strange that if you have 10 executors with 1 timer each you will create 40 threads instead of 20.
I agree that stopping and restarting them is probably not great, but at least we should wait to create a thread until we actually need it.
This is an issue with the current implementation as well, which always creates 1 thread even if you have no timers... but it's worse here because we always create 3 threads.
I still don't see the issue; the memory impact is at most 100 KB per thread. So even for a really big system like ours (106 executors), we are talking about 30 MB of memory vs 10 MB.
In embedded applications, knowing that 2-3 MB are wasted on threads that are never used is a problem, especially if it's something that can easily be optimized.
I agree that starting and stopping the threads is likely not the correct solution (at least it shouldn't be the default behavior), but I don't see any downside in delaying the creation of the timer queues until they are needed.
The remove_timer and add_timer functions below should just look up the timer's clock type and create a queue if one doesn't exist already. This also makes the code simpler: we don't have to "try" to add the timer to every queue, since we already know which queue it belongs in, and we can check a return code for success/failure. A sketch of this lazy lookup follows.
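A minimal sketch of lazy queue creation keyed by clock type. The TimerQueue class here is a stand-in for the PR's per-clock queue; rcl_timer_clock() is the rcl API for querying a timer's clock:

```cpp
#include <map>
#include <memory>

#include "rclcpp/rclcpp.hpp"

// Stand-in for the PR's per-clock queue; the real class owns the worker thread.
class TimerQueue
{
public:
  explicit TimerQueue(rcl_clock_type_t type)
  : type_(type) {/* the real implementation would start its thread here */}

  void add_timer(const rclcpp::TimerBase::SharedPtr & /*timer*/) {}

private:
  rcl_clock_type_t type_;
};

class TimerManager
{
public:
  void add_timer(const rclcpp::TimerBase::SharedPtr & timer)
  {
    // Look up the timer's clock type from its rcl handle.
    rcl_clock_t * clock = nullptr;
    if (rcl_timer_clock(timer->get_timer_handle().get(), &clock) != RCL_RET_OK) {
      return;  // could not query the clock; real code should report this
    }
    const rcl_clock_type_t type = clock->type;

    auto it = timer_queues.find(type);
    if (it == timer_queues.end()) {
      // First timer of this clock type: create the queue (and its thread) now.
      it = timer_queues.emplace(type, std::make_unique<TimerQueue>(type)).first;
    }
    it->second->add_timer(timer);
  }

private:
  std::map<rcl_clock_type_t, std::unique_ptr<TimerQueue>> timer_queues;
};
```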
```cpp
public:
  TimerQueue(rcl_clock_type_t timer_type)
  : timer_type(timer_type),
    used_clock_for_timers(timer_type),
```
This name is not easy to understand. I propose internal_clock.
```cpp
void call_ready_timer_callbacks()
{
  auto readd_timer_to_running_map = [this](TimerMap::node_type && e)
```
This could be static, or a member function.
This may not be static, as this would only be captured the first time the lambda is initialized (see the standalone illustration below).
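A self-contained illustration of the pitfall (the Widget type is hypothetical): a static local lambda capturing this is initialized exactly once, so every later object keeps calling into the first one.

```cpp
#include <iostream>

struct Widget
{
  int id;
  explicit Widget(int id) : id(id) {}

  void run()
  {
    // `this` is captured only when the static local is first initialized.
    static auto print_id = [this]() {std::cout << id << '\n';};
    print_id();
  }
};

int main()
{
  Widget a{1};
  Widget b{2};
  a.run();  // prints 1
  b.run();  // also prints 1, still using a's captured `this`
}
```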
```cpp
auto readd_timer_to_running_map = [this](TimerMap::node_type && e)
{
  const auto & timer_data = e.mapped();
  if (remove_if_dropped(timer_data))
```
Do we need this check here? It's already done at the beginning of each while (!running_timers.empty()) loop iteration.

This implementation with a multimap rather than a priority queue is a lot simpler, but I would be curious to see the performance difference with respect to the current one. IMO the current approach has benefits only when you have a lot of timers, but I doubt that executors will usually manage more than 10 timers each!
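For context, a minimal sketch of the multimap-based ordering being discussed; TimerData and the function name are stand-ins for the PR's bookkeeping:

```cpp
#include <chrono>
#include <functional>
#include <map>
#include <memory>
#include <utility>

// Stand-in for the PR's per-timer bookkeeping.
struct TimerData
{
  std::function<void()> timer_ready_callback;
};

using TimePoint = std::chrono::steady_clock::time_point;
using TimerMap = std::multimap<TimePoint, std::unique_ptr<TimerData>>;

// The earliest deadline is always running_timers.begin(); re-adding a fired
// timer is an extract + key update + insert, which a std::priority_queue
// cannot do without popping every element above it.
void readd_timer(TimerMap & running_timers, TimerMap::node_type && node, TimePoint next_deadline)
{
  node.key() = next_deadline;
  running_timers.insert(std::move(node));
}
```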
```cpp
public:
  TimerQueue(rcl_clock_type_t timer_type)
  : timer_type(timer_type),
```
Rather than storing this as an additional member, we could just use the get_clock_type() method of the clock.
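For instance, assuming the member is the rclcpp::Clock named used_clock_for_timers from the snippet above:

```cpp
// rclcpp::Clock::get_clock_type() returns the rcl_clock_type_t the clock was
// constructed with, so a separate timer_type member becomes redundant.
rcl_clock_type_t type = used_clock_for_timers.get_clock_type();
```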
```cpp
}

// timer is ready, execute callback
running_timers.begin()->second->timer_ready_callback();
```
A couple of comments on this:
- we should check somewhere that this is callable (e.g. when adding a timer)
- the current timers manager supports two modes of operation: in one mode, when a timer is ready, it pushes an event and the timer is then executed by the events-executor thread (after it picks up that event); in the other mode, the timers manager directly executes the timer's user callback (without pushing any event). The second mode can give much more precision in the timer's execution, and I see value in preserving it.

Maybe, to support the second mode, we could just set a timer_ready_callback that executes the timer, as sketched below.
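A minimal sketch of that idea, assuming a per-timer TimerData struct holding the timer_ready_callback (names are illustrative):

```cpp
// Direct-execution mode: instead of pushing an event to the executor, the
// ready callback runs the timer's user callback immediately on the timer
// thread. execute_callback() exists on rclcpp::TimerBase (its exact
// signature varies across rclcpp versions).
timer_data->timer_ready_callback =
  [weak_timer = std::weak_ptr<rclcpp::TimerBase>(timer)]() {
    if (auto locked_timer = weak_timer.lock()) {
      locked_timer->execute_callback();
    }
  };
```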
```cpp
  all_timers.erase(it);
}

used_clock_for_timers.cancel_sleep_or_wait();
```
I think we don't need to do this under the mutex lock (i.e. we should release it once we are done working on the timers), along the lines of the sketch below.
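A minimal sketch of the suggested scoping, with mutex and container names assumed:

```cpp
// Mutate the timer containers under the lock only.
{
  std::scoped_lock lock(mutex_);
  all_timers.erase(it);
}
// cancel_sleep_or_wait() just interrupts the clock's sleep; it does not
// touch the containers, so no lock is needed here.
used_clock_for_timers.cancel_sleep_or_wait();
```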
This is a draft commit, meant as a basis for advancing the current timer manager implementation used by the events executor.
@alsora @fujitatomoya