
Added VariableRateCyclicTaskABC and updated ThreadBasedCyclicSendTask #1733

Open · wants to merge 3 commits into base: main
Conversation

sschueth

  • Modified bus.py to include an optional argument, period_intra, in send_periodic and _send_periodic_internal (see the usage sketch after this list)
  • Implemented VariableRateCyclicTaskABC within broadcastmanager.py to be inherited by ThreadBasedCyclicSendTask
  • Added attributes to CyclicSendTaskABC to help with handling a group of messages
  • Relaxed the arbitration_id uniqueness requirement within a group of messages and updated _check_modified_messages to check each updated message's arbitration_id, ensuring that an arbitration_id does not change when a message is modified
  • Modified ThreadBasedCyclicSendTask._run() for variable-rate transmission of messages in a group
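
A minimal usage sketch of the proposed API (the arbitration IDs, data, and virtual interface are placeholders; period_intra is assumed to land on BusABC.send_periodic as described in this PR and is not part of released python-can):

import time

import can

msgs = [
    can.Message(arbitration_id=0x0F0, data=[0] * 8, is_extended_id=False),
    can.Message(arbitration_id=0x0F1, data=[1] * 8, is_extended_id=False),
]

with can.Bus(interface="virtual") as bus:
    # Transmit the whole group every 100 ms; within each cycle, wait
    # 10 ms between the two messages (the proposed period_intra).
    task = bus.send_periodic(msgs, period=0.1, period_intra=0.01)
    time.sleep(1.0)  # let a few cycles go out
    task.stop()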

sschueth (Author)

@tysonite @hardbyte @zariiii9003

I referenced PR #785, which relaxed the arbitration_id uniqueness requirement in SocketCAN, and was looking for some insight into why the arbitration_id uniqueness check exists for a Sequence of messages within ModifiableCyclicTaskABC.

  • Is this uniqueness restriction related to a hardware / driver limitation?
    • I've only been able to test this on my available hardware (Vector and Kvaser) and have not had any issues with relaxing the uniqueness of arbitration_id
  • Is this uniqueness restriction meant to ensure that a modified or updated message does not change an existing message's arbitration_id and thereby remove it from the bus?
    • If this is the case, I think my changes within _check_modified_messages() should suffice?

I removed the all_same_id check within _check_and_convert_messages() and converted self.arbitration_id to a list of arbitration IDs. To preserve the existing behavior ("The arbitration ID of new cyclic messages cannot be changed from when the task was created"), I added a series of checks within _check_modified_messages() to verify that each message in the new sequence has an arbitration_id matching the corresponding message in the original sequence.

For context, I'm making this change to allow messages with different arbitration IDs to be passed in the same task, guaranteeing transmission order and improving timing. For example, with this update the J1939-76 Safety Header Message and Safety Data Message can be explicitly linked and transmitted as a sequence of messages within the same task.

Any feedback / guidance is greatly appreciated. Thanks!

hardbyte self-requested a review on March 15, 2024 19:11
hardbyte (Owner) left a comment:

I like the direction.

Would appreciate this being tested and the user-facing docs being updated too.

self.msgs_len = len(messages)
self.messages = messages
# Take the Arbitration ID of each message and put them into a list
self.arbitration_id = [self.messages[idx].arbitration_id for idx in range(self.msgs_len)]
hardbyte (Owner):

prefer self.arbitration_ids

Comment on lines +156 to +161
for idx in range(self.msgs_len):
    if self.arbitration_id[idx] != messages[idx].arbitration_id:
        raise ValueError(
            "The arbitration ID of new cyclic messages cannot be changed "
            "from when the task was created"
        )
hardbyte (Owner):

I haven't tried, but do you know if SocketCAN supports reordering the messages as well? In that case, prefer checking that the counts of arbitration IDs match rather than comparing them positionally.
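
One way to illustrate that order-insensitive check (a sketch, not code from the PR; the helper name is hypothetical) is to compare the arbitration ID multisets with collections.Counter:

from collections import Counter

def _check_arbitration_ids_match(original_msgs, new_msgs) -> None:
    # Compare the multisets of arbitration IDs so that reordering the
    # messages is allowed, but adding, dropping, or changing IDs is not.
    if Counter(m.arbitration_id for m in original_msgs) != Counter(
        m.arbitration_id for m in new_msgs
    ):
        raise ValueError(
            "The arbitration IDs of new cyclic messages cannot be changed "
            "from when the task was created"
        )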

Comment on lines +239 to +242
:param period_intra:
    Period in seconds between each message when sending multiple messages
    in a sequence. If not provided, the period will be used for each
    message.
hardbyte (Owner):

Period in seconds between individual messages within the sequence.

I assume no impact for sending individual messages? Could you also update the docstring for period 🙏🏼
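
One possible combined docstring wording, assuming the semantics described in this thread (a suggestion, not text from the PR):

:param period:
    Period in seconds between repeated transmissions of the message, or
    of the full message sequence when a sequence is given.
:param period_intra:
    Period in seconds between individual messages within the sequence.
    If not provided, ``period`` is used between individual messages as
    well.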

DjMike238 commented May 16, 2024

Hi, will this PR ever be merged? I'm trying to use this library to periodically send multi-packet J1939 messages (in which the first packet has a different arbitration ID than the others), and the relaxed arbitration_id requirement would be very useful for my use case.

I'm currently using a PCAN interface. I just tried commenting out the code that checks all_same_id in _check_and_convert_messages() in broadcastmanager.py, and it's working without any issues.
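
For reference, the check being bypassed looks roughly like this in broadcastmanager.py (a paraphrase, not an exact quote of the library source):

# Require every message in the sequence to share one arbitration ID.
all_same_id = all(
    message.arbitration_id == messages[0].arbitration_id for message in messages
)
if not all_same_id:
    raise ValueError("All arbitration IDs should be the same")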

I could also open a smaller PR containing only the relaxed arbitration_id requirement if you think that's the best course of action.

DjMike238

Hi, just wanted to say that I've been testing this PR for a while, and it properly allows sending multi-packet CAN messages periodically (which I'm assuming is the reason behind this PR) on both PCAN (tested on a PCAN-USB X6) and Vector (tested on a VN1630A).

Is there any possibility of having this merged in a future release? I could also try to take care of any remaining issues if this PR is no longer being worked on (is that fine with you, @sschueth?).

sschueth (Author)

Hi, @DjMike238. Apologies, this PR fell behind. Thank you for testing with PCAN. I have successfully tested with Vector and Kvaser. I think my biggest hang-up going forward is SocketCAN. I will try to make some time this week to clean up this branch.
